This is an entire specialism, requiring years of training, so a brief overview follows. Diagnostic work requires low-energy, short-lived isotopes. When the objective is therapy, i.e. to irretrievably damage cells such as the rapidly growing cells found in tumours, high-energy beams are used: high-energy beta, X or gamma radiation, usually in excess of 1 MeV, whose energy and penetrating-power characteristics are well understood. The objective is to irradiate as much of the affected area as possible without damaging healthy tissue. Rapidly growing tissue is much more radiosensitive (more easily damaged) than normal tissue, which is why diagnostic X-rays are prohibited for pregnant women: the rapidly growing cells of the foetus are highly sensitive.

A high dose of I-131 can be given which accumulates in the thyroid and destroys the cancerous cells, at the expense of a very high dose to healthy thyroid tissue. Cs-137 needles can be implanted into small plastic eggs and inserted for a predetermined period at the head of the cervix to knock out cervical cancer cells. Co-60 is produced artificially by slow neutron capture in Co-59 in a reactor and has a half-life of 5.3 years – a long half-life is preferred so the source doesn't have to be replaced very often. It is a beta emitter, decaying to Ni-60, and can be used in preference to X-rays: the beam is collimated on to a particular area of interest with lead collimators – often to treat areas just under the skin, and often from several different angles to maximize the dose on the area of interest while minimizing it elsewhere. The excited nickel nucleus emits two gamma rays with energies of 1.17 and 1.33 MeV. A Co-60 source with an activity of 2.8 GBq, equivalent to about 60 micrograms of pure Co-60, generates a dose of 1 mSv at one metre distance within one hour – a very significant dose rate for therapy.

As with all treatment machines, this modern X-ray treatment machine is rotatable. Calculating dosage is highly complex and, in this example, the machine orientation and exposure times are calculated by an onboard computer.


Radiotherapy is frequently used with chemotherapy to provide a cocktail of mechanisms to destroy cancers. It should be pointed out that cells are not "killed". Their functions are impaired because their proteins are damaged by ionisation, hence they fail to fulfil their metabolic functions: enzymes are misshapen and cannot bind to substrates properly, nucleic acids can no longer replicate accurately, and so on.

Biological Effects of Radiation. Dose and Dose Equivalent.


Are bananas radiologically harmful?

Here are the rules when handling IR (ionising radiation):

keep exposure short

get as far away as you can

get behind something dense

If exposed to ionising radiation, macromolecules which rely on precise conformations are damaged: they change shape and don't work. DNA and other nucleic acids can't repair themselves and replicate nonsense proteins. Irradiating water produces highly reactive free radicals, which has biological implications since water is the universal biological solvent.

Absorbed dose D is defined as the energy absorbed per unit mass of tissue or absorber, so D = E/m of irradiated material, in J/kg,

where 1 J/kg = 1 GRAY (Gy). The old unit is the rad: 100 rad = 1 Gy.

The damage produced, H, is measured in "dose equivalents", since the ionising damage depends on the radiation type: H = QD, where Q is a quality factor, a dimensionless number – 1 for betas and gammas, 20 for alpha particles.

H is measured in SIEVERT (Sv) – also J/kg – where 1 Sv = 100 rem, the old unit.
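As a quick sketch of the definitions above (the energy and mass values are illustrative, not from a real exposure):

```python
# Dose equivalent H = Q * D, where D = E/m is the absorbed dose in gray (Gy)
# and Q is the dimensionless quality factor for the radiation type.
QUALITY_FACTORS = {"beta": 1, "gamma": 1, "alpha": 20}

def dose_equivalent_sv(energy_j, mass_kg, radiation):
    """Absorbed dose D = E/m (Gy), then H = Q * D (Sv)."""
    d_gray = energy_j / mass_kg
    return QUALITY_FACTORS[radiation] * d_gray

# The same absorbed energy is 20x more damaging from alphas than from gammas:
h_gamma = dose_equivalent_sv(0.002, 1.0, "gamma")  # 0.002 Sv = 2 mSv
h_alpha = dose_equivalent_sv(0.002, 1.0, "alpha")  # 0.04 Sv = 40 mSv
```

The factor-of-20 difference is the whole point of quoting doses in sieverts rather than grays.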

A dose of 0.01mSv is received from

  • an average year of TV watching
  • an airline flight from New York to San Francisco
  • a year living next door to a normally operating nuclear power plant

Maximum permissible dose is a dose level applied to workers in the radiation industries, including hospitals: about 50 mSv per year, over and above background sources, chest X-rays and so on. Workers in hazardous environments are regularly monitored.

The loss in life expectancy from a 0.01 mSv dose is about 1.2 minutes, equivalent to crossing the street three times or three puffs on a cigarette. Eating a banana contributes about 1 µSv (microsievert).

A dose of 5Sv is huge – causing massive tissue breakdown, consequent internal bleeding, and death within about six weeks. Some Hiroshima victims may have received as much as 100Sv. It was said that if you survived the first three weeks, you might just pull through.

Look at Q1 and 2 on P 714

Sometimes an alternative quantity, "exposure", E, is used. If you are sheltering behind something dense, your dose is less than if you were "exposed". Exposure is measured in terms of the charge liberated in air, so its unit is the C/kg. Put simply, we can calculate that D = 34E in J/kg (one ion pair requires about 34 eV to be produced in air).

We can say without explanation that, more generally, D = fE,

where f is a quality factor – a dimensionless integer dependent either on photon energy or the material receiving the dose, and sometimes both.


Isotopes for Diagnosis

Normal thyroid scan. Equal uptake in both lobes 4h post-dose

This subject is vast and growing, so we’ll just take a very quick look.

When introducing ionising radiation into the body there are a few rules of thumb. The isotope should be:

metabolically indistinguishable from the stable element, thus metabolized as if stable

short-lived, both physically and biologically

non-toxic, especially if a non-metabolite.

For imaging, beta or gamma emitters of medium energy (hence range) are preferred, so that the radiation can leave the body and be detected externally.

There are very many techniques routinely used and new developments happen frequently. Here are a few examples; the ones in red are medical applications.

Two are popularly used – the following is expanded from the above link:

Technetium-99m: the most widely used radioactive isotope for diagnostic studies in nuclear medicine (m = metastable). Different chemical forms are used for brain, bone, liver, spleen and kidney imaging, and also for blood-flow studies. Eluted ("milked") as sodium pertechnetate from a molybdenum "cow", it has a convenient half-life of 6 h and a useful energy of 140 keV. An injected dose of the radiopharmaceutical can be followed around the body (dynamic renal studies), taken up into an organ which is then imaged externally with a gamma camera, or – when tagged to serum albumin – used to study blood protein metabolism by repeated sampling and measurement of radiation content in a scintillation counter.



The reduced uptake – cold spots – in this Tc-99m bone scan are clearly visible.

The later  image is on the left – notice the fuller bladder.

Iodine-131: a half-life of just over 8 days – a bit long for today's clinical studies – and an energy of 360 keV; used to diagnose and treat thyroid disorders. (Former President George Bush and Mrs. Bush were both successfully treated with radioactive iodine for Graves' disease, which causes a characteristic enlargement of the thyroid.)


A small quantity of the isotope is given orally as dilute sodium iodide. The thyroid takes up iodine exclusively, using it to synthesize two hormones which regulate metabolism in the body. A scintillation detector is used to measure the count rate over the gland after specific time intervals, from which important diagnostic information is obtained. A deficiency or overactivity in hormone production causes characteristic symptoms which require either surgical or biochemical intervention.

In much larger doses, iodine-131 is an effective agent for treatment – its ionizing effect is restricted to the surrounding thyroid tissue, where it destroys the function of cancerous cells.

Work through  Q6, p710

In conclusion – questions needn't necessarily be restricted to these two. For example, Xe-133 is completely inert and has a very short biological half-life, since it is rapidly exhaled. The patient takes a deep breath infused with a small quantity of the isotope and stands in front of the gamma camera for a few seconds, and a lung ventilation image is created. Here's an image of two healthy lungs with no cold spots.

Effective Half-Life, T

Assuming both decay and elimination from the body are exponential processes, we can write:

1/T_E = 1/T_P + 1/T_B

where T_P is the physical (radiological) half-life and T_B the biological half-life.

In calculations, make sure that both half lives are being measured in the same units

So, if iodine 131 has a radiological half life of 8 days and a biological half life of 24 days, the effective half life is 6d.
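The arithmetic above can be checked with a minimal sketch of the reciprocal formula:

```python
def effective_half_life(t_physical, t_biological):
    """1/T_eff = 1/T_phys + 1/T_bio; both inputs must be in the same units."""
    return 1.0 / (1.0 / t_physical + 1.0 / t_biological)

t_eff = effective_half_life(8.0, 24.0)  # I-131: 8 d physical, 24 d biological -> 6 d
```

Note that the effective half-life is always shorter than either input, since both processes remove activity at once.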


Ultrasonic Techniques in Medicine

Exam questions about this are sometimes comparative. X-rays are more invasive since they cause ionization; ultrasound (US) is a non-invasive procedure causing only minimal local heating. It relies on the fact that sound waves well above the range of human hearing are reflected at a boundary, such as that between bone and air, for example. The greater the difference in density (strictly, acoustic impedance) between the two media, the higher the percentage reflected back, and vice-versa.

The speed with which an acoustic wave moves through a medium depends on the density and elastic properties of the medium. Denser media generally transmit a mechanical wave with greater speed than less dense ones. As an example, the speed of sound in air is about 340 m/s; in water, 1500 m/s; in soft tissue, 1540 m/s; and in bone, 4080 m/s – twelve times faster than in air. It used to be thought that ultrasonic resolution was poorer than that of X-rays; this is no longer the case. Foetal heartbeat and retinal scanning require a resolution of the order of 1-2 mm, which is achieved by raising the US frequency. For obstetric work, between 2 and 7 MHz is fine; for retinal work, up to 15 MHz is used, which increases resolution substantially. A rule of thumb is that the organ under study should be about 200 wavelengths away from the transducer (the emitter/receiver).
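A rough sketch of the rule of thumb just quoted, using the soft-tissue speed and an obstetric frequency from the text:

```python
def wavelength_m(speed_ms, freq_hz):
    """Wavelength = wave speed / frequency."""
    return speed_ms / freq_hz

# 7 MHz ultrasound in soft tissue (1540 m/s):
lam = wavelength_m(1540, 7e6)  # ~0.22 mm, consistent with mm-scale resolution
standoff = 200 * lam           # "200 wavelengths" rule of thumb -> ~4.4 cm
```

Raising the frequency shrinks the wavelength, which is exactly why retinal work at 15 MHz resolves finer detail.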

When an AC source of very high frequency is applied to a piezoelectric crystal (even quartz works fine) it vibrates, creating an ultrasound wave at the same frequency as the AC. The sound is directed at the object under investigation and bounces back. When the returning wave reaches the piezoelectric crystal, the effect is reversed: the mechanical energy of the sound vibrating the crystal is converted into electrical energy. By measuring the time between emission and reception, the amplitude of the echo and its pitch, a computer can produce images, calculate depths and calculate speeds.
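A minimal pulse-echo depth calculation, assuming the 1540 m/s soft-tissue speed from above (the echo time is an illustrative value):

```python
def echo_depth_m(round_trip_time_s, speed_ms=1540):
    """Depth = speed * time / 2, since the pulse travels there and back."""
    return speed_ms * round_trip_time_s / 2

depth = echo_depth_m(65e-6)  # a 65 microsecond round trip -> ~5 cm deep
```

The factor of two is the classic exam trap: the measured time covers the outward and return journeys.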

We recall having previously mentioned specific acoustic impedance, Z = ρc, the product of material density and wave speed in the medium.


Now consider the interface between two materials of specific acoustic impedance Z1 and Z2, having different densities and therefore speeds. For the fraction of incident intensity reflected, we may write:

I_r / I_0 = ((Z2 − Z1) / (Z2 + Z1))²


This is worth a moment's thought, since it suggests the most reflection, hence the largest received signal, occurs when the sound is reflected at the interface between two materials of very different acoustic impedances.

In obstetrics, if a coupling gel were not used to move the transducer across the patient’s abdomen, apart from the lack of lubrication, all the sound would be reflected and none transmitted into the body. This “acoustic coupling” is vital to ensure that most of the sound isn’t reflected back and so gets past the air/skin barrier.
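A sketch of why the gel matters, using the intensity reflection coefficient above; the impedance of the gel is an assumed illustrative value, chosen close to that of tissue:

```python
def reflection_fraction(z1, z2):
    """Fraction of incident intensity reflected at a plane interface:
    ((z2 - z1) / (z2 + z1)) ** 2."""
    return ((z2 - z1) / (z2 + z1)) ** 2

Z_AIR = 1.2 * 340       # density * speed of sound, ~4.1e2 kg m^-2 s^-1
Z_TISSUE = 1060 * 1540  # ~1.6e6 kg m^-2 s^-1
Z_GEL = 1.5e6           # coupling gel (assumed value, near tissue)

air_skin = reflection_fraction(Z_AIR, Z_TISSUE)  # ~0.999: almost total reflection
gel_skin = reflection_fraction(Z_GEL, Z_TISSUE)  # well under 1%: mostly transmitted
```

Without the gel, essentially all the sound bounces off the air/skin boundary and never enters the body.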

A Scan 

A-scans can be used to measure distances. A transducer emits an ultrasonic pulse, and the time taken for the pulse to bounce off an object and return is graphed to determine how far away the object is. A-scans give only one-dimensional information and are therefore not useful for imaging. This is the simplest kind of scan. A burst of ultrasound is passed into, for example, a newborn's skull. If the two halves of the brain are equally sized, the midline echo will occur exactly between the reflections from opposite sides of the skull. If not, the echoes are skewed to one side (see diagram: T = transducer, and we imagine looking down at the head from the top. Not great, but you hopefully get the idea).


Retinal detachment can be detected using high frequency US, as shown.


In a healthy eye the middle spike would be absent; here, because the retina (yellow line) is not attached to the eyeball surface and is floating in the fluid in front of it, an extra echo is seen as the US bounces off it.

B Scans

B-scans are multiple A-scans produced by a moving transducer. Many A-scan lines are taken per second, and an image intensifier – these days a computer – retains and displays the information in real time, building up a 2D slice along the transducer's moving path. It is a routine procedure in obstetrics, where the transducer is moved rapidly back and forth over the abdomen a few times to collect enough data to build up an image. This image shows twin foetuses.


If the transducer is aimed towards an oncoming blood vessel, the velocity of the red blood cells, and hence the overall speed of the blood, can be monitored: the US signal bounces off the oncoming red blood cells and the consequent Doppler shift is measured. Notice the use of the cosine, since it is the component of the blood velocity along the line between source (S) and receiver (E) that matters:

Δf = 2 f0 u cos θ / c

where u=blood velocity,

c=velocity of sound in blood

f0 = the incident ultrasound frequency, and θ = the angle between the beam and the direction of blood flow.
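Rearranging Δf = 2 f0 u cos θ / c for u gives a quick estimate; the shift, frequency and angle below are illustrative values:

```python
import math

def blood_speed_ms(delta_f_hz, f0_hz, theta_deg, c_ms=1540):
    """Invert delta_f = 2 * f0 * u * cos(theta) / c for the blood speed u."""
    return delta_f_hz * c_ms / (2 * f0_hz * math.cos(math.radians(theta_deg)))

u = blood_speed_ms(1.3e3, 5e6, 60)  # a 1.3 kHz shift at 5 MHz, beam at 60 degrees -> ~0.4 m/s
```

Note the angle sensitivity: as θ approaches 90° the cosine vanishes and the measurement becomes useless, which is why the probe is angled along the vessel.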



You should study carefully the “Advantages and Disadvantages” Table I2.2 on p 709

IB HL Medical Imaging – More Advanced Newer Techniques

I think it likely that they will ask for comparisons rather than specific detail. MRI is the only one not using X-rays, so patient dose is the important factor.


Image intensifier screens (fig I2.7) allow the production of an image changing in real time, which can be advantageous, particularly when used with high contrast material like Ba which shows the passage of barium sulphate through the gut, for example. It does, however, subject the patient to long exposure times, hence high radiation dose. Since the objective is to minimize this, we look elsewhere for more sophisticated techniques.

Faster computing in the 1970s yielded CAT scanning (Computerised Axial Tomography). Tomos is Greek for 'slice', and the technique involves imaging a slice through the body with rotating digital detectors and emitters before moving on to the next slice, consequently reducing exposure time. It is to X-rays what digital cameras are to photography: a brain scan takes less than a second to image (see p705). Resolution (remember Rayleigh) has much improved over the last three decades and kidney stones less than 2mm…


IB HL Medical Imaging – X-rays

X-rays (short-wavelength, high-energy EM radiation) are produced when charged particles like electrons are decelerated by nuclei. Since the electrons are negatively charged, they are attracted towards a positive nucleus, emitting 'braking radiation' – 'Bremsstrahlung' in German – as they curve around it. The degree of 'bend' determines the energy of the X-ray produced, so the radiation is emitted over a continuum of wavelengths.


Additionally, the incident electrons may directly promote electrons in the target's metal atoms from lower to higher energy levels; when these decay back they emit characteristic X-ray photons. Here, two characteristic jumps are superposed on the bremsstrahlung background. The relative intensity is a measure of how likely each event is.

Characteristic X-ray peaks for Mo. Notice the short-wavelength cut-off, corresponding to an electron delivering all its energy to a single X-ray photon.


We won't go into details here, except to note that the only real difference between these and gamma rays is that gamma rays are emitted spontaneously from excited nuclei.

X-rays pass through human tissue, being selectively absorbed by denser material. In 1901, Wilhelm Roentgen was the first person ever to win the Nobel Prize for Physics and his discovery revolutionised the medical world.

Some schools have a small version of one of these.

Electrons are accelerated in a vacuum towards a metal target, tungsten (W) in this case. X-rays are produced – notice they are not subject to a 'law of reflection' – and pass out through a window for use. At diagnostic voltages (140 kV for a chest X-ray, lower for dental use) the anode gets very hot and has to be cooled; the target is often rotated, otherwise the heat generated would destroy it. The X-rays are partially absorbed by the area of interest in the patient and the data is collected on photographic film as a negative image, where high absorption is seen as a light area and vice versa.

The mechanism for energy loss in the body is photoelectric. Since this effect is strongly dependent on Z, there is a substantial difference between Z(bone), about 14, and Z(soft tissue), about 7, so X-rays are good at contrasting bone against soft tissue but not very good at contrasting different soft tissues, whose Z values are too similar.


Increasing the sharpness of the shadow gives better diagnostic information. We can’t ‘focus’ X-rays  using a glass lens like we can with visible light so we have to arrange for rays normal to the film only to fall on its surface since scattered rays will blur the image. There is a section on p703-4  which explains how this is achieved by  filtering and lead grids – a possible exam question.

How do Photons Interact with Matter? Simple scatter, Compton scatter, Pair Production

There are three ways. Which one depends on the initial photon energy.

You might remember that the photoelectric effect is concerned with the total absorption of an incident photon of visible or perhaps UV light in a metal surface (actually by an outer electron), with the consequent emission of that electron. This is a low-energy phenomenon; the energy acquired by the electron depends on the initial photon energy, and the effect diminishes rapidly in importance as photon energy increases. Here's the equation:

hf = φ + KE(max)

where φ is the work function of the surface and KE(max) the maximum kinetic energy of the ejected electron.

At higher photon energies, such as with X-rays, which penetrate further into the electron cloud, an inelastic event called Compton scatter may occur. A free electron takes up part of the photon energy; the photon is scattered, or re-emitted, with a longer wavelength, and the difference appears as the kinetic energy of the electron, which recoils in a different direction.

In contrast to the photoelectric effect, Compton scatter doesn't vary much with incident photon energy, but increases linearly with atomic number. A simplified diagram shows the scattered electron as a black arrow.

At higher energies still, pair production becomes dominant. Close to the nucleus, the photon is converted into an electron-positron pair; the positron subsequently annihilates with an electron to produce two identical 0.511 MeV photons. NB: all conserved quantum numbers (angular momentum, electric charge, lepton number) of the produced particles must sum to zero – thus the created particles have opposite values to each other. The incident photon must have more energy than the sum of the rest-mass energies of an electron and positron (2 × 0.511 MeV = 1.022 MeV) for pair production to occur.
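The 1.022 MeV threshold can be expressed as a one-line check:

```python
M_E_MEV = 0.511  # electron rest energy in MeV

def pair_production_allowed(photon_mev):
    """Pair production needs at least the rest energy of an e-/e+ pair."""
    return photon_mev >= 2 * M_E_MEV

pair_production_allowed(1.0)   # False: below the 1.022 MeV threshold
pair_production_allowed(1.17)  # True: e.g. the lower Co-60 gamma line
```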

A chart might help to visualize this. Comparing the interactions by atomic number Z and photon energy in MeV, we get the regions shown.

So, at low Z and relatively high energy, mostly Compton scattering,  low energy and high Z means the photoelectric effect predominates and high energy and high Z means Pair Production. Notice the logarithmic scale on the energy axis.

Ionising Radiation. Absorption, Attenuation and HVT

Radioactive material used in medicine can serve either diagnosis or therapy. In both cases the primary directive is to minimize the patient dose to non-critical regions. This is best achieved by using, where possible, isotopes with short half-lives and fast metabolic or physical elimination. The second directive is to use material metabolically indistinguishable from the stable version.

First, a quick look at ionizing radiations and their properties. Any charged massive particle can ionize atoms directly through the Coulomb force if it carries sufficient kinetic energy. In living tissue, macromolecular damage can lead to dysfunction – enzymes won't work if their structure is changed by ionization, for example.

Alpha particles are monoenergetic, massive but slow, stopped by thin paper or a few cm of air, giving up all their energy over a short range: about 10^5 ion pairs per mm.

Beta electrons are emitted with a range of energies; they are nearly 2000 times lighter than alphas, with about 100 times less ionizing ability. They are useful for imaging because the particles are penetrating enough to escape from the body and be detected externally. Stopped by thin aluminium, a few cm of tissue or about 1 m of air: about 10^3 ion pairs per mm.

Gamma photons – zero rest mass, v = c – are stopped by thick (10 cm) lead and range several metres in air, producing only about 1 ion pair per mm of path.

An exponential law of intensity versus absorber thickness x (c.f. the law of radioactive decay) shows how the intensity decreases with absorber thickness. If the absorber is homogeneous, we can define its absorptive ability by the 'linear absorption coefficient' μ, analogous to the decay constant: the larger the value of μ, the better the material absorbs radiation. μ is related, obviously, to the density of the material, multiplied by a quantity describing how much of a target its atoms present to incoming radiation. We can keep it simple, however, and just write:

I = I0 e^(−μx)

Attenuation – the ability of a material to absorb and therefore weaken the radiation – is thus an exponential function. We can think of attenuation as 'partial absorption'.

The HVT or 'half-value thickness' is the thickness x at which I drops to I0/2, i.e.

HVT = ln 2 / μ

HVT is energy-dependent, as one might expect – see fig I2.2 – but this is less important than the exponential principle. What's the HVT of this material? In what units is it measured? Suppose you had an energetic beta source, a detector and several identically thin squares of Al. Think about how you'd measure the HVT for Al. Do you see why the source/detector distance would have to be constant throughout?

You should be able to find the HVT for this material hence its attenuation coefficient.
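A sketch of the attenuation law and HVT relation (μ here is an assumed illustrative value, not taken from the figure):

```python
import math

def transmitted_intensity(i0, mu, x):
    """I = I0 * exp(-mu * x): exponential attenuation in a homogeneous absorber."""
    return i0 * math.exp(-mu * x)

def hvt(mu):
    """Half-value thickness: the x at which I = I0/2, i.e. HVT = ln 2 / mu."""
    return math.log(2) / mu

mu = 0.7  # per cm, an assumed value for illustration
# After one HVT the intensity halves; after two HVTs it falls to a quarter:
i1 = transmitted_intensity(100.0, mu, hvt(mu))      # 50.0
i2 = transmitted_intensity(100.0, mu, 2 * hvt(mu))  # 25.0
```

This halving-per-thickness behaviour is exactly analogous to half-life in radioactive decay.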

Look at Qs 3 and 4 on p702. These are important – exactly the kind of thing they might ask, so go through them carefully.




IB HL The Functioning of the Ear (3)

Because I'm old, I can only hear up to 10 kHz. Most people hear from 20 Hz to 20 kHz, and the upper limit decreases with age because of decreased flexibility in the aural mechanisms: a thickening eardrum, ossicle degradation and so on. A few minutes spent with p697 on 'hearing defects' might be valuable here, followed by q9, p699.

The ear's sensitivity is frequency-dependent, and our previous value of 1 pW per square metre is only true at a frequency of 1 kHz – the most sensitive region, as we can see from the graph below, is at about 3 kHz. If the ear canal is about 3 cm in length, i.e. a quarter of a wavelength, what frequency does this correspond to? (2.83 kHz.) Many other mammals have different ranges depending on the configuration of the outer ear, the way the pinna collects the sound, and their electrical responsiveness.
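The quarter-wavelength question above can be checked directly:

```python
def quarter_wave_resonance_hz(length_m, v_ms=340):
    """A pipe closed at one end resonates at f = v / (4 * L)."""
    return v_ms / (4 * length_m)

f = quarter_wave_resonance_hz(0.03)  # 3 cm ear canal -> ~2.83 kHz
```

This lands neatly in the ~3 kHz region where the hearing-threshold curve dips lowest.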

Threshold of Hearing – what would this look like for an older person with conductive hearing loss?

Pitch usually means perceived frequency, but it is better thought of as depending on both frequency and intensity: a soft sound at a particular frequency is perceived as being of lower pitch than the same frequency played louder.

The sound is converted to an electrical signal by this (grossly simplified) mechanism. The sound wave interacts with hairs of different lengths – like Barton's pendulums – each hair cell lying somewhere along the Organ of Corti, with receptors on the basilar membrane, which decreases in stiffness along its length. A particular frequency causes one of the hairs to oscillate with large amplitude, i.e. resonate, and hence send the electrical signal corresponding to that frequency. The longest hairs are furthest from the oval window and so respond to the lowest frequencies.

IB HL Sound Intensity and the dB scale

To prevent permanent hearing damage, it’s a bad idea to stand close to the speakers at a rock concert.

This looks about far enough away…

Rolling Stones in Hyde Park 2013


You've all seen inverse-square rules like this before: the sound intensity in watts per square metre at a distance r from a source of power P watts is I = P / 4πr². Intensity is simply a measure of how much energy falls per second on each square metre of surface.

Curiously, the increase in hearing sensation is proportional to the fractional increase in intensity: the ear's response to intensity is logarithmic. This can be exploited to define a scale of hearing based on the 'bel', or more usually the 'decibel', where 10 dB = 1 B.

An increase of 10 dB implies an increase in intensity by a factor of 10, similar to the Richter scale for measuring earthquake magnitude.

We can think of the sound level in decibels as

β = 10 log10 (I / I0)

We're comparing the sound we're listening to with the lowest intensity we can perceive with perfect hearing.

It’s phenomenally small, only 1 picowatt per square metre.


This is used in all comparative dB calculations – the threshold of hearing, which rises markedly with age. Libraries, even so-called 'silent spaces', probably run at a sound level of around 40 dB, and those close to the Stones' concert speakers will in all probability be in pain, risking permanent damage at between 120 and 130 dB.

Look at Q2 and 3 on page 694. This shows how to convert a sound intensity into dB. Q4 on page 695 reminds us that sound intensities must be added, not dB values. Try Q1-5 on page 698. Check with me if you have a problem.
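A sketch of the dB conversions, including the point from Q4 that intensities, not dB values, must be added:

```python
import math

I0 = 1e-12  # threshold of hearing, W/m^2

def intensity_to_db(i):
    """Sound level = 10 * log10(I / I0)."""
    return 10 * math.log10(i / I0)

def db_to_intensity(db):
    """Invert the dB formula: I = I0 * 10^(dB/10)."""
    return I0 * 10 ** (db / 10)

# Two 60 dB sources together: add the intensities, not the dB values.
combined = intensity_to_db(2 * db_to_intensity(60))  # ~63 dB, not 120 dB
```

Doubling the intensity only adds about 3 dB, which is why two identical speakers sound only slightly louder than one.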

IB HL Biomedical Physics – Specific Acoustic Impedance

Acoustic impedance (Z) indicates how much sound pressure is generated by the vibration of the molecules of a particular acoustic medium at a given frequency; this frequency dependence is useful when describing the behaviour of musical wind instruments. Mathematically, it is the sound pressure p divided by the particle velocity and the surface area through which the acoustic wave propagates. For our purposes, the key idea is that the specific acoustic impedances of two media must match for maximum transmission between them. If they don't, there's too much reflection and not enough transmission. This is one reason why, during prenatal ultrasound scans, a gel is placed on the transceiver: it provides good acoustic coupling as well as lubrication. Here's the formula:

z = ρc


Poor acoustic coupling is a problem between the middle and inner ear because they don't match acoustically: z for the air in the middle ear and z for the cochlear fluid are very different, hence much of the sound wouldn't be transmitted into the cochlear fluid. This is why the middle ear needs to amplify the sound first.

Since density is involved, temperature changes affect z. Higher temperature means higher speed but lower density; since the density drop outweighs the speed rise, the higher the temperature, the smaller z.




IB HL Biomedical Physics – the Ear and Hearing


The ear consists of three basic parts – the outer ear, the middle ear and the inner ear – each serving a specific purpose in the task of detecting and interpreting sound. The outer ear collects and channels sound – a longitudinal pressure wave – to the middle ear. Because of the length of the ear canal, it behaves like a resonance pipe open at one end, with an antinode at the open end and a node at the eardrum. It amplifies sounds with frequencies of approximately 3000 Hz – you should be able to verify this using an ear canal length of a few cm (a quarter wavelength) and a wave speed of about 340 m/s. The eardrum is a flexible membrane, like a drum skin, oscillating at the same frequency as the incoming sound. The three middle-ear bones, or ossicles, sit in an air-filled cavity and act as levers, amplifying the pressure wave. The ossicles mechanically convert the vibrations of the eardrum into amplified pressure waves in the fluid of the cochlea, or inner ear, with a lever-arm factor of 1.3. Since the area of the eardrum is about 17 times larger than that of the exit point, the oval window, the sound pressure is concentrated, giving a pressure gain of at least 22. This enhances the ability to hear very faint sounds, where the incoming force on the eardrum is very small. Study the worked example on p692.
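The pressure gain quoted above is just the product of the two factors:

```python
AREA_RATIO = 17     # eardrum area / oval window area
LEVER_FACTOR = 1.3  # mechanical advantage of the ossicle lever system

pressure_gain = AREA_RATIO * LEVER_FACTOR  # ~22, the "at least 22" quoted above
```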

The pressure wave at the oval window is then transformed into a compression wave through the fluid in the inner ear which converts this energy into nerve impulses transmitted to the brain.


The Hubble classification scheme for galaxies is based on their appearance.

Spirals/Barred spirals

Spiral Halo

a disc with a central bulge and spiral (or barred) arms; a halo of mainly old stars with no new star formation; lots of new stars in the disc and spiral arms

Ellipticals

no fine structure; ellipsoidal or spherical, can appear almost circular; a uniform distribution of old stars; very little gas, with nothing new formed for the last 10 bn years

Irregulars

often very strange shapes; no structure; plenty of both young and old stars, and lots of gas

Our Galaxy, the Milky Way, is part of the Local Group (LG), extending for about 10 million ly and containing roughly 20 galaxies, the nearest being the Large Magellanic Cloud at about 0.16 million ly.

The Local Group also contains the Andromeda Spiral, its largest member.

Beyond this lies a supercluster (diameter about 15 Mpc) of which our LG is a member.

How far away? We calculate distances from Doppler-shift measurements on known spectral lines (usually H or He):

Δλ/λ = v/c

as long as we realise that the velocity calculated this way is as if the galaxy were moving directly away from us.


Hubble’s Law

Distance d and recession speed v are proportional in an isotropic universe, v = H0 d, though there is controversy over the exact value of H0. Notice the bubble in the middle of the (much simplified) graph.


The Particle Data Group offers a best estimate of the Hubble constant of 72 km s⁻¹ Mpc⁻¹, from observations of Type Ia supernovae whose distances are known to better than 5%. It implies that at some point in time the Universe was a point – indirectly implying a Big Bang at r = 0.

Thus 1/H0 approximates the age of the Universe, currently thought to be 13.7 bn years. You might like to fiddle with the units to check this value.
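"Fiddling with the units" looks like this: convert H0 to s⁻¹ and invert.

```python
H0 = 72                    # km s^-1 Mpc^-1, the figure quoted above
KM_PER_MPC = 3.0857e19     # kilometres in a megaparsec
SECONDS_PER_YEAR = 3.156e7

h0_per_second = H0 / KM_PER_MPC                   # the km cancel, leaving s^-1
age_years = 1 / h0_per_second / SECONDS_PER_YEAR  # ~1.36e10, i.e. ~13.6 bn years
```

The result agrees with the quoted 13.7 bn years to within the uncertainty in H0.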


Black Holes – a non-mathematical introduction

A black hole is defined as a region of spacetime from which the gravitational attraction is so strong that nothing, including light, can escape. If spacetime is like a trampoline, a star causes a gravitational dent in the trampoline bed – the heavier it is, the more the bed is distorted. A black hole distorts the bed to a massive degree, so much so that the 'hole' made by the hugely dense body has no bottom – like a tube with no end: a singularity, in other words. General relativity predicts that a sufficiently compact, dense mass will deform spacetime to form a black hole. Around a black hole there is a mathematically defined surface called an event horizon that marks the point of no return – a boundary in spacetime beyond which events cannot affect an outside observer. As water drains down a plughole, there is a radius within which a small floating object on the surface can no longer escape, in the same way that a ball rolled diagonally across the trampoline bed cannot avoid falling into the hole no matter how fast it is going, or light cannot escape from the curved fabric of spacetime. The hole is called 'black' because it absorbs all the light that hits the horizon, reflecting nothing – just like a perfect black body in thermodynamics. Similarly, any object approaching the event horizon appears to an outside observer to slow down (time dilation) and never quite pass through the horizon, its image becoming more and more redshifted as time elapses.

Event horizons emit radiation like a black body with a finite temperature (Hawking radiation). This temperature is inversely proportional to the mass of the black hole.

The Schwarzschild radius (sometimes historically referred to as the gravitational radius) is the radius of a sphere such that, if all the mass of an object is compressed within that sphere, the escape speed from the surface of the sphere would equal the speed of light. An example of an object smaller than its Schwarzschild radius is a black hole.
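As an illustration (the constants and the solar-mass example are standard values, not from the text), the Schwarzschild radius r_s = 2GM/c² and the black-body (Hawking) temperature T = ħc³/(8πGMk_B) can both be evaluated for a one-solar-mass black hole:

```python
import math

# Physical constants (SI units)
G = 6.674e-11     # gravitational constant
c = 2.998e8       # speed of light
hbar = 1.055e-34  # reduced Planck constant
k_B = 1.381e-23   # Boltzmann constant
M = 1.989e30      # one solar mass, kg

# Schwarzschild radius: the radius at which the escape speed equals c
r_s = 2 * G * M / c**2

# Hawking temperature: inversely proportional to the black hole's mass
T = hbar * c**3 / (8 * math.pi * G * M * k_B)

print(f"r_s ≈ {r_s / 1000:.1f} km")  # about 3 km for a solar-mass black hole
print(f"T ≈ {T:.1e} K")              # about 6e-8 K: far colder than the CMB
```

The tiny temperature shows why Hawking radiation from stellar-mass black holes is unobservable in practice.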

Black holes are expected to form when very massive stars (> 40 solar masses) collapse at the end of their life cycle. After a black hole has formed, it can continue to grow by absorbing or accreting mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. Supermassive black holes are thought to exist in the centres of most galaxies, since stars near galactic centres are observed to orbit an invisible but extremely massive compact object.

[Image: the X-ray binary containing a black hole in the dwarf galaxy IC 10]

This black hole is located in the nearby dwarf galaxy IC 10, 1.8 million ly from Earth in the constellation Cassiopeia. We can measure the black hole’s mass because it has an orbiting companion: a hot, highly evolved star. The star is ejecting gas in the form of a wind. Some of this material spirals toward the black hole, heats up, and emits X-rays before crossing the point of no return.

Neutron Stars

[Image: a neutron star]

Neutron stars are the remnants of stars that have reached the end of their evolutionary journey. They began as stars roughly eight to twenty times the mass of the sun before exploding in catastrophic supernovae. After such an explosion blows a star’s outer layers into space, the core remains, no longer producing nuclear fusion. With no outward pressure from fusion to counterbalance gravity’s inward pull, the star condenses and collapses in upon itself.

Despite their small diameters – about 20 km – neutron stars are usually about 1.5 times more massive than the sun, and are thus incredibly dense. Think of a sugar cube 2 cm on a side having a weight of around 10 TN on Earth. The energy of electrons increases upon compression, so pressure must be exerted on the electron gas to compress it – electron degeneracy pressure. With sufficient compression, electrons are forced into nuclei in the process of electron capture, relieving the pressure. In brief, protons capture electrons, forming neutrons – the process that gives such stars their name. The composition of their cores is unknown, but they may consist of a neutron superfluid or some unknown state of matter.
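The sugar-cube figure can be sanity-checked with an order-of-magnitude calculation, using the mass and diameter quoted above (the result depends strongly on the assumed radius, so treat it as a rough estimate):

```python
import math

M = 1.5 * 1.989e30  # neutron star mass, kg (1.5 solar masses, as above)
R = 10e3            # radius, m (20 km diameter, as above)
g = 9.81            # gravitational field strength at Earth's surface, N/kg

volume = (4.0 / 3.0) * math.pi * R**3
density = M / volume                 # roughly 7e17 kg/m^3

cube_mass = density * 0.02**3        # mass of a 2 cm cube of this material, kg
weight_on_earth = cube_mass * g      # its weight on Earth, N

print(f"density ≈ {density:.1e} kg/m^3")
print(f"cube weight ≈ {weight_on_earth:.1e} N")  # of order 1e13 N, i.e. tens of TN
```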

When they are formed, neutron stars rotate in space. As they compress and shrink, the spinning speeds up because of the conservation of angular momentum – the same principle that causes a spinning skater to speed up when she pulls in her arms. These stars gradually slow down, but those still spinning rapidly may emit radiation that, from Earth, appears to blink on and off as the star spins, like the beam of light from a turning lighthouse. This “pulsing” appearance gives some neutron stars the name pulsars.
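The skater analogy can be made quantitative with a toy model (the radii and initial period below are illustrative assumptions, not values from the text): treating the core as a uniform sphere, angular momentum Iω is conserved with I ∝ MR², so at fixed mass the rotation period scales as R².

```python
R1 = 1.0e7   # radius of the collapsing core, m (illustrative: ~10,000 km)
R2 = 1.0e4   # radius of the final neutron star, m (~10 km)
P1 = 1000.0  # initial rotation period, s (illustrative)

# I1*omega1 = I2*omega2 with I proportional to M*R^2, so P2 = P1*(R2/R1)^2
P2 = P1 * (R2 / R1) ** 2
print(f"final period ≈ {P2 * 1000:.1f} ms")  # 1.0 ms
```

A millionfold shrinkage in radius turns a leisurely quarter-hour rotation into a millisecond spin, which is why young pulsars rotate so fast.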

After spinning for several million years, pulsars are drained of their rotational energy – largely through the electromagnetic radiation they emit – and become ordinary neutron stars. Only a small fraction of neutron stars are detectable as pulsars: about 1,000 pulsars are known, though there may be hundreds of millions of old neutron stars in the galaxy.

The huge pressures that exist at the core of neutron stars may be like those that existed at the time of the Big Bang.

The Birth and Lifetime of Stars (2)

Imagine the protostellar material becoming denser and heating up, hence moving from right to left on the HR diagram (temperature increases leftwards), eventually finding a place on the main sequence. Where a star ends up depends on its original mass – and so does what happens after its lifetime on the main sequence.

Reminder – the proton-proton chain is the nucleosynthesis process most commonly found in stars. You might like to transpose this into a series of nuclear equations…

  1. Two mass-1 isotopes of H undergo a simultaneous fusion and β+ decay to produce a positron (immediately annihilated with an electron to give two gamma rays and 1.02 MeV), a neutrino, and a mass-2 isotope of H (deuterium), releasing 0.42 MeV
  2. The deuterium fuses with another mass-1 isotope of H to produce He-3 and a gamma ray, releasing 5.49 MeV
  3. Two He-3 nuclei produced in separate implementations of steps (1) and (2) most commonly fuse to form a He-4 nucleus plus two protons, releasing a further 12.86 MeV, though there are other, temperature-dependent pathways.
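Taking up the invitation above, the three steps transcribe into nuclear equations as follows (a standard transcription; the final step’s energy release is the usual tabulated value of 12.86 MeV):

```latex
\begin{align*}
{}^{1}_{1}\mathrm{H} + {}^{1}_{1}\mathrm{H} &\rightarrow {}^{2}_{1}\mathrm{H} + e^{+} + \nu_e + 0.42\ \mathrm{MeV}\\
e^{+} + e^{-} &\rightarrow 2\gamma + 1.02\ \mathrm{MeV}\\
{}^{2}_{1}\mathrm{H} + {}^{1}_{1}\mathrm{H} &\rightarrow {}^{3}_{2}\mathrm{He} + \gamma + 5.49\ \mathrm{MeV}\\
{}^{3}_{2}\mathrm{He} + {}^{3}_{2}\mathrm{He} &\rightarrow {}^{4}_{2}\mathrm{He} + 2\,{}^{1}_{1}\mathrm{H} + 12.86\ \mathrm{MeV}
\end{align*}
```

Note that charge, nucleon number and lepton number balance in every line – a useful exam check.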

Eventually, the main sequence star’s main store of H is used up. Without radiation pressure, it contracts under gravity. This contraction releases GPE and heats the core; the remaining H at the periphery fuses, and the outer layers expand and cool. The star increases in size, becoming a Red Giant – huge and cool. What happens next depends on initial mass. Check this diagram out – it’s important for exam purposes.

[Diagram: post-main-sequence evolutionary paths, by initial stellar mass]

An HR diagram is below; recall that temperature increases from right to left. Notice how our sun, for example, when it runs out of H, will eventually loop off the main sequence, heading north towards the giants section as the radiation pressure from He fusion pushes the remaining H outwards. The resulting red giant will expand to engulf at least the orbit of Mars, and probably further, thereafter looping back down below the main sequence and ending up as a white dwarf.

Neutron stars and black holes aren’t shown – this is why.

  •  Neutron stars are the collapsed cores of supergiants that have exploded as supernovae. They are about 6-20 km across, with densities ranging from millions to hundreds of millions of tonnes per cubic centimetre. With temperatures of the order of 1,000,000 K, they would fall far off to the left of the diagram.
  • Superdense black holes, which may be created in the supernovae of the most massive stars, emit no light of their own and cannot be seen. Their surroundings may become visible if they accrete mass from a binary companion, but they still cannot be placed on an HR diagram.

[Diagram: Hertzsprung–Russell diagram]

When somebody puts on too much weight there is an increased risk of heart attack; when a white dwarf star puts on too much weight (i.e. adds mass), there is the mother of all fatal heart attacks: a supernova explosion. The greatest mass a white dwarf star can have before it goes supernova is called the Chandrasekhar limit – about 1.4 solar masses.

Read more:

“Where do Stars Go When They Die?” is a podcast which might answer a few questions







The Birth and Lifetime of Stars (1)

There’s about 1 H atom in every cubic centimetre of space. You could work out the density.
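The suggested exercise is a one-liner (the hydrogen-atom mass is a standard value, not from the text):

```python
m_H = 1.67e-27  # mass of a hydrogen atom, kg
V = 1e-6        # one cubic centimetre, expressed in m^3

density = m_H / V
print(f"density ≈ {density:.2e} kg/m^3")  # ≈ 1.67e-21 kg/m^3
```

A spectacularly good vacuum by laboratory standards, yet enough, over light-years, to build stars.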

If the gravitational energy of a mass of gas is greater than the average kinetic energy of random thermal motion of its constituent (mostly hydrogen) molecules, the mass will tend to collapse in on itself. As it does so, the same thermal energy occupies a smaller volume and the gas tends to heat up.


This is the Jeans Criterion: collapse occurs when the magnitude of the gravitational potential energy exceeds the thermal kinetic energy. Per molecule of mass m, in a cloud of total mass M and radius R,

GMm/R > (3/2)kT



If the energy emitted is enough for the temperature to rise so that the object glows, it is called a protostar. Equating the two sides, and using the density above and 100 K as the temperature, a simple substitution shows that a mass of gas equivalent to about 1500 solar masses would be required. This could obviously fuel several stars.
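A rough numerical check, using the simple per-molecule form of the criterion GMm/R = (3/2)kT with M = (4/3)πR³ρ; the prefactor depends on exactly which form of the criterion is used (and on whether atomic or molecular hydrogen is assumed), so only the order of magnitude should be trusted:

```python
import math

# Constants (SI units)
G = 6.674e-11     # gravitational constant
k = 1.381e-23     # Boltzmann constant
m = 3.34e-27      # mass of an H2 molecule, kg (assuming molecular hydrogen)
M_sun = 1.989e30  # solar mass, kg

T = 100.0         # cloud temperature, K (as in the text)
rho = 1.67e-21    # ~1 H atom per cubic centimetre, in kg/m^3

# From G*M*m/R = (3/2)*k*T, eliminating R via M = (4/3)*pi*R^3*rho:
# M = (3kT / 2Gm)^(3/2) * sqrt(3 / (4*pi*rho))
M_jeans = (3 * k * T / (2 * G * m)) ** 1.5 * math.sqrt(3 / (4 * math.pi * rho))

print(f"Jeans mass ≈ {M_jeans / M_sun:.0f} solar masses")  # of order 10^3-10^4
```

The answer comes out a few thousand solar masses – the same order of magnitude as the 1500 quoted above, with the difference absorbed by the choice of prefactor.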

Fusion begins at about 5-10 million kelvin; gravitational collapse is then balanced by outward radiation pressure, the object attains a stable size, and it releases large amounts of electromagnetic energy. The star glows like a black body with a characteristically stable temperature profile for billions of years.

When on the main sequence, the relationship between mass and luminosity is

L ∝ M^α

where the power alpha is a number between three and four, dependent on star type.

We can use this to estimate the lifetime of the star on the Main Sequence
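Sketching the estimate (a standard argument, not spelled out in the text): the fuel available scales with the mass M, and the rate at which it is burned scales with the luminosity L, so

```latex
t_{\mathrm{MS}} \propto \frac{M}{L} \propto \frac{M}{M^{\alpha}} = M^{1-\alpha},
\qquad
\frac{t}{t_{\odot}} = \left(\frac{M}{M_{\odot}}\right)^{1-\alpha}
```

With α ≈ 3.5, for example, a 10-solar-mass star lives roughly 10^-2.5, or about 1/300, of the Sun’s main-sequence lifetime – massive stars burn bright and die young.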


Universe – Open, Closed or Neither…


If the density of the Universe is greater than critical density, it’s closed. Think of a sphere which’ll stop expanding, then collapse.

If the density of the Universe is less than critical density, it’s open, continuing to expand forever at a slowing rate.

If the density of the Universe is equal to the critical density, it’s flat: it will keep expanding, but at a slower and slower rate that tends to zero. This looks to be the best option so far, since dark energy – which appears to exert a repulsive force larger than gravity and is roughly three times more abundant than matter – is pushing the Universe apart. Even the matter component is probably mostly dark (brown dwarfs and so on, other stuff we can’t for some reason see).

What is the critical density? We can calculate it. Think of an expanding spherical dust cloud. An object of mass m moves away from the centre of the cloud, of mass M, with a speed v that satisfies Hubble’s Law, v = Hr, where H is Hubble’s Constant, currently thought to be 72 km/s/Mpc.

Adding the KE and PE, we get a total energy:

E = ½mv² − GMm/r

Replacing v with Hr and M with density × volume, i.e. M = (4/3)πr³ρ:

E = ½m(Hr)² − (4/3)πGρr²m = ½mr²(H² − (8π/3)Gρ)
If the energy is positive (the kinetic energy term larger), the mass will continue to move away forever. The mass will just stop moving at infinity if the two energy terms add up to zero, and will fall back if the energy is negative. Setting the total energy to zero thus defines the critical density:

ρc = 3H²/(8πG)
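Plugging the numbers into ρc = 3H²/(8πG) is a short exercise (the Mpc-to-metre conversion and the hydrogen mass are standard values, not from the text):

```python
import math

G = 6.674e-11        # gravitational constant, SI
H = 72e3 / 3.086e22  # 72 km/s/Mpc converted to s^-1
m_H = 1.67e-27       # mass of a hydrogen atom, kg

rho_c = 3 * H**2 / (8 * math.pi * G)
print(f"critical density ≈ {rho_c:.1e} kg/m^3")       # ≈ 9.7e-27 kg/m^3
print(f"≈ {rho_c / m_H:.1f} hydrogen atoms per m^3")  # about 6 per cubic metre
```

A handful of hydrogen atoms per cubic metre is all it takes to close the Universe – which shows how empty space really is.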




On a graph, then, even though the measured density is close to the critical density, the size of the Universe does not look as if it will flatten out as time goes on; instead, it grows more rapidly than before, because the expansion rate is accelerating.

[Graph: size of the Universe against time]