## Resolvance of a Diffraction Grating

Illuminating a diffraction grating with monochromatic light from a He/Ne laser shows the typical pattern, photographed here out to m = 3 on both sides. The spots are equally spaced, and we notice that the m = 2 spot is hidden under the first single-slit diffraction minimum – a “missing order”.

The geometry is identical to that for a double slit, d being the distance between the centre of one slit and the next. For a bright maximum: d sin θ = mλ.

Unlike two-slit interference, only at very particular angles do the contributions from each slit add constructively. Everywhere else, the contribution from one slit has a partner somewhere else down the grating which cancels its contribution out, hence the very bright spots and a lot of empty space.

You are strongly encouraged to go to the Wolfram Demonstrations Project, download the CDF player and experiment with this demonstration. One, two or many slits – the choice is yours. With 15 slits the pattern is almost indistinguishable from a diffraction grating – screenshot below – and the single-slit diffraction envelope is clearly shown. Light intensity (y-axis) is proportional to amplitude squared.

A flame test for sodium displays a very bright yellow emission. This emission is due to the sodium D-lines – two lines very close together.

The diagram shows the absorption spectrum of the Sun as recorded by Fraunhofer, who labelled the lines. The sodium doublet is seen at wavelengths of about 589.0 nm and 589.6 nm.

How could these be resolved using a diffraction grating? We recall that a diffraction grating gives sharp, clear orders.

More accurately, the D lines have wavelengths λ₁ = 589.592 nm and λ₂ = 588.995 nm.

We can find the resolvance or the resolving power required for the doublet to be resolved.

For N illuminated lines of the diffraction grating, we can write (without derivation) for the mth order: R = λ/Δλ = mN.

So, in this case, for a required resolvance of about 1000, viewing the second order would need N=500 grating lines to be illuminated – even the coarsest of gratings manages this easily – a grating with 1800 lines per mm is quite common, if rather expensive. The larger N the better the resolution. If third, fourth or greater orders are visible, a coarser hence cheaper grating will do.
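The arithmetic can be checked with a few lines of Python – a minimal sketch using only the two D-line wavelengths quoted above:

```python
# Required resolving power R = lambda / delta_lambda, and R = m * N,
# so N = R / m grating lines must be illuminated in order m.
lam1 = 589.592e-9            # sodium D-line wavelengths in metres
lam2 = 588.995e-9
lam_mean = (lam1 + lam2) / 2
R = lam_mean / (lam1 - lam2)   # about 987 -- the "about 1000" in the text
N_second_order = R / 2         # about 494 lines for m = 2
print(round(R), round(N_second_order))
```

Viewing a higher order m reduces the number of lines N needed in proportion, which is why a coarser grating will do if third or fourth orders are visible.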

## Newton’s Laws of Motion

FIRST: “A body continues in a state of rest or motion at constant speed in a straight line unless acted upon by an unbalanced external force.”

SECOND: “The applied force is equal to the rate of change of momentum of the body.” A rather more modern interpretation is here. If cliff-diving appeals to you, watch this video. As long as you don’t scare easily…

The conveyor belt problem: to keep a conveyor belt moving at a steady speed – for example in a coal mine, where mass is being added to it all the time – a force must be applied to the belt.
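Since F = d(mv)/dt and v is constant, the force is F = v dm/dt. A minimal sketch with an assumed belt speed and loading rate (not from the text):

```python
# Keeping a belt at constant speed v while mass lands on it at rate dm/dt:
# F = d(mv)/dt = v * dm/dt, since v is constant and only m changes.
v = 2.0          # belt speed in m/s (assumed value)
dm_dt = 50.0     # coal landing on the belt in kg/s (assumed value)
F = v * dm_dt    # force the motor must supply
P = F * v        # power delivered by the motor
print(F, P)
```

Note the motor supplies power Fv = v² dm/dt, while the coal gains kinetic energy at only half that rate (½ v² dm/dt); the other half is dissipated as the coal slides up to belt speed.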

THIRD: “For every action, there is an equal and opposite reaction”

Think about a jet propulsion system. Thrust is a mechanical force which is generated through the reaction of accelerating a mass of gas, as explained by Newton 3. A gas or working fluid is accelerated to the rear and the engine and aircraft are accelerated in the opposite direction.

The force on the working fluid is equal and opposite to the force on the engine and aircraft.

Look here for a very easy walkthrough of all of Newton’s Laws. This is a particularly good treatment so you can work through the videos yourselves.

## Doppler Effect

Imagine a Formula 1 car approaching the stands at 60m/s. The frequency of sound made by the engine as heard by a stationary observer in the stands is higher than the actual frequency as heard by the driver. The sound is squashed up – or better, the apparent wavelength is decreased and the apparent frequency increased.

As the car recedes from the stands, exactly the reverse happens: the apparent frequency drops, and the observer waves goodbye to the red line. Think of EEEEYOWWWW as the car approaches then recedes.

For a stationary observer and a moving source, we can write: f′ = fv/(v − uₛ) as the source approaches and f′ = fv/(v + uₛ) as it recedes.

These will mostly do – but the IB requires the moving-observer forms f′ = f(v ± uₒ)/v as well. A quick calculation shows how the first equation works. Let the car be moving towards us in the stands at a speed uₛ of 60 m/s, emitting a frequency f of 800 Hz.

Speed of sound in air is 340 m/s. We can find the frequency f′ as heard by the stationary observer. Common sense tells us whether we add or subtract the velocities – in this case we subtract, and hear a higher frequency as the car approaches us (the EEEE bit).

As it recedes, we add, thus: (the YOWWW bit)
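The two cases can be checked directly – all values here are from the worked example above:

```python
# Stationary observer, moving source: f' = f * v / (v -/+ u_s)
# minus sign while approaching (higher pitch), plus while receding.
v = 340.0    # speed of sound in air, m/s
u_s = 60.0   # speed of the F1 car, m/s
f = 800.0    # engine frequency, Hz
f_approach = f * v / (v - u_s)   # the EEEE bit
f_recede = f * v / (v + u_s)     # the YOWWW bit
print(round(f_approach), round(f_recede))
```

The observer hears about 971 Hz on approach and 680 Hz on recession – a noticeable drop of nearly a third.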

Police speed detectors bounce microwave radiation (about 10GHz) off a moving vehicle and detect the reflected waves. Because the car is moving towards the police observer, these waves are shifted in frequency by the Doppler effect and the difference in frequency between the transmitted and reflected waves provides a measure of the vehicle’s speed. Of course it works just as well for recession speeds as well.

There are two Doppler shifts because of the reflection from a moving target, giving Δf ≈ 2uf/c, where c is of course the speed of light.
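A sketch of the radar calculation, with an assumed car speed (the 10 GHz figure is from the text):

```python
# Reflection off a moving car gives two Doppler shifts, so the beat
# between transmitted and reflected waves is delta_f = 2 * u * f / c
# (valid for u << c).
c = 3.0e8      # speed of light, m/s
f = 10e9       # radar frequency, 10 GHz
u = 30.0       # car speed in m/s (assumed, about 108 km/h)
delta_f = 2 * u * f / c
print(delta_f)
```

A 2 kHz beat frequency is easily measured electronically, which is why this method is so practical for speed detection.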

By observing distant galaxies, Edwin Hubble concluded that distance and recession speed were proportional – so galaxies further away are receding faster than closer galaxies. We know this because the atomic fingerprint or spectrum of atomic hydrogen or helium is shifted to the red (long wavelength) end of the visible spectrum. The degree of redshift can be used to find out how far away a galaxy is.

This absorption spectrum shot (idealised) shows what the spectrum of atomic hydrogen might look like from several distant objects such as galaxies. The further away, the greater the redshift. Recession speeds of up to 0.95c have been inferred – the light having taken almost the lifetime of the Universe to reach us.

Finally, a medical use. Doppler blood flow is a technique whereby ultrasound waves (f typically a few MHz) emitted from a piezoelectric transducer (transmitter/receiver) are reflected off red blood cells in an artery or vein as they move towards the stationary detector. The more occluded or blocked the artery (think about a fluid in a pipe), the faster the cells are moving. It can also be used to find blood clots in deep veins – DVT, deep vein thrombosis – which can be fatal.

The detector and the moving cells are at an angle hence the cosine term and, like the police car, the factor 2 accounts for the reflection from a moving source.

## IB: Engineering Science Option B: Torque, Angular Momentum and Moment of Inertia (amended)

THIS POST  WAS ORIGINALLY WRITTEN FOR MY OWN IB CLASS. THERE ARE HANDOUTS AND PROBLEMS HERE THAT WE DID IN CLASS BUT NEWCOMERS SHOULD FIND THEM HELPFUL. GO AHEAD AND TRY.

It’s useful to bear in mind that if you can do SUVAT problems, you should have no trouble with their circular equivalents.

There are 4 handouts in total to download; please make sure you work through them carefully. Any difficulty, get in touch.

The main arguments here are the idea of rotational motion, and torque as force × distance from pivot × sin(angle between them).

Moments of Inertia need not be calculated for this course – if necessary, you’ll be given them. However, here’s a little problem to think about. You have 2 balls of identical diameter and weight. One is solid, one is hollow. You can’t tell which is which just by knocking on them. Devise a simple way of finding out which one is the solid one (hint: think about the balls rolling down an inclined plane from the same height. Now, compare the moments of inertia of the two balls. The rest is conservation of energy so quite easy.) If you can’t write out the solution, message me for help.
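A numerical sketch of the hinted solution, assuming a drop height; the standard results I = (2/5)mr² for a solid sphere and (2/3)mr² for a hollow one are used:

```python
from math import sqrt

# Rolling from rest down a height h without slipping:
# m*g*h = (1/2)*m*v**2 + (1/2)*I*w**2 with v = w*r,
# so v = sqrt(2*g*h / (1 + k)) where I = k*m*r**2.
g, h = 9.81, 0.5     # drop height in metres (assumed value)

def speed_at_bottom(k):
    return sqrt(2 * g * h / (1 + k))

v_solid = speed_at_bottom(2/5)    # solid sphere, k = 2/5
v_hollow = speed_at_bottom(2/3)   # hollow sphere, k = 2/3
print(v_solid > v_hollow)
```

The solid ball, with the smaller moment of inertia, stores less of the available energy in rotation, so it reaches the bottom faster – race them and the winner is the solid one.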

This little animation is quite fun to glance at – the angular displacement is, however, in degrees, not radians, so be careful.  Here’s a screenshot, showing displacement – time graphs for the ant and the ladybug – the constant time period implies constant angular velocity. Notice too that the ladybug leads the ant by 90 degrees

Notice, angular velocity is constant, but the linear speeds of the ant and the ladybug are not the same. The ladybug, being closer to the axis of rotation, has a smaller linear speed, because v = ωr. (Notice the vertical displacements of the bugs execute SHM – AHLs will study this later.)

1. A review of circular-motion-1
2. Key Ideas, torque and couple
4. From the Specimen Paper specimen-question-for-option-b

Moment of inertia I is defined as the ratio of the angular momentum L of a system to its angular velocity ω around a principal axis, I = L/ω – just as inertial mass is the ratio of linear momentum to speed: its resistance to acceleration, in other words. Angular momentum in a closed system is, just like its linear counterpart, conserved. The ice skater rotates faster when the arms drop to the sides because the moment of inertia is reduced and thus the angular velocity increases.
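The skater argument in numbers – the moments of inertia below are assumed illustrative values:

```python
# Conservation of angular momentum: L = I * w is fixed while I changes.
I_arms_out = 4.0    # kg m^2 with arms out (assumed value)
I_arms_in = 1.5     # kg m^2 with arms tucked in (assumed value)
w1 = 2.0            # initial angular velocity, rad/s
L = I_arms_out * w1
w2 = L / I_arms_in  # new angular velocity once the arms drop
print(w2)
```

Halving the moment of inertia roughly doubles the spin rate, with no external torque required.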

Look at the Wolfram demonstration

You will need to download the Wolfram CDF player in order to run the demo.

We might also notice that, for a body starting to rotate from rest:

Practically, we find I by imagining a flat sheet of any shape like this having an infinite number of mass elements m at their respective distances r from the pivot, each contributing a torque about the axis of rotation.

We have to add all the torques up, normally requiring integration. But, to keep it simple, we can write: I = Σmr²

which is, as the handout shows, the basis for finding I for lots of other shapes and axes of rotation. Remember, you’ll be given I for a particular shape as required.

Bear in mind that when we do problems, the total energy of the system is the sum of the rotational and linear parts – important when we think about an object rolling (instead of sliding) down a hill, for example. Take a look at this solid-cylinder-rolling-down-an-inclined-plane, which runs through a few basic ideas plus some possible lab work.

Finally, for now, a use for all that stored energy.

The great flywheel on Richard Trevithick’s 1802 locomotive, used to level out the power supplied by a single cylinder. Rotational inertia kept the wheel turning.

## Charge Coupled Devices – Camera Basics

A CCD contains an array of capacitors, each of which photoelectrically converts incoming light photons into electrons, producing a voltage across the capacitor whose magnitude depends on how many electrons were released by the incident illumination. One pixel is one of these capacitors. These voltages are processed as an array to yield a digital image. The solid-state capacitors are sensitive across the spectrum of visible frequencies, and electrons are emitted irrespective of the intensity of the incident light.

If a 9 Mpx camera has a square CCD measuring 3 cm × 3 cm, each side has 3000 px, so each pixel is a square of side 0.01 mm. If two objects are to be resolved by the array, their images must be formed on the CCD at least two pixels apart, achieved using the optics of the camera.

Quantum efficiency of a pixel is the ratio of the number of emitted electrons to the number of incident photons of a particular frequency. Higher efficiency means that for a particular intensity of illumination more electrons are emitted, so the image is clearer and brighter: low-light images are clearer and the signal-to-noise ratio is increased.

Ask yourself a question. If this camera has a magnification of 0.01 and is to capture an image of two sticks 5mm apart, will they be resolved, or not?

Answer – the image separation on the CCD is 0.05 mm, more than two pixels (0.02 mm) apart. So, the sticks will be resolved.
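The check in code, using only the figures from the question:

```python
# Will two sticks 5 mm apart be resolved by the 9 Mpx, 3 cm x 3 cm CCD?
object_sep = 5e-3                        # stick separation in metres
magnification = 0.01
image_sep = object_sep * magnification   # image separation on the CCD
pixel = 30e-3 / 3000                     # pixel side: 0.01 mm
resolved = image_sep >= 2 * pixel        # need at least two pixels apart
print(image_sep / pixel, resolved)
```

The images land five pixels apart, comfortably more than the two-pixel criterion.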

## How big is the earth?

Just for completeness, here’s a fun little exercise on finding the circumference using data from the space station.

Now we know how big it is, we can use Newton’s Law of Gravitation to find its mass. Imagine a 1 kg mass anywhere on the surface, a distance r (the radius, found from the circumference) from the centre. G and g are both known, so from g = GM/r² the mass drops out nicely. Having found the mass, we can use density = mass/volume to find the average density of the material of the Earth.
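The calculation in full – the radius below is the standard value one gets from the measured circumference:

```python
from math import pi

# g = G*M / r**2 at the surface, so M = g * r**2 / G,
# and average density = M / (4/3 * pi * r**3).
G = 6.674e-11    # gravitational constant, N m^2 kg^-2
g = 9.81         # gravitational field strength, N/kg
r = 6.37e6       # Earth radius in metres (circumference / 2*pi)
M = g * r**2 / G
rho = M / (4/3 * pi * r**3)
print(f"{M:.2e} kg, {rho:.0f} kg/m^3")
```

The result, roughly 6 × 10²⁴ kg and an average density of about 5500 kg/m³, is well above the density of surface rock (~2700 kg/m³) – early evidence that the Earth has a dense core.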

## Isotopes for Therapy

This is an entire specialism, requiring years of training, so a brief overview follows. Diagnostic work requires low-energy, short-lived isotopes. When the objective is therapy – i.e. to irretrievably damage cells, such as the rapidly growing cells found in tumours – high-energy beams are used: high-energy beta, X or gamma, usually with energies in excess of 1 MeV, whose energy/penetration characteristics are well understood. The objective is to irradiate as much of the affected area as possible without damaging healthy tissue. Rapidly growing tissue is much more radiosensitive (more easily damaged) than normal tissue, which is why diagnostic X-rays are prohibited for pregnant women: the rapidly growing cells in the foetus are highly sensitive.

A high dose of I-131 can be given, which accumulates in the thyroid and destroys the cancerous cells at the expense of a very high dose to healthy thyroid tissue. Cs-137 needles can be implanted into small plastic eggs and inserted for a predetermined period at the head of the cervix to knock out cervical cancer cells. Co-60 is artificially produced by slow neutron capture in Co-59 in a reactor and has a half life of 5.3 years – a long half life is preferred so we don’t have to replace the source very often. It is a beta emitter, decaying to Ni-60, and its gamma rays can be used in preference to X-rays. The beam is focused, or better, collimated onto a particular area of interest with lead collimators – often to treat areas just under the skin, and often from several different angles, to maximise dose on the area of interest while minimising it elsewhere. The excited nickel nucleus emits two gamma rays with energies of 1.17 and 1.33 MeV. A Co-60 source with an activity of 2.8 GBq, which is equivalent to 60 micrograms of pure Co-60, generates a dose of 1 mSv at one metre distance within one hour – a very significant dose for therapy.
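The quoted activity-to-mass equivalence can be roughly checked from A = λN – a sketch using standard constants:

```python
from math import log

# Activity of 60 micrograms of pure Co-60: A = lambda * N,
# with lambda = ln(2) / half-life.
N_A = 6.022e23               # Avogadro's number, mol^-1
m = 60e-6                    # mass in grams
molar_mass = 60.0            # g/mol for Co-60
half_life = 5.27 * 3.156e7   # 5.27 years in seconds
N = m / molar_mass * N_A     # number of Co-60 nuclei
A = log(2) / half_life * N   # activity in becquerels
print(f"{A:.2e}")
```

This gives about 2.5 GBq, in reasonable agreement with the quoted 2.8 GBq.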

As with all treatment machines, this modern X-ray treatment machine is clearly rotatable. Calculating dosage is highly complex and in this example, the machine orientation and exposure times will be calculated with an onboard computer.

Radiotherapy is frequently used with chemotherapy to provide a cocktail of mechanisms to destroy cancers. It should be pointed out that cells are not “killed”. Their functions are impaired because their proteins are damaged by ionisation, hence they fail to fulfill their metabolic functions. Enzymes are misshapen hence cannot bind to substrates properly, nucleic acids can no longer accurately replicate and so on.

## Biological Effects of Radiation. Dose and Dose Equivalent.

Here are the rules when handling IR (ionising radiation):

• keep exposure short

• get as far away as you can

• get behind something dense

If exposed to ionising radiation, macromolecules which rely on precise conformations are damaged, change shape and don’t work. DNA and other nucleic acids can’t repair themselves and replicate nonsense proteins. Irradiating water produces highly reactive free radicals which have biological implications since water is a universal solvent.

Absorbed dose D is defined as the energy absorbed per unit mass of tissue or absorber, so D = E/m for the irradiated material, in J/kg,

where 1 J/kg = 1 GRAY (Gy); the old unit is the rad, with 100 rad = 1 Gy.

Damage H is expressed in “dose equivalents”, since the ionising damage depends on radiation type: H = QD, where Q is a quality factor, a dimensionless integer – 1 for betas and gammas, 20 for alpha particles.

H is measured in SIEVERT (Sv) – also J/kg,  where 1Sv = 100 rem – the old unit.
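A quick sketch of H = QD, with an assumed absorbed dose:

```python
# Dose equivalent H = Q * D: the same absorbed dose is far more
# damaging as alpha radiation than as beta or gamma.
D = 2e-3                 # absorbed dose in gray (assumed value)
H_beta = 1 * D           # Q = 1 for betas and gammas
H_alpha = 20 * D         # Q = 20 for alpha particles
print(H_beta, H_alpha)
```

The same 2 mGy gives 2 mSv as gamma but 40 mSv as alpha – the biological damage, not just the energy deposited, is what the sievert measures.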

A dose of 0.01mSv is received from

• an average year of TV watching
• an airline flight from New York to San Francisco
• a year living next door to a normally operating nuclear power plant

Maximum permissible dose is a dose level applied to workers in the radiation industries, including hospitals, and is about 50mSv per year from a variety of background sources, chest X-rays and so on. Workers are regularly monitored in hazardous environments.

The loss in life expectancy from a 0.01mSv dose is about 1.2 minutes, equivalent to crossing the street three times or three puffs on a cigarette. Eating a banana contributes 1 uSv (microsievert)

A dose of 5Sv is huge – causing massive tissue breakdown, consequent internal bleeding, and death within about six weeks. Some Hiroshima victims may have received as much as 100Sv. It was said that if you survived the first three weeks, you might just pull through.

Look at Q1 and 2 on P 714

Sometimes an alternative unit, “exposure”, E, can be used. If you are sheltering behind something dense, your dose is less than if you were “exposed”. Exposure is measured as the charge liberated per unit mass of air, in C/kg. Put simply, we can calculate that D = 34E in J/kg (one ion pair requires 34 eV to be produced in air).

We can say without explanation that

where f is a quality factor – a dimensionless integer dependent either on photon energy or the material receiving the dose, and sometimes both.

## Isotopes for Diagnosis

This subject is vast and growing, so we’ll just take a very quick look.

When introducing ionising radiation into the body there are a few rules of thumb. These are as follows:

• metabolically indistinguishable, thus metabolised as if stable

• short-lived, both physically and biologically

• non-toxic, especially if a non-metabolite.

For imaging, gamma emitters of medium energy (hence range) are preferred, so that the radiation leaves the body and can be detected externally.

There are very many techniques routinely used and new developments happen frequently. Here’s a few examples. The ones in red are medical applications.

Two are popularly used – the following is expanded from the above link:

Technetium-99m (m = metastable): the most widely used radioactive isotope for diagnostic studies in nuclear medicine. Different chemical forms are used for brain, bone, liver, spleen and kidney imaging, and also for blood flow studies. Eluted (milked) as sodium pertechnetate from a molybdenum “cow”, it has a convenient half life of 6 h and a useful energy of 140 keV. An injected dose of the radiopharmaceutical can then be followed around the body (dynamic renal studies), taken up into an organ which is imaged externally with a gamma camera, or, tagged to serum albumin, used to study blood protein metabolism by repeated sampling and measurement of radiation content in a scintillation counter.

The reduced uptake – cold spots – in this Tc 99m bone scan are clearly visible.

The later  image is on the left – notice the fuller bladder.

Iodine – 131: half life just over 8 days – a bit long for today’s clinical studies – and an energy of 360keV is used to diagnose and treat thyroid disorders. (Former President George Bush and Mrs. Bush were both successfully treated for Graves’ disease, a characteristic enlargement of the thyroid, with radioactive iodine.)

A small quantity of the isotope is given orally as dilute sodium iodide. The thyroid takes up the iodine exclusively, using it to synthesise two hormones which regulate metabolism in the body. A scintillation detector is used to measure the count rate over the gland after specific time intervals, from which important diagnostic information is obtainable. A deficiency or overactivity in production causes characteristic symptoms which require either surgical or biochemical intervention.

In much larger doses, iodine-131 is an effective agent for treatment – its ionising effects are restricted to the surrounding thyroid tissue and render cancer cells ineffective.

Work through  Q6, p710

In conclusion – questions needn’t necessarily be restricted to these two. For example, Xe-133 is completely inert and has a very short half life. The patient takes a deep breath infused with a small quantity of the isotope, stands in front of the gamma camera for a few seconds and a lung ventilation image is created. Here’s an image of two healthy lungs with no cold spots.

Effective Half Life, T

Assuming both decay and elimination from the body are exponential processes, we can write: 1/T_E = 1/T_R + 1/T_B, where T_R is the radiological (physical) half life and T_B the biological half life.

So, if iodine 131 has a radiological half life of 8 days and a biological half life of 24 days, the effective half life is 6d.
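The reciprocal-sum rule in code, using the iodine-131 figures from the text:

```python
# Effective half life: 1/T_eff = 1/T_physical + 1/T_biological,
# assuming both decay and biological elimination are exponential.
T_phys = 8.0    # days, I-131 radiological half life
T_biol = 24.0   # days, biological half life
T_eff = 1 / (1 / T_phys + 1 / T_biol)
print(T_eff)    # 6 days
```

Notice the effective half life is always shorter than either of the two – the body is losing activity by both routes at once.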

## Ultrasonic Techniques in Medicine

Exam questions about this are sometimes comparative. X rays are more invasive since they cause ionization. Ultrasound (US) is a non-invasive procedure causing minimal local heating which relies on the fact that sound waves well above the range of human hearing are reflected at a boundary, such as that between bone and air, for example. The greater the density difference between the two media, the higher the percentage reflected back and vice-versa.

The speed with which an acoustic wave moves through a medium depends upon the density and elastic resistance of the medium. Media that are dense will transmit a mechanical wave with greater speed than those that are less dense. As an example, the acoustic speed of a mechanical wave through air is about 340 m/s; through water it is 1500 m/s, through soft tissue 1540 m/s and through bone 4080 m/s – twelve times faster than in air. It used to be thought that ultrasonic resolution was poorer than that of X-rays; this is now not in fact the case. Foetal heartbeat and retinal scanning require a resolution of the order of 1–2 mm, achieved by raising the US frequency. For obstetric work, between 2 and 7 MHz is fine; for retinal work, up to 15 MHz is used, which increases resolution substantially. A rule of thumb is that the organ under study should be about 200 wavelengths away from the transducer (the emitter/receiver).

When an AC source of very high frequency is applied to a piezoelectric crystal (even quartz works fine) it vibrates, creating an ultrasound wave at the same frequency as the AC. The sound produced is then directed at an object and then bounces back off the object under investigation. When the sound wave comes back to the   piezoelectric crystal, it has the reverse effect – causing the mechanical energy produced from the sound vibrating the crystal to be converted into electrical energy. By measuring the time between when the sound was sent and received, the amplitude of the sound and the pitch of the sound, a computer can produce images, calculate depths and calculate speeds.

We recall having previously mentioned specific acoustic impedance, Z, the product of material density and velocity in the medium.

Now consider the interface between two materials of specific acoustic impedances Z₁ and Z₂, having different densities and therefore speeds. For the fraction of intensity reflected, we may write: Iᵣ/I₀ = ((Z₂ − Z₁)/(Z₂ + Z₁))².

This is worth a moment’s thought, since it suggests the most reflection, hence largest signal received, will happen when the sound is reflected off the interface between two materials of very different densities.

In obstetrics, if a coupling gel were not used to move the transducer across the patient’s abdomen, apart from the lack of lubrication, all the sound would be reflected and none transmitted into the body. This “acoustic coupling” is vital to ensure that most of the sound isn’t reflected back and so gets past the air/skin barrier.
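The reflection formula makes the point about coupling gel vividly – the impedance values below are assumed typical figures, not from the text:

```python
# Fraction of intensity reflected at an interface:
# R = ((Z2 - Z1) / (Z2 + Z1))**2, where Z = density * speed of sound.
def reflected_fraction(Z1, Z2):
    return ((Z2 - Z1) / (Z2 + Z1)) ** 2

Z_air = 430.0        # kg m^-2 s^-1 (approximate values)
Z_tissue = 1.63e6
Z_gel = 1.5e6        # coupling gel, close-matched to tissue

print(reflected_fraction(Z_air, Z_tissue))   # nearly total reflection
print(reflected_fraction(Z_gel, Z_tissue))   # nearly all transmitted
```

With an air gap, over 99.8% of the intensity bounces straight back off the skin; with gel, under 1% is reflected, so the pulse actually enters the body.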

A Scan

A-scans can be used to measure distances. A transducer emits an ultrasonic pulse, and the time taken for the pulse to bounce off an object and come back is graphed to determine how far away the object is. A-scans give only one-dimensional information and are therefore not useful for imaging. This is the simplest kind. A burst of ultrasound is passed into, for example, a neonatal skull. If the two halves of the brain are equally sized, the midline echo will occur exactly between the reflections from the opposite sides of the skull. If not, the echoes are skewed to one side (see diagram: T = transducer, and we imagine looking down at the head from the top. Not great, but you hopefully get the idea).

Retinal detachment can be detected using high frequency US, as shown.

In a healthy retina, the middle spike should be absent but because the retina (yellow line) is not attached to the eyeball surface and is floating in the fluid in front, an echo is seen as the US bounces off it.

B Scans

B scans are multiple A scans, produced by a moving transducer. They take many A images per second and an image intensifier – these days a computer – retains and displays the information in real time, building up a 2D slice across the transducer’s moving path. It is a routine procedure in obstetrics where the transducer is moved rapidly over the abdomen from right to left and back again a few times so as to collect enough data to build up an image. This image shows twin foetuses.

If the transducer can be placed facing an oncoming blood vessel, the changing velocity of the red blood cells, hence the overall speed of blood, can be monitored as the US signal bounces off oncoming red blood cells and the consequent Doppler shift Δf = 2f₀u cos θ/c is measured. Notice the use of the cosine, since the blood velocity resolved along the detector line between source (S) and emitter (E) is u cos θ,

where u=blood velocity,

c=velocity of sound in blood

and f0 is the incident ultrasound frequency

You should study carefully the “Advantages and Disadvantages” Table I2.2 on p 709

## IB HL Medical Imaging – X-rays

X-rays (short-wavelength, high-energy EM radiation) are produced when charged particles such as electrons are decelerated and deflected by nuclei. Being negatively charged, the electrons are attracted towards a positive nucleus, emitting ‘braking radiation’ – ‘Bremsstrahlung’ in German – as they curve around it. The degree of ‘bend’ determines the energy of the X-rays produced, so the radiation is emitted over a continuum of wavelengths.

Additionally, the electrons may directly promote electrons in the metal atoms of the target from lower to higher energy levels – when these decay back they emit characteristic X-ray photons. Here two characteristic jumps are superposed on the bremsstrahlung background. The relative intensity is a measure of how likely each event is.

We won’t go into details here, except the only real difference between these and gamma rays is that gamma rays are emitted spontaneously from excited nuclei.

X-rays pass through human tissue, being selectively absorbed by denser material. In 1901, Wilhelm Roentgen was the first person ever to win the Nobel Prize for Physics and his discovery revolutionised the medical world.

Some schools have a small version of one of these.

Electrons are accelerated in a vacuum towards a metal target, tungsten (W) in this case. X-rays are produced – notice they are not subject to a ‘law of reflection’ – and pass out through a window for use. At diagnostic voltages (140 kV for a chest X-ray, lower for dental use) the anode gets very hot and has to be cooled; the target is often rotated, otherwise the heat generated would destroy it. The X-rays are partially absorbed by the area of interest in the patient, and the data is collected on photographic film as a negative image, where high absorption is seen as a light area and vice versa.

The mechanism for energy loss in the body is photoelectric.  Since this effect is strongly dependent upon Z, there is substantial difference between Z(bone), about 14 and Z(soft tissue) about 7, so X rays are good at contrasting bone and soft tissue but not very good at contrasting between different soft tissues since their Z values are too similar.

Increasing the sharpness of the shadow gives better diagnostic information. We can’t ‘focus’ X-rays  using a glass lens like we can with visible light so we have to arrange for rays normal to the film only to fall on its surface since scattered rays will blur the image. There is a section on p703-4  which explains how this is achieved by  filtering and lead grids – a possible exam question.

## How do Photons Interact with Matter? Simple scatter, Compton scatter, Pair Production

There are three ways. Which one depends on the initial photon energy.

You might remember that the photoelectric effect is concerned with the total absorption of an incident photon of visible or perhaps UV light in a metal surface (actually on an outer electron) with the consequent emission of this electron. This is a low-energy phenomenon, and obviously the energy acquired by the electron is dependent on the initial photon energy, rapidly diminishing in importance as the initial photon energy increases. Here’s the equation: hf = φ + E_k(max), where φ is the work function and E_k(max) the maximum kinetic energy of the emitted electron.

At higher photon energies,  such as with X rays,  which penetrate further into the electron cloud an inelastic event may occur, called Compton Scatter. A free electron takes up part of the photon energy, the photon is scattered or re-emitted with a longer wavelength and the difference is in the kinetic energy of the electron scattered in a different direction.

By contrast to the photoelectric effect, Compton scatter doesn’t vary much with incident photon energy, but increases  linearly with atomic number. A simplified diagram shows the scattered electron as a black arrow.

At higher energies still, pair production becomes dominant: close to a nucleus, the photon is converted into an electron–positron pair. The positron subsequently annihilates with an electron, producing two identical photons. NB: all conserved quantum numbers (angular momentum, electric charge, lepton number) of the produced particles must sum to zero – thus the created particles have opposite values to each other. The incident photon must have energy greater than the sum of the rest mass energies of an electron and a positron (2 × 0.511 MeV = 1.022 MeV) for pair production to occur.

A chart might help to visualize… Comparing the interactions by atomic number Z and photon energy in MeV, we get…

So, at low Z and relatively high energy, mostly Compton scattering,  low energy and high Z means the photoelectric effect predominates and high energy and high Z means Pair Production. Notice the logarithmic scale on the energy axis.

## IB HL The Functioning of the Ear (3)

Because I’m old, I can only hear up to 10kHz. Normal people hear from 20Hz-20kHz and the upper frequencies decrease with age because of decreased flexibility in the aural mechanisms, thickening eardrum, ossicle degradation and so on. A few minutes spent with p697 on ‘hearing defects’ might be valuable here followed by q9, p699.

The ear’s sensitivity is frequency-dependent, and our previous value of 1 pW per square metre is only true at a frequency of 1 kHz – the most sensitive region, as we can see from the graph below, is at about 3 kHz. If the ear canal is about 3 cm in length, a quarter of a wavelength, what frequency does this correspond to? (2.83 kHz.) Many other mammals have different ranges, dependent on the configuration of the outer ear, the way the pinna collects the sound, and their electrical responsiveness.
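The quarter-wave estimate from the text, worked through:

```python
# The ear canal behaves like a pipe closed at one end (the eardrum):
# it resonates when a quarter wavelength fits inside, so f = v / (4 * L).
v = 340.0    # speed of sound in air, m/s
L = 0.03     # ear canal length, m
f = v / (4 * L)
print(round(f))   # 2833 Hz
```

The answer, about 2.83 kHz, sits right at the ear’s most sensitive region near 3 kHz – the canal resonance is part of why.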

Pitch is what we usually mean by perceived frequency, but it actually depends on both frequency and intensity. A soft sound at a particular frequency is perceived as being of lower pitch than the same frequency played louder or more intensely.

The sound is converted to an electrical signal by this (grossly simplified) mechanism. Like Barton’s pendulums, the sound wave interacts with hairs of different lengths, each attached to a hair cell somewhere along the Organ of Corti, with receptors on the basilar membrane, which decreases in stiffness along its length. A particular frequency causes one of the hairs to oscillate with large amplitude – to resonate – and hence send an electrical signal corresponding to that frequency. The longest hairs are furthest from the oval window and hence respond to the lowest frequencies.

## IB HL Sound Intensity and the dB scale

To prevent permanent hearing damage, it’s a bad idea to stand close to the speakers at a rock concert.

This looks about far enough away…

You’ve all seen inverse square rules like this before: the sound intensity in watts per square metre at a distance r from a source of power P watts is I = P/4πr². Intensity is simply a measure of how much energy falls per second on a 1 metre square of surface.

Curiously, the increase in hearing sensation is proportional to the fractional increase in intensity: the ear’s response to intensity is logarithmic. This can be exploited to define a scale of loudness based on the ‘bel’ or, more usually, the ‘decibel’, where 10dB = 1B.

An increase of 10dB implies an increase in intensity by a factor of 10, similar to the Richter scale for measuring earthquake magnitude.

We can think of the intensity level in dB as 10 log10(I/I0).

We’re comparing the sound we’re listening to with the lowest intensity perceptible to someone with perfect hearing.

It’s phenomenally small, only 1 picowatt per square metre.

This reference intensity is used in all comparative dB calculations – the threshold of hearing, which rises markedly with age. Libraries, even so-called ‘silent spaces’, probably run at about 40dB, and those close to the Stones’ concert speakers will in all probability be in pain, risking permanent damage at between 120 and 130dB.

Look at Q2 and 3 on page 694. This shows how to convert a sound intensity into dB. Q4 on page 695 reminds us that sound intensities must be added, not dB values. Try Q1-5 on page 698. Check with me if you have a problem.
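The sort of conversions those questions ask for can be sketched as follows – the 40dB and 60dB figures are illustrative, not taken from the book:

```python
import math

I0 = 1e-12  # threshold of hearing, W/m^2

def to_dB(I):
    """Intensity level in dB relative to the threshold of hearing."""
    return 10 * math.log10(I / I0)

def to_intensity(level_dB):
    """Invert the dB formula to recover an intensity in W/m^2."""
    return I0 * 10 ** (level_dB / 10)

# A 40 dB library corresponds to an intensity of 1e-8 W/m^2:
print(to_intensity(40))

# Two equal 60 dB sources together: add the INTENSITIES, not the dB values.
combined = to_dB(2 * to_intensity(60))
print(f"{combined:.1f} dB")  # about 63 dB, not 120 dB
```

Notice that doubling the intensity only adds about 3dB – intensities add, dB values do not.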

## New Look

I’m trying out a new, minimalist look.

The Pages sidebar is now on the left behind a slider and the keyword search is at the end of the first batch of posts.

## Galaxies

### The Hubble classification scheme

for galaxies is based on their appearance.

| | SHAPE | STAR CONTENT | GAS/DUST | STAR FORMATION |
|---|---|---|---|---|
| Spirals/Barred spirals | Flattened disc with a bulge in the middle, spiral or barred arms, surrounded by a halo | Halo: mainly old stars | None in the halo, lots in the disc | In the spiral arms |
| Ellipticals | Ellipsoidal/spherical, no fine structure; ellipses can be almost circular; uniform star distribution | Old | Very little | Nothing new for the last 10bn y |
| Irregulars | Often very strange shapes, no structure | Young and old | Lots of both | Plenty |

Our Galaxy, the Milky Way, is part of the Local Group (LG), extending for about 10m ly and containing dozens of galaxies, the nearest large one being the Large Magellanic Cloud at about 0.16m ly.

It contains the Andromeda Spiral (largest member)

Beyond this lies a supercluster (diameter ≈ 15Mpc) of which our LG is a member.

How far away? We calculate from Doppler-shift measurements on known spectral lines (usually H or He), as long as we realise that the velocity so calculated assumes the source is moving directly away from us.
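A sketch of the Doppler estimate; the lab value is the real H-alpha wavelength, but the observed wavelength is an invented number for illustration:

```python
c = 3.0e8  # speed of light, m/s

def recession_speed(lab_wavelength, observed_wavelength):
    """Non-relativistic Doppler estimate v = c * (delta lambda) / lambda,
    valid only for v << c and for motion directly along the line of sight."""
    return c * (observed_wavelength - lab_wavelength) / lab_wavelength

# Hypothetical example: H-alpha (656.3 nm in the lab) observed at 660.0 nm.
v = recession_speed(656.3e-9, 660.0e-9)
print(f"v = {v/1000:.0f} km/s")  # about 1700 km/s, receding
```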

## Hubble’s Law

Distance d and recession speed v are proportional (v = Hd) in an isotropic universe, but there is controversy over the exact value of the constant H. Notice the bubble in the middle of the (much simplified) graph.

The Particle Data Group offer the best estimate of the Hubble constant (dimensionally, a ‘per second’) at 72 km s-1 Mpc-1, from observations of Type Ia supernovae whose distances are known to better than 5%. It implies that at some point in time the Universe was a point – indirectly implying a Big Bang at r=0.

Thus,  1/H approximates to the age of the Universe, currently thought to be 13.7 bn years. You might like to fiddle with the units to check this value.
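The unit-fiddling suggested above can be done explicitly – convert H to a ‘per second’, then invert:

```python
H = 72.0               # Hubble constant, km s^-1 Mpc^-1
km_per_Mpc = 3.086e19  # kilometres in one megaparsec

# Express H as a "per second", then 1/H is a time in seconds.
H_per_s = H / km_per_Mpc
age_s = 1 / H_per_s
age_yr = age_s / (365.25 * 24 * 3600)
print(f"1/H = {age_yr/1e9:.1f} bn years")  # about 13.6 bn years
```

This simple estimate lands within about 1% of the quoted 13.7 bn years.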

## Black Holes – a non-mathematical introduction

A black hole is defined as a region of spacetime from which the gravitational attraction is so huge that nothing, including light, can escape. If spacetime is like a trampoline, a star causes a gravitational dent in the trampoline bed – the heavier the star, the more the bed is distorted. A black hole distorts the bed to a massive degree, so much so that the ‘hole’ made by the hugely dense body has no bottom – like a tube with no end – a singularity, in other words. General relativity predicts that a sufficiently compact, dense mass will deform spacetime to form a black hole. Around a black hole there is a mathematically defined surface called an event horizon that marks the point of no return: a boundary in spacetime beyond which events cannot affect an outside observer. As water drains down a plughole, there is a radius inside which a small floating object on the surface can no longer escape, in the same way that a ball rolled diagonally across the trampoline bed cannot avoid falling down the hole no matter how fast it is going, or light cannot escape from the curved fabric of spacetime. The hole is called “black” because it absorbs all the light that hits the horizon, reflecting nothing, just like a perfect black body in thermodynamics. Similarly, any object approaching the event horizon appears, to a distant observer, to slow down (time dilation) and never quite pass through the horizon, its image becoming more and more redshifted as time elapses.

Event horizons emit radiation like a black body with a finite temperature. This temperature is inversely proportional to the mass of the black hole.

The Schwarzschild radius (sometimes historically referred to as the gravitational radius) is the radius of a sphere such that, if all the mass of an object is compressed within that sphere, the escape speed from the surface of the sphere would equal the speed of light. An example of an object smaller than its Schwarzschild radius is a black hole.
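The definition above translates directly into r = 2GM/c². A quick sketch – the Earth mass figure is a standard value, not from the text:

```python
G = 6.674e-11     # gravitational constant, N m^2 kg^-2
c = 3.0e8         # speed of light, m/s
M_sun = 1.989e30  # one solar mass, kg

def schwarzschild_radius(M):
    """Radius at which the escape speed from mass M equals c: r = 2GM/c^2."""
    return 2 * G * M / c ** 2

print(f"Sun: {schwarzschild_radius(M_sun)/1000:.1f} km")     # roughly 3 km
print(f"Earth: {schwarzschild_radius(5.97e24)*1000:.1f} mm") # roughly 9 mm
```

Neither the Sun nor the Earth is remotely compact enough to sit inside its own Schwarzschild radius, which is why neither is a black hole.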

Black holes are expected to form when very massive stars (> 40 solar masses) collapse at the end of their life cycle. After a black hole has formed it can continue to grow by absorbing or accreting mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. Supermassive black holes are thought to exist in the centres of most galaxies since many invisible but massive objects appear to co-rotate around a visible binary companion.

This black hole is located in the nearby dwarf galaxy IC 10, 1.8 million ly from Earth in the constellation Cassiopeia. We can measure the black hole’s mass because it has an orbiting companion: a hot, highly evolved star. The star is ejecting gas in the form of a wind. Some of this material spirals toward the black hole, heats up, and emits X-rays before crossing the point of no return.

## Neutron Stars

Neutron stars are ancient remnants of stars that have reached the end of their evolutionary journey. They began as stars between four and eight times the mass of the sun before exploding in catastrophic supernovae. After such an explosion blows a star’s outer layers into space, the core remains, no longer producing nuclear fusion. With no outward pressure from fusion to counterbalance gravity’s inward pull, the star condenses and collapses in upon itself.

Despite their small diameters – about 20 km – neutron stars are usually about 1.5 times more massive than the sun, and are thus incredibly dense. Think of a sugar cube 2cm on a side having a weight of order 10TN on Earth. The energy of electrons will increase upon compression, so pressure must be exerted on the electron gas to compress it, producing electron degeneracy pressure. With sufficient compression, electrons are forced into nuclei in the process of electron capture, relieving the pressure. In brief, protons capture electrons forming neutrons – the process that gives such stars their name. The composition of their cores is unknown, but they may consist of a neutron superfluid or some unknown state of matter.
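The density claim can be checked with round numbers; the 1.5 solar masses and 10 km radius are assumed values, and the answer (a few tens of TN for the cube) depends strongly on the radius chosen:

```python
import math

M = 1.5 * 1.989e30  # assumed neutron star mass, kg
R = 10e3            # assumed radius, m (20 km diameter)

# Mean density, treating the star as a uniform sphere.
volume = (4 / 3) * math.pi * R ** 3
density = M / volume
print(f"density = {density:.1e} kg/m^3")  # of order 1e17-1e18

# A 2 cm sugar cube of this material, weighed on Earth (g = 9.81 m/s^2):
cube_mass = density * 0.02 ** 3
weight = cube_mass * 9.81
print(f"cube weight = {weight/1e12:.0f} TN")
```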

When they are formed, neutron stars rotate in space. As they compress and shrink, this spinning speeds up because of the conservation of angular momentum—the same principle that causes a spinning skater to speed up when she pulls in her arms. These stars gradually slow down but those bodies that are still spinning rapidly may emit radiation that from Earth appears to blink on and off as the star spins, like the beam of light from a turning lighthouse. This “pulsing” appearance gives some neutron stars the name pulsars.

After spinning for several million years pulsars are drained of their energy by gravitational drag from the outer layers and become normal neutron stars. Few of the known existing neutron stars are pulsars. Only about 1,000 pulsars are known to exist, though there may be hundreds of millions of old neutron stars in the galaxy.

The huge pressures that exist at the core of neutron stars may be like those that existed at the time of the Big Bang.

## Uranium Fission

This post is a description of how neutrons produced in a fission reaction may be used to initiate further fission reactions – a chain reaction.  Both power stations and weapons use a variety of nuclear ‘fuels’ – poorly named since a fuel is more usually a descriptor for a substance which burns and combustion is exothermic, releasing heat. A common nuclear fuel is uranium 235, which is the only naturally occurring fissile material.
We should know that only low-energy neutrons (< 0.2 eV) favour nuclear fission.
Principle.
A U-235 nucleus captures a slow neutron of low energy. Why are the neutrons ‘slow’? Because the U nucleus has a large ‘cross-section’ for slow neutrons – rather like using a big, oversized racket to hit a small ball: the probability of the ball being hit is greater. The nucleus then divides into two smaller parts, approximately equal in size, plus 2, 3 or 4 more neutrons which may be captured by other nuclei, producing a chain reaction (see diagram). If enough fuel is present (a bare sphere of roughly 50 kg of pure U-235 – the critical mass), the reaction runs out of control and basically becomes explosive unless it is held in check. If the ball of fuel is too small, neutrons tend to pass straight through it; we need enough uranium present to ensure that the probability of neutron capture is high.
The mass of the fragments is less than the mass of the original parts; the mass ‘lost’ appears as energy ‘gained’ [E = mc²].
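The E = mc² bookkeeping can be sketched as follows; the 0.2 u mass defect is an illustrative round number, chosen to land near the commonly quoted ~200 MeV per fission:

```python
c = 3.0e8        # speed of light, m/s
u = 1.661e-27    # atomic mass unit, kg
MeV = 1.602e-13  # joules per MeV

# Illustrative (assumed) mass defect for one U-235 fission: about 0.2 u.
delta_m = 0.2 * u
E = delta_m * c ** 2  # E = mc^2
print(f"energy released = {E/MeV:.0f} MeV per fission")  # roughly 190 MeV
```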

This is one of a number of possible events. In a reactor for power generation, the KE of the moving fission products is converted into heat, which is transferred to water in a very short time. Sometimes we’re asked to find the average kinetic energy of the fragments knowing the temperature rise of a mass of coolant.
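That sort of question works like this – a hypothetical example where every number (coolant mass, temperature rise, fragment count) is made up:

```python
# Hypothetical worked example: N fission fragments deposit all their KE
# in a mass of water, raising its temperature. All figures are invented.
c_water = 4180.0  # specific heat capacity of water, J kg^-1 K^-1
m_water = 2.0     # mass of coolant, kg (assumed)
dT = 5.0          # temperature rise, K (assumed)
N = 1e15          # number of fragments (assumed)

Q = m_water * c_water * dT  # heat gained by the water, J
KE_avg = Q / N              # average KE per fragment
print(f"average KE per fragment = {KE_avg:.1e} J")
```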

Enrichment

Enrichment is the process by which the percentage of fissile uranium 235 is increased. Uranium as mined contains about 0.7% U 235 by mass; much of the rest is U 238, present as oxides and other more complex salts. Enriching by isotope separation – gaseous or thermal diffusion and centrifuging – boosts this percentage to between 3 and 4% for reactor-grade uranium; if the process continues, weapons-grade uranium requires almost 90% purity. A crude, inefficient weapon can, however, be manufactured from a source of 20% or more.

Control Rods and Moderators

The control rods absorb the neutrons which keeps the reaction rate relatively constant (rather than letting it grow exponentially). They create a situation where roughly one neutron per fission goes on to cause another fission event. Silver, indium, cadmium or boron are commonly used to make them.

Moderators slow down the neutrons without absorbing them. Fast neutrons are more inclined to bounce off the surface of a nucleus, so slower neutrons actually lead to a greater number of successful fissions – i.e. moderators don’t slow the reaction down, they help it to take place. Commonly used moderators include ordinary (light) water (in 74.8% of the world’s reactors), solid graphite (20% of reactors) and deuterium oxide or heavy water (5% of reactors). They are introduced in controlled amounts into the nuclear pile (bundled fuel rods) to slow down neutrons and so increase the probability of a fission event.
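A simplified sketch of why the nucleus size matters for moderation. It assumes every collision is elastic, head-on and with a stationary nucleus – head-on collisions lose the most energy per bounce, so these counts are underestimates of the real figures:

```python
import math

def collisions_to_thermalise(A, E0=2e6, E_thermal=0.025):
    """Number of head-on elastic collisions needed to slow a neutron from
    E0 to E_thermal (energies in eV) on stationary nuclei of mass number A.
    For a head-on elastic collision, E_after/E_before = ((A-1)/(A+1))^2."""
    fraction = ((A - 1) / (A + 1)) ** 2
    return math.log(E_thermal / E0) / math.log(fraction)

# Lighter nuclei take more energy per bounce, so need fewer collisions:
print(f"graphite (A=12): {collisions_to_thermalise(12):.0f} collisions")
print(f"deuterium (A=2): {collisions_to_thermalise(2):.0f} collisions")
```

This is why light nuclei (hydrogen in water, deuterium, carbon) make good moderators: they are closest in mass to the neutron, so each bounce removes the largest fraction of its energy.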

Both of these control the rate of the nuclear event (not really a ‘reaction’)

## Vectors

You should be able to think of at least four vector quantities (size plus direction) and four scalar quantities (size only). If you can’t, look here.

You can’t just add numerically when you want to add two or more vectors (of the same kind, obviously), since a push of 1N to the left added to a pull of 3.5N to the right results in a pull of 2.5N to the right. We have to take direction into account.

But, what if they don’t act along the same straight line? There are several methods for adding lots of them together. The head-to-tail method is one. A vector is just a line on a piece of paper of a particular length which represents its size with an arrow on it to indicate direction.  Adding two vectors A and B is quite simple. Take the tail of B and put it on the head of A. The vector sum is found by the line joining the tail of A and the head of B. Works for as many vectors as you like.

In more detail, the head-to-tail method involves drawing a vector to scale on a sheet of paper beginning at a designated starting position. Where the head of this first vector ends, the tail of the second vector begins (thus, head-to-tail method). The process is repeated for all vectors that are being added. Once all the vectors have been added head-to-tail, the resultant is then drawn from the tail of the first vector to the head of the last vector; i.e., from start to finish. Once the resultant is drawn, its length can be measured and converted to real units using the given scale. The direction of the resultant can be determined by using a protractor and measuring its angle of rotation – in my example anticlockwise from due East, but clockwise from North can also be used. Just specify.

A step-by-step method for applying the head-to-tail method to determine the sum of two or more vectors is given below.

1. Choose a scale and indicate it on a sheet of paper. The best choice of scale is one that will result in a diagram that is as large as possible, yet fits on the sheet of paper.
2. Pick a starting location and draw the first vector to scale in the indicated direction using a ruler and protractor. Label the magnitude and direction of the scale on the diagram (e.g., SCALE: 1 cm = 20N when adding forces).
3. Starting from where the head of the first vector ends, draw the second vector to scale in the indicated direction. Label the magnitude and direction of this vector on the diagram.
4. Repeat steps 2 and 3 for all vectors that are to be added.
5. Draw the resultant from the tail of the first vector to the head of the last vector. Label this vector as Resultant or simply R.
6. Using a ruler, measure the length of the resultant and determine its magnitude by converting to real units using the scale (e.g., 4.4 cm × 20N per cm = 88N). Here’s an example:
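The graphical recipe above can be cross-checked numerically by resolving each vector into components; the 30N and 40N forces below are an invented example, with angles measured anticlockwise from due East as in my diagrams:

```python
import math

def add_vectors(vectors):
    """Sum vectors given as (magnitude, angle in degrees anticlockwise from
    due East). Returns (magnitude, angle) of the resultant."""
    x = sum(m * math.cos(math.radians(a)) for m, a in vectors)
    y = sum(m * math.sin(math.radians(a)) for m, a in vectors)
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

# Hypothetical example: 30 N due East plus 40 N due North (a 3-4-5 triangle).
mag, ang = add_vectors([(30, 0), (40, 90)])
print(f"resultant: {mag:.1f} N at {ang:.1f} degrees")  # 50.0 N at 53.1 degrees
```

A scale drawing should give the same answer to within the accuracy of your ruler and protractor.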