The Planck units

This is a little bit beyond IB, but worth a read. The image is of the silicon surface of a microchip.

 

There is this idea that the Planck units present some kind of fundamental limit on the values of physical quantities. This answers:

  • what the Planck units are and why they are interesting
  • which of them can be interpreted as some kind of “limit”, and how.

What are the Planck units and how are they derived?

Physics is all about establishing the relationships between various physical parameters. In the Système Internationale (SI), the “big 5” physical parameters that form the basis in which everything else can be expressed are mass [𝑀], length [𝐿], time [𝑇], electric current [𝐼], and temperature [Θ]. These are called the dimensions of a quantity (note this is an entirely separate use of the word to the 3 dimensions of space and 1 of time).

The SI units of the base quantities are kilograms, metres, seconds, amperes and kelvins, respectively. These units are entirely arbitrary – there is no fundamental reason why we should use metres for [𝐿] and not inches, furlongs or cubits instead. To take an example, speed has dimensions [𝐿][𝑇]⁻¹ and SI units of metres per second.

We know that  there exist certain fundamental constants. Among the most important of these are:

  • the speed of light 𝑐, which has dimensions of a speed [𝐿][𝑇]⁻¹ and appears across all of physics,
  • Newton’s gravitational constant 𝐺, which governs the strength of gravity. By considering either Newton’s gravitational force law or general relativity (GR) one can see it has dimensions of [𝑀]⁻¹[𝐿]³[𝑇]⁻²,
  • the reduced Planck’s constant ℏ, whose value governs quantum mechanics and which has dimensions [𝑀][𝐿]²[𝑇]⁻¹.

In 1899 Max Planck observed that by multiplying these constants with one another in various combinations, one could create specific-valued quantities with any combination of the dimensions of length, time and mass. For example, we have:

Screenshot 2019-03-02 at 09.18.56
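The screenshot isn’t reproduced here, but the standard combinations it refers to, with their approximate values, are:

$$
l_P=\sqrt{\frac{\hbar G}{c^{3}}}\approx 1.6\times10^{-35}\ \mathrm{m},\qquad
t_P=\sqrt{\frac{\hbar G}{c^{5}}}\approx 5.4\times10^{-44}\ \mathrm{s},\qquad
m_P=\sqrt{\frac{\hbar c}{G}}\approx 2.2\times10^{-8}\ \mathrm{kg}.
$$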

Note that while some of these seem extremely small, there is nothing remotely “limiting” about others – in fact, their values are astonishingly “human-scale” considering they are derived only from fundamental constants. The Planck mass is quite small by everyday standards but not exceptionally so – it is about the mass of a flea’s egg, and you can easily see Planck-mass objects with simple magnifying equipment. We can derive others. The Planck momentum is similar to that of a served tennis ball. The Planck energy is just enough to boil enough water to make a cup of tea for everyone in the Royal Albert Hall.

By also including Boltzmann’s constant 𝑘 we can extend the Planck system to include temperature and hence the other thermodynamic parameters, and by including the permittivity of free space 𝜀 we can include electric current and hence electromagnetism too.

Why are they interesting?

The Planck units are interesting for essentially two reasons: one very pragmatic, the other fundamental.

  1. Since our choice of SI units is entirely arbitrary, in the SI these constants all take very ugly values. ℏ, for example, has a value of about 1.05×10⁻³⁴ kg m² s⁻¹. By replacing the SI units with the Planck units as the basic units of measurement, the fundamental constants 𝑐, 𝐺, ℏ, 𝑘 and 𝜀 all take the value of exactly one. This presents a convenient opportunity for physicists to declutter their equations, since we no longer need to keep track of all those wretched constants. In complex computer simulations, it also provides a speed-up by removing the need to multiply every term by the arbitrary SI values of the fundamental constants, which must be specified to high precision. In reality, most real physicists use a “hybrid” system of units because they work on systems that are far away from the Planck scale. I’ll explain what that means below, but suffice it to say that particle physicists don’t often care what value 𝐺 takes in their system of units, because gravitational effects are irrelevant to their line of work. So they will use units such that 𝑐=ℏ=1, but also such that the energies in their system, which are always far below the Planck energy, take reasonable values. Similarly, physicists simulating black hole mergers disregard quantum effects and so don’t care about the value of ℏ in their unit system: they will use units where 𝑐=𝐺=1, but where the masses of their black holes are not unimaginably huge multiples of the mass unit.
  2. The more interesting reason why Planck units are important is that in a certain sense, they demarcate the scale where gravitational, quantum, electrodynamic and thermodynamic effects all become comparable. Since all the relevant constants in the laws of physics are simply 1 in the Planck system, if the input values to our equations are similar to the Planck values, then effects from all of these branches of physics are present in equal measure and we cannot neglect anything. For example, if we consider a black hole with the Planck mass, we find that both the radius of its event horizon (governing the scale where gravitational effects become dominant) and its de Broglie wavelength (which tells us when quantum effects move from negligible to dominant) are about the same- around the Planck length, in fact. Additionally, the mass-energy of this thing would be the Planck energy, and the Hawking temperature of its event horizon would be the Planck temperature, and so on. Imbue it with the Planck charge too, and you basically have nearly all of physics playing together in a single system. Of course, such a black hole (a so-called Planck hole) is both far smaller than the black holes we can observe in the sky, where gravity dominates and quantum effects are not discernible, yet also far more energetic than the particle collisions we can make at CERN, where the effects of self-gravity are immeasurably small and quantum effects dominate. This is basically because the Planck length is so small. As such, we are very far away from ever being able to see a Planck hole, have no idea how gravitational and quantum effects interplay for such an object, and can make no predictions for how it would behave. Experimentally, therefore, we are limited to observing physics only in either of the regimes where one of gravitational or quantum mechanics becomes too small to matter, and so have no prospect of figuring out the rules in the general case where both must be considered. This is why, in the 21st century, fundamental physics, as an empirical science, is stuck in such a deep rut. As we wrest the secrets about dark matter and dark energy from their hiding places, more light (pun) might be shed on such matters.

Which Planck units represent “limits”, and how?

The only three Planck units that can in any sense be thought of as some kind of limit are the Planck length, the Planck time, and the Planck speed (which is of course simply the speed of light).

To finally answer the question, here is why the Planck length is considered a limit. Anything localised in at least one direction to something close to the Planck length has, by the Heisenberg uncertainty principle, an energy expectation value that approaches the Planck energy, and hence a gravitational radius approaching its spatial extent. In other words, the more localised it becomes, the closer it becomes to a Planck hole. Now, we know nearly nothing about Planck holes, so at this point physicists start talking out of their backsides to some extent. But from classical general relativity, we know that black holes distort the geometry of spacetime. It therefore doesn’t make much sense to talk about a physical system with substructure below the Planck length, since the sub-Planck scale variations in energy density would shroud the whole object in an event horizon and leave it causally disconnected from the rest of spacetime.

The Planck time is “limiting” simply because it is the time light would take to traverse the shortest meaningful distance, the Planck length, so there is no physical process characterised by a timescale shorter than the Planck time.

It is important to state that the Planck length and time are not “hard” barriers like the sound barrier. They merely represent the scale where different physical phenomena move from being negligible to being significant. Indeed, one can define many different systems of Planck units, for example by taking the non-reduced Planck’s constant ℎ=2𝜋ℏ=1 instead of ℏ=1. A factor of 2𝜋 doesn’t really matter – they should instead be considered a “here be dragons” warning post: when energy is this localised, quantum phenomena cannot be considered approximately independent of the spacetime they occupy.

To dispute what some others have said: in the current widely accepted fundamental theories of physics (GR and the Standard Model), spacetime itself is continuous, self-similar at all scales, and not in any way quantised or pixelated. There are theories of quantum gravity that challenge this assumption, notably loop quantum gravity, but these are still highly speculative and far from achieving consensus. The Planck length is currently considered somehow minimal only because of the rules governing energetic phenomena that live in that spacetime, not the spacetime itself.

One final subtlety. There are still certainly occasions in which it makes sense to speak about values of physical parameters on the wrong “side” of the limiting Planck unit. The classic example of this is that if I shoot a laser pointer at the Moon and sweep it across the sky very fast, the point of light can move across the surface of the Moon with a speed not limited by the speed of light (the Planck speed). This is because the point of light is not itself a physical object, cannot carry either energy or information along its path, and is not subject to the laws that constrain the photons themselves.

There are analogous examples where one can still talk about lengths below the Planck length. For example, in spectroscopy, physicists often talk about “linewidth”. No source of light emits a pure electromagnetic wave at a single frequency. If you measure the peak-to-peak distance for each wave of a laser, you will see some small variation in wavelength- this is known as the “linewidth”. There is no physical lower bound to this; the linewidth of the laser at LIGO used to detect gravitational waves is about a billion times smaller than the diameter of a proton. Potentially, this quantity could be smaller than the Planck length.

Finally, if we examine the emission lines of the hydrogen atom, there is no fundamental lower limit to the width of these lines – they may be narrower than the Planck length.

My thanks to a number of creatives for help.

 

 

 

Measuring Stellar Distances

Screen Shot 2.png
Venus, Jupiter and the Moon 17/11/2017

Three useful units – which one we use depends on how far away the object is. Being able to convert between them quickly is useful.

  • Relatively small distances – Astronomical Unit, AU = average distance between Earth and Sun  (1 AU=150 million km)
  • Light Year. 1 ly is the distance light travels in a year at a speed of 300 million m/s.

Find the distance in km between us and our nearest star Proxima Centauri (4.3 ly away). You might like to speculate how long it would take a spacecraft travelling at a maximum possible 25,000 km/h to reach it (there is a rough numerical check a little further down).

  • One parsec (pc) is defined as the distance to a star whose apparent position shifts by one arcsecond when our viewpoint moves by one astronomical unit; over the full baseline from one side of Earth’s orbit to the other (2 AU), such a star shifts by two arcseconds. These angles are incredibly small. They’re too small for degrees to be a practical unit of measurement. There are 3,600 arcseconds (60 minutes x 60 seconds) in one degree. To provide some perspective: one arcsecond is equivalent to the width of an average human hair seen from 20m away.

The nearest star is Proxima Centauri, at 1.3 pc. The Andromeda Galaxy, the closest spiral galaxy to our own, is nearly 800 kiloparsecs away.
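For the exercise above, here is a minimal Python sketch of the conversions. The conversion factors (1 ly ≈ 9.46 × 10¹² km, 1 pc ≈ 3.26 ly) are standard values, and the 25,000 km/h probe speed is the figure quoted in the exercise.

```python
KM_PER_LY = 9.46e12     # kilometres in one light year
LY_PER_PC = 3.26        # light years in one parsec

d_km = 4.3 * KM_PER_LY                       # Proxima Centauri, ~4.1e13 km
travel_years = d_km / 25_000 / (24 * 365)    # at 25,000 km/h: roughly 190,000 years
print(f"{d_km:.2e} km, about {travel_years:,.0f} years of travel")
print(f"1.3 pc = {1.3 * LY_PER_PC:.1f} ly")  # consistent with the 4.3 ly quoted above
```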

If we imagine taking measurements of a nearby star at six-monthly intervals, it seems to have moved with respect to the very distant background stars, because our vantage point has swung around the Sun, which is 1 AU away from us.

Screen Shot 9.png

Putting this another way:

Screen Shot 7.png

Explaining Trigonometric Parallax.

  • Because of the Earth’s revolution around the Sun, nearby stars appear to move with respect to very distant stars which seem to be standing still.
  • Measure the angle to the star and observe how it changes as the position of the Earth changes. In the second diagram, if the observation point is at the top of the picture, six months later it will be at the bottom, 2 AU away.
  • You can use your fingers to show trigonometric parallax. Shut one eye and hold your finger about eighteen inches in front of your face. Observe a distant object and the finger. Keeping still, look with the other eye. The finger represents the near star and appears to have moved with respect to the background. If you ask a friend to hold up his finger and repeat the observation, it would seem to have moved much less.
  • The parallax or apparent shift (from the Greek for ‘alteration’) of a star is the apparent angular size of the ellipse that a nearby star appears to trace against the background stars. Because all parallaxes are small (the stars are very far away), we can use the small angle approximation as shown. If we measure the distance to the star in AU (astronomical units), then the parallax is given by: Screen Shot 8.png

For example – the six month parallax angle for Alpha Centauri is 1.52 seconds of arc. You might like to calculate how far away this is in light years.
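A quick check (a sketch only): the 1.52″ quoted is the shift across the full six-month, 2 AU baseline, so the parallax angle proper is half of that, and d (in parsecs) = 1/p (in arcseconds).

```python
p = 1.52 / 2                # parallax in arcseconds (half the six-month shift)
d_pc = 1.0 / p              # distance in parsecs
d_ly = d_pc * 3.26          # 1 pc ~ 3.26 ly
print(f"d = {d_pc:.2f} pc = {d_ly:.1f} ly")   # ~1.3 pc, ~4.3 ly
```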

Luminosity

Screen Shot 6
A supernova
  • Is the amount of electromagnetic energy a body radiates per unit of time. {J/s (W)}
  • Is intrinsic to a body and is a measurable property which is independent of distance.

Imagine a point source of light of luminosity L that radiates equally in all directions. A hollow sphere centred on the point would have its entire interior surface illuminated. As the radius increases, the surface area will also increase, and the constant luminosity has more surface area to illuminate, leading to a decrease in observed brightness. Screen Shot.png

A is the area of the illuminated surface, a sphere of radius r.

Screen Shot 2.png

F is the flux at the illuminated surface: the energy arriving per second per square metre of surface from a point source such as a star. The luminosity is then the total power, in watts, spread over the whole illuminated sphere.

So: Screen Shot 4.png

The Sun has a luminosity L = 3.8 × 10²⁶ W. If the Earth–Sun distance is 150 million km and the Sun can be considered a point source, we can show that the radiant energy flux at the surface of the Earth is about 1.3 kW m⁻².
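A rough check of that figure, using F = L/4πr² with the values quoted above:

```python
import math

L = 3.8e26            # solar luminosity, W
r = 1.5e11            # Earth-Sun distance, m (150 million km)
F = L / (4 * math.pi * r**2)
print(f"F = {F:.0f} W per square metre")   # ~1340 W/m^2, i.e. about 1.3 kW/m^2
```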

Hertzsprung and Russell showed that the luminosity L of a star (assuming the star is a black body – a perfect emitter and absorber – which is a good approximation) is also related to the temperature T and radius R of the star by the equation:

Screen Shot 5.png

where ‘sigma’ is the Stefan-Boltzmann constant, 5.67 × 10⁻⁸ W m⁻² K⁻⁴

L is often quoted in terms of solar luminosities, i.e. how many times as much energy the object radiates as the Sun, so L_Sun = 1

Questions:

  1. Find the actual luminosity of the Sun, given a surface temperature of 6,000 K and a radius of 7 × 10⁸ m
  2. Compare with Sirius – a very bright star – temperature 12,000 K and radius 2.22 × 10⁹ m. How much brighter is Sirius than the Sun – in solar luminosities? (There is a quick numerical check after the bullet points below.)
  3. What would be the surface temperature of a star having the same luminosity as the Sun but twice the radius? What would it look like?
  • So a bigger star can be at a lower temperature and yet have the same luminosity, i.e. it looks just as bright
  • A hotter star is more luminous than a cooler one of the same radius.
  • A bigger star is more luminous than a smaller one of the same temperature.

A cool (red) giant star is more luminous than the Sun because, even though it is cooler, it is much larger than the Sun.
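As a quick numerical check of questions 1 and 2 with the numbers given above (a sketch, not a model answer):

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def luminosity(radius_m, temp_k):
    """Black-body luminosity L = 4 * pi * R^2 * sigma * T^4."""
    return 4 * math.pi * radius_m**2 * SIGMA * temp_k**4

L_sun = luminosity(7e8, 6000)            # question 1: ~4.5e26 W
L_sirius = luminosity(2.22e9, 12000)     # question 2
print(f"L_sun ~ {L_sun:.1e} W")
print(f"Sirius ~ {L_sirius / L_sun:.0f} solar luminosities")
```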

Finally, the idea of a Standard Candle is an important one in astrophysics. Certain classes of objects such as supernovae and Cepheid variable stars have properties whereby their luminosities can be determined from other, distance-independent measurements. For example, the period of a Cepheid variable star depends on its mean absolute magnitude; the more luminous the star, the longer the period.

If we can measure the energy flux, or brightness, of such an object and we know its luminosity from its standard-candle behaviour, we can then figure out how far away it is.

 

 

Resolvance of a Diffraction Grating

Illuminating a diffraction grating with monochromatic light from a He/Ne laser gives the typical pattern shown in the photograph, visible out to m=3 on both sides. The spots are equally spaced, and we notice that the m=2 spot is hidden under the first single-slit diffraction minimum – a “missing order”.

Screen Shot

The geometry is identical to that for a double slit, d being the distance between the centre of one slit and the next. For a bright maximum: Screen Shot

Screen Shot 1

Unlike two-slit interference, only at very particular angles do the contributions from each slit add constructively. Everywhere else, the contribution from one slit has a partner somewhere else down the grating which cancels its contribution out, hence the very bright spots and a lot of empty space.

You are strongly encouraged to go to the Wolfram Demonstrations Project, download the CDF player and experiment with this demonstration. 1, 2 or many slits -the choice is yours. With 15 slits the pattern is almost indistinguishable from a diffraction grating – screenshot below – the single slit diffraction envelope is clearly shown. Light intensity (y-axis) is proportional to amplitude squared.

Screen Shot

A flame test for sodium displays a very bright yellow emission. This emission is due to the sodium D-lines – two lines very close together.

Screen Shot 2.png

The diagram shows the absorption spectrum of the Sun recorded by Fraunhofer, who labelled the lines. The sodium doublet is seen at wavelengths of about 589.0 nm and 589.6 nm.

How could these be resolved using a diffraction grating? We recall that a diffraction grating gives sharp, clear orders.

More accurately, the D lines have wavelengths λ₁ = 589.592 nm and λ₂ = 588.995 nm

We can find the resolvance or the resolving power required for the doublet to be resolved.

Screen Shot 3

For N lines of the diffraction grating, we can write (without derivation) for the mth order:

Screen Shot 4

So, in this case, for a required resolvance of about 1000, viewing the second order would need N=500 grating lines to be illuminated – even the coarsest of gratings manages this easily – a grating with 1800 lines per mm is quite common, if rather expensive. The larger N the better the resolution. If third, fourth or greater orders are visible, a coarser hence cheaper grating will do.
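The numbers above can be checked in a couple of lines (resolvance R = λ/Δλ, and R = mN for the mth order):

```python
lam1, lam2 = 589.592e-9, 588.995e-9          # the sodium D lines, m
R = ((lam1 + lam2) / 2) / (lam1 - lam2)      # required resolvance, ~1000
N = R / 2                                    # lines needed when viewing the 2nd order (m = 2)
print(f"R ~ {R:.0f}, so N ~ {N:.0f} illuminated lines")
```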

Black bodies, Wien, Boltzmann – very briefly

Screen Shot 9.png A black body is an idealised body which absorbs all radiation incident upon it and is also a perfect emitter. For example, this piece of iron, which glows orange-red when heated, is an approximate black body. The colour tells us the temperature of the metal. If we continue heating the metal, eventually it will glow orange, then yellow, then white. By then, the metal would have melted and boiled off. Theoretically, if we kept on heating it, it’d glow blue-white, eventually emitting UV and even X-rays. Similarly, a star is an approximate black body radiator. For hotter stars, the peak of the emitted spectrum shifts to shorter wavelengths, as shown by the graph.

The black body radiation curves for different temperatures peak at a wavelength inversely proportional to the temperature.

The plot is valid for determining the temperature of any object which is considerably hotter than its surroundings. Wien’s Displacement Law relating wavelength to temperature is shown on the diagram and the displacement constant shown.

Screen Shot 6.png

Our Sun has a surface temperature of about 5300 K – the intensity peak is at a wavelength of about 550 nm.

The thermal energy radiated by a hot body per unit area per second  (power/area) is proportional to the fourth power of temperature – Stefan-Boltzmann’s Law

Screen Shot 5.png So, for our Sun, at a temperature of 5300 K, every square metre of the surface radiates almost 45 MW of power. An object which absorbs all of the energy which falls on it is an ideal absorber or black body. For such a body e = 1, where e is the emissivity. Most emitters aren’t black bodies, however, so we can amend Stefan-Boltzmann thus: Screen Shot 8.png where e is a number between 0 and 1. e is zero for a shiny mirror (absorption = 0) and 1 for a black body.
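Both numbers quoted for the Sun (the ~550 nm peak and the ~45 MW per square metre) follow directly from Wien’s law and the Stefan-Boltzmann law; a quick sketch:

```python
WIEN_B = 2.898e-3     # Wien displacement constant, m K
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
T = 5300              # surface temperature used in the text, K

peak_nm = WIEN_B / T * 1e9        # ~547 nm
power_per_m2 = SIGMA * T**4       # ~4.5e7 W per square metre
print(f"peak ~ {peak_nm:.0f} nm, radiated power ~ {power_per_m2/1e6:.0f} MW per m^2")
```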

By contrast, albedo (Latin for ‘whiteness’) is the fraction or percentage of incident solar energy reflected from the Earth back into space, and is a measure of how reflective the Earth’s surface is. Ice, especially with snow on top of it, has a high albedo – up to 90%: most sunlight hitting the surface bounces back into space, whereas the albedo of a summer forest is only about 0.1 or 10%. Recently, a huge chunk of ice broke off the Larsen C ice shelf in Antarctica, reducing its size by 12%. Ultimately, it will either melt or break up. Given that sea water has an albedo of only 6%, the reader might like to speculate about the effect this might have on global warming.

Rayleigh’s Criterion.

Here’s an exercise for a physics class. At eye level, make two tiny dots on a whiteboard as close together as possible. Blue, green or red dots work well – try to make them both small and the same size and separation. Then walk backwards looking at the dots with one eye. Eventually the two dots cannot be resolved as separate.

When the light from either one of the dots reaches our pupil, it will be diffracted through a circular aperture and a diffraction pattern is formed on our retina. When light from both dots reaches our eye, the diffraction patterns overlap. BTW, the red one blurs closer than the green (or blue) one. Any idea why?

As a reminder, you will have seen single slit diffraction with a laser, the light passing through a very narrow slit and displayed on a distant screen. The angle in the diagram below is exaggerated for clarity. Notice the central bright maximum is twice as wide as the other secondary maxima on either side of it.

Screen Shot.png

Each tiny element down the length of the slit ( width a) behaves like a point source which can be thought of as producing a circular ripple, like on a pond. These superpose at the screen. When the path difference between contributions at the top and bottom of the slit is one wavelength, (m=1) each contribution has a partner halfway down the slit which has a path difference of half a wavelength. So, every point source has a partner exactly out of phase. At the screen, all these contributions superpose and we get a dark first minimum. So, we see the familiar pattern of a wide central bright maximum and minima on each side, fainter maxima, minima and so on.

Just while we’re here – as a is decreased, the pattern smears out (y increases). In other words, a narrower slit means a broader diffraction pattern.

If we decrease the wavelength (use blue light), y decreases.

Screen Shot.png Lord Rayleigh (who told us why the sky is blue and discovered argon) gave us the accepted standard for the measurement of angular resolution. Rayleigh’s criterion is the generally accepted criterion for the minimum resolvable detail – the imaging process is said to be diffraction-limited when the first diffraction minimum of the image of one source point coincides with the maximum of another. This is the definition an examiner might want to see. This image of two circular apertures shows what it means; the middle picture shows two images which are JUST resolved.

Screen Shot 1.png

More rigorously, see the middle diagram below. The first minimum of one diffraction pattern lies exactly underneath the central maximum of the other. Screen Shot 1.png

In exams, they sometimes ask you to either draw this or calculate it. Clearly, it’s wavelength-dependent and also dependent on the width of the slit or aperture diameter (a).

Calculation:Screen Shot 8.png

As an example, how far away from two point sources of green paint of wavelength 400nm separated by a distance of 2 mm would you have to stand so they could no longer be resolved as separate?

Solution:

For a circular aperture (our own pupil), we have to invoke the factor 1.22

Screen Shot 7.png

If the paint were blue we could walk further away and still resolve them, theoretically.

NB, in reality – this is an upper limit for people with perfect vision – most people can’t do as well as this. Most people would only be able to resolve the dots as separate at about 4m, which makes this a good little exercise for a class.
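A sketch of the diffraction-limited calculation for the two dots, using the 400 nm quoted in the example and an assumed pupil diameter of about 3 mm (the pupil size is my assumption, not a figure from the text):

```python
lam = 400e-9        # wavelength from the example, m
a = 3e-3            # assumed pupil diameter, m
s = 2e-3            # dot separation, m

theta_min = 1.22 * lam / a     # Rayleigh criterion for a circular aperture, rad
d_max = s / theta_min          # furthest distance at which the dots are resolvable
print(f"theta_min = {theta_min:.2e} rad, d_max ~ {d_max:.0f} m")   # ~12 m
```

The roughly 12 m this gives is the theoretical upper limit; as noted above, real eyes manage more like 4 m.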

 

 

Newton’s Laws of Motion

Screen Shot 3.png
Fun fact. Newton laughed only once in his life, when somebody asked him what was the point of studying Euclid.

FIRST: ” A body continues in a state of rest or motion at constant speed in a straight line unless acted upon by an unbalanced external force.”

Screen Shot.png

SECOND: “The applied force is equal to the rate of change of momentum of the body.” A rather more modern interpretation is here. If cliff-diving appeals to you, watch this video. As long as you don’t scare easily… Screen Shot 1.png

The conveyor belt problem: if we are to keep a conveyor belt moving at a steady speed – for example in a coal mine, where mass is being added to it all the time – we require an extra force to be applied to the conveyor belt, as sketched below.
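A minimal sketch of the idea, with made-up numbers: at constant belt speed v, the extra force needed equals the rate of change of momentum of the coal landing on the belt, F = v·(dm/dt).

```python
v = 2.0          # belt speed, m/s (hypothetical)
dm_dt = 50.0     # rate at which coal lands on the belt, kg/s (hypothetical)

F = v * dm_dt    # extra force needed just to keep the belt moving steadily, N
print(f"F = {F:.0f} N")
```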

THIRD: “For every action, there is an equal and opposite reaction”  

Think about a jet propulsion system. Thrust is a mechanical force which is generated through the reaction of accelerating a mass of gas, as explained by Newton 3. A gas or working fluid is accelerated to the rear and the engine and aircraft are accelerated in the opposite direction.

The force on the working fluid is equal and opposite to the force on the engine and aircraft.

Screen Shot 2.png

Look here for a very easy walkthrough of all of Newton’s Laws. This is a particularly good treatment so you can work through the videos yourselves.

Doppler Effect

Imagine a Formula 1 car approaching the stands at 60m/s. The frequency of sound made by the engine as heard by a stationary observer in the stands is higher than the actual frequency as heard by the driver. The sound is squashed up – or better, the apparent wavelength is decreased and the apparent frequency increased.

car approaching observer
car receding from observer

 

As the car recedes from the stands, exactly the reverse happens: the observer waves goodbye to the red line. Think of EEEEYOWWWW as the car approaches then recedes.

For a stationary observer and a moving source, we can write:

These will mostly do – but IB requires us to use these as well. A quick calculation shows how the first equation works. Let the car be moving towards us in the stands at a speed uₛ of 60 m/s, emitting a frequency f of 800 Hz.

Speed of sound in air is 340m/s. We can find the frequency f’ as heard by the stationary observer. Common sense tells us whether we add or subtract the velocities – in this case, we subtract and hear a higher frequency as it approaches us. (the EEEE bit)

 

As it recedes, we add, thus: (the YOWWW bit)
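A sketch of both numbers, using the standard moving-source formula f′ = f v/(v ∓ uₛ):

```python
v, f, u_s = 340.0, 800.0, 60.0      # speed of sound, source frequency, car speed (all as above)

f_approach = f * v / (v - u_s)      # ~971 Hz (the EEEE bit)
f_recede = f * v / (v + u_s)        # 680 Hz (the YOWWW bit)
print(f"approaching: {f_approach:.0f} Hz, receding: {f_recede:.0f} Hz")
```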

Police speed detectors bounce microwave radiation (about 10GHz) off a moving vehicle and detect the reflected waves. Because the car is moving towards the police observer, these waves are shifted in frequency by the Doppler effect and the difference in frequency between the transmitted and reflected waves provides a measure of the vehicle’s speed. Of course it works just as well for recession speeds as well.

Two Doppler shifts because of the reflection from a moving target. c is of course the speed of light
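A rough sketch of the size of that two-way shift, for a hypothetical car speed of 30 m/s (about 108 km/h) and the ~10 GHz quoted above; the approximation Δf ≈ 2uf/c holds because u ≪ c.

```python
c = 3.0e8        # speed of light, m/s
f = 10.0e9       # radar frequency, Hz
u = 30.0         # car speed, m/s (hypothetical)

delta_f = 2 * u * f / c      # two-way Doppler shift, Hz
print(f"frequency shift ~ {delta_f:.0f} Hz")   # ~2000 Hz
```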

By observing distant galaxies, Edwin Hubble concluded that distance and recession speed were proportional – so galaxies further away are receding faster than closer galaxies. We know this because the atomic fingerprint or spectrum of atomic hydrogen or helium is shifted to the red (long wavelength) end of the visible spectrum. The degree of redshift can be used to find out how far away a galaxy is.

This absorption spectrum shot (idealised) shows what the spectrum of atomic hydrogen might look like from several distant objects like galaxies. The further away, the greater the redshift. Recession speeds of up to 0.95c have been inferred – the light having taken almost the lifetime of the Universe to reach us.

 

 

Finally, a medical use. Doppler blood flow is a technique whereby ultrasound waves (f typically a few MHz) emitted from a piezoelectric transducer (transmitter/receiver) are reflected off red blood cells in an artery or vein as they move towards the stationary detector. The more occluded or blocked the artery is (think about a fluid in a pipe), the faster the cells are moving. It can also be used to find blood clots in deep veins – DVT, deep vein thrombosis – which can be fatal.

The detector and the moving cells are at an angle hence the cosine term and, like the police car, the factor 2 accounts for the reflection from a moving source.

 

Olbers’ Paradox.

The comedian George Carlin used to do a sketch where he was a kind of hippy weather man. “And the weather tonight is … dark, man.”

But he didn’t ask the question, why is it dark? Once you get past the flippant answer “duh, because it’s night-time”, you see why it is an interesting question. If there are an infinite number of stars, shouldn’t the sky be bright?

Since the sky isn’t very bright at night – a lot of dark plus a few pinpoints of light – this is called Olbers’ Paradox. Put simply, if the universe is infinite then wherever you look you should see so many stars that the night would be brighter than the day.

Johannes Kepler considered this question but he argued that the universe must be finite. Otherwise the total flux from all the stars would make the night sky “as luminous as the sun.”

Suppose we gaze out in any direction from Earth, imagining a thin sphere of radius R around us. Unlike Kepler, Newton assumed a uniform, infinite and static (non-expanding) universe. In that model the number of stars in the shell is proportional to R², while the intensity of radiation reaching Earth from each star is proportional to 1/R², so every shell contributes the same amount of light. Such shells stretch to infinity, so the sky could never be dark.

Of course, now we know that the Universe is expanding – it has a beginning – and stars and galaxies aren’t tastefully arranged in neat little spheres.

Lots of bits are dark and the combined effect is to make the night sky dark.

Furthermore, there’s more ‘dark’ than ‘light’, and the expansion of the Universe probably isn’t uniform. Best estimates suggest that dark energy (the stuff that accelerates the expansion of the Universe) makes up roughly 68% of the total mass–energy content of the cosmos. Dark matter makes up another 27%, leaving the “normal” or baryonic matter that we are familiar with to make up less than 5%.

 

Very Basic Thermodynamics

Screen Shot 9.png Thermodynamics is the study of energy. (IB Core: Section 3.2, Option B.2 part)

A MOLE is an amount of stuff: ~6×10²³ particles’ worth. (SI unit)

This number is the Avogadro constant, N_A (mol⁻¹) – the number of constituent particles in 1 mole of substance. In 12 g of carbon-12 or 18 g of water, there are 6×10²³ carbon atoms or water molecules respectively – one mole. One mole of electrons contains 6×10²³ electrons, and so on.

We should remember that for a fixed mass of an ideal gas, the ideal gas equation (below) applies. The equation is considered most accurate for monatomic gases at high temperatures and low pressures. Check the link so you understand the assumptions of the kinetic theory of gases.

Screen Shot.png

so k=R/NA

Temperature is a measure of the degree of hotness of a body, as compared to a fixed scale. Normally we calculate in kelvins (K) – a base unit – where a difference of 1 K corresponds to a difference of 1 °C. For now, from the Ideal Gas Laws: Screen Shot 10.png

Energy exists in many forms, such as heat, light, chemical energy, and electrical energy. Energy is the ability to bring about change or to do work.

Laws of Thermodynamics

Zeroth Law

If A is in thermal equilibrium with B and B is in thermal equilibrium with C then A and C are also in thermal equilibrium.

All thermal equilibrium means is that the rate of transfer of heat from A to B is the same as that from B to A. If B is hotter than A, B transfers heat more rapidly to A than A does to B. But – the transfer is still two-way. Temperature difference between two bodies determines the net flow rate of energy between them.

First Law

The First Law of Thermodynamics, or Law of Conservation of Energy, states that the total energy in the universe is always conserved; it cannot be created or destroyed, only converted from one form into another. For a fixed mass of an ideal gas, heat ΔQ gained from the surroundings can either be used by the gas to do work ΔW on its surroundings or go into increasing its internal energy ΔU. The internal energy U is a large-scale concept, and for an ideal gas it is a function of temperature. We cannot talk about the “thermal energy” of something – it has no meaning. Instead we refer to the internal energy of a body, which is the total potential energy (arising from intermolecular forces) plus the random kinetic energy (translational and rotational) of the molecules in a sample of material; clearly we can only measure changes ΔU, not U directly. It is easier to see with symbols. Screen Shot 1.png
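For reference (the sign convention assumed here is the usual IB one, with ΔW the work done by the gas on its surroundings):

$$
\Delta Q = \Delta U + \Delta W
$$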

Let’s imagine a frictionless piston containing a fixed mass of an ideal gas (low density and pressure, high temperature). Now, let’s make some changes to it. Isothermal changes are changes without change in temperature, thus the internal energy of the gas is unchanged. So, all the heat supplied = all the work done by the gas. Take note of which processes are slow (so heat can get in or out) and which are fast (no heat in or out).

Screen Shot 5.png

Screen Shot 2.png

Adiabatic Process. No heat in or out. Any work done is done fast on or by the gas and is reflected in a change in internal energy. Pure adiabatics are quite rare.

Screen Shot 3.png The nearest thing to a fast adiabatic process is a bursting tyre. The rubber of the tyre is an insulator, so no heat enters or leaves the gas and the work done by the gas on escaping through the hole is at the expense of a fall in temperature of the remaining gas inside the tyre. On the graph below, this would be the thick green line from high T to low T.

Screen Shot 7.png

Screen Shot 6.png

NB: the AREA UNDER the pV graph is the work done on or by the gas during the change. (Fave exam question where you have to count squares.)

You might now ask yourselves what the pV graph might look like if the volume was NOT allowed to change – the “sticky piston” problem called an ISOVOLUMETRIC change

Screen Shot 4.png

Screen Shot 8.png

and, yes, it’d be a vertical line at some constant volume between two isothermals.

An ISOBARIC (constant pressure) line would be a horizontal line between two isothermals. It might be helpful to sketch both on the graph above.

A CYCLE of events means that we make changes to the gas to get back to our starting point. The (shaded) area swept out by these changes is a measure of the work done during one cycle. T1 is greater than T2, clearly. Here’s an idealised diagram. In reality heat enters (BC) and leaves (DA) the gas, so BC and DA aren’t perfect adiabatics in real systems. Screen Shot.png

This looks complicated, but it’s not, really. Please make sure you look at the problem below from IB 2008. As you can see, it consists of two isobaric and two isovolumetric events. screen-shot

Now calculate the overall energy transferred in one cycle and explain whether, after one complete cycle, the internal energy of the gas goes up, down or stays the same.

One-Way Processes.

The Second Law of Thermodynamics states that the entropy, or measure of disorder, of an isolated system always increases, because isolated systems spontaneously evolve towards thermodynamic equilibrium – the state of maximum entropy (or minimum potential energy – a ball spontaneously rolls down a hill and not vice-versa). A cup of tea tends to cool as energy is dissipated to the surroundings, and not vice-versa. An increase in entropy is the Universe doing the most likely thing – the probable is what usually happens. (When you blow up a building it tends not to spontaneously reassemble as if the film ran backwards.)

A mechanical watch will run until the potential energy in the spring is converted, and not again until energy is reapplied to the spring to rewind it. A car that has run out of gas will not run again until you refuel the car. In the process of energy transfer, some energy will dissipate as heat. Entropy always increases and is a measure of the disorder of the Universe – put another way, the more energy is transferred from one body to another, the greater the number of ways in which that energy can be dissipated. For example, a waterfall turns a paddle wheel which drives a turbine which turns an alternator which produces electricity, dissipating energy into many different forms along the way.

IB: Engineering Science Option B: Torque, Angular Momentum and Moment of Inertia (amended)

THIS POST  WAS ORIGINALLY WRITTEN FOR MY OWN IB CLASS. THERE ARE HANDOUTS AND PROBLEMS HERE THAT WE DID IN CLASS BUT NEWCOMERS SHOULD FIND THEM HELPFUL. GO AHEAD AND TRY.

It’s useful to bear in mind that if you can do SUVAT problems, you should have no trouble with their circular equivalents.

There are 4 handouts in total to download; please make sure you work through them carefully. Any difficulty, get in touch.

The main arguments here are the idea of rotational motion and torque as force x distance from pivot x sin(angle between them)

Moments of inertia need not be calculated for this course – if necessary, you’ll be given them. However, here’s a little problem to think about. You have 2 balls of identical diameter and weight. One is solid, one is hollow. You can’t tell which is which just by knocking on them. Devise a simple way of finding out which one is the solid one (hint: think about the balls rolling down an inclined plane from the same height. Now, compare the moments of inertia of the two balls. The rest is conservation of energy so quite easy – there’s a short sketch just below.) If you can’t write out the solution, message me for help.
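Here is a short sketch of the energy-conservation argument, assuming each ball rolls without slipping through the same drop in height h (the 1 m is arbitrary):

```python
import math

g, h = 9.81, 1.0     # gravitational field strength and assumed drop height

def speed_at_bottom(k):
    """k = I/(m r^2); energy: m g h = 1/2 m v^2 + 1/2 I w^2 with v = w r."""
    return math.sqrt(2 * g * h / (1 + k))

v_solid = speed_at_bottom(2 / 5)     # solid sphere:  I = (2/5) m r^2
v_hollow = speed_at_bottom(2 / 3)    # hollow sphere: I = (2/3) m r^2
print(f"solid: {v_solid:.2f} m/s, hollow: {v_hollow:.2f} m/s")
```

The solid ball is moving faster at every point on the slope, so the one that reaches the bottom first is the solid one.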

This little animation is quite fun to glance at – the angular displacement is, however, in degrees, not radians, so be careful.  Here’s a screenshot, showing displacement – time graphs for the ant and the ladybug – the constant time period implies constant angular velocity. Notice too that the ladybug leads the ant by 90 degrees

Screen Shot Notice, angular velocity is constant, but the linear speeds of the ant and the ladybug are not the same. The ladybug, being closer to the axis of rotation, has a smaller linear speed, because: screen-shot-2016-12-07-at-12-20-20 (Notice the vertical displacements of the bugs execute SHM – AHLs will study this later)

  1. A review of circular-motion-1
  2. Key Ideas, torque and couple
  3. Basic concepts about moment-of-inertia
  4. From the Specimen Paper specimen-question-for-option-b

Moment of inertia I is defined as the ratio of the angular momentum L of a system to its angular velocity ω around a principal axis, just as inertial mass is the ratio of linear momentum to speed – its resistance to acceleration, in other words: screen-shot-2016-11-29-at-17-57-19 Angular momentum in a closed system is, just like its linear counterpart, conserved. The ice skater rotates faster when the arms are pulled in to the sides because the moment of inertia is reduced and thus the angular velocity increases.

Look at the Wolfram demonstration Screen Shot

You will need to download the Wolfram CDF player in order to run the demo.

We might also notice that, for a body starting to rotate from rest:screen-shot-2017-01-14-at-7-45-20-pm

Practically, we find I by imagining a flat sheet of any shape like this having an infinite number of mass elements m at their respective distances r from the pivot, each contributing m r² to the total about the axis of rotation.

We have to add all these contributions up, normally requiring integration. But, to keep it simple, we can write:

screen-shot-2016-11-30-at-18-24-25

which is, as the handout shows, the basis for finding I for lots of other shapes and axes of rotation. Remember, you’ll be given I for a particular shape as required.

screen-shot-2016-11-29-at-17-46-14

Bear in mind that when we do problems, the total energy of the system is the sum of the rotational and linear parts – important when we think about an object rolling (instead of sliding) down a hill, for example. Take a look at this solid-cylinder-rolling-down-an-inclined-plane, which runs through a few basic ideas plus some possible lab work.

Finally, for now, a use for all that stored energy.

screen-shot-2016-11-16-at-15-49-54

The great flywheel on Richard Trevithick’s 1802 locomotive, used to level out the power supplied by a single cylinder. Rotational inertia kept the wheel turning.

Energy Transformations (2) EPE to KE: The Battle of Agincourt

A bow stores elastic potential energy in the flexible wood of the bow and the stretched string.

Screen Shot 2016-06-27 at 14.11.32

The English archers won the Battle of Agincourt in 1415 because the range of the better-made and longer English bows was greater than that of the bows used by their French enemies. They stored and released energy more efficiently.

Screen Shot 2016-06-28 at 18.01.11 They could probably count on a release speed of 100 feet per second ≈ 3000 cm/s = 30 m/s, and they knew that a 45 degree angle gave maximum range.

 

So 30 cos 45° = horizontal speed = 21.2 m/s. This stays the same because a = 0 in the horizontal plane.

In the vertical plane, we have to use the equations of motion (constant a). With 30 sin 45° as u, and acceleration = g (9.81 m s⁻²), this yields t = 2.16 s going up, PLUS the same coming down, so the arrow’s total time in the air = 4.32 s.

So, range = 21.2 × 4.32 = 91.6 m… and the vertical height reached is about 22.9 m, as long as air resistance is neglected.

Now, let’s take this apart…

At the top of its flight, what kind of energy does the arrow have? Estimate arrow mass at 0.1kg (is this reasonable?)

A: Using kinetic energy in the horizontal direction (about 22.5 J) plus GPE at maximum height (about 22.5 J)

Estimated energy: about 45 J – which checks out, since the arrow left the bow with kinetic energy ½ × 0.1 × 30² = 45 J.

Where does this energy come from? Think about the area under a F/x graph for a spring or similar where Hooke’s Law is obeyed… Screen Shot 2016-06-27 at 13.12.08.png

A: the energy stored in the stretched bowstring. Let’s assume it obeys Hooke’s Law.

Using our calculations so far, we can now estimate the maximum pulling force the archer would have to use. Suppose the bow is drawn a distance of 0.5m

Screen Shot 2016-06-27 at 13.28.27.png

With quite a lot of ifs and buts – you can see where the approximations are – and remembering that for Hooke’s law the stored energy is the area under the force–extension graph, ½ F_max x, the archer would have to pull with a maximum force of about 2 × 45/0.5 = 180 N. This would be like holding a mass of about 18 kg just off the ground. With one hand…
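A short sketch pulling the whole estimate together (same assumptions as above: 30 m/s at 45°, a 0.1 kg arrow, a 0.5 m draw obeying Hooke’s law, and no air resistance):

```python
import math

m, v, draw, g = 0.1, 30.0, 0.5, 9.81
vx = vy = v * math.cos(math.radians(45))   # 21.2 m/s each

t_flight = 2 * vy / g                      # ~4.3 s in the air
rng = vx * t_flight                        # ~92 m range
h_max = vy**2 / (2 * g)                    # ~22.9 m maximum height

E_top = 0.5 * m * vx**2 + m * g * h_max    # ~45 J, equal to the launch kinetic energy
F_max = 2 * E_top / draw                   # Hooke's law: E = 1/2 * F_max * draw
print(f"range {rng:.0f} m, height {h_max:.1f} m, energy {E_top:.0f} J, F_max {F_max:.0f} N")
```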

 

Heisenberg’s Uncertainty Principle. Confining a wave/particle in a box.

In subatomic terms, because of wave–particle duality, certain pairs of quantities – such as where a particle is (its position x) and where it is going (its momentum) – cannot both be precisely known at the same time. The more precisely we know one, the less precisely we can know the other. Putting this another way, a particle with mass (hence momentum) also has a wavelength, given by the de Broglie expression

Screen Shot 2016-06-03 at 09.29.37

When waves of slightly different wavelengths interfere, they form a wave packet of finite size whose length has to fit into the confining box, which can happen in a variety of ways… Here, we’re only really concerned with the smallest “wavefunction”, shown in red at the bottom. The diameter of the box is approximately half a wavelength. The rest are there just to show what’s possible. The squared amplitude of a wavefunction represents the probability of finding the particle in a particular region of space.

Screen Shot 2016-06-03 at 09.38.18

Confining our wave in a box, the uncertainty in where it is and the uncertainty in its momentum are related like this:

Screen Shot 2016-06-02 at 16.39.04

This makes sense in the context of a problem. Imagine an alpha particle confined within a nucleus of gold. Given the alpha particle has a wavelength confined by a ‘box’ the size of the nucleus, whose diameter might be:

Screen Shot 2016-06-02 at 16.48.20

Suppose we want to find the energy of the confined alpha particle. We use:

Screen Shot 2016-06-02 at 16.58.24

Screen Shot 2016-06-02 at 17.00.02

The energy can be found using a different expression:

Screen Shot 2016-06-02 at 17.04.44

We can find the mass of an alpha particle (2 protons and 2 neutrons). If we plug in the numbers, we get about 4.3 × 10⁻¹⁵ J, or 27 keV – comfortably less than the MeV-scale energies of alpha particles actually observed, so an alpha can quite happily be confined inside a nucleus.

You might try the same calculation to find out the energy an electron would have to have to confine it inside the nucleus.

This is why we don’t get electrons inside nuclei…
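A sketch of both confinement estimates, taking Δp ≈ ħ/Δx with a gold-nucleus diameter of about 1.4 × 10⁻¹⁴ m (my assumed value). The electron has to be treated relativistically, since its Δp·c comes out far above its rest energy.

```python
hbar = 1.05e-34      # J s
d = 1.4e-14          # assumed nuclear diameter, m (~14 fm)
dp = hbar / d        # momentum uncertainty, kg m/s

m_alpha = 6.64e-27
E_alpha = dp**2 / (2 * m_alpha)     # non-relativistic is fine for the heavy alpha
E_electron = dp * 3.0e8             # relativistic estimate E ~ p c for the light electron

eV = 1.6e-19
print(f"alpha: ~{E_alpha / eV / 1e3:.0f} keV")        # ~26 keV: easily confined
print(f"electron: ~{E_electron / eV / 1e6:.0f} MeV")  # ~14 MeV: far above observed beta energies
```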

 

 

 

 

 

 

 

Errors and Uncertainties (IB)

Physics and Chemistry for IG and A level

There will always be something on the exam about this. Here’s a brief introduction

Some people get really wound up about errors. Don’t. They are easy to deal with if you just follow a few simple rules. An error isn’t a mistake, necessarily, just an uncertainty in a reading. Every measurement an experimenter makes is uncertain to some degree. The uncertainties are of two kinds: (1) random errors, or (2) systematic errors. For example, in measuring the time required for a weight to fall to the floor, a random error might occur when an experimenter attempts to push a button that starts a timer simultaneously with the release of the weight and waits a bit too long before pressing the button. Also, some external event, like a gust of wind on a falling feather might yield an anomalous result [For ‘anomalous’, read ‘weird’].  A transcription error is an example of…

View original post 251 more words

Terminal Velocity 2015 Version

Updated

Physics and Chemistry for IG and A level

267 mph, 0-60 in 2.4 secs. Aluminum, Narrow Angle 8 Litre W16 Engine with 1200 hp, $2,400,000

Tom Cruise has one of these. It’s a Bugatti Veyron Super Sport, the fastest production car in the world in 2012. It could go even quicker, but the speed has been limited because the tyres wouldn’t be able to handle it if it went faster.

But, new for 2015, is this… Screen Shot 2015-03-18 at 10.40.11 It’s a Hennessey Venom F5, named after the most destructive tornado rating on the Fujita scale. Official figures aren’t even out yet, but it is likely to be quicker than its GT cousin’s 435+ km/h, with an acceleration of 0–300 km/h in under 14 seconds. Only thirty in the world and a price tag of over $1.2 million.

Q:Why do cars have a top speed?

A: When the forward thrust of the engine is balanced by the resistive force of air pushing…

View original post 352 more words

How big is the earth?

CIRC

It’s amazing how coincidence and a little critical thinking go a long way. Homer the poet believed the Earth to be a flat disk, so if you walk far enough, you’ll fall off the end. Pythagoras believed it was a round ball, but how big was it? Eratosthenes, a Greek scholar living in Egypt 2,300 years ago, had heard that on the longest day of summer the midday sun shone right to the bottom of a well in the town of Syene, which was near what is now the Aswan Dam. At the same time, he found out that the sun was not directly overhead in Alexandria, to the north: the shadow cast there wasn’t vertical; instead, it made an angle with the vertical equal to 1/50th of a circle (7° 12′).

He ‘knew’ that: 1) on the day of the summer solstice, the midday sun was directly over the Tropic of Cancer; 2) Syene was on this tropic; 3) Alexandria and Syene lay on a direct north-south line; and 4) the sun was a relatively long way away, so the rays of the sun didn’t spread out like the beam of a torch but were parallel instead.

According to legend, he had someone walk from Alexandria to Syene to measure the distance – no afternoon stroll, for sure – and it turned out to be 5,000 stadia or, at 185 m per stadion, about 925 km. I can’t imagine anybody doing a repeat measurement for accuracy, since it would have taken over a month to make the trip. How long was a stadion? Various theories exist, but a good approximation might be that one Roman mile is 8 stadia, or ‘stades’, or 5,000 ‘Roman feet’.

From these observations he concluded that, since the angular deviation of the sun from the vertical at Alexandria was also the angle of the subtended arc, the linear distance between Alexandria and Syene was 1/50 of the circumference of the Earth, which thus must be 50 × 5,000 = 250,000 stadia – somewhere in the region of 40,000–46,000 km, depending on exactly how long a stadion was. The diagram isn’t precise – the Tropic isn’t this far north – but you get the idea. The Roman mile is shorter than its present-day counterpart, in case anybody noticed. At the Equator, the Earth is 24,902 modern miles round. More or less.

Just for completeness, here’s a fun little exercise on finding the circumference using data from the space station.

Now we know how big it is, we can use Newton’s Law of Gravitation to find its mass. Screen Shot 2015-01-04 at 4.32.33 PM copy Imagine a 1 kg mass anywhere on the surface, a distance r (the radius, found from the circumference) from the centre. G and g are both known, so the Earth’s mass drops out nicely. Having found its mass, we can use density = mass/volume to find the average density of the material of the Earth.
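A minimal sketch of that chain of reasoning, starting from a circumference of roughly 40,000 km:

```python
import math

C = 4.0e7            # circumference, m
g, G = 9.81, 6.67e-11

r = C / (2 * math.pi)                  # radius, ~6.4e6 m
M = g * r**2 / G                       # from g = G M / r^2: ~6e24 kg
rho = M / ((4 / 3) * math.pi * r**3)   # mean density, ~5,500 kg/m^3
print(f"r = {r:.2e} m, M = {M:.1e} kg, density = {rho:.0f} kg per cubic metre")
```

A mean density of around 5,500 kg m⁻³ – roughly twice that of surface rock – is itself a strong hint that the Earth has a dense core.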

IB HL Medical Imaging – More Advanced Newer Techniques

I think it likely that they will ask for comparisons rather than specific detail. MRI is the only one of these not using X-rays, so patient dose is the important factor.

Physics and Chemistry for IG and A level

Image intensifier screens (fig I2.7) allow the production of an image changing in real time, which can be advantageous, particularly when used with high contrast material like Ba which shows the passage of barium sulphate through the gut, for example. It does, however, subject the patient to long exposure times, hence high radiation dose. Since the objective is to minimize this, we look elsewhere for more sophisticated techniques.

Faster computing in the 1970s yielded CAT scanning (Computerised Axial Tomography). ‘Tomos’ is Greek for ‘slice’, and the technique involves imaging a slice through the body by rotating digital detectors and emitters, before moving on to the next slice, consequently reducing exposure time. It is to X-rays what digital cameras are to photography. A brain scan takes less than a second to image. (see p705) Resolution (remember Rayleigh) has much improved over the last three decades and kidney stones less than 2mm…

View original post 292 more words

Ionising Radiation. Absorption, Attenuation and HVT

Radioactive material used in medicine can be used for either diagnosis or therapy. In both cases the primary directive is to minimize the patient dose to non-critical regions. This is best achieved by using, where possible, isotopes with short half-lives and fast metabolic or physical elimination. The second directive is to use material that is metabolically indistinguishable from the stable version.

First, a quick look at ionizing radiations and their properties. Any charged massive particle can ionize atoms directly by fundamental interaction through the Coulomb force if it carries sufficient kinetic energy. In living tissue, macromolecular damage can lead to dysfunction – enzymes won’t work if their structure is changed by ionization, for example. Screen Shot 2014-04-14 at 8.22.55 PM

Alpha particles are monoenergetic, massive but slow, stopped by thin paper or a few cm of air, giving up all their energy in a short range: ~10⁵ ion pairs mm⁻¹.

Beta electrons are emitted with a range of energies; they are far lighter than alphas (roughly 7,000 times less massive) and have about 100× less ionizing ability. Useful for imaging because the particles are penetrating enough to escape from the body and be externally detected. Stopped by thin Al, a few cm of tissue or about a metre of air: ~10³ ion pairs mm⁻¹.

Gamma photons – zero rest mass and v = c, attenuated by thick (~10 cm) lead, range several m in air. Only ~1 ion pair in each mm of path.

Intensity falls off exponentially with absorber thickness x (cf. the law of radioactive decay). If the absorber is homogeneous, we can define its absorptive ability by the ‘linear absorption coefficient’ ‘mu’, analogous to the decay constant. The larger the value of ‘mu’, the better the material absorbs radiation. ‘mu’ is related, obviously, to the density of the material, multiplied by a quantity describing how much of a target its atoms present to incoming radiation. We can keep it simple, however, and just say: Screen Shot 2014-04-15 at 12.00.30 PM Screen Shot 2014-04-15 at 12.04.59 PM Attenuation, the ability of a material to absorb and therefore weaken the radiation, is an exponential function, as shown. We can think about attenuation as ‘partial absorption’.

The HVT or ‘half value thickness’ is therefore the value of thickness x such that I drops to I/2, or

Screen Shot 2014-04-14 at 8.57.42 PM HVT is energy dependent, as one might expect – see fig I2.2 – but this is less important than the exponential principle. What’s the HVT of this material? In what units is it measured? Suppose you had an energetic beta source, a detector and several identically thin squares of Al. You might think about how you’d measure HVT for Al. Do you see why the source/detector distance would have to be constant throughout? Screen Shot 2014-04-14 at 9.09.51 PM

You should be able to find the HVT for this material hence its attenuation coefficient.
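A sketch of the kind of calculation involved, with a made-up half-value thickness just to show the exponential behaviour (μ = ln 2 / HVT, and I/I₀ = e^(−μx)):

```python
import math

hvt = 2e-3                    # assumed half-value thickness, m (illustrative only)
mu = math.log(2) / hvt        # linear absorption coefficient, m^-1

for x_mm in (0, 2, 4, 6, 8):
    frac = math.exp(-mu * x_mm * 1e-3)    # fraction of the intensity transmitted
    print(f"{x_mm} mm: I/I0 = {frac:.3f}")
```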

Look at Qs 3 and 4 on p702. These are important – exactly the kind of thing they might ask, so go through them carefully.

 

 

 

IB HL Biomedical Physics – Specific Acoustic Impedance

Acoustic impedance (Z) indicates how much sound pressure is generated by the vibration of the molecules of a particular acoustic medium at a given frequency. This frequency (f) dependence is useful when describing the behaviour of musical wind instruments. Mathematically, the acoustic impedance is the sound pressure p divided by the particle velocity and by the surface area S through which an acoustic wave of frequency f propagates; the specific acoustic impedance z leaves out the area, and works out as the density of the medium multiplied by the speed of sound in it. The practical point is that the specific acoustic impedances of two media must match for maximum transmission from one to the other. If they don’t, there’s too much reflection and not enough transmission. This is one reason why, during prenatal ultrasound scans, a gel is placed on the transceiver: to provide good acoustic coupling as well as lubrication. Here’s the formula

Image

Poor acoustic coupling is a problem between the middle and inner ear because they don’t match acoustically: z for the air in the middle ear and z for the cochlear fluid are very different, hence a lot of the sound doesn’t get transmitted into the cochlear fluid. This is why the middle ear needs to amplify the sound first.
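A rough sketch of why the mismatch matters, using z = ρc and the fraction of incident intensity reflected at a boundary, ((z₂ − z₁)/(z₂ + z₁))². The densities and sound speeds below are round, textbook-style figures, not values from the text.

```python
z_air = 1.2 * 340          # rho * c for air, ~4.1e2 kg m^-2 s^-1
z_fluid = 1000 * 1500      # rho * c for a water-like fluid, ~1.5e6 kg m^-2 s^-1

def reflected_fraction(z1, z2):
    """Fraction of incident intensity reflected at the boundary between z1 and z2."""
    return ((z2 - z1) / (z2 + z1)) ** 2

print(f"air to fluid: {reflected_fraction(z_air, z_fluid):.3f}")     # ~0.999: nearly all reflected
print(f"matched media: {reflected_fraction(z_fluid, z_fluid):.3f}")  # 0: all transmitted
```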

Since density is involved, temperature changes affect z. A higher temperature means a higher speed of sound but a lower density, and because the fall in density outweighs the rise in speed, the higher the temperature the smaller z.

Image

 

 

IB HL Biomedical Physics – the Ear and Hearing

Image

The ear consists of three basic parts – the outer ear, the middle ear, and the inner ear. Each part serves a specific purpose in the task of detecting and interpreting sound. The outer ear collects and channels sound – a longitudinal pressure wave – to the middle ear. Because of the length of the ear canal, it behaves like a resonance pipe open at one end, with an antinode at the open end and a node at the eardrum. It is capable of amplifying sounds with frequencies of approximately 3000 Hz – you should be able to verify this using an ear canal length of a few cm = a quarter wavelength, and a wave speed of about 340 m/s. The eardrum is a flexible membrane like a drum skin, oscillating at the same frequency as the incoming sound. The 3 middle ear bones or ossicles sit in an air-filled cavity and act as levers, amplifying the pressure wave. The ossicles mechanically convert the vibrations of the eardrum into amplified pressure waves in the fluid of the cochlea or inner ear, with a lever arm factor of 1.3. Since the area of the eardrum is about 17 times larger than that of the exit point, the oval window, the sound pressure is concentrated, leading to a pressure gain of at least 22. This enhances the ability to hear very faint sounds where the incoming force on the eardrum is very small. Study the worked example on p692.
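A two-line check of the numbers in that paragraph (the 2.5 cm ear-canal length is an assumed typical value):

```python
v, L = 340.0, 0.025              # speed of sound in air (m/s), assumed ear-canal length (m)

f_resonance = v / (4 * L)        # closed-at-one-end pipe: L = lambda / 4, ~3400 Hz
pressure_gain = 17 * 1.3         # area ratio x ossicle lever factor, ~22
print(f"resonance ~ {f_resonance:.0f} Hz, pressure gain ~ {pressure_gain:.0f}")
```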

The pressure wave at the oval window is then transformed into a compression wave through the fluid in the inner ear which converts this energy into nerve impulses transmitted to the brain.