The Planck units

This is a little bit beyond IB, but worth a read. The image is of the silicon surface of a microchip. There is a popular idea that the Planck units present some kind of fundamental limit on the values of physical quantities. This article answers:

• what the Planck units are and why they are interesting
• which of them can be interpreted as some kind of “limit”, and how.

What are the Planck units and how are they derived?

Physics is all about establishing the relationships between various physical parameters. In the Système Internationale (SI), the “big 5” physical parameters that form the basis in which everything else can be expressed are mass [𝑀], length [𝐿], time [𝑇], electric current [𝐼], and temperature [Θ]. These are called the dimensions of a quantity (note this is an entirely separate use of the word to the 3 dimensions of space and 1 of time).

The SI units of the base quantities are kilograms, metres, seconds, amperes and kelvins, respectively. These units are entirely arbitrary: there is no fundamental reason why we should use metres for [𝐿] and not inches, furlongs or cubits instead. To take an example, speed has dimensions [𝐿][𝑇]⁻¹ and SI units of metres per second.

We know that there exist certain fundamental constants. Among the most important of these are:

• the speed of light 𝑐, which has dimensions of a speed [𝐿][𝑇]⁻¹ and appears across all of physics,
• Newton’s gravitational constant 𝐺, which governs the strength of gravity. By considering either Newton’s gravitational force law or general relativity (GR) one can see it has dimensions of [𝑀]⁻¹[𝐿]³[𝑇]⁻²,
• the reduced Planck’s constant ℏ, whose value governs quantum mechanics and which has dimensions [𝑀][𝐿]²[𝑇]⁻¹.

In 1899 Max Planck observed that by multiplying these constants with one another in various combinations, one could create specific-valued quantities with any combination of the dimensions of length, time and mass. For example, we have:

• the Planck length ℓ_P = √(ℏ𝐺/𝑐³) ≈ 1.6 × 10⁻³⁵ m,
• the Planck time 𝑡_P = √(ℏ𝐺/𝑐⁵) ≈ 5.4 × 10⁻⁴⁴ s,
• the Planck mass 𝑚_P = √(ℏ𝑐/𝐺) ≈ 2.2 × 10⁻⁸ kg.

Note that while some of these seem extremely small, there is nothing remotely “limiting” about others; in fact, their values are astonishingly “human-scale” considering they are derived only from fundamental constants. The Planck mass is quite small by everyday standards but not exceptionally so: it is about the mass of a flea, and you can easily see Planck-mass objects with simple magnifying equipment. We can derive others. The Planck momentum is similar to that of a served tennis ball. The Planck energy is just enough to boil enough water to make a cup of tea for everyone in the Royal Albert Hall.
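The numbers above can be checked directly from the constants. A minimal sketch, using rounded SI values (the variable names are my own):

```python
import math

# Rounded SI values of the fundamental constants
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, kg m^2 s^-1

# Planck units from dimensional combinations of c, G and hbar
l_planck = math.sqrt(hbar * G / c**3)   # length, ~1.6e-35 m
t_planck = math.sqrt(hbar * G / c**5)   # time, ~5.4e-44 s
m_planck = math.sqrt(hbar * c / G)      # mass, ~2.2e-8 kg (roughly flea-sized)
p_planck = m_planck * c                 # momentum, ~6.5 kg m/s (a served tennis ball)
E_planck = m_planck * c**2              # energy, ~2e9 J
```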

By also including Boltzmann’s constant 𝑘 we can extend the Planck system to include temperature and hence the other thermodynamic parameters, and by including the permittivity of free space 𝜀₀ we can include electric current and hence electromagnetism too.

Why are they interesting?

The Planck units are interesting for essentially two reasons: one very pragmatic, the other fundamental.

1. Since our choice of SI units is entirely arbitrary, in the SI these constants all take very ugly values: ℏ, for example, has a value of about 1.05 × 10⁻³⁴ kg·m²·s⁻¹. By replacing the SI units with the Planck units as the basic units of measurement, the fundamental constants 𝑐, 𝐺, ℏ, 𝑘 and 𝜀₀ all take the value of exactly one. This presents a convenient opportunity for physicists to declutter their equations, since we no longer need to keep track of all those wretched constants. In complex computer simulations it also provides a speed-up, by removing the need to multiply every term by the arbitrary SI values of the fundamental constants, which must be specified to high precision. In reality, most physicists use a “hybrid” system of units because they work on systems that are far away from the Planck scale. I’ll explain what that means below, but suffice it to say that particle physicists don’t often care what value 𝐺 takes in their system of units, because gravitational effects are irrelevant to their line of work. So they will use units such that 𝑐 = ℏ = 1, but also such that the energies in their system, which are always far below the Planck energy, take reasonable values. Similarly, physicists simulating black hole mergers disregard quantum effects and so don’t care about the value of ℏ in their unit system: they will use units where 𝑐 = 𝐺 = 1, but where the masses of their black holes are not unimaginably huge multiples of the mass unit.
2. The more interesting reason why Planck units are important is that, in a certain sense, they demarcate the scale where gravitational, quantum, electrodynamic and thermodynamic effects all become comparable. Since all the relevant constants in the laws of physics are simply 1 in the Planck system, if the input values to our equations are similar to the Planck values, then effects from all of these branches of physics are present in equal measure and we cannot neglect anything. For example, if we consider a black hole with the Planck mass, we find that both the radius of its event horizon (governing the scale where gravitational effects become dominant) and its Compton wavelength (which tells us when quantum effects move from negligible to dominant) are about the same: around the Planck length, in fact. Additionally, the mass-energy of this thing would be the Planck energy, the Hawking temperature of its event horizon would be the Planck temperature, and so on. Imbue it with the Planck charge too, and you have nearly all of physics playing together in a single system. Of course, such a black hole (a so-called Planck hole) is both far smaller than the black holes we can observe in the sky, where gravity dominates and quantum effects are not discernible, and far more energetic than the particle collisions we can make at CERN, where the effects of self-gravity are immeasurably small and quantum effects dominate. This is basically because the Planck length is so small. As such, we are very far from ever being able to see a Planck hole, have no idea how gravitational and quantum effects interplay for such an object, and can make no predictions for how it would behave. Experimentally, therefore, we are limited to observing physics in regimes where one of gravity or quantum mechanics is too small to matter, and so have no prospect of figuring out the rules in the general case where both must be considered.
This is why, in the 21st century, fundamental physics, as an empirical science, is stuck in such a deep rut. As we wrest the secrets about dark matter and dark energy from their hiding places, more light (pun intended) might be shed on such matters.

Which Planck units represent “limits”, and how?

The only three Planck units that can in any sense be thought of as some kind of limit are the Planck length, the Planck time, and the Planck speed (which is of course simply the speed of light).

To finally answer the question, here is why the Planck length is considered a limit. Anything localised in at least one direction to something close to the Planck length has, by the Heisenberg uncertainty principle, an energy expectation value that approaches the Planck energy, and hence a gravitational radius approaching its spatial extent. In other words, the more localised it becomes, the closer it comes to being a Planck hole. Now, we know nearly nothing about Planck holes, so at this point physicists start talking out of their backsides to some extent. But from classical general relativity, we know that black holes distort the geometry of spacetime. It therefore doesn’t make much sense to talk about a physical system with substructure below the Planck length, since the sub-Planck-scale variations in energy density would shroud the whole object in an event horizon and leave it causally disconnected from the rest of spacetime.

The Planck time is “limiting” simply because it is the minimum possible time it would take to traverse the shortest meaningful distance, so no physical process can be characterised as having a timescale shorter than the Planck time.

It is important to state that the Planck length and time are not “hard” barriers like the sound barrier. They merely represent the scale where different physical phenomena move from being negligible to being significant. Indeed, one can define many different systems of Planck units, for example by taking the non-reduced Planck’s constant ℎ = 2𝜋ℏ = 1 instead of ℏ = 1; a factor of 2𝜋 doesn’t really matter. They should instead be considered a “here be dragons” warning post: when energy is around this localised, quantum phenomena cannot be considered approximately independent of the spacetime they occupy.

To dispute what some others have said: in the current widely accepted fundamental theories of physics (GR and the Standard Model), spacetime itself is continuous, self-similar at all scales, and not in any way quantised or pixelated. There are theories of quantum gravity that challenge this assumption, notably loop quantum gravity, but these are still speculative and far from achieving consensus. The Planck length is currently considered somehow minimal only because of the rules governing energetic phenomena that live in that spacetime, not because of the spacetime itself.

One final subtlety. There are still certainly occasions in which it makes sense to speak about values of physical parameters on the wrong “side” of the limiting Planck unit. The classic example of this is that if I shoot a laser pointer at the Moon and sweep it across the sky very fast, the point of light can move across the surface of the Moon with a speed not limited by the speed of light (the Planck speed). This is because the point of light is not itself a physical object, cannot carry either energy or information along its path, and is not subject to the laws that constrain the photons themselves.
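A quick numerical sketch of the laser-pointer example; the sweep rate is an assumed round figure:

```python
import math

d_moon = 3.84e8     # average Earth-Moon distance, m
sweep = math.pi / 2 # sweep the pointer through 90 degrees...
dt = 0.1            # ...in a tenth of a second (assumed)
c = 3.0e8           # speed of light, m/s

v_spot = (sweep / dt) * d_moon   # speed of the spot across the lunar surface
ratio = v_spot / c               # ~20x the speed of light
```

The spot outruns light with no contradiction, because no energy or information travels along its path.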

There are analogous examples where one can still talk about lengths below the Planck length. For example, in spectroscopy, physicists often talk about “linewidth”. No source of light emits a pure electromagnetic wave at a single frequency. If you measure the peak-to-peak distance for each wave of a laser, you will see some small variation in wavelength- this is known as the “linewidth”. There is no physical lower bound to this; the linewidth of the laser at LIGO used to detect gravitational waves is about a billion times smaller than the diameter of a proton. Potentially, this quantity could be smaller than the Planck length.

Finally, if we examine the emission lines of the hydrogen atom, there is no fundamental lower limit to the width of these lines – they may be narrower than the Planck length.

My thanks to a number of creatives for help.

Measuring Stellar Distances

Three useful units – which we use depends on how far away the object is. Being able to convert between them quickly is a useful skill.

• Relatively small distances – Astronomical Unit, AU = average distance between Earth and Sun  (1 AU=150 million km)
• Light Year. 1 ly is the distance light travels in a year at a speed of 300 million m/s.

Find the distance in km between us and our nearest star, Proxima Centauri (4.3 ly away). You might like to speculate how long it would take a spacecraft travelling at a maximum possible 25,000 km/h to reach it.
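A sketch of the calculation (rounded values; do try it by hand first):

```python
# Distance to Proxima Centauri, and travel time at a fixed spacecraft speed
c = 3.0e8                              # speed of light, m/s
seconds_per_year = 365.25 * 24 * 3600
ly_km = c * seconds_per_year / 1000    # one light year is ~9.5e12 km

d_km = 4.3 * ly_km                     # ~4.1e13 km to Proxima Centauri

v = 25_000                             # spacecraft speed, km/h
years = d_km / v / (24 * 365.25)       # ~186,000 years of travel
```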

• One parsec (pc) is defined as the distance to a star whose apparent position shifts by one arcsecond when the observing point moves by 1 AU; equivalently, the total shift from one side of Earth’s orbit to the other is two arcseconds. These angles are incredibly small, far too small for degrees to be a practical unit of measurement. There are 3,600 arcseconds (60 minutes x 60 seconds) in one degree. To provide some perspective: one arcsecond is equivalent to the width of an average human hair seen from 20m away.

The nearest star is Proxima Centauri, at 1.3 pc. The Andromeda Galaxy, the closest spiral galaxy to our own, is nearly 800 kiloparsecs away.

If we imagine ourselves taking measurements of an imaginary nearby star at six-monthly intervals, it seems to have moved with respect to the background stars, because the Earth, 1 AU from the Sun, has travelled to the other side of its orbit. Putting this another way: this is trigonometric parallax.

• Because of the Earth’s revolution around the Sun, nearby stars appear to move with respect to very distant stars which seem to be standing still.
• Measure the angle to the star and observe how it changes as the position of the Earth changes. In the second diagram, if the observation point is at the top of the picture, six months later it will be at the bottom, 2 AU away.
• You can use your fingers to show trigonometric parallax. Shut one eye and hold your finger about eighteen inches in front of your face. Observe a distant object and the finger. Keeping still, look with the other eye. The finger represents the near star and appears to have moved with respect to the background. If you ask a friend to hold up his finger and repeat the observation, it would seem to have moved much less.
• The parallax or apparent shift (from the Greek for ‘alteration’) of a star is half the apparent angular size of the ellipse that a nearby star appears to trace against the background stars. Because all parallaxes are small (the stars are very far away), we can use the small angle approximation. If the parallax 𝑝 is measured in arcseconds, the distance in parsecs is given by 𝑑 = 1/𝑝. For example, the six-month parallax angle for Alpha Centauri is 1.52 seconds of arc, so 𝑝 = 0.76 arcseconds. You might like to calculate how far away this is in light years.
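The Alpha Centauri exercise can be sketched numerically. Note the quoted 1.52″ is the full six-month shift, so the parallax is half of it:

```python
full_shift = 1.52   # six-month apparent shift of Alpha Centauri, arcsec
p = full_shift / 2  # parallax for a 1 AU baseline, arcsec

d_pc = 1 / p        # distance in parsecs, ~1.32 pc
d_ly = d_pc * 3.26  # ~4.3 light years (1 pc is about 3.26 ly)
```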

Luminosity

• Is the amount of electromagnetic energy a body radiates per unit of time. {J/s (W)}
• Is intrinsic to a body and is a measurable property which is independent of distance.

Imagine a point source of light of luminosity L that radiates equally in all directions. A hollow sphere centred on the point would have its entire interior surface illuminated. As the radius increases, the surface area also increases, and the constant luminosity has more surface area to illuminate, leading to a decrease in observed brightness. If A = 4πr² is the area of the illuminated sphere of radius r, and F is the flux at the surface (the energy received per second per square metre from a point source such as a star), then F = L/4πr². In other words, the luminosity is the total flux through the sphere, in watts: L = 4πr²F.

So: the Sun has a luminosity L = 3.8 × 10²⁶ W. If the Earth–Sun distance is 150 million km and the Sun can be considered a point source, we can show that the radiant energy flux at the surface of the Earth is about 1.3 kW m⁻².
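The claimed 1.3 kW m⁻² follows directly from the inverse-square law:

```python
import math

L_sun = 3.8e26   # luminosity of the Sun, W
r = 1.5e11       # Earth-Sun distance, m

F = L_sun / (4 * math.pi * r**2)   # flux at Earth's orbit, ~1.3e3 W/m^2
```

This is roughly the “solar constant” measured at the top of the atmosphere.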

Hertzsprung and Russell showed that the luminosity L of a star (assuming the star is a black body – a perfect emitter and absorber – which is a good approximation) is also related to its temperature T and radius R by the equation L = 4πR²σT⁴, where σ (‘sigma’) is the Stefan-Boltzmann constant, 5.67 × 10⁻⁸ W·m⁻²·K⁻⁴.

L is often quoted in terms of solar luminosities, or how many times as much energy the object radiates as the Sun, so LSun = 1.

Questions:

1. Find the actual luminosity of the Sun, given a surface temperature of 6,000 K and a radius of 7 × 10⁸ m
2. Compare with Sirius – a very bright star – temperature 12,000 K and radius 2.22 × 10⁹ m. How much brighter is Sirius than the Sun, in solar luminosities?
3. What would be the surface temperature of a star having the same luminosity as the Sun but twice the radius? What would it look like?
• So a bigger star can be at a lower temperature and yet have the same luminosity, i.e. it looks just as bright
• A hotter star is more luminous than a cooler one of the same radius.
• A bigger star is more luminous than a smaller one of the same temperature.

A cool (red) giant star is more luminous than the Sun because, even though it is cooler, it is much larger than the Sun.
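As a rough numerical check on questions 1 and 2 above, using the figures given in the questions (the function name is my own):

```python
import math

sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def luminosity(R, T):
    # L = 4 * pi * R^2 * sigma * T^4 for a spherical black body
    return 4 * math.pi * R**2 * sigma * T**4

L_sun = luminosity(7e8, 6000)          # ~4.5e26 W
L_sirius = luminosity(2.22e9, 12000)   # ~7.3e28 W
ratio = L_sirius / L_sun               # Sirius is ~160 solar luminosities
```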

Finally, the idea of a Standard Candle is an important one in astrophysics. Certain classes of objects, such as supernovae and Cepheid variable stars, have properties whereby their luminosities can be determined from separate measurements. For example, the period of a Cepheid variable star depends on its mean absolute magnitude; the more luminous the star, the longer the period.

If we can measure the energy flux (brightness) of such an object, then, knowing its luminosity from the standard-candle relation, the inverse-square law tells us how far away it is.

Resolvance of a Diffraction Grating

Illuminating a diffraction grating with monochromatic light from a He/Ne laser shows a typical pattern, extending in the photograph to m = 3 on both sides. The spots are equally spaced and we notice that the m = 2 spot is hidden under the first single slit diffraction minimum – a “missing order”. The geometry is identical to that for a double slit, d being the distance between the centre of one slit and the next. For a bright maximum: d sin θ = mλ. Unlike two-slit interference, only at very particular angles do the contributions from each slit add constructively. Everywhere else, the contribution from one slit has a partner somewhere else down the grating which cancels its contribution out, hence the very bright spots and a lot of empty space.
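The grating equation d sin θ = mλ is easy to explore numerically. A sketch for the He/Ne wavelength and an assumed 300 lines/mm grating (the grating used for the photograph is not specified):

```python
import math

lam = 632.8e-9   # He/Ne laser wavelength, m
d = 1e-3 / 300   # slit spacing for an assumed 300 lines/mm grating, m

# Bright maxima satisfy d * sin(theta) = m * lam; orders exist while sin(theta) <= 1
angles = []
m = 1
while m * lam / d <= 1:
    angles.append(math.degrees(math.asin(m * lam / d)))
    m += 1
# angles ~ [10.9, 22.3, 34.7, 49.4, 71.7] degrees: five visible orders
```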

You are strongly encouraged to go to the Wolfram Demonstrations Project, download the CDF player and experiment with this demonstration: 1, 2 or many slits – the choice is yours. With 15 slits the pattern is almost indistinguishable from a diffraction grating (screenshot below), and the single slit diffraction envelope is clearly shown. Light intensity (y-axis) is proportional to amplitude squared. A flame test for sodium displays a very bright yellow emission. This emission is due to the sodium D-lines – two lines very close together. The diagram shows the absorption spectrum of the Sun by Fraunhofer, who labelled the lines. The sodium doublet is seen at wavelengths of about 589.0 nm and 589.6 nm.

How could these be resolved using a diffraction grating? We recall that a diffraction grating gives sharp, clear orders.

More accurately, the D lines have wavelengths λ₁ = 589.592 nm and λ₂ = 588.995 nm.

We can find the resolvance, or resolving power, required for the doublet to be resolved. For N illuminated lines of the diffraction grating, we can write (without derivation) for the mth order: R = λ/Δλ = mN. So, in this case, for a required resolvance of about 1000, viewing the second order would need N = 500 grating lines to be illuminated – even a coarse grating manages this easily; a grating with 1800 lines per mm is quite common, if rather expensive. The larger N, the better the resolution. If third, fourth or greater orders are visible, a coarser and hence cheaper grating will do.
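A sketch of the arithmetic, using R = λ/Δλ = mN with the D-line values above:

```python
lam1, lam2 = 589.592e-9, 588.995e-9   # sodium D-line wavelengths, m

lam_mean = (lam1 + lam2) / 2
R = lam_mean / (lam1 - lam2)   # required resolvance, ~990

# R = m * N, so the number of illuminated grating lines needed:
N_first = R        # ~990 lines in first order (m = 1)
N_second = R / 2   # ~490 lines in second order (m = 2)
```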

Black bodies, Wien, Boltzmann – very briefly. A black body is an idealised body which absorbs and emits all radiation incident upon it. For example, this piece of iron, which glows orange-red when heated, is an approximate black body. The colour tells us the temperature of the metal. If we continue heating the metal, eventually it will glow yellow, then white. By then, the metal would have melted and boiled off. Theoretically, if we kept on heating it, it’d glow blue-white, eventually emitting UV and even X-rays. Similarly, a star is an approximate black body radiator. For hotter stars, the wavelength of maximum emission shifts to shorter wavelengths, as shown by the graph.

The black body radiation curves for different temperatures peak at wavelengths inversely proportional to the temperature.

The plot is valid for determining the temperature of any object which is considerably hotter than its surroundings. Wien’s Displacement Law relates peak wavelength to temperature: λ_max = b/T, where b is the displacement constant, 2.898 × 10⁻³ m·K. Our Sun has a surface temperature of about 5300 K, so the intensity peak is at a wavelength of about 550 nm.

The thermal energy radiated by a hot body per unit area per second (power/area) is proportional to the fourth power of temperature. This is the Stefan-Boltzmann Law: power/area = σT⁴. So, for our Sun, at a temperature of 5300 K, every square metre of the surface radiates almost 45 MW of power. An object which absorbs all of the energy which falls on it is an ideal absorber or black body. For such a body e = 1, where e is the emissivity. Most emitters aren’t black bodies, however, so we can amend the Stefan-Boltzmann law thus: power/area = eσT⁴, where e is a number between 0 and 1. e is zero for a perfectly shiny mirror (absorption = 0) and 1 for a black body.
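Both laws can be checked with the figures quoted above:

```python
b = 2.898e-3      # Wien's displacement constant, m K
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

T = 5300          # quoted surface temperature of the Sun, K

lam_peak = b / T               # ~5.5e-7 m, i.e. ~550 nm, in the green-yellow
power_per_area = sigma * T**4  # ~4.5e7 W/m^2, i.e. ~45 MW per square metre
```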

By contrast, Albedo (in Latin ‘whiteness’) is the fraction or percentage of incident solar energy reflected from the Earth back into space, and is a measure of how reflective the Earth’s surface is. Ice, especially with snow on top of it, has a high albedo – up to 90%: most sunlight hitting the surface bounces back into space, whereas the albedo of a summer forest is only about 0.1 or 10%. Recently, a chunk of ice the size of Scotland fell off the Larsen C ice shelf in Antarctica, reducing its size by 12%. Ultimately, it will either melt or break up. Given that sea water has an albedo of only 6%, the reader might like to speculate about the effect this might have on global warming.

Rayleigh’s Criterion.

Here’s an exercise for a physics class. At eye level, make two tiny dots on a whiteboard as close together as possible. Blue, green or red dots work well – try to make them both small and the same size and separation. Then walk backwards looking at the dots with one eye. Eventually the two dots cannot be resolved as separate.

When the light from either one of the dots reaches our pupil, it will be diffracted through a circular aperture and a diffraction pattern is formed on our retina. When light from both dots reaches our eye, the diffraction patterns overlap. BTW, the red one blurs closer than the green (or blue) one. Any idea why?

As a reminder, you will have seen single slit diffraction with a laser, the light passing through a very narrow slit and displayed on a distant screen. The angle in the diagram below is exaggerated for clarity. Notice the central bright maximum is twice as wide as the secondary maxima on either side of it. Each tiny element down the length of the slit (width a) behaves like a point source which can be thought of as producing a circular ripple, like on a pond. These superpose at the screen. When the path difference between contributions at the top and bottom of the slit is one wavelength (a sin θ = mλ with m = 1), each contribution has a partner halfway down the slit with a path difference of half a wavelength. So, every point source has a partner exactly out of phase. At the screen, all these contributions superpose and we get a dark first minimum. So we see the familiar pattern: a wide central bright maximum with minima on each side, then fainter maxima, minima and so on.

Just while we’re here: as a is decreased, the pattern smears out (y increases). A narrower slit means a broader diffraction pattern, in other words.

If we decrease the wavelength (use blue light), y decreases. Lord Rayleigh (who told us why the sky is blue, and discovered argon) gave us the accepted standard for the measurement of angular resolution. Rayleigh’s criterion is the generally accepted criterion for the minimum resolvable detail: the imaging process is said to be diffraction-limited when the first diffraction minimum of the image of one source point coincides with the central maximum of another. This is the definition an examiner might want to see. The image of two circular apertures shows what it means; the middle picture shows two images which are JUST resolved – the first minimum of one diffraction pattern sits exactly underneath the central maximum of the other. In exams, they sometimes ask you to either draw this or calculate it. Clearly, the criterion is wavelength-dependent and also depends on the width of the slit or the aperture diameter (a).

Calculation: as an example, how far away from two point sources of green paint of wavelength 550 nm, separated by a distance of 2 mm, would you have to stand so they could no longer be resolved as separate?

Solution:

For a circular aperture (our own pupil, diameter D), we have to invoke the factor 1.22: θ_min = 1.22λ/D. If the paint were blue, we could theoretically walk further away and still resolve the dots, because blue light has a shorter wavelength.
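A sketch of the whiteboard-dot calculation. The pupil diameter is an assumed typical value, which is one reason real-world performance varies:

```python
lam = 550e-9   # wavelength of green light, m
D = 3e-3       # pupil diameter, m (assumed ~3 mm)
s = 2e-3       # separation of the two dots, m

theta_min = 1.22 * lam / D   # Rayleigh limit for a circular aperture, rad
d_max = s / theta_min        # ~9 m: beyond this the dots blur into one
```

The quoted real-world figure of about 4 m sits well inside this diffraction-limited maximum, as expected.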

NB, in reality – this is an upper limit for people with perfect vision – most people can’t do as well as this. Most people would only be able to resolve the dots as separate at about 4m, which makes this a good little exercise for a class.

Newton’s Laws of Motion

Fun fact: Newton laughed only once in his life, when somebody asked him what was the point of studying Euclid.

FIRST: “A body continues in a state of rest or motion at constant speed in a straight line unless acted upon by an unbalanced external force.”

SECOND: “The applied force is equal to the rate of change of momentum of the body.” A rather more modern interpretation is here. If cliff-diving appeals to you, watch this video, as long as you don’t scare easily. A classic application is the conveyor belt problem: if we are to keep a conveyor belt moving at a steady speed – for example in a coal mine, where mass is being added to it all the time – we must apply a force to the belt even though its speed never changes.
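The conveyor belt problem is a nice worked example of the second law in its momentum form: F = d(mv)/dt = v · dm/dt when v is constant. The numbers below are assumed for illustration:

```python
v = 2.0        # belt speed, m/s (assumed)
dm_dt = 50.0   # rate at which coal lands on the belt, kg/s (assumed)

# Each new lump of coal must be accelerated up to speed v, so a steady
# force is needed even though the belt's speed never changes
F = v * dm_dt  # 100 N
P = F * v      # 200 W of motor power delivered to the coal
```

Interestingly, the coal gains kinetic energy at only half this rate; the other half is dissipated as friction while each lump skids up to belt speed.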

THIRD: “For every action, there is an equal and opposite reaction”

Think about a jet propulsion system. Thrust is a mechanical force which is generated through the reaction of accelerating a mass of gas, as explained by Newton 3. A gas or working fluid is accelerated to the rear and the engine and aircraft are accelerated in the opposite direction.

The force on the working fluid is equal and opposite to the force on the engine and aircraft. Look here for a very easy walkthrough of all of Newton’s Laws; it is a particularly good treatment, so you can work through the videos yourselves.