Wednesday 4 November 2009

The solution to a conundrum.

Yesterday, BBC2 broadcast a fascinating Horizon programme on one of my favourite subjects – black holes.  A parade of physicists (theoretical and experimental) and astronomers were put before us to talk about them, and why the thing that lies in their very centre presents such problems for modern physics.

            Einstein’s General Theory of Relativity is very good at accounting for gravity and what happens on the large scale; quantum mechanics is very good at describing atoms and sub-atomic particles and what goes on at the very small scale.  With black holes, we know there is a connection of some sort between the two theories, and between them and thermodynamics.  One reason is the phenomenon of Hawking radiation (which, with all respect to Hawking’s status among physicists, should really be called Bekenstein-Hawking radiation, because Jacob Bekenstein worked out the formula for it first, in 1972, whereas Hawking arrived at it in 1975[1]); the other is that inside a black hole, at its very centre, is a thing called a ‘space-time singularity’.

            What is a space-time singularity?  It is what existed at the very, very beginning of the Universe, which is why we can describe our Universe as if it were an enormous black hole.  It is, in effect, a Euclidean point, of zero dimension and extension, but one that has mass, and therefore density and pressure – both infinite – as well as infinite energy density.  It is, to the theoretical physicists, a surd in the equations of the General Theory of Relativity, and one they cannot cope with.

            When they try to deal with it by quantising it, to create a Quantum Theory of Gravity, they get still more infinities.  Such a theory is not, in the technical jargon, ‘renormalisable’.  They get infinities in ordinary quantum field theory, too, but by a piece of mathematical sleight-of-hand called renormalisation[2], they can get rid of them.  Neither the great British theoretician Paul Dirac nor the American physicist Richard Feynman was particularly happy about this conjuring trick, but they could see no alternative.

            Sir Roger Penrose has pointed out that, in order for our Universe to have the level of order we perceive, and in order for it to be capable of supporting organic chemistry and therefore life, what emerged from the space-time singularity had to be selected from the thermodynamic phase space of all possible Universes to a very high degree of precision.  If the number of baryons (protons and neutrons) in the Universe is of the order of 10^80, and the cosmic background radiation is at 2.7 K, then the current entropy per baryon is

 

S/N  =  (k log_e V)/N  =  10^8 ,

 

meaning there are roughly 10^8 photons for every baryon (k here is measured in ‘natural units’, and is thus equal to 1; N = 10^80; log_e V = 10^88; V = e^(10^88)).  Penrose calculates that, when one takes into account the black holes in all the galaxies, for a closed (Riemannian) Universe, and given an entropy per baryon at the ‘Big Crunch’ of 10^43 in natural units, the total entropy would be 10^80 × 10^43 = 10^123, and the total volume, V, of phase space that the Creator had to aim for in creating the Universe out of the original space-time singularity would be 10^(10^123) (or, to be more accurate, e^(10^123)).  As Penrose says, in order to produce a Universe resembling the one in which we live, God had to aim for an absurdly tiny volume of the phase space of all possible universes – about 1/e^(10^123) of the entire volume – a number one could not write down in ordinary denary notation even if one wrote a single digit on every particle in the Universe[3].
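Penrose’s arithmetic can be cross-checked by working with logarithms – a minimal sketch in Python, using only the round figures quoted above (the numbers themselves are far too large to represent directly):

```python
import math

# Figures quoted above (natural units, so Boltzmann's constant k = 1):
N_baryons = 1e80      # baryons (protons and neutrons) in the Universe
s_now = 1e8           # current entropy per baryon
s_crunch = 1e43       # Penrose's entropy per baryon at the 'Big Crunch'

# Total entropy at the Big Crunch, worked in log10 to avoid overflow:
log10_S_total = math.log10(N_baryons) + math.log10(s_crunch)
print(f"total entropy ~ 10^{log10_S_total:.0f}")   # ~ 10^123

# The phase-space volume is V = e^(10^123): even its logarithm,
# log_e V = 10^123, is far too large to hold in a float directly.
```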

            What neither Penrose nor any of the scientists interviewed on the Horizon programme can deal with is the idea that infinity, or the infinitesimal, might have a rôle in physics – and the very last thing they can deal with is the idea of God.  Penrose actually talks of ‘the Creator’, rather than God, and he is not serious in referring to Him.  Je n’avais pas besoin de cette hypothèse – ‘I had no need of that hypothesis’.  They have tried to create a unified ‘theory of everything’ (‘TOE’), in the form of Kaluza-Klein Theory, N=8 Supergravity, String Theory, Superstring Theory, M-Theory, Loop Quantum Gravity, and E8 Theory, all of which have failed.  (This would undoubtedly be denied by the theoretical physicists who continue to waste their careers pursuing them, but the fact is they are simply going around in ever-decreasing circles.)

Electro-weak theory has been a partial success, in that it has managed to unify electromagnetism with the weak interaction.  Quantum chromodynamics is a reasonably successful account of the strong nuclear force.  There are several GUTs (‘Grand Unified Theories’) on the market which attempt to unify electromagnetism, the weak force and the strong force, but no candidate has yet found universal favour or sufficient experimental support to count as the theory of the three forces other than gravity.  The Standard Model accounts for the various particles and forces, but it is ad hoc, and relies on the Higgs Field and the Higgs Boson to provide all the various particles with their mass.  The latter may be detected by the Large Hadron Collider at CERN in Geneva when it is finally over its teething problems.

Why am I so confident that the TOE-pursuing physicists are wasting their time?  Because they are trying to explain the inexplicable.  They are trying to extend physics to where physics does not belong, and where theology and metaphysics take over.  A space-time singularity is not a surd in the equations, but an irreducible and inescapable reality.  The Big Bang singularity was the very moment of creation.  At cosmic time t = 0, spatial extension in all three dimensions, x, y and z, was 0, and all the mass in the Universe – all 8.79674 × 10^52 kg of it – was concentrated in a Euclidean point.

The rest-energy of this mass is Mc² = 7.906 × 10^69 J, a truly staggering number.  However, at cosmic epoch t = 0, when you convert this into action, which is energy × time, you get 0.  It is only when the jump is made from t = 0 to t = the Planck time, (Għ/c⁵)^½ = 5.391 × 10^-44 s, that there is a non-zero quantity of action,

 

S  =  4.262 × 10^26 J s .

 

This is perfectly consistent with the Feynman path-integral approach to quantum mechanics and quantum field theory.  Of course, the implication is that there was one enormous jump in the amount of action (in the physical sense) in a very, very short space of time!  Divide S by h and one gets 6.43 × 10^59, an enormous number, which is the equivalent of 10^47 photons, each with a frequency of 6.43 THz (far infra-red light, or radiated heat)[4].  The Big Bang was very hot indeed (about 10^32 K, see: http://www.astro.ucla.edu/~wright/BBhistory.html)!
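The chain of figures can be recomputed from first principles – a minimal sketch, assuming standard CODATA-style constants (note the Planck time, √(Għ/c⁵), is about 5.39 × 10^-44 s; small rounding differences are to be expected):

```python
import math

# A cross-check of the rest-energy and Planck-time action, assuming
# standard CODATA-style constants.
M = 8.79674e52            # mass of the Universe quoted in the text, kg
c = 2.99792458e8          # speed of light in vacuo, m/s
h = 6.62607015e-34        # Planck's constant, J s
hbar = h / (2 * math.pi)  # Dirac constant
G = 6.6743e-11            # Newtonian constant of gravitation

E = M * c**2                           # rest-energy Mc^2
t_planck = math.sqrt(G * hbar / c**5)  # Planck time, ~5.39e-44 s
S = E * t_planck                       # action over one Planck time

print(f"E   = {E:.3e} J")          # ~7.906e69 J
print(f"t_P = {t_planck:.3e} s")
print(f"S   = {S:.3e} J s")        # ~4.26e26 J s
print(f"S/h = {S / h:.3e}")        # ~6.43e59
```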

            Yet this won’t satisfy the physicists.  Why not?  Because they will want a set of equations telling them how to get rid of the Big Bang singularity altogether and allowing them to supply a complete, atheistic, naturalistic explanation of the Universe and its laws.  That they cannot have.  It simply isn’t on offer.  There is no such explanation, because the Universe was created by a Creator God from nothing.  There was a point in eternity (not time) when there wasn’t even a space-time singularity, because there was no mass-energy to be squeezed into that Euclidean point.

            Einstein’s dream of a final theory is just that – a dream.  His dream, and that of the British mathematician WK Clifford, and of Galileo Galilei before him, and Pythagoras and his followers before him, of reducing physics to geometry, is pure fantasy.  (There are mathematicians who want to reduce geometry to topology and thence to algebra!)

            The Austrian-American mathematician and logician Kurt Gödel probably spelt the end of this dream in 1931, when he published the paper proving his two incompleteness theorems[5].  Gödel proved that (a) in any consistent, effectively generated formal theory capable of proving certain basic arithmetical truths, there is at least one statement that is true, but not provable, in the theory; and (b) any such theory that can also express certain truths about formal provability can prove its own consistency if and only if it is inconsistent.

            Any physical theory must, of its very nature, be mathematical, and – obviously – logically consistent.  While it may not be provable, it must, at least, be empirically disprovable, if the late Sir Karl Popper is any guide to the methodology of the natural sciences.  If one applies Gödel’s two incompleteness theorems to theories in physics, there must be some equations generated by such theories that are either insoluble or, if soluble, not provably true, which amounts to the same thing (i.e., one might be able to come up with an answer, or several possible answers, but have no way of knowing whether any of them was the right one, or which was the right one).

            It is therefore quite impossible for a TOE to specify the values of all the physical constants, determine the laws of physics, the relative and absolute strengths of the four physical forces, and the initial thermodynamic properties of the Universe, all with a unique, self-consistent and mathematically provable set of equations.  There are just too many different parameters involved, as we have already seen.

Even if it were possible for such a theory to exist, which I am very far from allowing, that still wouldn’t prove that God did not exist, nor would it obviate the need to postulate a Creator.  For one would still have to ask, where did this set of equations come from – who (or conceivably, what) decided to set the Universe up on that basis?  Where did the matter and energy come from, where the space and time?

There are none so blind as those who will not see.  It is a sad reflection on so many of our scientists that they will do anything rather than admit the possibility of a God.

 

***

    



[1] The equation is S = 4πm²(kG/ħc) for a spherically symmetrical black hole, where S is the entropy, m is the mass, k is Boltzmann’s constant, G is the Newtonian constant of gravitation, ħ is the Dirac constant (Planck’s constant over 2π) and c is the speed of light in vacuo.  See: Penrose, R. (1989), The Emperor’s New Mind, London: Vintage, pp. 441-3.  The absolute temperature of the black hole is then T = ħc³/8πkGm, and the power output is given by P = ħc⁶/15,360πG²m².  See: http://en.wikipedia.org/wiki/Hawking_radiation.  The super-massive black hole at the centre of our Galaxy has a mass equal to 4 million Suns (see: http://news.bbc.co.uk/1/hi/7774287.stm), and as the Sun has a mass of 1.9891 × 10^30 kg (332,900 Earth masses; see: http://en.wikipedia.org/wiki/Sun), this means that its power output is 5.629 × 10^-42 W.  Its temperature is 1.524 × 10^-14 K, a very small amount above absolute zero, the lowest possible temperature.
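The temperature and power quoted in this footnote can be checked numerically – a minimal sketch with standard SI constants (the small discrepancy in T comes from rounding in the quoted figures):

```python
import math

# Checking the figures for the Galactic-centre black hole, assuming
# standard SI constants and the 4-million-solar-mass figure above.
hbar = 1.054571817e-34   # Dirac constant, J s
c = 2.99792458e8         # speed of light in vacuo, m/s
G = 6.6743e-11           # Newtonian constant of gravitation
k = 1.380649e-23         # Boltzmann's constant, J/K

M_sun = 1.9891e30        # solar mass, kg
m = 4.0e6 * M_sun        # super-massive black hole at the Galactic centre

# Hawking temperature and radiated power of a black hole of mass m:
T = hbar * c**3 / (8 * math.pi * k * G * m)
P = hbar * c**6 / (15360 * math.pi * G**2 * m**2)

print(f"T = {T:.3e} K")  # ~1.5e-14 K
print(f"P = {P:.3e} W")  # ~5.6e-42 W
```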

[2] Normalisation entails the use of a normalising constant: a constant which, when it multiplies an everywhere non-negative function, makes the area under the function’s graph equal to 1.  The Boltzmann distribution requires one (see: http://en.wikipedia.org/wiki/Normalizing_constant).  Renormalisation arises out of the need to avoid the infinities stemming from particle or virtual-particle self-interactions, point charges, and so on.  See: http://en.wikipedia.org/wiki/Renormalization.

[3] Penrose, op.cit., pp.444-6.

[4] THz = terahertz, 10^12 Hz (= s^-1).

[5] Gödel, K. (1931), ‘Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I’, Monatshefte für Mathematik und Physik, 38: 173-98.