Acceleration in String Theory – Savdeep Sethi

If it is true that string theory cannot accommodate stable dark energy, that may be a reason to doubt string theory. But it is also a reason to doubt dark energy – that is, dark energy in its most popular form, called a cosmological constant. The idea originated in 1917 with Einstein and was revived in 1998 when astronomers discovered that not only is spacetime expanding – the rate of that expansion is picking up. The cosmological constant would be a form of energy in the vacuum of space that never changes and counteracts the inward pull of gravity. But it is not the only possible explanation for the accelerating universe. An alternative is “quintessence,” a field pervading spacetime that can evolve. According to Cumrun Vafa of Harvard, “Regardless of whether one can realize a stable dark energy in string theory or not, it turns out that the idea of having dark energy changing over time is actually more natural in string theory. If this is the case, then one can measure this sliding of dark energy by astrophysical observations currently taking place.”

So far all astrophysical evidence supports the cosmological constant idea, but there is some wiggle room in the measurements. Upcoming experiments such as Europe’s Euclid space telescope, NASA’s Wide-Field Infrared Survey Telescope (WFIRST) and the Simons Observatory being built in Chile’s desert will look for signs that dark energy was stronger or weaker in the past than in the present. “The interesting thing is that we’re already at a sensitivity level to begin to put pressure on [the cosmological constant theory],” says Paul Steinhardt of Princeton University. “We don’t have to wait for new technology to be in the game. We’re in the game now.” And even skeptics of Vafa’s proposal support the idea of considering alternatives to the cosmological constant. “I actually agree that [a changing dark energy field] is a simplifying method for constructing accelerated expansion,” says Eva Silverstein of Stanford University. “But I don’t think there’s any justification for making observational predictions about the dark energy at this point.”
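Surveys of this kind usually phrase the question in terms of the dark-energy equation of state w. As a rough illustration (not taken from the article: the CPL parametrization w(a) = w0 + wa(1 − a) is a standard phenomenological choice, and the quintessence numbers below are purely illustrative), one can compare how a cosmological constant and an evolving field dilute with the scale factor a:

```python
import math

def rho_de_ratio(a, w0, wa):
    """Dark-energy density rho(a)/rho(a=1) for the CPL parametrization
    w(a) = w0 + wa*(1 - a), from rho ∝ exp(3 ∫_a^1 (1 + w(a'))/a' da')."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

# A cosmological constant (w0 = -1, wa = 0) never evolves:
for a in (0.25, 0.5, 1.0):
    assert abs(rho_de_ratio(a, -1.0, 0.0) - 1.0) < 1e-12

# A quintessence-like field (illustrative numbers) was denser in the past:
print(rho_de_ratio(0.5, -0.9, 0.3))
```

Any measured deviation of this ratio from 1 at some past epoch would be the “sliding” of dark energy the quoted experiments look for.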

Quintessence is not the only other option. In the wake of Vafa’s papers, Ulf Danielsson, a physicist at Uppsala University, and colleagues proposed another way of fitting dark energy into string theory. In their vision our universe is the three-dimensional surface of a bubble expanding within a higher-dimensional space. “The physics within this surface can mimic the physics of a cosmological constant,” Danielsson says. “This is a different way of realizing dark energy compared to what we’ve been thinking so far.”

Gauge Fixity Towards Hyperbolicity: General Theory of Relativity and Superpotentials. Part 1.


The gravitational field is described by a pseudo-Riemannian metric g (with Lorentzian signature (1, m−1)) over the spacetime M of dimension dim(M) = m; in standard General Relativity, m = 4. The configuration bundle is thence the bundle of Lorentzian metrics over M, denoted by Lor(M). The Lagrangian is second order and is usually chosen to be the so-called Hilbert Lagrangian:

LH: J²Lor(M) → Λm(M)

LH: LH(gαβ, Rαβ)ds = 1/2κ (R – 2Λ)√g ds —– (1)


R = gαβ Rαβ denotes the scalar curvature, √g the square root of the absolute value of the metric determinant and Λ is a real constant (called the cosmological constant). The coupling constant 1/2κ, which is completely irrelevant until the gravitational field is coupled to some other field, depends on conventions; in natural units, i.e. c = 1, h = 1, G = 1, dimension 4 and signature (+, –, –, –), one has κ = –8π.

Field equations are the well known Einstein equations with cosmological constant

Rαβ – 1/2 Rgαβ = –Λgαβ —– (2)
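A quick consistency check one can run on equation (2): taking its gαβ trace in m = 4 dimensions fixes the scalar curvature to R = 4Λ. A sketch of that one-line algebra with sympy:

```python
import sympy as sp

R, Lam, m = sp.symbols('R Lambda m')

# g^{ab}-trace of R_ab - (1/2) R g_ab + Lambda g_ab = 0, using g^{ab} g_ab = m
trace = R - sp.Rational(1, 2) * m * R + Lam * m

# In dimension m = 4 the vanishing trace fixes the scalar curvature:
sol = sp.solve(sp.Eq(trace.subs(m, 4), 0), R)
print(sol)  # [4*Lambda]
```

Substituting R = 4Λ back into (2) gives the familiar vacuum form Rαβ = Λgαβ.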

The Lagrangian momenta are defined by:

pαβ = ∂LH/∂gαβ = 1/2κ (Rαβ – 1/2(R – 2Λ)gαβ)√g

Pαβ = ∂LH/∂Rαβ = 1/2κ gαβ√g —– (3)

Thus the covariance identity is the following:

dα(LHξα) = pαβ£ξgαβ + Pαβ£ξRαβ —– (4)

or equivalently,

dα(LHξα) = pαβ£ξgαβ + Pαβ∇ε(£ξΓεαβ – δεβ£ξΓλαλ) —– (5)

where ∇ε denotes the covariant derivative with respect to the Levi-Civita connection of g. Thence we have a weak conservation law for the Hilbert Lagrangian

Div ε(LH, ξ) = W(LH, ξ) —– (6)

Conserved currents and work forms have respectively the following expressions:

ε(LH, ξ) = [Pαβ£ξΓεαβ – Pαε£ξΓλαλ – LHξε]dsε = √g/2κ [(gαβgεσ – gσβgεα)∇α£ξgβσ – (R – 2Λ)ξε]dsε = √g/2κ [(3/2 Rαλ – (R – 2Λ)δαλ)ξλ + (gβγδαλ – gα(γδβ)λ)∇βγξλ]dsα —– (7)

W(LH, ξ) = √g/κ (Rαβ – 1/2(R – 2Λ)gαβ)∇(αξβ)ds —– (8)

Like any other natural theory, General Relativity admits superpotentials. In fact, the current can be recast into the form:

ε(LH, ξ) = ε'(LH, ξ) + Div U(LH, ξ) —– (9)

where we set

ε'(LH, ξ) = √g/κ (Rαβ – 1/2(R – 2Λ)δαβ)ξβ dsα

U(LH, ξ) = 1/2κ ∇[βξα] √gdsαβ —– (10)

The superpotential (10) generalizes, to an arbitrary vector field ξ, the well-known Komar superpotential, which was originally derived for timelike Killing vectors. Whenever spacetime is assumed to be asymptotically flat, the superpotential of Komar is known to produce, upon integration at spatial infinity, the correct value for angular momentum (e.g. for Kerr-Newman solutions) but just one half of the expected value of the mass. The classical prescriptions are in fact:

m = 2∫ U(LH, ∂t, g)

J = ∫ U(LH, ∂φ, g) —– (11)
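The anomalous factor can be checked explicitly in the simplest case. The sketch below (assumptions not fixed by the text: Schwarzschild rather than Kerr-Newman, signature (+, –, –, –), κ = 8π and units G = c = 1) computes the flux of the superpotential of ∂t through a sphere with sympy and finds M/2, so that the prescription m = 2∫U does return the full mass:

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r

# Schwarzschild metric, signature (+, -, -, -), units G = c = 1
g = sp.diag(f, -1 / f, -r**2, -r**2 * sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^l_{mn} of the Levi-Civita connection of g
Gam = [[[sp.simplify(
            sum(ginv[l, s] * (sp.diff(g[s, mm], x[nn]) + sp.diff(g[s, nn], x[mm])
                              - sp.diff(g[mm, nn], x[s])) for s in range(4)) / 2)
         for nn in range(4)] for mm in range(4)] for l in range(4)]

# Timelike Killing vector xi = d/dt, with lowered index xi_mu = g_{mu t}
xi = [g[mu, 0] for mu in range(4)]

def nabla(mu, nu):  # covariant derivative nabla_mu xi_nu
    return sp.diff(xi[nu], x[mu]) - sum(Gam[l][mu][nu] * xi[l] for l in range(4))

# Superpotential density 2*nabla^[t xi^r], indices raised with the inverse metric
K = sp.simplify(sum(ginv[0, a] * ginv[1, b] * (nabla(a, b) - nabla(b, a))
                    for a in range(4) for b in range(4)))

# Flux of U through a sphere of radius r, with kappa = 8*pi
U_integral = sp.simplify(
    sp.integrate(sp.integrate(K * r**2 * sp.sin(th), (th, 0, sp.pi)),
                 (ph, 0, 2 * sp.pi)) / (2 * 8 * sp.pi))
print(U_integral)  # M/2, so m = 2*Integral(U) recovers the mass M
```

The result is independent of the radius of the sphere, as it must be for a conserved quantity evaluated in vacuum.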

For an asymptotically flat solution (e.g. the Kerr-Newman black hole solution) m coincides with the so-called ADM mass and J is the so-called (ADM) angular momentum. For the Kerr-Newman solution in polar coordinates (t, r, θ, φ) the vector fields ∂t and ∂φ are the Killing vectors which generate stationarity and axial symmetry, respectively. Thence, according to this prescription, U(LH, ∂φ) is the superpotential for J while 2U(LH, ∂t) is the superpotential for m. This is known as the anomalous factor problem for the Komar potential. To obtain the expected values for all conserved quantities from the same superpotential, one has to correct the superpotential (10) by some ad hoc additional boundary term. Equivalently and alternatively, one can deduce a corrected superpotential as the canonical superpotential for a corrected Lagrangian, which is in fact the first order Lagrangian for standard General Relativity. This can be done covariantly, provided that one introduces an extra connection Γ’αβμ. The need for a reference connection Γ’ should also be motivated by physical considerations, according to which the conserved quantities have no absolute meaning but are intrinsically relative to an arbitrarily fixed vacuum level. The simplest choice consists, in fact, in fixing a background metric g (not necessarily of the correct Lorentzian signature) and assuming Γ’ to be the Levi-Civita connection of g. This is rather similar to the gauge fixing à la Hawking which allows one to show that Einstein equations form in fact an essentially hyperbolic PDE system. Nothing prevents one, however, from taking Γ’ to be any (in principle torsionless) connection on spacetime; this too corresponds to a gauge fixing towards hyperbolicity.

Now, using the term background for a field which enters a field theory in the same way as the metric enters Yang-Mills theory, we see that the background has to be fixed once for all and thence preserved, e.g. by symmetries and deformations. A background has no field equations since deformations fix it; it eventually destroys the naturality of a theory, since fixing the background results in allowing a smaller group of symmetries G ⊂ Diff(M). Accordingly, in truly natural field theories one should not consider background fields, whether they are endowed with a physical meaning (as the metric in Yang-Mills theory is) or not.

On the contrary we shall use the expression reference or reference background to denote an extra dynamical field which is not endowed with a direct physical meaning. As long as variational calculus is concerned, reference backgrounds behave in exactly the same way as other dynamical fields do. They obey field equations and they can be dragged along deformations and symmetries. It is important to stress that such a behavior has nothing to do with a direct physical meaning: even if a reference background obeys field equations this does not mean that it is observable, i.e. it can be measured in a laboratory. Of course, not any dynamical field can be treated as a reference background in the above sense. The Lagrangian has in fact to depend on reference backgrounds in a quite peculiar way, so that a reference background cannot interact with any other physical field, otherwise its effect would be observable in a laboratory….

Philosophizing Loops – Why Spin Foam Constraints to 3D Dynamics Evolution?


The philosophy of loops is canonical, i.e., an analysis of the evolution of variables defined classically through a foliation of spacetime by a family of space-like three-surfaces Σt. The standard choice is the three-dimensional metric gij, and its canonical conjugate, related to the extrinsic curvature. If the system is reparametrization invariant, the total hamiltonian vanishes, and this hamiltonian constraint is usually called the Wheeler-DeWitt equation. Choosing the canonical variables is fundamental, to say the least.

Abhay Ashtekar‘s insight stems from an original set of variables, defined starting from the Einstein-Hilbert Lagrangian written in the form,

S = ∫ea ∧ eb ∧ Rcdεabcd —– (1)

where ea are the one-forms associated to the tetrad,

ea ≡ eaμdxμ —– (2)

The associated SO(1, 3) connection one-form ϖab is called the spin connection. Its field strength is the curvature expressed as a two form:

Rab ≡ dϖab + ϖac ∧ ϖcb —– (3)

Ashtekar’s variables are actually based on the SU(2) self-dual connection

A = ϖ − i ∗ ϖ —– (4)
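The name “self-dual” can be checked in one line: on two-forms over a Lorentzian four-manifold the Hodge dual satisfies ∗∗ = −1, hence (reading ∗ in (4) as that dual)

∗A = ∗ϖ − i ∗∗ϖ = ∗ϖ + iϖ = i(ϖ − i∗ϖ) = iA

so A is an eigenvector of ∗ with eigenvalue i, which is precisely the (anti-)self-duality condition, the sign depending on conventions.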

Its field strength is

F ≡ dA + A ∧ A —– (5)

The dynamical variables are then (Ai, Ei ≡ F0i). The main virtue of these variables is that constraints are then linearized. One of them is exactly analogous to Gauss’ law:

DiEi = 0 —– (6)

There is another one related to three-dimensional diffeomorphisms invariance,

trFijEi = 0 —– (7)

and, finally, there is the Hamiltonian constraint,

trFijEiEj = 0 —– (8)

On a purely mathematical basis, there is no doubt that Ashtekar’s variables are of a great ingenuity. As a physical tool to describe the metric of space, they are not real in general. This forces a reality condition to be imposed, which is awkward. For this reason it is usually preferred to use the Barbero-Immirzi formalism, in which the connection depends on a free parameter γ,

Aia = ϖia + γKia —– (9)

ϖ being the spin connection, and K the extrinsic curvature. When γ = i, Ashtekar’s formalism is recovered; for other values of γ, the explicit form of the constraints is more complicated. Even if there is a Hamiltonian constraint that seems promising, what isn’t particularly clear is whether the quantum constraint algebra is isomorphic to the classical algebra.

Some states which satisfy the Ashtekar constraints are given by the loop representation, which can be introduced from the construct (depending both on the gauge field A and on a parametrized loop γ)

W(γ, A) ≡ tr P e∮γA —– (10)

and a functional transform mapping functionals of the gauge field ψ(A) into functionals of loops, ψ(γ):

ψ(γ) ≡ ∫DAW(γ, A) ψ(A) —– (11)
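The object W(γ, A) in (10) is a trace of a path-ordered exponential, which can be built concretely as a product of short-segment holonomies. A minimal numerical illustration (a toy constant su(2) connection on a circle; the value of θ is illustrative) compares the discretized product with the known closed form:

```python
import numpy as np

# su(2) generator used below
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def wilson_loop(A_of_phi, N=2000):
    """Approximate tr P exp(oint A) around a circle by a path-ordered
    product of N short-segment holonomies (first order in dphi)."""
    hol = np.eye(2, dtype=complex)
    dphi = 2 * np.pi / N
    for k in range(N):
        phi = (k + 0.5) * dphi
        hol = (np.eye(2, dtype=complex) + A_of_phi(phi) * dphi) @ hol
    return np.trace(hol)

# Constant connection A_phi = i*(theta/2pi)*sz/2: the exact holonomy is
# exp(i*theta*sz/2), so the Wilson loop equals 2*cos(theta/2).
theta = 1.3
W = wilson_loop(lambda phi: 1j * (theta / (2 * np.pi)) * sz / 2)
print(W, 2 * np.cos(theta / 2))
```

For a non-constant, non-abelian A the segment factors no longer commute, which is exactly what the path ordering P keeps track of.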

When one divides by diffeomorphisms, it is found that functions of knot classes (diffeomorphisms classes of smooth, non self-intersecting loops) satisfy all the constraints. Some particular states sought to reproduce smooth spaces at coarse graining are the Weaves. It is not clear to what extent they also approach the conjugate variables (that is, the extrinsic curvature) as well.

In the presence of a cosmological constant the hamiltonian constraint reads:

εijkEaiEbj(Fkab + λ/3εabcEck) = 0 —– (12)

A particular class of solutions expounded by Lee Smolin of the constraint are self-dual solutions of the form

Fiab = -λ/3εabcEci —– (13)

Loop states in general (suitably symmetrized) can be represented as spin network states: colored lines (carrying some SU(2) representation) meeting at nodes where intertwining SU(2) operators act. There is also a path integral representation, known as spin foam, a topological theory of colored surfaces representing the evolution of a spin network. Spin foams can also be considered as an independent approach to the quantization of the gravitational field. In addition to its specific problems, the hamiltonian constraint does not say in what sense (with respect to what) the three-dimensional dynamics evolve.

Nomological Unification and Phenomenology of Gravitation. Thought of the Day 110.0


String theory, which promises to give an all-encompassing, nomologically unified description of all interactions, did not even lead to any unambiguous solutions to the multitude of explanative desiderata of the standard model of quantum field theory: the determination of its specific gauge invariances, broken symmetries and particle generations as well as its 20 or more free parameters, the chirality of matter particles, etc. String theory does at least give an explanation for the existence and for the number of particle generations. The latter is determined by the topology of the compactified additional spatial dimensions of string theory; their topology determines the structure of the possible oscillation spectra. The number of particle generations is identical to half the absolute value of the Euler number of the compact Calabi-Yau topology. But, because it is completely unclear which topology should be assumed for the compact space, there are no definitive results. This ambiguity is part of the vacuum selection problem; there are probably more than 10¹⁰⁰ alternative scenarios in the so-called string landscape. Moreover all concrete models, deliberately chosen and analyzed, lead to generation numbers that are much too big. There are phenomenological indications that the number of particle generations cannot exceed three. String theory admits generation numbers between three and 480.

Attempts at a concrete solution of the relevant external problems (and explanative desiderata) either did not take place, or they did not show any results, or they led to escalating ambiguities and finally got drowned completely in the string landscape scenario: the recently developed insight that string theory obviously does not lead to a unique description of nature, but describes an immense number of nomologically, physically and phenomenologically different worlds with different symmetries, parameter values, and values of the cosmological constant.

String theory seems to be by far too much preoccupied with its internal conceptual and mathematical problems to be able to find concrete solutions to the relevant external physical problems. It is almost completely dominated by internal consistency constraints. It is not the fact that we are living in a ten-dimensional world which forces string theory to a ten-dimensional description. It is that perturbative string theories are only anomaly-free in ten dimensions; and they contain gravitons only in a ten-dimensional formulation. The resulting question, how the four-dimensional spacetime of phenomenology emerges from ten-dimensional perturbative string theories (or their eleven-dimensional non-perturbative extension: the mysterious, not yet existing M theory), led to the compactification idea and to the braneworld scenarios, and from there to further internal problems.

It is not the fact that empirical indications for supersymmetry were found, that forces consistent string theories to include supersymmetry. Without supersymmetry, string theory has no fermions and no chirality, but there are tachyons which make the vacuum unstable; and supersymmetry has certain conceptual advantages: it leads very probably to the finiteness of the perturbation series, thereby avoiding the problem of non-renormalizability which haunted all former attempts at a quantization of gravity; and there is a close relation between supersymmetry and Poincaré invariance which seems reasonable for quantum gravity. But it is clear that not all conceptual advantages are necessarily part of nature, as the example of the elegant, but unsuccessful Grand Unified Theories demonstrates.

Its ten (or eleven) dimensions and the inclusion of supersymmetry both have more or less the character of conceptually, but not empirically, motivated ad-hoc assumptions. String theory consists of a rather careful adaptation of the mathematical and model-theoretical apparatus of perturbative quantum field theory to the quantized, one-dimensionally extended, oscillating string (and, finally, of a minimal extension of its methods into the non-perturbative regime for which the declarations of intent exceed by far the conceptual successes). Without any empirical data transcending the context of our established theories, there remains for string theory only the minimal conceptual integration of basic parts of the phenomenology already reproduced by these established theories. And a significant component of this phenomenology, namely the phenomenology of gravitation, was already used up in the selection of string theory as an interesting approach to quantum gravity. Only because string theory, containing gravitons as string states, reproduces in a certain way the phenomenology of gravitation is it taken seriously.

Superstrings as Grand Unifier. Thought of the Day 86.0


The first step of deriving General Relativity and particle physics from a common fundamental source may lie within the quantization of the classical string action. At a given momentum, quantized strings exist only at discrete energy levels, each level containing a finite number of string states, or particle types. There are huge energy gaps between each level, which means that the directly observable particles belong to a small subset of string vibrations. In principle, a string has harmonic frequency modes ad infinitum. However, the masses of the corresponding particles get larger, and decay to lighter particles all the quicker.

Most importantly, the ground energy state of the string contains a massless, spin-two particle. There are no higher spin particles, which is fortunate since their presence would ruin the consistency of the theory. The presence of a massless spin-two particle is undesirable if string theory has the limited goal of explaining hadronic interactions. This had been the initial intention. However, attempts at a quantum field theoretic description of gravity had shown that the force-carrier of gravity, known as the graviton, had to be a massless spin-two particle. Thus, in string theory’s comeback as a potential “theory of everything,” a curse turns into a blessing.

Once again, as with the case of supersymmetry and supergravity, we have the astonishing result that quantum considerations require the existence of gravity! From this vantage point, right from the start the quantum divergences of gravity are swept away by the extended string. Rather than being mutually exclusive, as it seems at first sight, quantum physics and gravitation have a symbiotic relationship. This reinforces the idea that quantum gravity may be a mandatory step towards the unification of all forces.

Unfortunately, the ground state energy level also includes particles with negative mass-squared, known as tachyons. Such particles have light speed as their limiting minimum speed, thus violating causality. Tachyonic particles generally suggest an instability, or possibly even an inconsistency, in a theory. Since tachyons have negative mass-squared, an interaction involving finite input energy could result in particles of arbitrarily high energies together with arbitrarily many tachyons. There is no limit to the number of such processes, thus preventing a perturbative understanding of the theory.

An additional problem is that the string states only include bosonic particles. However, it is known that nature certainly contains fermions, such as electrons and quarks. Since supersymmetry is the invariance of a theory under the interchange of bosons and fermions, it may come as no surprise, a posteriori, that this is the key to resolving the second issue. As it turns out, the bosonic sector of the theory corresponds to the spacetime coordinates of a string, from the point of view of the conformal field theory living on the string worldsheet. This means that the additional fields are fermionic, so that the particle spectrum can potentially include all observable particles. In addition, the lowest energy level of a supersymmetric string is naturally massless, which eliminates the unwanted tachyons from the theory.

The inclusion of supersymmetry has some additional bonuses. Firstly, supersymmetry enforces the cancellation of zero-point energies between the bosonic and fermionic sectors. Since gravity couples to all energy, if these zero-point energies were not canceled, as in the case of non-supersymmetric particle physics, then they would have an enormous contribution to the cosmological constant. This would disagree with the observed cosmological constant being very close to zero, on the positive side, relative to the energy scales of particle physics.
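A minimal toy model of this cancellation is the supersymmetric harmonic oscillator (an illustration, not string theory itself), where the bosonic zero-point energy +ω/2 is cancelled exactly by the fermionic −ω/2:

```python
import numpy as np

omega, D = 1.0, 8   # oscillator frequency; bosonic Fock-space cutoff

# Bosonic levels E_n = omega*(n + 1/2); fermionic levels E_nu = omega*(nu - 1/2)
Hb = omega * (np.diag(np.arange(D, dtype=float)) + 0.5 * np.eye(D))
Hf = omega * (np.diag([0.0, 1.0]) - 0.5 * np.eye(2))

# Supersymmetric pair: H = Hb (x) 1 + 1 (x) Hf on the product space
H = np.kron(Hb, np.eye(2)) + np.kron(np.eye(D), Hf)
E = np.sort(np.linalg.eigvalsh(H))

print(E[:5])  # ground-state energy is exactly 0: the zero-point terms cancel
```

The spectrum also pairs up: every positive energy level appears once in the bosonic and once in the fermionic sector, the hallmark of unbroken supersymmetry.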

Also, the weak, strong and electromagnetic couplings of the Standard Model differ by several orders of magnitude at low energies. However, at high energies, the couplings take on almost the same value, almost but not quite. It turns out that a supersymmetric extension of the Standard Model appears to render the values of the couplings identical at approximately 10¹⁶ GeV. This may be the manifestation of the fundamental unity of forces. It would appear that the “bottom-up” approach to unification is winning. That is, gravitation arises from the quantization of strings. To put it another way, supergravity is the low-energy limit of string theory, and has General Relativity as its own low-energy limit.
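The near-meeting of the couplings can be reproduced with standard one-loop running (the MSSM one-loop coefficients b = (33/5, 1, −3) are standard; the input values at MZ below are rough, illustrative numbers, not precision data):

```python
import math

# One-loop running: alpha_i^{-1}(mu) = alpha_i^{-1}(MZ) - (b_i/2pi) ln(mu/MZ)
MZ = 91.2                               # GeV
b = {1: 33 / 5, 2: 1.0, 3: -3.0}        # MSSM one-loop coefficients
alpha_inv = {1: 59.0, 2: 29.6, 3: 8.5}  # rough inverse couplings at MZ

def meeting_scale(i, j):
    """Scale mu at which the straight-line runnings of couplings i, j cross."""
    log_mu = 2 * math.pi * (alpha_inv[i] - alpha_inv[j]) / (b[i] - b[j])
    return MZ * math.exp(log_mu)

print(meeting_scale(1, 2), meeting_scale(2, 3))  # both of order 10^16 GeV
```

That the two independent pairwise crossings land within a factor of a few of each other is the quantitative content of "almost but not quite" becoming "identical" in the supersymmetric case.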

Quantum Energy Teleportation. Drunken Risibility.


Time is one of the most difficult concepts in physics. It enters in the equations in a rather artificial way – as an external parameter. Although strictly speaking time is a quantity that we measure, it is not possible in quantum physics to define a time-observable in the same way as for the other quantities that we measure (position, momentum, etc.). The intuition that we have about time is that of a uniform flow, as suggested by the regular ticks of clocks. Time flows undisturbed by the variety of events that may occur in an irregular pattern in the world. Similarly, the quantum vacuum is the most regular state one can think of. For example, a persistent superconducting current flows at a constant speed – essentially forever. Can then one use the quantum vacuum as a clock? This is a fascinating dispute in condensed-matter physics, formulated as the problem of existence of time crystals. A time crystal, by analogy with a crystal in space, is a system that displays a time-regularity under measurement, while being in the ground (vacuum) state.

Then, if there is an energy (the zero-point energy) associated with empty space, it follows via the special theory of relativity that this energy should correspond to an inertial mass. By the principle of equivalence of the general theory of relativity, inertial mass is identical with the gravitational mass. Thus, empty space must gravitate. So, how much does empty space weigh? This question brings us to the frontiers of our knowledge of vacuum – the famous problem of the cosmological constant, a problem that Einstein was wrestling with, and which is still an open issue in modern cosmology.

Finally, although we cannot locally extract the zero-point energy of the vacuum fluctuations, the vacuum state of a field can be used to transfer energy from one place to another by using only information. This protocol has been called quantum energy teleportation and uses the fact that different spatial regions of a quantum field in the ground state are entangled. It then becomes possible to extract locally energy from the vacuum by making a measurement in one place, then communicating the result to an experimentalist in a spatially remote region, who would be able then to extract energy by making an appropriate (depending on the result communicated) measurement on her or his local vacuum. This suggests that the vacuum is the primordial essence, the ousia from which everything came into existence.

Without Explosions, WE Would NOT Exist!


The matter and radiation in the universe gets hotter and hotter as we go back in time towards the initial quantum state, because it was compressed into a smaller volume. In this Hot Big Bang epoch in the early universe, we can use standard physical laws to examine the processes going on in the expanding mixture of matter and radiation. A key feature is that about 300,000 years after the start of the Hot Big Bang epoch, nuclei and electrons combined to form atoms. At earlier times when the temperature was higher, atoms could not exist, as the radiation then had so much energy it disrupted any atoms that tried to form into their constituent parts (nuclei and electrons). Thus at earlier times matter was ionized, consisting of negatively charged electrons moving independently of positively charged atomic nuclei. Under these conditions, the free electrons interact strongly with radiation by Thomson scattering. Consequently matter and radiation were tightly coupled in equilibrium at those times, and the Universe was opaque to radiation. When the temperature dropped through the ionization temperature of about 4000 K, atoms formed from the nuclei and electrons, and this scattering ceased: the Universe became very transparent. The time when this transition took place is known as the time of decoupling – it was the time when matter and radiation ceased to be tightly coupled to each other, at a redshift zdec ≃ 1100 (Scott Dodelson, Modern Cosmology, Academic Press). The matter and radiation densities and the radiation temperature scale with the scale factor S(t) as

μbar ∝ S⁻³, μrad ∝ S⁻⁴, Trad ∝ S⁻¹ —– (1)

The scale factor S(t) obeys the Raychaudhuri equation

3S̈/S = −1/2 κ(μ + 3p/c²) + Λ —– (2)

where κ is the gravitational constant and Λ the cosmological constant.
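For instance, in an empty universe (μ = p = 0) with Λ > 0, equation (2) reduces to S̈ = (Λ/3)S, whose expanding solution is the de Sitter exponential. A quick numerical integration confirms this (a sketch; the value Λ = 3 is chosen purely so that the solution is eᵗ):

```python
import math

# Empty universe (mu = p = 0): equation (2) reduces to S'' = (Lambda/3) S.
# Choosing Lambda = 3 the expanding solution is the de Sitter exponential e^t.
Lam = 3.0
h = 1e-4
S, V, t = 1.0, math.sqrt(Lam / 3), 0.0   # S(0) = 1, S'(0) = sqrt(Lambda/3)
while t < 1.0:
    # velocity-Verlet step for S'' = (Lam/3) S
    V += 0.5 * h * (Lam / 3) * S
    S += h * V
    V += 0.5 * h * (Lam / 3) * S
    t += h
print(S, math.exp(t))  # the numerical scale factor tracks the exponential
```

This Λ-driven exponential behaviour is the late-time acceleration discussed in the dark-energy context above.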

By equation (1), μrad/μbar ∝ 1/S, so the universe was radiation dominated (μrad ≫ μmat) at early times and matter dominated (μrad ≪ μmat) at late times; matter-radiation density equality occurred significantly before decoupling (the temperature Teq when this equality occurred was Teq ≃ 10⁴ K; at that time the scale factor was Seq ≃ 10⁻⁴ S0, where S0 is the present-day value). The dynamics of both the background model and of perturbations about that model differ significantly before and after Seq.

Radiation was emitted by matter at the time of decoupling, thereafter travelling freely to us through the intervening space. When it was emitted, it had the form of blackbody radiation, because this is a consequence of matter and radiation being in thermodynamic equilibrium at earlier times. Thus the matter at z = zdec forms the Last Scattering Surface (LSS) in the early universe, emitting Cosmic Blackbody Background Radiation (‘CBR’) at 4000 K, that since then has travelled freely with its temperature T scaling inversely with the scale function of the universe. As the radiation travelled towards us, the universe expanded by a factor of about 1100; consequently by the time it reaches us, it has cooled to 2.75 K (that is, about 3 degrees above absolute zero, with a spectrum peaking in the microwave region), and so is extremely hard to observe. It was however detected in 1965, and its spectrum has since been intensively investigated, its blackbody nature being confirmed to high accuracy (R. B. Partridge, 3K: The Cosmic Microwave Background Radiation). Its existence is now taken as solid proof both that the Universe has indeed expanded from a hot early phase, and that standard physics applied unchanged at that era in the early universe.

The thermal capacity of the radiation is hugely greater than that of the matter. At very early times before decoupling, the temperatures of the matter and radiation were the same (because they were in equilibrium with each other), scaling as 1/S(t) (Equation 1 above). The early universe exceeded any temperature that can ever be attained on Earth or even in the centre of the Sun; as it dropped towards its present value of 3 K, successive physical reactions took place that determined the nature of the matter we see around us today. At very early times and high temperatures, only elementary particles can survive and even neutrinos had a very small mean free path; as the universe cooled down, neutrinos decoupled from the matter and streamed freely through space. At these times the expansion of the universe was radiation dominated, and we can approximate the universe then by models with {k = 0, w = 1/3, Λ = 0}, the resulting simple solution of

3Ṡ²/S² = A/S³ + B/S⁴ + Λ − 3k/S² —– (3)

uniquely relating time to temperature:

S(t) = S0 t^1/2 , t = 1.92 sec [T/10¹⁰ K]⁻² —– (4)

(There are no free constants in the latter equation).
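Since equation (4) has no free constants, it can be evaluated directly at landmark temperatures (a quick check; the 10¹⁰ K and 10⁹ K thresholds are the ones quoted in the surrounding text):

```python
# Equation (4): t = 1.92 sec * (T / 10^10 K)^(-2), radiation-dominated era
def age_seconds(T_kelvin):
    return 1.92 * (T_kelvin / 1e10) ** (-2)

print(age_seconds(1e10))  # ~1.9 s: neutrino decoupling at about 10^10 K
print(age_seconds(1e9))   # 192 s: nucleosynthesis, "the first three minutes"
```

The second figure is the origin of the famous statement that the light elements were made within the first three minutes.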

At very early times, even neutrinos were tightly coupled and in equilibrium with the radiation; they decoupled at about 10¹⁰ K, resulting in a relic neutrino background density in the universe today of about Ων0 ≃ 10⁻⁵ if they are massless (but it could be higher depending on their masses). Key events in the early universe are associated with out of equilibrium phenomena. An important event was the era of nucleosynthesis, the time when the light elements were formed. Above about 10⁹ K, nuclei could not exist because the radiation was so energetic that as fast as they formed, they were disrupted into their constituent parts (protons and neutrons). However below this temperature, if particles collided with each other with sufficient energy for nuclear reactions to take place, the resultant nuclei remained intact (the radiation being less energetic than their binding energy and hence unable to disrupt them). Thus the nuclei of the light elements – deuterium, tritium, helium, and lithium – were created by neutron capture. This process ceased when the temperature dropped below about 10⁸ K (the nuclear reaction threshold). In this way, the proportions of these light elements at the end of nucleosynthesis were determined; they have remained virtually unchanged since. The rate of reaction was extremely high; all this took place within the first three minutes of the expansion of the Universe. One of the major triumphs of Big Bang theory is that theory and observation are in excellent agreement provided the density of baryons is low: Ωbar0 ≃ 0.044. Then the predicted abundances of these elements (25% Helium by weight, 75% Hydrogen, the others being less than 1%) agree very closely with the observed abundances. Thus the standard model explains the origin of the light elements in terms of known nuclear reactions taking place in the early Universe. However heavier elements cannot form in the time available (about 3 minutes).
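The 25% helium figure follows from simple counting: if essentially all surviving neutrons end up bound in helium-4, the mass fraction is Y = 2(n/p)/(1 + n/p). With n/p ≈ 1/7, a standard rough value for the neutron-to-proton ratio at the end of nucleosynthesis (an assumption supplied here, not stated in the text above):

```python
# If essentially all surviving neutrons end up bound in helium-4, the helium
# mass fraction is Y = 2(n/p) / (1 + n/p).  n/p ~ 1/7 is a standard rough
# value after freeze-out and neutron decay (an illustrative assumption).
n_over_p = 1 / 7
Y = 2 * n_over_p / (1 + n_over_p)
print(Y)  # 0.25: the 25% helium-by-weight figure
```

The sensitivity of this number (and of the deuterium abundance) to the baryon density is what makes nucleosynthesis such a sharp test of Ωbar0.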

In a similar way, physical processes in the very early Universe (before nucleosynthesis) can be invoked to explain the ratio of matter to anti-matter in the present-day Universe: a small excess of matter over anti-matter must be created then in the process of baryosynthesis, without which we could not exist today (if there were no such excess, matter and antimatter would have all annihilated to give just radiation). However other quantities (such as electric charge) are believed to have been conserved even in the extreme conditions of the early Universe, so their present values result from given initial conditions at the origin of the Universe, rather than from physical processes taking place as it evolved. In the case of electric charge, the total conserved quantity appears to be zero: after quarks form protons and neutrons at the time of baryosynthesis, there are equal numbers of positively charged protons and negatively charged electrons, so that at the time of decoupling there were just enough electrons to combine with the nuclei and form uncharged atoms (it seems there is no net electrical charge on astronomical bodies such as our galaxy; were this not true, electromagnetic forces would dominate cosmology, rather than gravity).

After decoupling, matter formed large-scale structures through gravitational instability, which eventually led to the formation of the first generation of stars and is probably associated with the re-ionization of matter. However, at that time planets could not form, for a very important reason: there were no heavy elements present in the Universe. The first stars aggregated matter together by gravitational attraction, the matter heating up as it became more and more concentrated, until its temperature exceeded the thermonuclear ignition point and nuclear reactions started burning hydrogen to form helium. Eventually more complex nuclear reactions started in concentric spheres around the centre, leading to a build-up of heavy elements (carbon, nitrogen, and oxygen, for example), up to iron. These elements can form in stars because there is a long time available (millions of years) for the reactions to take place. Massive stars burn relatively rapidly, and eventually run out of nuclear fuel. The star becomes unstable, and its core rapidly collapses because of gravitational attraction. The consequent rise in temperature blows it apart in a giant explosion, during which new reactions take place that generate elements heavier than iron; this explosion is seen by us as a supernova (“new star”) suddenly blazing in the sky, where previously there was just an ordinary star. Such explosions blow into space the heavy elements that had been accumulating in the star’s interior, forming vast filaments of dust around the remnant of the star. It is this material that can later be accumulated, during the formation of second-generation stars, to form planetary systems around those stars. Thus the elements of which we are made (the carbon, nitrogen, oxygen, and iron nuclei, for example) were created in the extreme heat of stellar interiors, and made available for our use by supernova explosions. Without these explosions, we could not exist.

Beginning of Matter, Start to Existence Itself


When the inequality

μ + 3p/c² > 0 ⇔ w > −1/3

is satisfied, one obtains directly from the Raychaudhuri equation

3S̈/S = −1/2 κ(μ + 3p/c²) + Λ

the Friedmann-Lemaître (FL) Universe Singularity Theorem, which states that:

In a FL universe with Λ ≤ 0 and μ + 3p/c² > 0 at all times, at any instant t0 when H0 ≡ (Ṡ/S)0 > 0 there is a finite time t∗, with t0 − (1/H0) < t∗ < t0, such that S(t) → 0 as t → t∗; the universe starts at a space-time singularity there, with μ → ∞ and T → ∞ if μ + p/c² > 0.

This is not merely a start to matter – it is a start to space, to time, to physics itself. It is the most dramatic event in the history of the universe: it is the start of existence of everything. The underlying physical feature is the non-linear nature of Einstein’s Field Equations (EFE): going back into the past, the more the universe contracts, the higher the active gravitational density, causing it to contract even more. The pressure p that one might have hoped would help stave off the collapse makes it even worse, because (consequent on the form of the EFE) p enters algebraically into the Raychaudhuri equation with the same sign as the energy density μ. Note that the Hubble constant gives an estimate of the age of the universe: the time τ0 = t0 − t∗ since the start of the universe is less than 1/H0.
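
The age bound τ0 < 1/H0 is easy to evaluate numerically. A minimal sketch, using the conventional parametrization H0 = 100h km/sec/Mpc with an illustrative h = 0.7 (the value of h is an assumption, not stated in the text):

```python
# Evaluate the Hubble-time upper bound on the age of the universe,
# tau_0 < 1/H_0, for H_0 = 100 h km/s/Mpc. The value h = 0.7 is
# illustrative only.

KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one gigayear

h = 0.7
H0 = 100.0 * h / KM_PER_MPC  # Hubble constant in units of 1/s

hubble_time_gyr = (1.0 / H0) / SEC_PER_GYR
print(f"1/H0 ~ {hubble_time_gyr:.1f} Gyr")  # ~14 Gyr upper bound on the age
```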

This conclusion can in principle be avoided by a cosmological constant, but in practice this cannot work: we know the universe has expanded by at least a ratio of 11, as we have seen objects at a redshift of 10, so the cosmological constant would have to have an effective magnitude at least 11³ = 1331 times the present matter density to dominate and cause a turn-around then or at any earlier time, and so would be much bigger than its observed present upper limit (of the same order as the present matter density). Accordingly, no turnaround is possible while classical physics holds. However, energy-violating matter components such as a scalar field can avoid this conclusion, if they dominate at early enough times; but this can only be when quantum fields are significant, when the universe was at least 10¹² times smaller than at present.
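
The numbers in this argument follow directly from the matter scaling law: a redshift z corresponds to an expansion ratio 1 + z, and matter density grows as the cube of that ratio going back in time. The arithmetic:

```python
# Arithmetic behind the no-turnaround argument: objects seen at
# redshift z = 10 imply the universe has expanded by a factor
# 1 + z = 11 since then; matter density scales as S^-3, so at that
# epoch it was 11^3 = 1331 times its present value. A cosmological
# constant able to cause a turnaround then would have to exceed that.

z = 10
expansion_ratio = 1 + z               # S(now) / S(then)
density_ratio = expansion_ratio ** 3  # mu_matter(then) / mu_matter(now)

print(expansion_ratio, density_ratio)  # 11, 1331
```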

Because Trad ∝ S⁻¹, a major conclusion is that a Hot Big Bang must have occurred; densities and temperatures must have risen at least to high enough energies that quantum fields were significant, at something like the GUT energy. The universe must have reached those extreme temperatures and energies at which classical theory breaks down.

Cosmology: Friedmann-Lemaître Universes


Cosmology starts by assuming that the large-scale evolution of spacetime can be determined by applying Einstein’s field equations of gravitation everywhere: global evolution will follow from local physics. The standard models of cosmology are based on the assumption that once one has averaged over a large enough physical scale, isotropy is observed by all fundamental observers (the preferred family of observers associated with the average motion of matter in the universe). When this isotropy is exact, the universe is spatially homogeneous as well as isotropic. The matter motion is then along irrotational and shear-free geodesic curves with tangent vector ua, implying the existence of a canonical time variable t obeying ua = −t,a. The Robertson-Walker (‘RW’) geometries used to describe the large-scale structure of the universe embody these symmetries exactly. Consequently they are conformally flat, that is, the Weyl tensor is zero:

Cijkl := Rijkl + 1/2(Rikgjl + Rjlgik − Ril gjk − Rjkgil) − 1/6R(gikgjl − gilgjk) = 0 —– (1)

This tensor represents the free gravitational field; it encodes non-local effects such as tidal forces and gravitational waves, which do not occur in the exact RW geometries.

Comoving coordinates can be chosen so that the metric takes the form:

ds² = −dt² + S²(t)dσ², ua = δa0 (a = 0, 1, 2, 3) —– (2)

where S(t) is the time-dependent scale factor, and the worldlines with tangent vector ua = dxa/dt represent the histories of fundamental observers. The space sections {t = const} are surfaces of homogeneity and have maximal symmetry: they are 3-spaces of constant curvature K = k/S²(t), where k is the sign of K. The normalized metric dσ² characterizes a 3-space of normalized constant curvature k; coordinates (r, θ, φ) can be chosen such that

dσ² = dr² + f²(r)(dθ² + sin²θ dφ²) —– (3)

where f(r) = {sin r, r, sinh r} if k = {+1, 0, −1} respectively. The rate of expansion at any time t is characterized by the Hubble parameter H(t) = Ṡ/S.

To determine the metric’s evolution in time, one applies the Einstein Field Equations, showing the effect of matter on space-time curvature, to the metric (2,3). Because of local isotropy, the matter tensor Tab necessarily takes a perfect fluid form relative to the preferred worldlines with tangent vector ua:

Tab = (μ + p/c²)uaub + (p/c²)gab —– (4)

where c is the speed of light. The energy density μ(t) and the pressure term p(t)/c² are the timelike and spacelike eigenvalues of Tab. The integrability conditions for the Einstein Field Equations are the energy-density conservation equation

Tab;b = 0 ⇔ μ̇ + (μ + p/c²)3Ṡ/S = 0 —– (5)

This becomes determinate when a suitable equation of state function w := pc²/μ relates the pressure p to the energy density μ and temperature T: p = w(μ, T)μ/c² (w may or may not be constant). Baryons have {pbar = 0 ⇔ w = 0} and radiation has {pradc² = μrad/3 ⇔ w = 1/3, μrad = aT⁴rad}, which by (5) imply

μbar ∝ S⁻³, μrad ∝ S⁻⁴, Trad ∝ S⁻¹ —– (6)
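
The scaling laws (6) fix how the balance between matter and radiation shifts with the scale factor. As a sketch, one can locate the epoch of matter–radiation equality; the present-day density parameters used below (Ωm0 ≈ 0.3, Ωrad0 ≈ 10⁻⁴) are illustrative round numbers, not values from the text:

```python
# Use the scalings mu_bar ∝ S^-3 and mu_rad ∝ S^-4 from (6) to find
# the scale factor a_eq (normalized so a = 1 today) at which the
# matter and radiation densities were equal. The density parameters
# below are illustrative, not taken from the text.

omega_m0 = 0.3    # matter density parameter today (illustrative)
omega_r0 = 1e-4   # radiation density parameter today (illustrative)

# mu_m(a) / mu_r(a) = (omega_m0 / omega_r0) * a, so equality occurs at:
a_eq = omega_r0 / omega_m0
z_eq = 1.0 / a_eq - 1.0   # corresponding redshift

print(f"a_eq ~ {a_eq:.1e}, z_eq ~ {z_eq:.0f}")
```

Before a_eq radiation dominates (μ ∝ S⁻⁴), after it matter does (μ ∝ S⁻³), which is why the two scaling exponents in (6) matter for the expansion history.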

The scale factor S(t) obeys the Raychaudhuri equation

3S̈/S = −1/2 κ(μ + 3p/c²) + Λ —– (7)

where κ is the gravitational constant and Λ is the cosmological constant. A cosmological constant can also be regarded as a fluid with pressure p related to the energy density μ by {p = −μc² ⇔ w = −1}. This shows that the active gravitational mass density of the matter and fields present is μgrav := μ + 3p/c². For ordinary matter this will be positive:

μ + 3p/c² > 0 ⇔ w > −1/3 —– (8)

(the ‘Strong Energy Condition’), so ordinary matter will tend to cause the universe to decelerate (S̈ < 0). It is also apparent that a positive cosmological constant on its own will cause an accelerating expansion (S̈ > 0). When matter and a cosmological constant are both present, either result may occur depending on which effect is dominant. The first integral of equations (5, 7) when Ṡ ≠ 0 is the Friedmann equation

Ṡ²/S² = κμ/3 + Λ/3 − k/S² —– (9)

This is just the Gauss equation relating the 3-space curvature to the 4-space curvature, showing how matter directly causes a curvature of 3-spaces. Because of the spacetime symmetries, the ten Einstein Field Equations are equivalent to the two equations (7, 9). Models of this kind, that is with a Robertson-Walker (‘RW’) geometry with metric (2, 3) and dynamics governed by equations (5, 7, 9), are called Friedmann-Lemaître universes (‘FL’). The Friedmann equation (9) controls the expansion of the universe, and the conservation equation (5) controls the density of matter as the universe expands; when Ṡ ≠ 0, equation (7) will necessarily hold if (5, 9) are both satisfied. Given a determinate matter description (specifying the equation of state w = w(μ, T) explicitly or implicitly) for each matter component, the existence and uniqueness of solutions follows, both for a single matter component and for a combination of different kinds of matter, for example μ = μbar + μrad + μcdm + μν, where we include cold dark matter (cdm) and neutrinos (ν). Initial data for such solutions at an arbitrary time t0 (e.g. today) consists of:

• The Hubble constant H0 := (Ṡ/S)0 = 100h km/sec/Mpc;

• A dimensionless density parameter Ωi0 := κμi0/3H0² for each type of matter present (labelled by i);

• If Λ ≠ 0, either ΩΛ0 := Λ/3H0², or the dimensionless deceleration parameter q0 := −(S̈/S)0 H0⁻².

Given the equations of state for the matter, this data then determines a unique solution {S(t), μ(t)}, i.e. a unique corresponding universe history. The total matter density is the sum of the terms Ωi0 for each type of matter present, for example

Ωm0 = Ωbar0 + Ωrad0 + Ωcdm0 + Ων0, —– (10)

and the total density parameter Ω0 is the sum of that for matter and for the cosmological constant:

Ω0 = Ωm0 + ΩΛ0 —– (11)
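
How the initial data {H0, Ωi0, ΩΛ0} pick out a unique history S(t) can be sketched numerically. A minimal forward-Euler integration of the Friedmann equation (9) for a spatially flat (k = 0) matter-plus-Λ model; the parameter values and the crude integrator are illustrative choices, not the text's:

```python
# Integrate the Friedmann equation (9) forward in time for a flat
# (k = 0) universe containing matter and a cosmological constant:
#     (dS/dt / S)^2 = H0^2 * (Omega_m0 / S^3 + Omega_L0)
# with S normalized to 1 today. Parameters are illustrative.

import math

omega_m0, omega_l0 = 0.3, 0.7  # flat model: they sum to 1
H0 = 1.0                       # work in units where H0 = 1

def hubble(S):
    """H(S) from the Friedmann equation for the flat matter+Lambda model."""
    return H0 * math.sqrt(omega_m0 / S**3 + omega_l0)

# Step S(t) forward from a small initial value using dS/dt = S * H(S).
S, t, dt = 1e-3, 0.0, 1e-4
while S < 1.0:               # integrate until the present epoch (S = 1)
    S += S * hubble(S) * dt
    t += dt

# The elapsed t is this model's age, close to but less than 1/H0,
# consistent with the age bound from the singularity theorem.
print(f"age of this model universe ~ {t:.2f} in units of 1/H0")
```

The same scheme extends directly to extra components (radiation, curvature) by adding their scaling terms from (6) under the square root.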

Evaluating the Raychaudhuri equation (7) at the present time gives an important relation between these parameters: when the pressure term p/c² can be ignored relative to the matter term μ (as is plausible at the present time, and assuming we represent ‘dark energy’ as a cosmological constant),

q0 = 1/2 Ωm0 − ΩΛ0 —– (12)

This shows that a cosmological constant Λ can cause an acceleration (negative q0); if it vanishes, the expression simplifies: Λ = 0 ⇒ q0 = 1/2 Ωm0, showing how matter causes a deceleration of the universe. Evaluating the Friedmann equation (9) at the time t0, the spatial curvature is

K0 := k/S0² = H0²(Ω0 − 1) —– (13)

The value Ω0 = 1 corresponds to spatially flat universes (K0 = 0), separating models with positive spatial curvature (Ω0 > 1 ⇔ K0 > 0) from those with negative spatial curvature (Ω0 < 1 ⇔ K0 < 0).
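
Plugging representative values into (11), (12), and (13) makes the classification concrete; Ωm0 = 0.3 and ΩΛ0 = 0.7 below are illustrative, not values asserted by the text:

```python
# Evaluate the deceleration parameter (12) and the curvature
# classification (13) for illustrative density parameters.

omega_m0, omega_l0 = 0.3, 0.7

q0 = 0.5 * omega_m0 - omega_l0   # eq. (12): q0 = (1/2) Omega_m0 - Omega_L0
omega_0 = omega_m0 + omega_l0    # eq. (11): total density parameter

print(f"q0 = {q0:+.2f}")  # negative q0: accelerating expansion
if abs(omega_0 - 1.0) < 1e-9:
    print("Omega_0 = 1: spatially flat (K0 = 0)")
elif omega_0 > 1.0:
    print("Omega_0 > 1: positive spatial curvature (K0 > 0)")
else:
    print("Omega_0 < 1: negative spatial curvature (K0 < 0)")
```

With these numbers q0 = −0.55, so the Λ term dominates and the model accelerates while remaining spatially flat.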
The FL models are the standard models of modern cosmology, surprisingly effective in view of their extreme geometrical simplicity. One of their great strengths is their explanatory role in terms of making explicit the way the local gravitational effect of matter and radiation determines the evolution of the universe as a whole, this in turn forming the dynamic background for local physics (including the evolution of the matter and radiation).

Typicality. Cosmological Constant and Boltzmann Brains. Note Quote.


In a multiverse we would expect there to be relatively many universe domains with large values of the cosmological constant, but none of these allow gravitationally bound structures (such as our galaxy) to occur, so the likelihood of observing ourselves to be in one is essentially zero.

The cosmological constant has negative pressure, but positive energy. The negative pressure ensures that as the volume expands, matter loses energy (photons get redshifted, particles slow down); this loss of energy by matter causes the expansion to slow down – but the increase in energy of the increased volume is more important. The increase of energy associated with the extra space the cosmological constant fills has to be balanced by a decrease in the gravitational energy of the expansion – and this expansion energy is negative, allowing the universe to carry on expanding. If you put all the terms on one side of the Friedmann equation – which is just an energy-balancing equation – with the other side equal to zero, you will see that the expansion energy is negative, whereas the cosmological constant and matter (including dark matter) all have positive energy.
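
The "energy-balancing" reading described above can be made explicit by multiplying the Friedmann equation through by S² and collecting all terms on one side; this rearrangement is a sketch of the argument, using the same symbols as equation (9) of the previous section:

```latex
% Friedmann equation multiplied by S^2, all terms on one side:
%   -(expansion term) + (matter term) + (Lambda term) = k
-\dot{S}^{2} + \frac{\kappa\mu}{3}\,S^{2} + \frac{\Lambda}{3}\,S^{2} = k
% For k = 0 the right-hand side vanishes: the expansion term
% \dot{S}^2 then enters with the opposite sign to the (positive)
% matter and cosmological-constant contributions, which is the
% sense in which the "expansion energy" is negative.
```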


However, as the cosmological constant is decreased, we eventually reach a transition point where it becomes just small enough for gravitational structures to occur. Reduce it a bit further still, and you now get universes resembling ours. Given the increased likelihood of observing such a universe, the chances of our universe being one of these will be near its peak. Theoretical physicist Steven Weinberg used this reasoning to correctly predict the order of magnitude of the cosmological constant well before the acceleration of our universe was even measured.

Unfortunately this argument runs into conceptually murky water. The multiverse is infinite, and it is not clear whether we can calculate the odds for anything to happen in an infinite volume of space-time. All we have is the single case of our apparently small but positive value of the cosmological constant, so it’s hard to see how we could ever test whether or not Weinberg’s prediction was a lucky coincidence. Such questions concerning infinity, and what one can reasonably infer from a single data point, are just the tip of the philosophical iceberg that cosmologists face.

Another conundrum is where the laws of physics come from. Even if these laws vary across the multiverse, there must be, so it seems, meta-laws that dictate the manner in which they are distributed. How can we, inhabitants on a planet in a solar system in a galaxy, meaningfully debate the origin of the laws of physics as well as the origins of something, the very universe, that we are part of? What about the parts of space-time we can never see? These regions could infinitely outnumber our visible patch. The laws of physics could differ there, for all we know.

We cannot settle any of these questions by experiment, and this is where philosophers enter the debate. Central to this is the so-called observational-selection effect, whereby an observation is influenced by the observer’s “telescope”, whatever form that may take. But what exactly is it to be an observer, or more specifically a “typical” observer, in a system where every possible sort of observer will come about infinitely many times? The same basic question, centred on the role of observers, is as fundamental to the science of the indefinitely large (cosmology) as it is to that of the infinitesimally small (quantum theory).

This key issue of typicality also confronted Austrian physicist and philosopher Ludwig Boltzmann. In 1897 he posited an infinite space-time as a means to explain how extraordinarily well-ordered the universe is compared with the state of high entropy (or disorder) predicted by thermodynamics. Given such an arena, where every conceivable combination of particle position and momenta would exist somewhere, he suggested that the orderliness around us might be that of an incredibly rare fluctuation within an infinite space-time.

But Boltzmann’s reasoning was undermined by another, more absurd, conclusion. Rare fluctuations could also give rise to single momentary brains – self-aware entities that spontaneously arise through random collisions of particles. Such “Boltzmann brains”, the argument goes, are far more likely to arise than the entire visible universe or even the solar system. Ludwig Boltzmann reasoned that brains and other complex, orderly objects on Earth were the result of random fluctuations. But why, then, do we see billions of other complex, orderly objects all around us? Why aren’t we like the lone being in the sea of nonsense? Boltzmann theorized that if random fluctuations create brains like ours, there should be Boltzmann brains floating around in space or sitting alone on uninhabited planets untold light-years away. And in fact, those Boltzmann brains should be incredibly more common than the herds of complex, orderly objects we see here on Earth. So we have another paradox. If the only requirement of consciousness is a brain like the one in your head, why aren’t you a Boltzmann brain? If you were assigned to experience a random consciousness, you should almost certainly find yourself alone in the depths of space rather than surrounded by similar consciousnesses. The easy answers seem to all require a touch of magic. Perhaps consciousness doesn’t arise naturally from a brain like yours but requires some metaphysical endowment. Or maybe we’re not random fluctuations in a thermodynamic soup, and we were put here by an intelligent being. An infinity of space would therefore contain an infinitude of such disembodied brains, which would then be the “typical observer”, not us. Or, starting at the very beginning: entropy must always stay the same or increase over time, according to the second law of thermodynamics. However, Boltzmann (the Ludwig one, not the brain one) formulated a version of the law of entropy that was statistical.
What this means for what you’re asking is that while entropy almost always increases or stays the same, over billions of billions of billions of billions of billions… you get the idea… years, entropy might go down a bit. This is called a fluctuation. So backing up a tad: if entropy always increases or stays the same, what is surprising for cosmologists is that the universe started in such a low-entropy state. So to (try to) explain this, Boltzmann said, hey, what if there’s a bigger universe that our universe is in, and it is in a state of the most possible entropy, or thermal equilibrium. Then, let’s say it exists for a long, long time – those billions we talked about earlier. There’ll be statistical fluctuations, right? And those statistical fluctuations might be represented by the birth of universes. Ahem, our universe is one of them. So now, we get to the brains. Our universe must be a HUGE statistical fluctuation compared with other fluctuations. I mean, think about it: if it is so nuts for entropy to decrease by just a little tiny bit, how nuts would it be for it to decrease enough for the birth of a universe to happen!? So the question is, why aren’t we just brains? That is, why aren’t we a statistical fluctuation just big enough for intelligent life to develop, look around, see it exists, and melt back into goop? And it is this goopy, not-long-existing intelligent life that is a Boltzmann brain. This is a huge challenge to the Boltzmann (Ludwig) theory.

Can this bizarre vision possibly be real, or does it indicate something fundamentally wrong with our notion of “typicality”? Or is our notion of “the observer” flawed – can thermodynamic fluctuations that give rise to Boltzmann brains really suffice? Or could a futuristic supercomputer even play the Matrix-like role of a multitude of observers?