Gauge Fixity Towards Hyperbolicity: General Theory of Relativity and Superpotentials. Part 1.


The gravitational field is described by a pseudo-Riemannian metric g (with Lorentzian signature (1, m-1)) over the spacetime M of dimension dim(M) = m; in standard General Relativity, m = 4. The configuration bundle is thence the bundle of Lorentzian metrics over M, denoted by Lor(M). The Lagrangian is second order and is usually chosen to be the so-called Hilbert Lagrangian:

LH: J2Lor(M) → Λm(M)

LH: LH(gαβ, Rαβ)ds = 1/2κ (R – 2Λ)√g ds —– (1)


R = gαβ Rαβ denotes the scalar curvature, √g the square root of the absolute value of the metric determinant and Λ is a real constant (called the cosmological constant). The coupling constant 1/2κ, which is completely irrelevant until the gravitational field is coupled to some other field, depends on conventions; in natural units, i.e. c = 1, ℏ = 1, G = 1, in dimension 4 and with signature (+, –, –, –) one has κ = –8π.

Field equations are the well known Einstein equations with cosmological constant

Rαβ – 1/2 Rgαβ = –Λgαβ —— (2)

Lagrangian momenta are defined by:

pαβ = ∂LH/∂gαβ = 1/2κ (Rαβ – 1/2(R – 2Λ)gαβ)√g

Pαβ = ∂LH/∂Rαβ = 1/2κ gαβ√g —– (3)
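Both momenta in (3) follow from (1) by straightforward differentiation; here is a sketch in LaTeX, under the assumption (consistent with the index structure of (3)) that pαβ is obtained by varying with respect to the inverse metric gαβ:

```latex
% Momenta of the Hilbert Lagrangian L_H = (1/2\kappa)(R - 2\Lambda)\sqrt{g}\,ds,
% with g_{\alpha\beta} and R_{\alpha\beta} treated as independent arguments.
% Useful identities:
%   \partial(g^{\mu\nu}R_{\mu\nu})/\partial g^{\alpha\beta} = R_{\alpha\beta},
%   \partial\sqrt{g}/\partial g^{\alpha\beta} = -\tfrac{1}{2}\sqrt{g}\,g_{\alpha\beta}.
\begin{aligned}
P^{\alpha\beta} &= \frac{\partial L_H}{\partial R_{\alpha\beta}}
   = \frac{1}{2\kappa}\,\frac{\partial(g^{\mu\nu}R_{\mu\nu})}{\partial R_{\alpha\beta}}\,\sqrt{g}
   = \frac{1}{2\kappa}\,g^{\alpha\beta}\sqrt{g},\\[4pt]
p_{\alpha\beta} &= \frac{\partial L_H}{\partial g^{\alpha\beta}}
   = \frac{1}{2\kappa}\Big(R_{\alpha\beta}\sqrt{g}
     - \tfrac{1}{2}(R - 2\Lambda)\sqrt{g}\,g_{\alpha\beta}\Big)
   = \frac{1}{2\kappa}\Big(R_{\alpha\beta} - \tfrac{1}{2}(R - 2\Lambda)g_{\alpha\beta}\Big)\sqrt{g}.
\end{aligned}
```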

Thus the covariance identity is the following:

dα(LHξα) = pαβ£ξgαβ + Pαβ£ξRαβ —– (4)

or equivalently,

dα(LHξα) = pαβ£ξgαβ + Pαβ∇ε(£ξΓεαβ – δεβ£ξΓλαλ) —– (5)

where ∇ε denotes the covariant derivative with respect to the Levi-Civita connection of g. Thence we have a weak conservation law for the Hilbert Lagrangian

Div ε(LH, ξ) = W(LH, ξ) —– (6)

Conserved currents and work forms have respectively the following expressions:

ε(LH, ξ) = [Pαβ£ξΓεαβ – Pαε£ξΓλαλ – LHξε]dsε = √g/2κ (gαβgεσ – gσβgεα)∇α£ξgβσ dsε – √g/2κ ξε(R – 2Λ)dsε = √g/2κ [(3/2 Rαλ – (R – 2Λ)δαλ)ξλ + (gβγδαλ – gα(γδβ)λ)∇βγξλ]dsα —– (7)

W(LH, ξ) = √g/κ (Rαβ – 1/2(R – 2Λ)gαβ)∇(αξβ) ds —– (8)

As any other natural theory, General Relativity allows superpotentials. In fact, the current can be recast into the form:

ε(LH, ξ) = ε'(LH, ξ) + Div U(LH, ξ) —– (9)

where we set

ε'(LH, ξ) = √g/κ (Rαβ – 1/2(R – 2Λ)δαβ)ξβ dsα

U(LH, ξ) = 1/2κ ∇[βξα] √gdsαβ —– (10)

The superpotential (10) generalizes to an arbitrary vector field ξ the well-known Komar superpotential, which was originally derived for timelike Killing vectors. Whenever spacetime is assumed to be asymptotically flat, the superpotential of Komar is known to produce upon integration at spatial infinity the correct value for angular momentum (e.g. for Kerr-Newman solutions) but just one half of the expected value of the mass. The classical prescriptions are in fact:

m = 2∫ U(LH, ∂t, g)

J = ∫ U(LH, ∂φ, g) —– (11)

For an asymptotically flat solution (e.g. the Kerr-Newman black hole solution) m coincides with the so-called ADM mass and J is the so-called (ADM) angular momentum. For the Kerr-Newman solution in polar coordinates (t, r, θ, φ) the vector fields ∂t and ∂φ are the Killing vectors which generate stationarity and axial symmetry, respectively. Thence, according to this prescription, U(LH, ∂φ) is the superpotential for J while 2U(LH, ∂t) is the superpotential for m. This is known as the anomalous factor problem for the Komar potential. To obtain the expected values for all conserved quantities from the same superpotential, one has to correct the superpotential (10) by some ad hoc additional boundary term. Equivalently and alternatively, one can deduce a corrected superpotential as the canonical superpotential for a corrected Lagrangian, which is in fact the first order Lagrangian for standard General Relativity. This can be done covariantly, provided that one introduces an extra connection Γ’αβμ. The need for a reference connection Γ’ is also motivated by physical considerations, according to which conserved quantities have no absolute meaning but are intrinsically relative to an arbitrarily fixed vacuum level. The simplest choice consists, in fact, in fixing a background metric g (not necessarily of the correct Lorentzian signature) and taking Γ’ to be its Levi-Civita connection. This is rather similar to the gauge fixing à la Hawking which allows one to show that the Einstein equations form in fact an essentially hyperbolic PDE system. Nothing, however, prevents one from taking Γ’ to be any (in principle torsionless) connection on spacetime; this too corresponds to a gauge fixing towards hyperbolicity.
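The anomalous factor can be checked directly on the Schwarzschild solution. The sketch below (not from the source; G = c = 1, ξ = ∂t, f = 1 − 2M/r) verifies with sympy that ∇rξt = −M/r² and hence ∇^rξ^t = M/r², whose integral over a large sphere with the conventional 1/4π normalisation gives M, whereas the 1/2κ normalisation of the superpotential (10) yields M/2:

```python
# Symbolic check of the Komar integrand for Schwarzschild (G = c = 1),
# with xi = d/dt.  For g_tt = -(1 - 2M/r) the only relevant Christoffel
# symbol is Gamma^t_{tr} = f'/(2f) with f = 1 - 2M/r, and the Killing
# property makes nabla_a xi_b antisymmetric.
import sympy as sp

r, M = sp.symbols('r M', positive=True)
f = 1 - 2*M/r                       # -g_tt for Schwarzschild
xi_t = -f                           # xi_t = g_tt xi^t with xi^t = 1
Gamma_t_tr = sp.diff(f, r)/(2*f)    # Gamma^t_{tr} = f'/(2f)

# nabla_r xi_t = d_r xi_t - Gamma^t_{rt} xi_t
nabla_r_xi_t = sp.simplify(sp.diff(xi_t, r) - Gamma_t_tr*xi_t)
print(nabla_r_xi_t)                 # -M/r**2

# Raising both indices: nabla^r xi^t = g^rr g^tt nabla_r xi_t = M/r^2.
# Integrated over a sphere of area 4*pi*r^2 with the conventional 1/4pi
# normalisation this returns M; the superpotential (10), carrying 1/2kappa,
# returns M/2 instead -- the anomalous factor.
nabla_up = sp.simplify(f * (-1/f) * nabla_r_xi_t)
print(nabla_up)                     # M/r**2
```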

Now, using the term background for a field which enters a field theory in the same way as the metric enters Yang-Mills theory, we see that a background has to be fixed once and for all and thence preserved, e.g. by symmetries and deformations. A background has no field equations, since deformations fix it; it eventually destroys the naturality of a theory, since fixing the background results in allowing a smaller group of symmetries G ⊂ Diff(M). Accordingly, in truly natural field theories one should not consider background fields, whether or not they are endowed with a physical meaning (as the metric in Yang-Mills theory is).

On the contrary, we shall use the expression reference or reference background to denote an extra dynamical field which is not endowed with a direct physical meaning. As far as variational calculus is concerned, reference backgrounds behave in exactly the same way as other dynamical fields do. They obey field equations and they can be dragged along deformations and symmetries. It is important to stress that such a behavior has nothing to do with a direct physical meaning: even if a reference background obeys field equations, this does not mean that it is observable, i.e. that it can be measured in a laboratory. Of course, not every dynamical field can be treated as a reference background in the above sense. The Lagrangian has in fact to depend on reference backgrounds in a quite peculiar way, so that a reference background cannot interact with any other physical field; otherwise its effect would be observable in a laboratory.

The Canonical of a priori and a posteriori Variational Calculus as Phenomenologically Driven. Note Quote.


The expression variational calculus usually identifies two different but related branches of Mathematics. The first aims to produce theorems on the existence of solutions of (partial or ordinary) differential equations generated by a variational principle, and it is a branch of local analysis (usually in Rn); the second uses techniques of differential geometry to deal with the so-called variational calculus on manifolds.

The local-analytic paradigm often deals with particular situations, in which it is necessary to pay attention to the exact definition of the functional space under consideration; that functional space is very sensitive to boundary conditions. Moreover, minimal requirements on the data are investigated in order to allow the existence of (weak) solutions of the equations.

On the contrary, the global-geometric paradigm investigates the minimal structures which allow one to pose variational problems on manifolds, extending what is done in Rn but usually being quite generous about regularity hypotheses (e.g. one hardly ever considers less than C∞ objects). Since, even on manifolds, the search for solutions starts with a local problem (for which one can use local analysis), the global-geometric paradigm hardly ever deals with exact solutions, unless the global geometric structure of the manifold strongly constrains their existence.



A further, a priori different, approach is that of Physics. In Physics one usually has field equations which are locally given on a portion of an unknown manifold. One thence starts by solving the field equations locally in order to find a local solution, and only afterwards tries to find the maximal analytical extension (if any) of that local solution. The maximal extension can be regarded as a global solution on a suitable manifold M, in the sense that the extension defines M as well. In fact, one first proceeds to solve the field equations in a coordinate neighbourhood; afterwards, one changes coordinates and tries to extend the solution found out of the patches as long as possible. The coordinate changes form the cocycle of transition functions with respect to the atlas, and they define the base manifold M. This approach is essential in physical applications in which the base manifold is a priori unknown, as in General Relativity, and has to be determined by physical inputs.

Luckily enough, this approach does not disagree with the standard variational calculus approach, in which the base manifold M is instead fixed from the very beginning. One can regard the variational problem as the search for a solution on that particular base manifold. Global solutions on other manifolds may be found using other variational principles on different base manifolds. Even for this reason, the variational principle should be universal, i.e. one defines a family of variational principles: one for each base manifold, or at least one for any base manifold in a “reasonably” wide class of manifolds. This strong requirement is physically motivated by the belief that Physics should work more or less in the same way regardless of the particular spacetime which is actually realized in Nature. Of course, a scenario would be conceivable in which everything works because of the particular (topological, differentiable, etc.) structure of the spacetime. This position, however, is not desirable from a physical viewpoint since, in this case, one has to explain why that particular spacetime is realized (a priori or a posteriori).

In spite of the aforementioned strong regularity requirements, the spectrum of situations one can encounter is unexpectedly wide, covering the whole of fundamental physics. Moreover, it is surprising how effectual the geometric formalism is in identifying the basic structures of field theories. In fact, just requiring the theory to be globally well-defined and to depend on physical data only often constrains very strongly the choice of the local theories to be globalized. These constraints are one of the strongest motivations for choosing a variational approach in physical applications. Another motivation is a well formulated framework for conserved quantities. A global-geometric framework is a priori necessary to deal with conserved quantities, which are non-local.

In the modern perspective of Quantum Field Theory (QFT) the basic object encoding the properties of any quantum system is the action functional. From a quantum viewpoint the action functional is more fundamental than the field equations, which are obtained in the classical limit. The geometric framework provides drastic simplifications of some key issues, such as the definition of the variation operator. The variation is deeply geometric though, in practice, it coincides with the definition given in the local-analytic paradigm. In the latter case, the functional derivative is usually the directional derivative of the action functional, which is a function on the infinite-dimensional space of fields defined on a region D together with some boundary conditions on the boundary ∂D. To be able to define it one should first define the functional space, then define some notion of deformation which preserves the boundary conditions (or equivalently topologize the functional space), define a variation operator on the chosen space, and, finally, prove the most commonly used properties of derivatives. Once one has done all that, one finds in principle the same results that would be found when using the geometric definition of variation (for which no infinite-dimensional space is needed). In fact, in any case of interest for fundamental physics, the functional derivative is simply defined by means of the derivative of a real function of one real variable. The Lagrangian formalism is a shortcut which translates the variation of (infinite-dimensional) action functionals into the variation of the (finite-dimensional) Lagrangian structure.
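A minimal illustration of that last point, on a hypothetical toy action not taken from the source: the variation of S[φ] = ∫₀¹ (½φ′² + ½φ²) dx reduces to the derivative of the real function ε ↦ S[φ + εη] at ε = 0, and agrees with the Euler–Lagrange pairing obtained by integration by parts:

```python
# The "variation as a derivative of a real function of one real variable":
# for S[phi] = int_0^1 (phi'^2/2 + phi^2/2) dx, the directional derivative
# d/deps S[phi + eps*eta] at eps = 0 must equal int_0^1 (-phi'' + phi) eta dx
# for any deformation eta vanishing on the boundary.
import sympy as sp

x, eps = sp.symbols('x epsilon')
phi = sp.sin(sp.pi*x)          # a sample field configuration
eta = x*(1 - x)                # deformation with eta(0) = eta(1) = 0

S = sp.integrate(sp.Rational(1, 2)*sp.diff(phi + eps*eta, x)**2
                 + sp.Rational(1, 2)*(phi + eps*eta)**2, (x, 0, 1))
variation = sp.diff(S, eps).subs(eps, 0)

# Euler-Lagrange pairing obtained by integrating by parts:
el_pairing = sp.integrate((-sp.diff(phi, x, 2) + phi)*eta, (x, 0, 1))

print(sp.simplify(variation - el_pairing))   # 0
```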

Another feature of the geometric framework is the possibility of dealing with non-local properties of field theories. There are, in fact, phenomena, such as monopoles or instantons, which are described by means of non-trivial bundles. Their properties are tightly related to the non-triviality of the configuration bundle; and they are relatively obscure when regarded from any local paradigm. In some sense, a local paradigm hides global properties in the boundary conditions and in the symmetries of the field equations, which are in turn reflected in the functional space we choose and about which, it being infinite-dimensional, we know almost nothing a priori. We could say that the existence of these phenomena is a further hint that field theories have to be stated on bundles rather than on Cartesian products. This statement, if anything, is phenomenologically driven.

When a non-trivial bundle is involved in a field theory, from a physical viewpoint it has to be regarded as an unknown object. As for the base manifold, it has then to be constructed out of physical inputs. One can do that in (at least) two ways, which are both actually used in applications. First of all, one can assume the bundle to be a natural bundle, which is thence canonically constructed out of its base manifold. Since the base manifold is identified by the (maximal) extension of the local solutions, the bundle itself is identified too. This approach is the one used in General Relativity. The second way applies to gauge theories: in these applications, bundles are gauge natural and they are therefore constructed out of a structure bundle P, which usually contains extra information which is not directly encoded into the spacetime manifold. In physical applications the structure bundle P has also to be constructed out of physical observables. This can be achieved by using gauge invariance of the field equations. In fact, two local solutions differing by a (pure) gauge transformation describe the same physical system. Then, while extending from one patch to another, we feel free both to change coordinates on M and to perform a (pure) gauge transformation before gluing two local solutions. The coordinate changes then define the base manifold M, while the (pure) gauge transformations form a cocycle (valued in the gauge group) which defines, in fact, the structure bundle P. Once again, solutions with different structure bundles can be found from different variational principles. Accordingly, the variational principle should be universal with respect to the structure bundle.

Local results are by no means less important. They are often the foundations on which the geometric framework is based. More explicitly, Variational Calculus is perhaps the branch of mathematics that enables the strongest interaction between Analysis and Geometry.

How Black Holes Emitting Hawking Radiation At Best Give Non-Trivial Information About Planckian Physics: Towards Entanglement Entropy.

The analogy between quantised sound waves in fluids and quantum fields in curved space-times facilitates an interdisciplinary know-how transfer in both directions. On the one hand, one may use the microscopic structure of the fluid as a toy model for unknown high-energy (Planckian) effects in quantum gravity, for example, and investigate the influence of the corresponding cut-off. Examining the derivation of the Hawking effect for various dispersion relations, one reproduces Hawking radiation for a rather large class of scenarios, but there are also counter-examples, which do not appear to be unphysical or artificial, displaying strong deviations from Hawking's result. Therefore, whether real black holes emit Hawking radiation remains an open question and could give non-trivial information about Planckian physics.


On the other hand, the emergence of an effective geometry/metric allows us to apply the vast amount of universal tools and concepts developed for general relativity (such as horizons), which provide a unified description and better understanding of (classical and quantum) non-equilibrium phenomena (e.g., freezing and amplification of quantum fluctuations) in condensed matter systems. As an example of such a universal mechanism, the Kibble-Zurek effect describes the generation of topological defects due to the amplification of classical/thermal fluctuations in non-equilibrium thermal phase transitions. The loss of causal connection underlying the Kibble-Zurek mechanism can be understood in terms of an effective horizon – which clearly indicates the departure from equilibrium. The associated breakdown of adiabaticity leads to an amplification of thermal fluctuations (as in the Kibble-Zurek mechanism) as well as quantum fluctuations (at zero temperature). The zero-temperature version of this amplification mechanism is completely analogous to the early universe and becomes particularly important for the new and rapidly developing field of quantum phase transitions.

Furthermore, these analogue models might provide the exciting opportunity of measuring the analogues of these exotic effects – such as Hawking radiation or the generation of the seeds for structure formation during inflation – in actual laboratory experiments, i.e., experimental quantum simulations of black hole physics or the early universe. Even though the detection of these exotic quantum effects is in part very hard and requires ultra-low temperatures etc., there is no (known) objection in principle against it. The analogue models range from black and/or white hole event horizons in flowing fluids and other laboratory systems, over apparent horizons in expanding Bose–Einstein condensates, for example, to particle horizons in quantum phase transitions etc.

However, one should stress that the analogy reproduces the kinematics (quantum fields in curved space-times with horizons etc.) but not the dynamics, i.e., the effective geometry/metric is not described by the Einstein equations in general. An important and strongly related problem is the correct description of the back-reaction of the quantum fluctuations (e.g., phonons) onto the background (e.g., fluid flow). In gravity, the impact of the (classical or quantum) matter is usually incorporated via the (expectation value of the) energy-momentum tensor. Since this quantity can be introduced at a purely kinematic level, one may use the same construction for phonons in flowing fluids, for example, the pseudo energy-momentum tensor. The relevant component of this tensor describing the energy density (which is conserved for stationary flows) may become negative as soon as the flow velocity exceeds the sound speed. These negative contributions explain the energy balance of the Hawking radiation in black hole analogues as well as super-radiant scattering. However, the (expectation value of the) pseudo energy-momentum tensor does not determine the quantum back-reaction correctly.

One should not neglect to mention the occurrence of a horizon in the laboratory – the Unruh effect. A uniformly accelerated observer cannot see half of the (1+1-dimensional) space-time; the two Rindler wedges are completely causally disconnected by the horizon(s). In each wedge, one may introduce a set of observables corresponding to the measurements made by the observers confined to that wedge – thereby obtaining two equivalent copies of observables, one for each wedge. In terms of these two copies, the Minkowski vacuum is an entangled state which yields the usual phenomena (thermo-field formalism) including the Unruh effect – i.e., the uniformly accelerated observer experiences the Minkowski vacuum as a thermal bath: for rather general quantum fields (Bisognano-Wichmann theorem), the quantum state ρ obtained by restricting the Minkowski vacuum to one of the Rindler wedges behaves as a mixed state ρ = exp{−2πĤτ/κ}/Z, where Ĥτ is the Hamiltonian generating the proper (co-moving wristwatch) time τ measured by the accelerated observer and κ is the analogue of the surface gravity and determines the acceleration.
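The strength of the effect is fixed by the Unruh temperature T = ħa/(2πck_B); a quick numerical sketch (the choice a = g is merely illustrative):

```python
# Unruh temperature T = hbar*a/(2*pi*c*k_B): the thermal bath seen by a
# uniformly accelerated observer.  Constants are CODATA values.
import math

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
k_B  = 1.380649e-23      # J/K

def unruh_temperature(a):
    """Temperature (K) of the Unruh bath for proper acceleration a (m/s^2)."""
    return hbar*a/(2*math.pi*c*k_B)

# Even for a = g the effect is some twenty orders of magnitude below
# ambient temperatures, which is why laboratory analogues are attractive.
print(unruh_temperature(9.81))   # about 4e-20 K
```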


Space-time diagram with the trajectory of a uniformly accelerated observer and the resulting particle horizons. The observer is confined to the right Rindler wedge (the region x > |ct| between the two horizons) and cannot influence or be influenced by events in the left Rindler wedge (x < −|ct|), which is completely causally disconnected.

The thermal character of this restricted state ρ arises from the quantum correlations of the Minkowski vacuum in the two Rindler wedges, i.e., the Minkowski vacuum is a multi-mode squeezed state with respect to the two equivalent copies of observables in the two wedges. This is a quite general phenomenon associated with doubling the degrees of freedom and describes the underlying idea of the thermo-field formalism, for example. The entropy of the thermal radiation in the Unruh and the Hawking effect can be understood as an entanglement entropy: for the Unruh effect, it is caused by averaging over the quantum correlations between the two Rindler wedges. In the black hole case, each particle of the outgoing Hawking radiation has its infalling partner particle (with a negative energy with respect to spatial infinity), and the entanglement between the two generates the entropy flux of the Hawking radiation. Instead of accelerating a detector and measuring its excitations, one could replace the accelerated observer by an accelerated scatterer. This device would scatter (virtual) particles from the thermal bath and thereby create real particles – which can be interpreted as a signature of the Unruh effect.
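For a single pair of modes this entanglement entropy can be made explicit: tracing one mode out of a two-mode squeezed vacuum with squeezing parameter r leaves a thermal state with mean occupation n̄ = sinh²r, whose von Neumann entropy is S = (n̄+1)ln(n̄+1) − n̄ ln n̄. A minimal sketch (the formula is standard; the sample values of r are arbitrary):

```python
# Entanglement entropy of a two-mode squeezed vacuum -- the structure of
# the Minkowski vacuum restricted to one Rindler wedge.  Tracing out one
# mode leaves a thermal state with mean occupation nbar = sinh(r)^2 and
# S = (nbar+1)*ln(nbar+1) - nbar*ln(nbar).
import math

def entanglement_entropy(r):
    """Von Neumann entropy of one mode of a two-mode squeezed vacuum."""
    nbar = math.sinh(r)**2
    if nbar == 0.0:
        return 0.0                 # no squeezing: pure vacuum, zero entropy
    return (nbar + 1)*math.log(nbar + 1) - nbar*math.log(nbar)

print(entanglement_entropy(0.0))   # 0.0
print(entanglement_entropy(1.0))   # entropy grows with squeezing
```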

Is There a Philosophy of Bundles and Fields? Drunken Risibility.

The bundle formulation of field theory is not at all motivated by merely seeking full mathematical generality; on the contrary, it is just an empirical consequence of physical situations that concretely happen in Nature. One of the simplest such situations may be that of a particle constrained to move on a sphere, denoted by S2; the physical state of such a dynamical system is described by providing both the position of the particle and its momentum, which is a tangent vector to the sphere. In other words, the state of this system is described by a point of the so-called tangent bundle TS2 of the sphere, which is non-trivial, i.e. it has a global topology which differs from the (trivial) product topology of S2 × R2. When one seeks solutions of the relevant equations of motion, some local coordinates have to be chosen on the sphere, e.g. stereographic coordinates covering the whole sphere but a point (say, the north pole). On such a coordinate neighbourhood (which is contractible, being a diffeomorphic copy of R2) there exists a trivialization of the corresponding portion of the tangent bundle of the sphere, so that the relevant equations of motion can be locally written in R2 × R2. At the global level, however, together with the equations, one should give some boundary conditions which will ensure regularity at the north pole. As is well known, different inequivalent choices are possible; these boundary conditions may be considered as what is left in the local theory of the non-triviality of the configuration bundle TS2.
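The two stereographic trivializations of TS2 can be made concrete. In the sketch below (a numerical toy, not from the source) the transition between the north- and south-pole charts is φ(u, v) = (u, v)/(u² + v²); it is its own inverse on the chart overlap, and tangent vectors are carried by its Jacobian, so pushing a vector forward and back returns it unchanged:

```python
# A minimal numerical illustration of the TS^2 trivializations: the
# transition between the two stereographic charts of S^2 is
# phi(u, v) = (u, v)/(u^2 + v^2), and tangent vectors are carried by its
# Jacobian.  The cocycle condition for two charts reduces to
# phi o phi = id, which the Jacobians must respect as well.
import numpy as np

def transition(p):
    u, v = p
    s = u*u + v*v
    return np.array([u/s, v/s])

def jacobian(p, h=1e-6):
    """Central-difference Jacobian of the transition map at p."""
    J = np.zeros((2, 2))
    for i in range(2):
        dp = np.zeros(2); dp[i] = h
        J[:, i] = (transition(p + dp) - transition(p - dp))/(2*h)
    return J

p = np.array([0.7, -1.3])   # a point in the chart overlap (not the origin)
X = np.array([0.4, 0.9])    # a tangent vector at p

# phi o phi = id on the overlap ...
assert np.allclose(transition(transition(p)), p)
# ... and the chain rule makes the Jacobians mutually inverse, so the
# pushed-forward vector returns to itself:
X_back = jacobian(transition(p)) @ (jacobian(p) @ X)
print(np.allclose(X_back, X, atol=1e-5))   # True
```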

Moreover, well before modern gauge theories or even more complicated new field theories, the theory of General Relativity is the ultimate proof of the need for a bundle framework to describe physical situations. Among other things, in fact, General Relativity assumes that spacetime is not the “simple” Minkowski space introduced for Special Relativity, which has the topology of R4. In general it is a Lorentzian four-dimensional manifold possibly endowed with a complicated global topology. On such a manifold, the choice of a trivial bundle M × F as the configuration bundle for a field theory is mathematically unjustified as well as physically wrong in general. In fact, as long as spacetime is a contractible manifold, as Minkowski space is, all bundles on it are forced to be trivial; however, if spacetime is allowed to be topologically non-trivial, then trivial bundles on it are just a small subclass of all possible bundles among which the configuration bundle can be chosen. Again, given the base M and the fiber F, the non-unique choice of the topology of the configuration bundle corresponds to different global requirements.

A simple, purely geometrical example can be considered to sustain this claim. Let us consider M = S1 and F = (−1, 1), an interval of the real line R; then there exist (at least) countably many “inequivalent” bundles other than the trivial one Mö0 = S1 × F, i.e. the cylinder, as shown in the figure.


Furthermore, the word “inequivalent” can be endowed with different meanings. The bundles shown in the figure are all inequivalent as embedded bundles (i.e. there is no diffeomorphism of the ambient space transforming one into the other), but the even ones (as well as the odd ones) are all equivalent to each other as abstract (i.e. non-embedded) bundles (since they have the same transition functions).

The bundles Mön (n being any positive integer) can be obtained from the trivial bundle Mö0 by cutting it along a fiber, twisting n times and then gluing it together again. The bundle Mö1 is called the Moebius band (or strip). All bundles Mön are canonically fibered on S1, but just Mö0 is trivial. Differences among such bundles are global properties, which for example imply that the even ones Mö2k allow never-vanishing sections (i.e. field configurations) while the odd ones Mö2k+1 do not.
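The last statement can be made quantitative: a global section of Mön amounts to a continuous function f on [0, 2π] with the twisted matching condition f(2π) = (−1)ⁿ f(0), so for odd n the intermediate value theorem forces a zero. A small numerical sketch (the sample sections are arbitrary choices):

```python
# Sections of the n-twisted bundles Mo_n as functions f on [0, 2*pi] with
# the matching condition f(2*pi) = (-1)^n * f(0).  For odd n the
# intermediate value theorem forces every continuous section to vanish
# somewhere: never-vanishing sections exist only for even n.
import numpy as np

def has_zero(f, n, samples=10001):
    """Check numerically whether a section of Mo_n crosses zero."""
    t = np.linspace(0.0, 2*np.pi, samples)
    vals = f(t)
    # sanity: the section must satisfy the twisted matching condition
    assert np.isclose(vals[-1], (-1)**n * vals[0])
    return bool(np.any(vals[:-1]*vals[1:] <= 0))

# Even twist: a constant section never vanishes.
print(has_zero(lambda t: 0.5 + 0.0*t, n=2))        # False
# Odd twist: f(2*pi) = -f(0) forces a sign change, hence a zero.
print(has_zero(lambda t: 0.5*np.cos(t/2), n=1))    # True
```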

10 or 11 Dimensions? Phenomenological Conundrum. Drunken Risibility.


It is not the fact that we are living in a ten-dimensional world which forces string theory to a ten-dimensional description. It is that perturbative string theories are only anomaly-free in ten dimensions; and they contain gravitons only in a ten-dimensional formulation. The resulting question, how the four-dimensional spacetime of phenomenology emerges from ten-dimensional perturbative string theories (or their eleven-dimensional non-perturbative extension: the mysterious M-theory), led to the compactification idea and to the braneworld scenarios.

It is not the fact that empirical indications for supersymmetry were found that forces consistent string theories to include supersymmetry. Without supersymmetry, string theory has no fermions and no chirality, but there are tachyons which make the vacuum unstable; and supersymmetry has certain conceptual advantages: it very probably leads to the finiteness of the perturbation series, thereby avoiding the problem of non-renormalizability which haunted all former attempts at a quantization of gravity; and there is a close relation between supersymmetry and Poincaré invariance which seems reasonable for quantum gravity. But it is clear that not all conceptual advantages are necessarily part of nature – as the example of the elegant, but unsuccessful, Grand Unified Theories demonstrates.

Apart from its ten (or eleven) dimensions and the inclusion of supersymmetry – both have more or less the character of conceptually, but not empirically, motivated ad-hoc assumptions – string theory consists of a rather careful adaptation of the mathematical and model-theoretical apparatus of perturbative quantum field theory to the quantized, one-dimensionally extended, oscillating string (and, finally, of a minimal extension of its methods into the non-perturbative regime, for which the declarations of intent exceed by far the conceptual successes). Without any empirical data transcending the context of our established theories, there remains for string theory only the minimal conceptual integration of basic parts of the phenomenology already reproduced by these established theories. And a significant component of this phenomenology, namely the phenomenology of gravitation, was already used up in the selection of string theory as an interesting approach to quantum gravity. String theory is taken seriously only because – containing gravitons as string states – it reproduces in a certain way the phenomenology of gravitation.

But consistency requirements, the minimal inclusion of basic phenomenological constraints, and the careful extension of the model-theoretical basis of quantum field theory are not sufficient to establish an adequate theory of quantum gravity. Shouldn’t the landscape scenario of string theory be understood as a clear indication, not only of fundamental problems with the reproduction of the gauge invariances of the standard model of quantum field theory (and the corresponding phenomenology), but of much more severe conceptual problems? Almost all attempts at a solution of the immanent and transcendental problems of string theory seem to end in the ambiguity and contingency of the multitude of scenarios of the string landscape. That no physically motivated basic principle is known for string theory and its model-theoretical procedures might be seen as a problem which possibly could be overcome in future developments. But, what about the use of a static background spacetime in string theory which falls short of the fundamental insights of general relativity and which therefore seems to be completely unacceptable for a theory of quantum gravity?

At least since the change of context (and strategy) from hadron physics to quantum gravity, the development of string theory has been dominated by immanent problems whose attempted solutions led ever deeper into the theory's own structure. The result of this successively increasing self-referentiality is a more and more enhanced decoupling from phenomenological boundary conditions and necessities. The contact with the empirical does not increase, but gets weaker and weaker. The result of this process is a labyrinthine mathematical structure with a completely unclear physical relevance.

Superstrings as Grand Unifier. Thought of the Day 86.0


The first step in deriving General Relativity and particle physics from a common fundamental source may lie within the quantization of the classical string action. At a given momentum, quantized strings exist only at discrete energy levels, each level containing a finite number of string states, or particle types. There are huge energy gaps between the levels, which means that the directly observable particles belong to a small subset of string vibrations. In principle, a string has harmonic frequency modes ad infinitum. However, the masses of the corresponding particles get larger, and they decay all the more quickly to lighter particles.

Most importantly, the ground energy state of the string contains a massless, spin-two particle. There are no higher spin particles, which is fortunate since their presence would ruin the consistency of the theory. The presence of a massless spin-two particle is undesirable if string theory has the limited goal of explaining hadronic interactions. This had been the initial intention. However, attempts at a quantum field theoretic description of gravity had shown that the force-carrier of gravity, known as the graviton, had to be a massless spin-two particle. Thus, in string theory’s comeback as a potential “theory of everything,” a curse turns into a blessing.

Once again, as with the case of supersymmetry and supergravity, we have the astonishing result that quantum considerations require the existence of gravity! From this vantage point, right from the start the quantum divergences of gravity are swept away by the extended string. Rather than being mutually exclusive, as it seems at first sight, quantum physics and gravitation have a symbiotic relationship. This reinforces the idea that quantum gravity may be a mandatory step towards the unification of all forces.

Unfortunately, the ground state energy level also includes particles with negative mass-squared, known as tachyons. Such particles have the speed of light as their limiting minimum speed, thus violating causality. Tachyonic particles generally suggest an instability, or possibly even an inconsistency, in a theory. Since tachyons carry negative mass-squared, an interaction involving finite input energy could result in particles of arbitrarily high energies together with arbitrarily many tachyons. There is no limit to the number of such processes, thus preventing a perturbative understanding of the theory.
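In the open bosonic string the mass spectrum is α′M² = N − 1, so the tachyon sits at level N = 0 and the massless states at N = 1; a mode with m² < 0 has imaginary frequency ω = √(k² + m²) at small k, which is the exponential instability referred to above. A sketch (α′ set to 1; the values of k are illustrative):

```python
# Open bosonic string spectrum: alpha' * M^2 = N - 1, so the level N = 0
# state is the tachyon (M^2 < 0) and N = 1 is massless.  A tachyonic mode
# omega^2 = k^2 + m^2 with m^2 < 0 grows exponentially at small k,
# signalling the vacuum instability described above.
import cmath

def mass_squared(N, alpha_prime=1.0):
    """alpha' * M^2 = N - 1 for the open bosonic string."""
    return (N - 1)/alpha_prime

def frequency(k, m2):
    """omega = sqrt(k^2 + m^2); imaginary for tachyonic modes at small k."""
    return cmath.sqrt(k*k + m2)

print(mass_squared(0))                   # -1.0  (tachyon)
print(mass_squared(1))                   # 0.0   (massless level)
print(frequency(0.5, mass_squared(0)))   # imaginary omega: exponential growth
```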

An additional problem is that the string states only include bosonic particles. However, it is known that nature certainly contains fermions, such as electrons and quarks. Since supersymmetry is the invariance of a theory under the interchange of bosons and fermions, it may come as no surprise, a posteriori, that it is the key to resolving this issue. As it turns out, the bosonic sector of the theory corresponds to the spacetime coordinates of a string, from the point of view of the conformal field theory living on the string worldvolume. This means that the additional fields are fermionic, so that the particle spectrum can potentially include all observable particles. In addition, the lowest energy level of a supersymmetric string is naturally massless, which eliminates the unwanted tachyons from the theory.

The inclusion of supersymmetry has some additional bonuses. Firstly, supersymmetry enforces the cancellation of zero-point energies between the bosonic and fermionic sectors. Since gravity couples to all energy, if these zero-point energies were not canceled, as in the case of non-supersymmetric particle physics, then they would have an enormous contribution to the cosmological constant. This would disagree with the observed cosmological constant being very close to zero, on the positive side, relative to the energy scales of particle physics.
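The cancellation can be illustrated with a toy mode sum: each bosonic mode contributes +ω/2 and each fermionic mode −ω/2 to the vacuum energy (ħ = 1), so exactly matched spectra cancel, while any mismatch, as in broken supersymmetry, leaves a residue. The frequencies below are hypothetical:

```python
# Toy zero-point sum: bosons contribute +omega/2, fermions -omega/2 (hbar = 1).
bosonic_modes   = [1.0, 2.5, 7.0]   # hypothetical mode frequencies
fermionic_modes = [1.0, 2.5, 7.0]   # exact superpartners: identical frequencies

# Matched spectra: the vacuum energy cancels exactly.
E_vac = 0.5 * sum(bosonic_modes) - 0.5 * sum(fermionic_modes)

# Broken supersymmetry: shift the fermionic spectrum and a residue survives,
# which gravity would see as a contribution to the cosmological constant.
E_vac_broken = 0.5 * sum(bosonic_modes) - 0.5 * sum(w + 0.3 for w in fermionic_modes)
```

The residue scales with the splitting between partners, which is why even softly broken supersymmetry leaves a cosmological-constant problem, albeit a milder one.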

Also, the weak, strong and electromagnetic couplings of the Standard Model differ by several orders of magnitude at low energies. However, at high energies, the couplings take on almost the same value, almost but not quite. It turns out that a supersymmetric extension of the Standard Model appears to render the values of the couplings identical at approximately 10^16 GeV. This may be the manifestation of the fundamental unity of forces. It would appear that the “bottom-up” approach to unification is winning. That is, gravitation arises from the quantization of strings. To put it another way, supergravity is the low-energy limit of string theory, and has General Relativity as its own low-energy limit.
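This near-unification can be sketched at one loop, where each inverse coupling runs linearly in ln μ with slope −b_i/(2π). The β-coefficients below are the standard one-loop values for the Standard Model and its minimal supersymmetric extension; the starting values at M_Z are rough round-number approximations chosen for illustration only:

```python
import math

# Approximate inverse couplings at M_Z ~ 91 GeV (GUT-normalized alpha_1):
alpha_inv_MZ = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}

# One-loop beta coefficients b_i in d(alpha_i^-1)/d ln(mu) = -b_i/(2*pi):
b_SM   = {"U(1)": 41/10, "SU(2)": -19/6, "SU(3)": -7.0}   # Standard Model
b_MSSM = {"U(1)": 33/5,  "SU(2)": 1.0,   "SU(3)": -3.0}   # supersymmetric extension

def run(alpha_inv, b, mu, MZ=91.2):
    """Evolve each inverse coupling from MZ to scale mu at one loop."""
    t = math.log(mu / MZ)
    return {k: alpha_inv[k] - b[k] / (2 * math.pi) * t for k in alpha_inv}

mu_GUT = 2e16  # GeV
sm   = run(alpha_inv_MZ, b_SM,   mu_GUT)
mssm = run(alpha_inv_MZ, b_MSSM, mu_GUT)
spread = lambda d: max(d.values()) - min(d.values())
# spread(mssm) is far smaller than spread(sm): near-unification with supersymmetry
```

With these inputs the three supersymmetric inverse couplings land within a fraction of a unit of each other near 2 × 10^16 GeV, while the Standard Model values miss by several units, which is the "almost but not quite" of the text.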

Something Out of Almost Nothing. Drunken Risibility.

Kant’s first antinomy makes the error of the excluded third option, i.e. it is not impossible that the universe could have both a beginning and an eternal past. If some kind of metaphysical realism is true, including an observer-independent and relational time, then a solution of the antinomy is conceivable. It is based on the distinction between a microscopic and a macroscopic time scale. Only the latter is characterized by an asymmetry of nature under a reversal of time, i.e. the property of having a global (coarse-grained) evolution – an arrow of time – or many arrows, if they are independent from each other. Thus, the macroscopic scale is by definition temporally directed – otherwise it would not exist.

On the microscopic scale, however, only local, statistically distributed events without dynamical trends, i.e. a global time-evolution or an increase of entropy density, exist. This is the case if one or both of the following conditions are satisfied: First, if the system is in thermodynamic equilibrium (e.g. there is degeneracy). And/or second, if the system is in an extremely simple ground state or meta-stable state. (Meta-stable states have a local, but not a global minimum in their potential landscape and, hence, they can decay; ground states might also change due to quantum uncertainty, i.e. due to local tunneling events.) Some still speculative theories of quantum gravity permit the assumption of such a global, macroscopically time-less ground state (e.g. quantum or string vacuum, spin networks, twistors). Due to accidental fluctuations, which exceed a certain threshold value, universes can emerge out of that state. Due to some also speculative physical mechanism (like cosmic inflation) they acquire – and, thus, are characterized by – directed non-equilibrium dynamics, specific initial conditions, and, hence, an arrow of time.

It is a matter of debate whether such an arrow of time is

1) irreducible, i.e. an essential property of time,

2) governed by some unknown fundamental and not only phenomenological law,

3) the effect of specific initial conditions or

4) of consciousness (if time is in some sense subjective), or

5) even an illusion.

Many physicists favour special initial conditions, though there is no consensus about their nature and form. But in the context at issue it is sufficient to note that such a macroscopic global time-direction is the main ingredient of Kant’s first antinomy, for the question is whether this arrow has a beginning or not.

If time’s arrow were inevitably subjective, ontologically irreducible, fundamental and not merely a kind of illusion – if, for instance, some form of metaphysical idealism were true – then physical cosmology about a time before time would be mistaken or quite irrelevant. However, if we do not want to neglect an observer-independent physical reality and adopt solipsism or other forms of idealism – and there are strong arguments in favor of some form of metaphysical realism – Kant’s rejection seems hasty. Furthermore, if a Kantian is not willing to give up some kind of metaphysical realism, namely the belief in a “Ding an sich“, a thing in itself – though some philosophers, the German idealists for instance, actually insisted that this is superfluous – he has to admit that time is a subjective illusion or that there is a dualism between an objective timeless world and a subjective arrow of time. Contrary to Kant’s thoughts, there are reasons to believe that it is possible, at least conceptually, that time both has a beginning – in the macroscopic sense, with an arrow – and is eternal – in the microscopic sense of a steady state with statistical fluctuations.

Is there also some physical support for this proposal?

Surprisingly, quantum cosmology offers a possibility that the arrow has a beginning and that it nevertheless emerged out of an eternal state without any macroscopic time-direction. (Note that there are some parallels to a theistic conception of the creation of the world here, e.g. in the Augustinian tradition which claims that time together with the universe emerged out of a time-less God; but such a cosmological argument is quite controversial, especially in a modern form.) So this possible overcoming of the first antinomy is not only a philosophical conceivability but is already motivated by modern physics. At least some scenarios of quantum cosmology, quantum geometry/loop quantum gravity, and string cosmology can be interpreted as examples for such a local beginning of our macroscopic time out of a state with microscopic time, but with an eternal, global macroscopic timelessness.

To put it in a more general, but abstract framework and get a sketchy illustration, consider the figure.


Physical dynamics can be described using “potential landscapes” of fields. For simplicity, here only the variable potential (or energy density) of a single field is shown. To illustrate the dynamics, one can imagine a ball moving along the potential landscape. Depressions stand for states which are stable, at least temporarily. Due to quantum effects, the ball can “jump over” or “tunnel through” the hills. The deepest depression represents the ground state.

In the common theories the state of the universe – the product of all its matter and energy fields, roughly speaking – evolves out of a metastable “false vacuum” into a “true vacuum” of lower energy (potential). There might exist many (perhaps even infinitely many) true vacua, which would correspond to universes with different constants or laws of nature. It is more plausible to start with a ground state which is the minimum of what can physically exist. According to this view an absolute nothingness is impossible. There is something rather than nothing because something cannot come out of absolutely nothing, and something does obviously exist. Thus, something can only change, and this change might be described by physical laws. Hence, the ground state is almost “nothing”, but can become thoroughly “something”. Possibly, our universe – and, independently, many others, probably most of them with different physical properties – arose from such a phase transition out of a quasi-atemporal quantum vacuum (and perhaps got disconnected completely). Tunneling back might be prevented by the exponential expansion of this brand-new space: because of cosmic inflation the universe not only became gigantic, but the potential hill simultaneously broadened enormously and became (almost) impassable. This preserves the universe from relapsing into its non-existence. If, on the other hand, there is no physical mechanism to prevent the tunneling back, or at least to make it very improbable, there is still another option: if infinitely many universes originated, some of them could be long-lived for merely statistical reasons. But this possibility is less predictive and therefore an inferior kind of explanation for the absence of tunneling back.
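How strongly a broadened barrier suppresses tunneling back can be illustrated with a crude one-dimensional WKB estimate, P ≈ exp(−2 ∫ √(2m(V − E)) dx) over the classically forbidden region (ħ = 1; the flat-top barrier and all parameters are purely illustrative, not a cosmological model):

```python
import math

def wkb_suppression(V, E, a, b, n=1000, m=1.0):
    """Crude WKB tunneling factor exp(-2 * integral_a^b sqrt(2m(V(x)-E)) dx)
    through the barrier region [a, b], via midpoint integration."""
    dx = (b - a) / n
    integral = sum(math.sqrt(2 * m * max(V(a + (i + 0.5) * dx) - E, 0.0)) * dx
                   for i in range(n))
    return math.exp(-2 * integral)

barrier = lambda x: 1.0  # toy flat barrier of height 1; the state sits at E = 0

p_narrow = wkb_suppression(barrier, 0.0, 0.0, 1.0)   # narrow potential hill
p_broad  = wkb_suppression(barrier, 0.0, 0.0, 10.0)  # "inflated", broadened hill
```

Widening the barrier by a factor of ten suppresses the tunneling factor by many orders of magnitude, which is the sense in which an enormously broadened hill becomes "(almost) impassable".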

Another crucial question remains even if universes could come into being out of fluctuations of (or in) a primitive substrate, i.e. some patterns of superposition of fields with local overdensities of energy: Is spacetime part of this primordial stuff or is it also a product of it? Or, more specifically: Does such a primordial quantum vacuum have a semi-classical spacetime structure or is it made up of more fundamental entities? Unique-universe accounts, especially the modified Eddington models – the soft bang/emergent universe – presuppose some kind of semi-classical spacetime. The same is true for some multiverse accounts describing our universe, where Minkowski space, a tiny closed, finite space or the infinite de Sitter space is assumed. The same goes for string-theory-inspired models like the pre-big-bang account, because string and M-theory are still formulated in a background-dependent way, i.e. they require the existence of a semi-classical spacetime. A different approach is the assumption of “building blocks” of spacetime, a kind of pregeometry; examples are the twistor approach of Roger Penrose and the cellular-automata approach of Stephen Wolfram. The most elaborated account in this line of reasoning is quantum geometry (loop quantum gravity), in which “atoms of space and time” underlie everything.

Though the question whether semiclassical spacetime is fundamental or not is crucial, an answer might nevertheless be neutral with respect to the micro-/macrotime distinction. In both kinds of quantum vacuum accounts the macroscopic time scale is not present. And the microscopic time scale in some respect has to be there, because fluctuations represent change (or are manifestations of change). This change, reversible and relationally conceived, does not occur “within” microtime but constitutes it. Out of a total stasis nothing new and different could emerge, because an uncertainty principle – fundamental for all quantum fluctuations – would not be realized. In an almost, but not completely static quantum vacuum, however, macroscopically nothing changes either, but there are microscopic fluctuations.

The pseudo-beginning of our universe (and probably infinitely many others) is a viable alternative both to initial and past-eternal cosmologies and philosophically very significant. Note that this kind of solution bears some resemblance to a possibility of avoiding the spatial part of Kant’s first antinomy, i.e. his claimed proof of both an infinite space without limits and a finite, limited space: The theory of general relativity describes what was considered logically inconceivable before, namely that there could be universes with finite, but unlimited space, i.e. this part of the antinomy also makes the error of the excluded third option. This offers a middle course between the Scylla of a mysterious, secularized creatio ex nihilo, and the Charybdis of an equally inexplicable eternity of the world.

In this context it is also possible to defuse some explanatory problems of the origin of “something” (or “everything”) out of “nothing” as well as a – merely assumable, but never provable – eternal cosmos or even an infinitely often recurring universe. But that does not offer a final explanation or a sufficient reason, and it cannot eliminate the ultimate contingency of the world.

Is General Theory of Relativity a Gauge Theory? Trajectories of Diffeomorphism.


Historically the problem of observables in classical and quantum gravity is closely related to the so-called Einstein hole problem, i.e. to some of the consequences of general covariance in general relativity (GTR).

The central question is the physical meaning of the points of the event manifold underlying GTR. In contrast to pure mathematics this is a non-trivial point in physics. While in pure differential geometry one simply decrees the existence of, for example, a (pseudo-)Riemannian manifold with a differentiable structure (i.e., an appropriate cover with coordinate patches) plus a (pseudo-)Riemannian metric g, the relation to physics is not simply one-to-one. In popular textbooks about GTR, it is frequently stated that all diffeomorphic (space-time) manifolds M are physically indistinguishable. Put differently:

Space-Time = Riem/Diff —– (1)

This becomes particularly virulent in the Einstein hole problem: assuming that we have a region of space-time free of matter, we can apply a local diffeomorphism which acts only within this hole, leaving the exterior invariant. We thus get in general two different metric tensors

g(x) , g′(x) := Φ ◦ g(x) —– (2)

in the hole, while certain initial conditions lying outside of the hole are unchanged, thus yielding two different solutions of the Einstein field equations.

Many physicists consider this to be a violation of determinism (which it is not!) and hence argue that the class of observable quantities has to be drastically reduced in (quantum) gravity theory. They follow the line of reasoning developed by Dirac in the context of gauge theory, thus implying that GTR is essentially also a gauge theory. This then leads to the conclusion:

Dirac observables in quantum gravity are quantities which are invariant under the diffeomorphism group Diff, acting from M to M, i.e.

Φ : M → M —– (3)

One should note that with respect to physical observations there is no violation of determinism. An observer can never really observe two different metric fields on one and the same space-time manifold. This can only happen on paper. He will use a fixed measurement protocol, using rods and clocks in, e.g., a local inertial frame where special relativity locally applies, and then extend the results to general coordinate frames.

We get a certain orbit under Diff if we start from a particular manifold M with a metric tensor g and take the orbit

{M, Φ ◦ g} —– (4)

In general we have additional fields and matter distributions on M which are transformed accordingly.

Note that not even scalars are invariant in general in the above sense, i.e., not even the Ricci scalar is observable in the Dirac sense:

R(x) ≠ Φ ◦ R(x) —– (5)

in the generic case. Thus, this would imply that the class of admissible observables can be pretty small (even empty!). Furthermore, it follows that points of M are not a priori distinguishable. On the other hand, many consider the Ricci scalar at a point to be an observable quantity.

This leads to the question whether GTR is a true gauge theory or perhaps only apparently so at first glance, while on a more fundamental level it is something different. In the words of Kuchar (What is observable..),

Quantities non-invariant under the full diffeomorphism group are observable in gravity.

The reason for these apparently diverging opinions stems from the role reference systems are assumed to play in GTR with some arguing that the gauge property of general coordinate invariance is only of a formal nature.

In the hole argument it is for example argued that it is important to add some particle trajectories which cross each other, thus generating concrete events on M. As these point events transform accordingly under a diffeomorphism, the distance between the corresponding coordinates x, y equals the distance between the transformed points Φ(x), Φ(y), thus being a Dirac observable. On the other hand, the coordinates x or y are not observable.

One should note that this observation is somewhat tautological in the realm of Riemannian geometry, as the metric is an absolute quantity. Put differently (and somewhat sloppily), ds² is invariant under passive and, by the same token, active coordinate transformations (diffeomorphisms) because, while conceptually different, the transformation properties under the latter operations are defined as in the passive case. In the case of GTR this absolute quantity enters via the equivalence principle, i.e., distances are measured for example in a local inertial frame (LIF), where special relativity holds, and are then generalized to arbitrary coordinate systems.

Quantum Energy Teleportation. Drunken Risibility.


Time is one of the most difficult concepts in physics. It enters in the equations in a rather artificial way – as an external parameter. Although strictly speaking time is a quantity that we measure, it is not possible in quantum physics to define a time-observable in the same way as for the other quantities that we measure (position, momentum, etc.). The intuition that we have about time is that of a uniform flow, as suggested by the regular ticks of clocks. Time flows undisturbed by the variety of events that may occur in an irregular pattern in the world. Similarly, the quantum vacuum is the most regular state one can think of. For example, a persistent superconducting current flows at a constant speed – essentially forever. Can then one use the quantum vacuum as a clock? This is a fascinating dispute in condensed-matter physics, formulated as the problem of existence of time crystals. A time crystal, by analogy with a crystal in space, is a system that displays a time-regularity under measurement, while being in the ground (vacuum) state.

Then, if there is an energy (the zero-point energy) associated with empty space, it follows via the special theory of relativity that this energy should correspond to an inertial mass. By the principle of equivalence of the general theory of relativity, inertial mass is identical with the gravitational mass. Thus, empty space must gravitate. So, how much does empty space weigh? This question brings us to the frontiers of our knowledge of vacuum – the famous problem of the cosmological constant, a problem that Einstein was wrestling with, and which is still an open issue in modern cosmology.

Finally, although we cannot locally extract the zero-point energy of the vacuum fluctuations, the vacuum state of a field can be used to transfer energy from one place to another by using only information. This protocol has been called quantum energy teleportation and uses the fact that different spatial regions of a quantum field in the ground state are entangled. It then becomes possible to extract locally energy from the vacuum by making a measurement in one place, then communicating the result to an experimentalist in a spatially remote region, who would be able then to extract energy by making an appropriate (depending on the result communicated) measurement on her or his local vacuum. This suggests that the vacuum is the primordial essence, the ousia from which everything came into existence.



Let us introduce the concept of space using the notion of reflexive action (or reflex action) between two things. Intuitively, a thing x acts on another thing y if the presence of x disturbs the history of y. Events in the real world seem to happen in such a way that it takes some time for the action of x to propagate up to y. This fact can be used to construct a relational theory of space à la Leibniz, that is, by taking space as a set of equitemporal things. It is necessary then to define the relation of simultaneity between states of things.

Let x and y be two things with histories h(xτ) and h(yτ), respectively, and let us suppose that the action of x on y starts at τx0. The history of y will be modified starting from τy0. The proper times are still not related but we can introduce the reflex action to define the notion of simultaneity. The action of y on x, started at τy0, will modify x from τx1 on. The relation “the action of x on y is reflected to x” is the reflex action. Historically, Galileo introduced the reflection of a light pulse on a mirror to measure the speed of light. With this relation we will define the concept of simultaneity of events that happen on different basic things.


Besides we have a second important fact: observation and experiment suggest that gravitation, whose source is energy, is a universal interaction, carried by the gravitational field.

Let us now state the above hypothesis axiomatically.

Axiom 1 (Universal interaction): Any pair of basic things interact. This extremely strong axiom states not only that there exist no completely isolated things but that all things are interconnected.

This universal interconnection of things should not be confused with “universal interconnection” claimed by several mystical schools. The present interconnection is possible only through physical agents, with no mystical content. It is possible to model two noninteracting things in Minkowski space assuming they are accelerated during an infinite proper time. It is easy to see that an infinite energy is necessary to keep a constant acceleration, so the model does not represent real things, with limited energy supply.

Now consider the time interval (τx1 − τx0). Special Relativity suggests that it is nonzero, since any action propagates with a finite speed. We then state

Axiom 2 (Finite speed axiom): Given two different and separated basic things x and y, such as in the above figure, there exists a minimum positive bound for the interval (τx1 − τx0) defined by the reflex action.

Now we can define simultaneity: τy0 is simultaneous with τx1/2 =Df (1/2)(τx1 + τx0)
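This is essentially Einstein's radar convention for clock synchronization: the reflection event on y is declared simultaneous with the midpoint of the emission and return times on x. A minimal sketch with hypothetical proper-time readings:

```python
def simultaneous_instant(tau_x0, tau_x1):
    """Instant of x's proper time simultaneous with the reflection event tau_y0
    on y, per the definition tau_x1/2 = (1/2)(tau_x1 + tau_x0)."""
    return 0.5 * (tau_x0 + tau_x1)

# x acts on y at tau_x0 = 2.0; the reflex action returns to x at tau_x1 = 6.0.
tau_half = simultaneous_instant(2.0, 6.0)
# tau_half = 4.0 is the instant on x simultaneous with tau_y0 on y
```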

The local times on x and y can be synchronized by the simultaneity relation. However, as we know from General Relativity, the simultaneity relation is transitive only in special reference frames called synchronous, thus prompting us to include the following axiom:

Axiom 3 (Synchronizability): Given a set of separated basic things {xi} there is an assignment of proper times τi such that the relation of simultaneity is transitive.

With this axiom, the simultaneity relation is an equivalence relation. Now we can define a first approximation to physical space: the ontic space EO is the set of equivalence classes of states defined by the relation of simultaneity on the set of things.

The notion of simultaneity allows the analysis of the notion of clock. A thing y ∈ Θ is a clock for the thing x if there exists an injective function ψ : SL(y) → SL(x) such that τ < τ′ ⇒ ψ(τ) < ψ(τ′), i.e. the proper time of the clock grows in the same way as the proper time of the thing. The name Universal time applies to the proper time of a reference thing that is also a clock. From this we see that “universal time” is frame-dependent, in agreement with the results of Special Relativity.
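The defining property of a clock map, injectivity together with strict order preservation, can be checked on sampled readings. The helper and the readings below are hypothetical, chosen only to illustrate the definition:

```python
def is_clock_map(psi_pairs):
    """Check the defining property of a clock map psi: SL(y) -> SL(x):
    injective and strictly order-preserving (tau < tau' => psi(tau) < psi(tau')).
    psi_pairs maps sampled proper times of y to the corresponding times of x."""
    taus = sorted(psi_pairs)
    values = [psi_pairs[t] for t in taus]
    strictly_increasing = all(a < b for a, b in zip(values, values[1:]))
    injective = len(set(values)) == len(values)
    return strictly_increasing and injective

# Hypothetical readings: proper times of clock y mapped to proper times of thing x.
good = is_clock_map({0.0: 0.0, 1.0: 1.1, 2.0: 2.3})   # valid clock: order preserved
bad  = is_clock_map({0.0: 0.0, 1.0: 2.3, 2.0: 1.1})   # not a clock: order violated
```

Note that strict monotonicity already implies injectivity on a totally ordered set; the separate check simply mirrors the two clauses of the definition.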