Black Hole Entropy in terms of Mass. Note Quote.

If M-theory is compactified on a d-torus it becomes a D = 11 – d dimensional theory with Newton constant

G_D = G_11/L^d = l_11^9/L^d —– (1)

A Schwarzschild black hole of mass M has a radius

R_s ~ M^(1/(D-3)) G_D^(1/(D-3)) —– (2)

According to Bekenstein and Hawking the entropy of such a black hole is

S = Area/4G_D —– (3)

where Area refers to the D – 2 dimensional hypervolume of the horizon:

Area ~ R_s^(D-2) —– (4)

Thus

S ~ (1/G_D)(M G_D)^((D-2)/(D-3)) ~ M^((D-2)/(D-3)) G_D^(1/(D-3)) —– (5)
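
For the intermediate step, (5) follows directly from combining (2), (3) and (4); written out (in LaTeX notation, with ~ denoting equality up to numerical factors):

```latex
S \sim \frac{R_s^{\,D-2}}{G_D}
  \sim \frac{(M G_D)^{\frac{D-2}{D-3}}}{G_D}
  = M^{\frac{D-2}{D-3}}\, G_D^{\frac{D-2}{D-3}-1}
  = M^{\frac{D-2}{D-3}}\, G_D^{\frac{1}{D-3}} .
```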

From the traditional relativists’ point of view, black holes are extremely mysterious objects. They are described by unique classical solutions of Einstein’s equations. All perturbations quickly die away, leaving a featureless “bald” black hole with “no hair”. On the other hand, Bekenstein and Hawking have given persuasive arguments that black holes possess thermodynamic entropy and temperature, which point to the existence of a hidden microstructure. In particular, entropy generally represents the counting of hidden microstates which are invisible in a coarse-grained description. An ultimate exact treatment of objects in matrix theory requires a passage to the infinite N limit. Unfortunately this limit is extremely difficult. For the study of Schwarzschild black holes, the optimal value of N (the value which is large enough to obtain an adequate description without involving many redundant variables) is of order the entropy, S, of the black hole.

Considering the minimum such value for N, we have

N_min(S) = M R_s = M(M G_D)^(1/(D-3)) = S —– (6)

We see that the value of N_min in every dimension is proportional to the entropy of the black hole. The thermodynamic properties of super Yang-Mills theory can be estimated by standard arguments only if S ≤ N. Thus we are caught between conflicting requirements. For N >> S we don’t have tools to compute. For N ~ S the black hole will not fit into the compact geometry. Therefore we are forced to study the black hole using N = N_min = S.
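
The identification (6) is, in turn, just (5) in disguise: folding one factor of M into the radius reproduces the entropy scaling. Explicitly (LaTeX notation):

```latex
M R_s \sim M\,(M G_D)^{\frac{1}{D-3}}
      = M^{1+\frac{1}{D-3}}\, G_D^{\frac{1}{D-3}}
      = M^{\frac{D-2}{D-3}}\, G_D^{\frac{1}{D-3}}
      \sim S .
```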

Matrix theory compactified on a d-torus is described by a d + 1 dimensional super Yang-Mills theory with 16 real supercharges. For d = 3 we are dealing with a very well known and special quantum field theory. In the standard 3+1 dimensional terminology it is U(N) Yang-Mills theory with 4 supersymmetries and with all fields in the adjoint representation. This theory is very special in that, in addition to having electric/magnetic duality, it enjoys another property which makes it especially easy to analyze, namely it is exactly scale invariant.

Let us begin by considering it in the thermodynamic limit. The theory is characterized by a “moduli” space defined by the expectation values of the scalar fields φ. Since the φ also represent the positions of the original D0-branes in the noncompact directions, we choose them to be at the origin. This reflects the fact that we are considering a single compact object – the black hole – and not several disconnected pieces.

The equation of state of the system is defined by giving the entropy S as a function of temperature T. Since entropy is extensive, it is proportional to the volume Σ_3 of the dual torus. Furthermore, scale invariance ensures that S has the form

S = constant × T^3 Σ_3 —– (7)

The constant in this equation counts the number of degrees of freedom. For vanishing coupling constant, the theory is described by free quanta in the adjoint of U(N). This means that the number of degrees of freedom is ~ N^2.

From the standard thermodynamic relation,

dE = TdS —– (8)

it follows that the energy of the system is

E ~ N^2 T^4 Σ_3 —– (9)

In order to relate entropy and mass of the black hole, let us eliminate temperature from (7) and (9).

S = N^2 Σ_3 (E/(N^2 Σ_3))^(3/4) —– (10)

Now the energy of the quantum field theory is identified with the light-cone energy of the system of D0-branes forming the black hole. That is

E ≈ M^2 R/N —– (11)

Plugging (11) into (10) gives

S = N^2 Σ_3 (M^2 R/(N^2 Σ_3))^(3/4) —– (12)

This makes sense only when N << S; for N >> S, computing the equation of state is slightly trickier. At N ~ S, this is precisely the correct form for the black hole entropy in terms of the mass.
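
The temperature elimination between (7) and (9) is easy to check symbolically. The sketch below is our own: the order-one constants c1, c2 and the symbol names are introduced only for the check, and numerical prefactors are dropped throughout, as in the text.

```python
import sympy as sp

# Symbols: N, the dual-torus volume Sigma_3, temperature T, energy E,
# and two order-one constants c1, c2 (our own bookkeeping devices).
N, Sig, T, E, c1, c2 = sp.symbols('N Sigma_3 T E c1 c2', positive=True)

S_of_T = c1 * N**2 * T**3 * Sig      # equation (7): S = constant * T^3 * Sigma_3
E_of_T = c2 * N**2 * T**4 * Sig      # equation (9): E ~ N^2 T^4 Sigma_3

# Invert (9) for T and substitute into (7) -- the elimination leading to (10).
T_sol = (E / (c2 * N**2 * Sig)) ** sp.Rational(1, 4)
assert sp.simplify(E_of_T.subs(T, T_sol) - E) == 0

S_of_E = sp.simplify(S_of_T.subs(T, T_sol))
print(S_of_E)   # proportional to (N^2 Sigma_3)^(1/4) E^(3/4),
                # i.e. N^2 Sigma_3 (E/(N^2 Sigma_3))^(3/4) -- equation (10)
```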

Black Hole Analogue: Extreme Blue Shift Disturbance. Thought of the Day 141.0

One major contribution of the theoretical study of black hole analogues has been to help clarify the derivation of the Hawking effect, which leads to a study of Hawking radiation in a more general context, one that involves, among other features, two horizons. There is an apparent contradiction in Hawking’s semiclassical derivation of black hole evaporation, in that the radiated fields undergo arbitrarily large blue-shifting in the calculation, thus acquiring arbitrarily large masses, which contravenes the underlying assumption that the gravitational effects of the quantum fields may be ignored. This is known as the trans-Planckian problem. A similar issue arises in condensed matter analogues such as the sonic black hole.

Sonic horizons in a moving fluid, in which the speed of sound is 1. The velocity profile of the fluid, v(z), attains the value −1 at two values of z; these are horizons for sound waves that are right-moving with respect to the fluid. At the right-hand horizon right-moving waves are trapped, with waves just to the left of the horizon being swept into the supersonic flow region v < −1; no sound can emerge from this region through the horizon, so it is reminiscent of a black hole. At the left-hand horizon right-moving waves become frozen and cannot enter the supersonic flow region; this is reminiscent of a white hole.

Consider the sonic horizons in a one-dimensional fluid flow with the velocity profile depicted in the figure above: two horizons are formed for sound waves that propagate to the right with respect to the fluid. The horizon on the right of the supersonic flow region v < −1 behaves like a black hole horizon for right-moving waves, while the horizon on the left of the supersonic flow region behaves like a white hole horizon for these waves. In such a system, the equation for a small perturbation φ of the velocity potential is

(∂_t + ∂_z v)(∂_t + v ∂_z)φ − ∂_z^2 φ = 0 —– (1)

In terms of a new coordinate τ defined by

dτ := dt + v/(1 − v^2) dz

(1) is the equation □φ = 0 of a scalar field in the black-hole-type metric

ds^2 = (1 − v^2)dτ^2 − dz^2/(1 − v^2)

Each horizon will produce a thermal spectrum of phonons with a temperature determined by the quantity that corresponds to the surface gravity at the horizon, namely the absolute value of the slope of the velocity profile:

k_B T = ħα/2π, α := |dv/dz|_(v=−1) —– (2)
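
As a rough numerical illustration of (2) (our own example: the velocity-profile slope α below is an assumed value, not one taken from the text), the analogue Hawking temperature is:

```python
import math
from scipy.constants import hbar, k as k_B   # SI values of hbar and Boltzmann's constant

# Equation (2): k_B T = hbar * alpha / (2 pi), with alpha = |dv/dz| at v = -1,
# i.e. the slope of the velocity profile where the flow crosses the speed of sound.
alpha = 1.0e6   # assumed example slope, in s^-1

T = hbar * alpha / (2 * math.pi * k_B)
print(f"analogue Hawking temperature: {T:.2e} K")   # about 1e-6 K for this alpha
```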

Hawking phonons in the fluid flow: real phonons have positive frequency in the fluid-element frame, and for right-moving phonons this frequency (ω − vk) equals k = ω/(1 + v). Thus in the subsonic-flow regions ω (conserved for each ray) is positive, whereas in the supersonic-flow region it is negative; k is positive for all real phonons. The frequency in the fluid-element frame diverges at the horizons – the trans-Planckian problem.

The trajectories of the created phonons are formally deduced from the dispersion relation of the sound equation (1). Geometrical acoustics applied to (1) gives the dispersion relation

ω − vk = ±k —– (3)

and Hamilton’s equations

dz/dt = ∂ω/∂k = v ± 1 —– (4)

dk/dt = -∂ω/∂z = − v′k —– (5)

The left-hand side of (3) is the frequency in the frame co-moving with a fluid element, whereas ω is the frequency in the laboratory frame; the latter is constant for a time-independent fluid flow (“time-independent Hamiltonian” dω/dt = ∂ω/∂t = 0). Since the Hawking radiation is right-moving with respect to the fluid, we clearly must choose the positive sign in (3) and hence in (4) also. By approximating v(z) as a linear function near the horizons we obtain from (4) and (5) the ray trajectories. The disturbing feature of the rays is the behavior of the wave vector k: at the horizons the radiation is exponentially blue-shifted, leading to a diverging frequency in the fluid-element frame. These runaway frequencies are unphysical since (1) asserts that sound in a fluid element obeys the ordinary wave equation at all wavelengths, in contradiction with the atomic nature of fluids. Moreover the conclusion that this Hawking radiation is actually present in the fluid also assumes that (1) holds at all wavelengths, as exponential blue-shifting of wave packets at the horizon is a feature of the derivation. Similarly, in the black-hole case the equation does not hold at arbitrarily high frequencies because it ignores the gravity of the fields. For the black hole, a complete resolution of this difficulty will require inputs from the gravitational physics of quantum fields, i.e. quantum gravity, but for the dumb hole the physics is available for a more realistic treatment.
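
The exponential blue-shift can be made explicit by integrating the ray equations (4) and (5) (with the positive sign) for a profile linearised about the black-hole-type horizon. The sketch below is our own illustration; the linear profile v(z) = −1 + αz and the parameter values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 1.0   # |dv/dz| at the horizon, in arbitrary units (assumed)

# Near the right-hand horizon take v(z) = -1 + alpha*z, so v' = alpha.
# Right-moving branch of (4)-(5): dz/dt = v + 1 = alpha*z, dk/dt = -v'*k = -alpha*k.
def rays(t, y):
    z, k = y
    return [alpha * z, -alpha * k]

# Trace an escaping ray backwards in time: it hugs the horizon (z -> 0)
# while its wave vector k grows exponentially -- the trans-Planckian blue-shift.
sol = solve_ivp(rays, (0.0, -10.0), [1.0, 1.0], t_eval=np.linspace(0.0, -10.0, 6))
for t, z, k in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t = {t:6.1f}   z = {z:10.3e}   k = {k:10.3e}")
```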

 

Appropriation of (Ir)reversibility of Noise Fluctuations to (Un)Facilitate Complexity

 

Logical depth is a suitable measure of subjective complexity for physical as well as mathematical objects; this becomes apparent upon considering the effect of irreversibility, noise, and spatial symmetries of the equations of motion and initial conditions on the asymptotic depth-generating abilities of model systems.

“Self-organization” suggests a spontaneous increase of complexity occurring in a system with simple, generic (e.g. spatially homogeneous) initial conditions. The increase of complexity attending a computation, by contrast, is less remarkable because it occurs in response to special initial conditions. An important question, which would have interested Turing, is whether self-organization is an asymptotically qualitative phenomenon like phase transitions. In other words, are there physically reasonable models in which complexity, appropriately defined, not only increases, but increases without bound in the limit of infinite space and time? A positive answer to this question would not explain the natural history of our particular finite world, but would suggest that its quantitative complexity can legitimately be viewed as an approximation to a well-defined qualitative property of infinite systems. On the other hand, a negative answer would suggest that our world should be compared to chemical reaction-diffusion systems (e.g. Belousov-Zhabotinsky), which self-organize on a macroscopic, but still finite scale, or to hydrodynamic systems which self-organize on a scale determined by their boundary conditions.

The suitability of logical depth as a measure of physical complexity depends on the assumed ability (“physical Church’s thesis”) of Turing machines to simulate physical processes, and to do so with reasonable efficiency. Digital machines cannot of course integrate a continuous system’s equations of motion exactly, and even the notion of computability is not very robust in continuous systems, but for realistic physical systems, subject throughout their time development to finite perturbations (e.g. electromagnetic and gravitational) from an uncontrolled environment, it is plausible that a finite-precision digital calculation can approximate the motion to within the errors induced by these perturbations. Empirically, many systems have been found amenable to “master equation” treatments in which the dynamics is approximated as a sequence of stochastic transitions among coarse-grained microstates.

We concentrate arbitrarily on cellular automata, in the broad sense of discrete lattice models with finitely many states per site, which evolve according to a spatially homogeneous local transition rule that may be deterministic or stochastic, reversible or irreversible, and synchronous (discrete time) or asynchronous (continuous time, master equation). Such models cover the range from evidently computer-like (e.g. deterministic cellular automata) to evidently material-like (e.g. Ising models) with many gradations in between.
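
As a concrete toy instance of this class of models (our own minimal sketch, not an example from the text: the particular rule, a noisy majority vote on a one-dimensional lattice with asynchronous updates, is an arbitrary choice):

```python
import random

def step(state, noise=0.05):
    """Asynchronous update: pick one site at random and set it to the majority
    value of its three-site neighbourhood, flipping the result with probability
    `noise` (the stochastic, noisy part of the rule)."""
    n = len(state)
    i = random.randrange(n)
    neighbourhood = [state[(i - 1) % n], state[i], state[(i + 1) % n]]
    new = 1 if sum(neighbourhood) >= 2 else 0
    if random.random() < noise:
        new = 1 - new
    state[i] = new

if __name__ == "__main__":
    random.seed(0)
    state = [random.randint(0, 1) for _ in range(80)]   # generic initial condition
    for _ in range(80 * 200):                           # roughly 200 sweeps
        step(state)
    print("".join(str(s) for s in state))
```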

More of the favorable properties need to be invoked to obtain “self-organization,” i.e. nontrivial computation from a spatially homogeneous initial condition. A rather artificial system (a cellular automaton which is stochastic but noiseless, in the sense that it has the power to make purely deterministic as well as random decisions) undergoes this sort of self-organization. It does so by allowing the nucleation and growth of domains, within each of which a depth-producing computation begins. When two domains collide, one conquers the other, and uses the conquered territory to continue its own depth-producing computation (a computation constrained to finite space, of course, cannot continue for more than exponential time without repeating itself). To achieve the same sort of self-organization in a truly noisy system appears more difficult, partly because of the conflict between the need to encourage fluctuations that break the system’s translational symmetry and the need to suppress fluctuations that introduce errors in the computation.

Irreversibility seems to facilitate complex behavior by giving noisy systems the generic ability to correct errors. Only a limited sort of error-correction is possible in microscopically reversible systems such as the canonical kinetic Ising model. Minority fluctuations in a low-temperature ferromagnetic Ising phase in zero field may be viewed as errors, and they are corrected spontaneously because of their potential energy cost. This error correcting ability would be lost in nonzero field, which breaks the symmetry between the two ferromagnetic phases, and even in zero field it gives the Ising system the ability to remember only one bit of information. This limitation of reversible systems is recognized in the Gibbs phase rule, which implies that under generic conditions of the external fields, a thermodynamic system will have a unique stable phase, all others being metastable. Even in reversible systems, it is not clear why the Gibbs phase rule enforces as much simplicity as it does, since one can design discrete Ising-type systems whose stable phase (ground state) at zero temperature simulates an aperiodic tiling of the plane, and can even get the aperiodic ground state to incorporate (at low density) the space-time history of a Turing machine computation. Even more remarkably, one can get the structure of the ground state to diagonalize away from all recursive sequences.
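
The error-correcting behaviour described here can be watched directly in a simulation of the canonical kinetic Ising model. The sketch below is our own, with assumed parameters (lattice size, temperature, coupling); it uses single-spin-flip Metropolis dynamics in zero field and tracks a block of minority spins, the “error,” which shrinks away at low temperature.

```python
import math
import random

L_SIZE, T_LOW, J = 32, 1.0, 1.0   # lattice size, temperature, coupling (assumed values)

def delta_energy(s, i, j):
    """Energy cost of flipping spin (i, j) on a periodic square lattice."""
    nb = (s[(i + 1) % L_SIZE][j] + s[(i - 1) % L_SIZE][j]
          + s[i][(j + 1) % L_SIZE] + s[i][(j - 1) % L_SIZE])
    return 2.0 * J * s[i][j] * nb

def sweep(s):
    """One Metropolis sweep: L_SIZE^2 attempted single-spin flips."""
    for _ in range(L_SIZE * L_SIZE):
        i, j = random.randrange(L_SIZE), random.randrange(L_SIZE)
        dE = delta_energy(s, i, j)
        if dE <= 0 or random.random() < math.exp(-dE / T_LOW):
            s[i][j] = -s[i][j]

if __name__ == "__main__":
    random.seed(1)
    spins = [[1] * L_SIZE for _ in range(L_SIZE)]   # uniform "+" phase
    for i in range(12, 20):                         # insert an 8x8 minority droplet
        for j in range(12, 20):
            spins[i][j] = -1
    for _ in range(200):
        sweep(spins)
    minority = sum(row.count(-1) for row in spins)
    print("minority spins remaining after 200 sweeps:", minority)   # typically near zero
```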

Without Explosions, WE Would NOT Exist!

The matter and radiation in the universe get hotter and hotter as we go back in time towards the initial quantum state, because they were compressed into a smaller volume. In this Hot Big Bang epoch in the early universe, we can use standard physical laws to examine the processes going on in the expanding mixture of matter and radiation. A key feature is that about 300,000 years after the start of the Hot Big Bang epoch, nuclei and electrons combined to form atoms. At earlier times when the temperature was higher, atoms could not exist, as the radiation then had so much energy it disrupted any atoms that tried to form into their constituent parts (nuclei and electrons). Thus at earlier times matter was ionized, consisting of negatively charged electrons moving independently of positively charged atomic nuclei. Under these conditions, the free electrons interact strongly with radiation by Thomson scattering. Consequently matter and radiation were tightly coupled in equilibrium at those times, and the Universe was opaque to radiation. When the temperature dropped through the ionization temperature of about 4000 K, atoms formed from the nuclei and electrons, and this scattering ceased: the Universe became very transparent. The time when this transition took place is known as the time of decoupling – it was the time when matter and radiation ceased to be tightly coupled to each other, at a redshift z_dec ≃ 1100 (Scott Dodelson, Modern Cosmology, Academic Press). The baryon density, the radiation density, and the radiation temperature scale with the scale factor S(t) as

μ_bar ∝ S^−3, μ_rad ∝ S^−4, T_rad ∝ S^−1 —– (1)

The scale factor S(t) obeys the Raychaudhuri equation

3S̈/S = −(1/2) κ(μ + 3p/c^2) + Λ —– (2)

where κ is the gravitational constant and Λ the cosmological constant.

By (1), the universe was radiation dominated (μ_rad ≫ μ_mat) at early times and matter dominated (μ_rad ≪ μ_mat) at late times; matter-radiation density equality occurred significantly before decoupling (the temperature T_eq when this equality occurred was T_eq ≃ 10^4 K; at that time the scale factor was S_eq ≃ 10^−4 S_0, where S_0 is the present-day value). The dynamics of both the background model and of perturbations about that model differ significantly before and after S_eq.
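
The statement about which component dominates follows immediately from the scalings in (1): the ratio of the radiation density to the baryon (matter) density grows without bound going back in time,

```latex
\frac{\mu_{\mathrm{rad}}}{\mu_{\mathrm{bar}}} \;\propto\; \frac{S^{-4}}{S^{-3}} \;=\; \frac{1}{S},
```

so radiation dominates for S much smaller than S_eq and matter for S much larger than S_eq.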

Radiation was emitted by matter at the time of decoupling, thereafter travelling freely to us through the intervening space. When it was emitted, it had the form of blackbody radiation, because this is a consequence of matter and radiation being in thermodynamic equilibrium at earlier times. Thus the matter at z = z_dec forms the Last Scattering Surface (LSS) in the early universe, emitting Cosmic Blackbody Background Radiation (‘CBR’) at 4000 K, which has since travelled freely with its temperature T scaling inversely with the scale function of the universe. As the radiation travelled towards us, the universe expanded by a factor of about 1100; consequently by the time it reaches us, it has cooled to 2.75 K (that is, about 3 degrees above absolute zero, with a spectrum peaking in the microwave region), and so is extremely hard to observe. It was however detected in 1965, and its spectrum has since been intensively investigated, its blackbody nature being confirmed to high accuracy (R. B. Partridge, 3K: The Cosmic Microwave Background Radiation). Its existence is now taken as solid proof both that the Universe has indeed expanded from a hot early phase, and that standard physics applied unchanged at that era in the early universe.

The thermal capacity of the radiation is hugely greater than that of the matter. At very early times before decoupling, the temperatures of the matter and radiation were the same (because they were in equilibrium with each other), scaling as 1/S(t) (Equation 1 above). The early universe exceeded any temperature that can ever be attained on Earth or even in the centre of the Sun; as it dropped towards its present value of 3 K, successive physical reactions took place that determined the nature of the matter we see around us today. At very early times and high temperatures, only elementary particles can survive and even neutrinos had a very small mean free path; as the universe cooled down, neutrinos decoupled from the matter and streamed freely through space. At these times the expansion of the universe was radiation dominated, and we can approximate the universe then by models with {k = 0, w = 1/3, Λ = 0}, the resulting simple solution of

3Ṡ^2/S^2 = A/S^3 + B/S^4 + Λ/3 − 3k/S^2 —– (3)

uniquely relating time to temperature:

S(t) = S_0 t^(1/2), t = 1.92 sec [T/10^10 K]^(−2) —– (4)

(There are no free constants in the latter equation).
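
A quick numerical reading of (4) (our own illustration; the two temperatures are thresholds quoted in the surrounding text):

```python
def time_from_temperature(T_kelvin):
    """Equation (4): time since the start of the radiation-dominated expansion, in seconds."""
    return 1.92 * (T_kelvin / 1e10) ** (-2)

# Neutrino decoupling (~10^10 K) and the onset of nucleosynthesis (~10^9 K):
for label, T in [("neutrino decoupling", 1e10), ("nucleosynthesis begins", 1e9)]:
    t = time_from_temperature(T)
    print(f"{label:>22s}: T = {T:.0e} K  ->  t = {t:.3g} s ({t / 60:.2g} min)")
```

The second line comes out at roughly three minutes, the time scale quoted below for the formation of the light elements.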

At very early times, even neutrinos were tightly coupled and in equilibrium with the radiation; they decoupled at about 10^10 K, resulting in a relic neutrino background density in the universe today of about Ω_ν0 ≃ 10^−5 if they are massless (but it could be higher depending on their masses). Key events in the early universe are associated with out of equilibrium phenomena. An important event was the era of nucleosynthesis, the time when the light elements were formed. Above about 10^9 K, nuclei could not exist because the radiation was so energetic that as fast as they formed, they were disrupted into their constituent parts (protons and neutrons). However below this temperature, if particles collided with each other with sufficient energy for nuclear reactions to take place, the resultant nuclei remained intact (the radiation being less energetic than their binding energy and hence unable to disrupt them). Thus the nuclei of the light elements – deuterium, tritium, helium, and lithium – were created by neutron capture. This process ceased when the temperature dropped below about 10^8 K (the nuclear reaction threshold). In this way, the proportions of these light elements at the end of nucleosynthesis were determined; they have remained virtually unchanged since. The rate of reaction was extremely high; all this took place within the first three minutes of the expansion of the Universe. One of the major triumphs of Big Bang theory is that theory and observation are in excellent agreement provided the density of baryons is low: Ω_bar0 ≃ 0.044. Then the predicted abundances of these elements (25% Helium by weight, 75% Hydrogen, the others being less than 1%) agree very closely with the observed abundances. Thus the standard model explains the origin of the light elements in terms of known nuclear reactions taking place in the early Universe. However heavier elements cannot form in the time available (about 3 minutes).

In a similar way, physical processes in the very early Universe (before nucleosynthesis) can be invoked to explain the ratio of matter to anti-matter in the present-day Universe: a small excess of matter over anti-matter must be created then in the process of baryosynthesis, without which we could not exist today (if there were no such excess, matter and antimatter would have all annihilated to give just radiation). However other quantities (such as electric charge) are believed to have been conserved even in the extreme conditions of the early Universe, so their present values result from given initial conditions at the origin of the Universe, rather than from physical processes taking place as it evolved. In the case of electric charge, the total conserved quantity appears to be zero: after quarks form protons and neutrons at the time of baryosynthesis, there are equal numbers of positively charged protons and negatively charged electrons, so that at the time of decoupling there were just enough electrons to combine with the nuclei and form uncharged atoms (it seems there is no net electrical charge on astronomical bodies such as our galaxy; were this not true, electromagnetic forces would dominate cosmology, rather than gravity).

After decoupling, matter formed large scale structures through gravitational instability which eventually led to the formation of the first generation of stars and is probably associated with the re-ionization of matter. However at that time planets could not form for a very important reason: there were no heavy elements present in the Universe. The first stars aggregated matter together by gravitational attraction, the matter heating up as it became more and more concentrated, until its temperature exceeded the thermonuclear ignition point and nuclear reactions started burning hydrogen to form helium. Eventually more complex nuclear reactions started in concentric spheres around the centre, leading to a build-up of heavy elements (carbon, nitrogen, oxygen for example), up to iron. These elements can form in stars because there is a long time available (millions of years) for the reactions to take place. Massive stars burn relatively rapidly, and eventually run out of nuclear fuel. The star becomes unstable, and its core rapidly collapses because of gravitational attraction. The consequent rise in temperature blows it apart in a giant explosion, during which time new reactions take place that generate elements heavier than iron; this explosion is seen by us as a Supernova (“New Star”) suddenly blazing in the sky, where previously there was just an ordinary star. Such explosions blow into space the heavy elements that had been accumulating in the star’s interior, forming vast filaments of dust around the remnant of the star. It is this material that can later be accumulated, during the formation of second generation stars, to form planetary systems around those stars. Thus the elements of which we are made (the carbon, nitrogen, oxygen and iron nuclei for example) were created in the extreme heat of stellar interiors, and made available for our use by supernova explosions. Without these explosions, we could not exist.

‘Pranic’ Allostery (Microprocessor + Functional Machine)

Michael J. Denton in Nature’s Destiny (1998) gives an interesting example from biochemistry, that of proteins. Proteins are built of chains of amino acids which mainly consist of carbon, hydrogen, and nitrogen. Proteins have a specific spatial structure which, as we have seen above, is very sensitive – for example to the temperature or acidity of the environment – and which can very easily be changed and restored for specific purposes within a living organism. Proteins are stable, but remain in a delicate balance, ever on the threshold of chaos. They are able to bond themselves to certain chemicals and to release them in other situations. It is this property which enables them to perform a variety of functions, for example catalyzing other chemical reactions in a cell. Proteins have the power to integrate information from various chemical sources, which is determined by the concentration within the cell of the chemicals involved. As we have seen when discussing the eye, proteins enable the processes in the cell to regulate themselves. This self-regulation is called allostery.

Thus proteins have a remarkable two-sided power – firstly, the performance of unique chemical reactions and the integration of the information of diverse chemical components of the cell; and secondly, intelligent reaction to this information by increasing or decreasing their own enzymic activity according to present needs. How this is possible is still regarded as one of the greatest mysteries of life. It means that the functional units which perform the chemical processes are at the same time the regulating units. This property is crucial for the functioning of the cell processes in orderly coherence. It prevents the chaos that would no doubt follow if the enzymic activity were not precisely adjusted to the ever-changing needs of the cell. It is thus the remarkable property of proteins to unite the role of both a microprocessor and a functional machine in one object. Because of this fundamental property, proteins are far more advanced than any man-made instrument. An oven, for example, has a thermostat to regulate temperature, and a functional unit, the burner or electric coil, which produces heat. In a protein these two would be unified.

Blavatsky maintained that every cell in the human body is furnished with its own brain, with a memory of its own, and therefore with the experience and power to discriminate between things. How could she say so within the context of the scientific knowledge of her day? Her knowledge was deduced from occult axioms concerning the functioning of the universe and from analogy, which is applicable on all levels of being. If there is intelligence in the great order of the cosmos, then this is also represented within a cell, and there must be a structure within the cell comparable to the physical brain. This structure must have the power to enable the processes of intelligence on the physical level to take place.

G. de Purucker wrote some seventy years ago about life-atoms, centrosomes, and centrioles. He stated that “In each cell there is a central pranic nucleus which is the life-germ of a life-atom, and all the rest of the cell is merely the carpentry of the cell builded around it by the forces flowing forth from the heart of this life-atom.” A life-atom is a consciousness-point. He explained that

the life-atom works through the two tiny dots or sparks in the centrosome which fall apart at the beginning of cell-division and its energies stream out from these two tiny dots, and each tiny dot, as it were, is already the beginning of a new cell; or, to put it in other words, one remains the central part of the mother-cell, while the other tiny dot becomes the central part of the daughter-cell, etc. All these phenomena of mitosis or cell-division are simply the works of the inner soul of the physical cell . . . The heart of an original nucleolus in a cell is the life-atom, and the two tiny dots or spots [the centrioles] in the centrosome are, as it were, extensions or fingers of its energy. The energy of the original life-atom, which is the heart of a cell, works throughout the entire cellular framework or structure in general, but more particularly through the nucleolus and also through the two tiny dots. — Studies in Occult Philosophy 

Along these lines Blavatsky says that the

inner soul of the physical cell . . . dominates the germinal plasm . . . the key that must open one day the gates of the terra incognita of the Biologist . . . (The Secret Doctrine).

The “approximandum” will not be General Theory of Relativity, but only its vacuum sector of spacetimes of topology Σ × R, or Quantum Gravity as a Fecund Ground for the Metaphysician. Note Quote.

In string theory as well as in Loop Quantum Gravity, and in other approaches to quantum gravity, indications are coalescing that not only time, but also space is no longer a fundamental entity, but merely an “emergent” phenomenon that arises from the basic physics. In the language of physics, spacetime theories such as GTR are “effective” theories and spacetime itself is “emergent”. However, unlike the notion that temperature is emergent, the idea that the universe is not in space and time arguably shocks our very idea of physical existence as profoundly as any scientific revolution ever did. It is not even clear whether we can coherently formulate a physical theory in the absence of space and time. Space disappears in LQG insofar as the physical structures it describes bear little, if any, resemblance to the spatial geometries found in GTR. These structures are discrete and not continuous as classical spacetimes are. They represent the fundamental constitution of our universe that correspond, somehow, to chunks of physical space and thus give rise – in a way yet to be elucidated – to the spatial geometries we find in GTR. The conceptual problem of coming to grasp how to do physics in the absence of an underlying spatio-temporal stage on which the physics can play out is closely tied to the technical difficulty of mathematically relating LQG back to GTR. Physicists have yet to fully understand how classical spacetimes emerge from the fundamental non-spatio-temporal structure of LQG, and philosophers are only just starting to study its conceptual foundations and the implications of quantum gravity in general and of the disappearance of space-time in particular. Even though the mathematical heavy-lifting will fall to the physicists, there is a role for philosophers here in exploring and mapping the landscape of conceptual possibilities, bringing to bear the immense philosophical literature on emergence and reduction which offers a variegated conceptual toolbox.

To understand how classical spacetime re-emerges from the fundamental quantum structure involves what the physicists call “taking the classical limit.” In a sense, relating the spin network states of LQG back to the spacetimes of GTR is a reversal of the quantization procedure employed to formulate the quantum theory in the first place. Thus, while the quantization can be thought of as the “context of discovery,” finding the classical limit that relates the quantum theory of gravity to GTR should be considered the “context of (partial) justification.” It should be emphasized that understanding how (classical) spacetime re-emerges by retrieving GTR as a low-energy limit of a more fundamental theory is not only important to “save the appearances” and to accommodate common sense – although it matters in these respects as well – but must also be considered a methodologically central part of the enterprise of quantum gravity. If it cannot be shown that GTR is indeed related to LQG in some mathematically well-understood way as the approximately correct theory when energies are sufficiently low or, equivalently, when scales are sufficiently large, then LQG cannot explain why GTR has been empirically as successful as it has been. But a successful theory can only be legitimately supplanted if the successor theory not only makes novel predictions or offers deeper explanations, but is also able to replicate the empirical success of the theory it seeks to replace.

Ultimately, of course, the full analysis will depend on the full articulation of the theory. But focusing on the kinematical level, and thus avoiding having to fully deal with the problem of time, let us apply the concepts to the problem of the emergence of full spacetime, rather than just time. Chris Isham and Butterfield identify three types of reductive relations between theories: definitional extension, supervenience, and emergence, of which only the last has any chance of working in the case at hand. For Butterfield and Isham, a theory T1 emerges from another theory T2 just in case there exists either a limiting or an approximating procedure to relate the two theories (or a combination of the two). A limiting procedure is taking the mathematical limit of some physically relevant parameters, in general in a particular order, of the underlying theory in order to arrive at the emergent theory. A limiting procedure won’t work, at least not by itself, due to technical problems concerning the maximal loop density as well as to what essentially amounts to the measurement problem familiar from non-relativistic quantum physics.

An approximating procedure designates the process of either neglecting some physical magnitudes, and justifying such neglect, or selecting a proper subset of states in the state space of the approximating theory, and justifying such selection, or both, in order to arrive at a theory whose values of physical quantities remain sufficiently close to those of the theory to be approximated. Note that the “approximandum,” the theory to be approximated, in our case will not be GTR, but only its vacuum sector of spacetimes of topology Σ × R. One of the central questions will be how the selection of states will be justified. Such a justification would be had if we could identify a mechanism that “drives the system” to the right kind of states. Any attempt at finding such a mechanism will foist a host of issues known from the traditional problem of relating quantum to classical mechanics upon us. A candidate mechanism, here as elsewhere, is some form of “decoherence,” even though that standardly involves an “environment” with which the system at stake can interact. But the system of interest in our case is, of course, the universe, which makes it hard to see how there could be any outside environment with which the system could interact. The challenge then is to conceptualize decoherence in a way that circumvents this problem.

Once it is understood how classical space and time disappear in canonical quantum gravity and how they might be seen to re-emerge from the fundamental, non-spatiotemporal structure, the way in which classicality emerges from the quantum theory of gravity does not radically differ from the way it is believed to arise in ordinary quantum mechanics. The project of pursuing such an understanding is of relevance and interest for at least two reasons. First, important foundational questions concerning the interpretation of, and the relation between, theories are addressed, which can lead to conceptual clarification of the foundations of physics. Such conceptual progress may well prove to be the decisive stepping stone to a full quantum theory of gravity. Second, quantum gravity is a fertile ground for any metaphysician as it will inevitably yield implications for specifically philosophical, and particularly metaphysical, issues concerning the nature of space and time.

Kant, Poincaré, Sklar and the Philosophico-Geometrical Problem of Under-Determination. Note Quote.

What did Kant really mean in viewing Euclidean geometry as the correct geometrical structure of the world? It is widely known that one of the main goals that Kant pursued in the First Critique was that of unearthing the a priori foundations of Newtonian physics, which describes the structure of the world in terms of Euclidean geometry. How did he achieve that? Kant maintained that our understanding of the physical world had its foundations not merely in experience, but in both experience and a priori concepts. He argues that the possibility of sensory experience depends on certain necessary conditions which he calls a priori forms and that these conditions structure and hold true of the world of experience. As he maintains in the “Transcendental Aesthetic”, Space and Time are not derived from experience but rather are its preconditions. Experience provides those things which we sense. It is our mind, though, that processes this information about the world and gives it order, allowing us to experience it. Our mind supplies the conditions of space and time to experience objects. Thus “space” for Kant is not something existing – as it was for Newton. Space is an a priori form that structures our perception of objects in conformity to the principles of the Euclidean geometry. In this sense, then, the latter is the correct geometrical structure of the world. It is necessarily correct because it is part of the a priori principles of organization of our experience. This claim is exactly what Poincaré criticized about Kant’s view of geometry. Poincaré did not agree with Kant’s view of space as precondition of experience. He thought that our knowledge of the physical space is the result of inferences made out of our direct perceptions.

This knowledge is a theoretical construct, i.e., we infer the existence and nature of physical space as an explanatory hypothesis which provides us with an account of the regularity we experience in our direct perceptions. But this hypothesis does not possess the necessity of an a priori principle that structures what we directly perceive. Although Poincaré does not endorse an empiricist account, he seems to think that an empiricist view of geometry is more adequate than the Kantian conception. In fact, he considers more plausible the idea that only a large number of observations probing the geometry of the physical world can establish which geometrical structure is the correct one. But this empiricist approach is not going to work either. In fact Poincaré does not endorse an empiricist view of geometry. The outcome of his considerations, comparing the empiricist and Kantian accounts of geometry, is well described by Sklar:

Nevertheless the empiricist account is wrong. For, given any collections of empirical observations a multitude of geometries, all incompatible with one another, will be equally compatible with the experimental results.

This is the problem of the under-determination of hypotheses about the geometrical structure of physical space by experimental evidence. The under-determination is not due to any limitation in our ability to collect experimental facts. No matter how rich and sophisticated our experimental procedures for accumulating empirical results are, these results will never be compelling enough to support just one of the hypotheses about the geometry of physical space, ruling out the competitors once and for all. Actually, it is even worse than that: empirical results seem not to give us any reason at all to think one rather than another of the hypotheses correct. Poincaré thought that this problem was grist to the mill of the conventionalist approach to geometry: the adoption of a geometry for physical space is a matter of making a conventional choice. A brief description of Poincaré’s disk model might unravel the issue a bit more. The short story about this imaginary world shows that an empiricist account of geometry fails to be adequate. In fact, Poincaré describes a scenario in which Euclidean and hyperbolic geometrical descriptions of that physical space end up being equally consistent with the same collection of empirical data. However, what this story tells us can be generalized to any other scenario, including ours, in which a scientific inquiry concerning the intrinsic geometry of the world is performed.

The imaginary world described in Poincaré’s example is a Euclidean two-dimensional disk heated to a constant temperature at the center, while along the radius R it is heated in a way that produces a temperature variation described by R^2 − r^2. The edge of the disk is therefore uniformly cooled to zero.

A group of scientists living on the disk are interested in knowing what the intrinsic geometry of their world is. As Sklar says, the equipment available to them consists of rods that dilate uniformly with increasing temperature, i.e. at each point of the space they all change their lengths in direct proportion to the value of the temperature at that point. However, the scientists are not aware of this peculiar temperature distortion of their rods. So, without anybody knowing it, every time a measurement is performed the rods shrink or dilate, depending on whether they are close to the edge or to the center. After repeated measurements all over the disk, the scientists have a list of empirical data that seems to support strongly the idea that their world is a Lobachevskian plane. So this view becomes the official one. However, a different interpretation of the data is presented by a member of the community who, striking a discordant note, claims that those empirical data can be taken to indicate that the world is in fact a Euclidean disk, but equipped with fields shrinking or dilating lengths.
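
Why the measurements come out Lobachevskian can be made explicit. If a rod’s length is proportional to the local temperature, hence to R^2 − r^2, then the number of rod-lengths needed to cover a coordinate interval dr at radius r goes as dr/(R^2 − r^2), and the measured distance from the centre to the edge diverges (the proportionality constant is immaterial):

```latex
\int_{0}^{R} \frac{dr}{R^{2}-r^{2}}
  \;=\; \lim_{r \to R^{-}} \frac{1}{2R}\,\ln\!\frac{R+r}{R-r}
  \;=\; \infty .
```

To the inhabitants the edge is infinitely far away, exactly as in a hyperbolic plane, while the innovator’s reading keeps the finite Euclidean disk and attributes the measurements to the length-distorting fields.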

Although the two geometrical theories about the structure of physical space are competitors, the empirical results collected by the scientists support both of them. From our external three-dimensional Euclidean perspective we know their two-dimensional world is Euclidean, and so we know that only the innovator’s interpretation is the correct one. From our standpoint the problem of under-determination would indeed seem to be a problem of epistemic access due to the particular experimental repertoire of the inhabitants; after all, expanding this repertoire and increasing the amount of empirical data might seem to overcome the problem. But, according to Poincaré, that would completely miss the point. Moving from our “superior” perspective to theirs would place us in exactly the same situation they are in, i.e. unable to decide which geometry is the correct one. More importantly, Poincaré seems to say that no arbitrarily large amount of empirical data can refute a geometric hypothesis. In fact, a scientific theory about space is divided into two branches, a geometric one and a physical one, and these two parts are deeply related. It would be possible to save any geometric hypothesis about space from experimental refutation by suitably changing some features of the physical branch of the theory. According to Sklar, this fact forces Poincaré to the conclusion that the choice of one hypothesis among several competitors is purely conventional.

The problem of under-determination also comes up in the analysis of dual string theories: two string theories postulating geometrically inequivalent backgrounds can, if dual, produce the same experimental results – same expectation values, same scattering amplitudes, and so on. Therefore, similarly to Poincaré’s short story, empirical data relative to the physical properties and dynamics of strings are not sufficient to determine which of the two different geometries postulated for the background is the right one, or whether there is any more fundamental geometry at all influencing the physical dynamics.