Black Hole Entropy in terms of Mass. Note Quote.


If M-theory is compactified on a d-torus it becomes a D = 11 – d dimensional theory with Newton constant

G_D = G_{11}/L^d = l_{11}^9/L^d —– (1)

A Schwarzschild black hole of mass M has a radius

R_s ~ M^{1/(D-3)} G_D^{1/(D-3)} —– (2)

According to Bekenstein and Hawking the entropy of such a black hole is

S = Area/4G_D —– (3)

where Area refers to the D – 2 dimensional hypervolume of the horizon:

Area ~ R_s^{D-2} —– (4)

Thus

S ~ (1/G_D)(M G_D)^{(D-2)/(D-3)} ~ M^{(D-2)/(D-3)} G_D^{1/(D-3)} —– (5)
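Spelling out the exponent algebra that takes (2)–(4) to (5):

S ~ R_s^{D-2}/G_D ~ (M G_D)^{(D-2)/(D-3)}/G_D = M^{(D-2)/(D-3)} G_D^{(D-2)/(D-3) - 1} = M^{(D-2)/(D-3)} G_D^{1/(D-3)},

since (D-2)/(D-3) – 1 = 1/(D-3).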

From the traditional relativists’ point of view, black holes are extremely mysterious objects. They are described by unique classical solutions of Einstein’s equations. All perturbations quickly die away, leaving a featureless “bald” black hole with “no hair”. On the other hand, Bekenstein and Hawking have given persuasive arguments that black holes possess thermodynamic entropy and temperature, which point to the existence of a hidden microstructure. In particular, entropy generally represents the counting of hidden microstates which are invisible in a coarse-grained description.

An ultimate exact treatment of objects in matrix theory requires a passage to the infinite N limit. Unfortunately this limit is extremely difficult. For the study of Schwarzschild black holes, the optimal value of N (the value which is large enough to obtain an adequate description without involving many redundant variables) is of order the entropy, S, of the black hole.

Considering the minimum such value for N, we have

N_{min}(S) = M R_s = M(M G_D)^{1/(D-3)} = S —– (6)

We see that the value of N_{min} in every dimension is proportional to the entropy of the black hole. The thermodynamic properties of super Yang-Mills theory can be estimated by standard arguments only if S ≤ N. Thus we are caught between conflicting requirements. For N >> S we don’t have tools to compute. For N << S the black hole will not fit into the compact geometry. Therefore we are forced to study the black hole using N = N_{min} = S.
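To check that (6) is consistent with (5), note that

N_{min} = M R_s ~ M (M G_D)^{1/(D-3)} = M^{(D-2)/(D-3)} G_D^{1/(D-3)},

which is exactly the combination of M and G_D appearing in the Bekenstein–Hawking entropy (5).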

Matrix theory compactified on a d-torus is described by a d + 1 dimensional super Yang-Mills theory with 16 real supercharges. For d = 3 we are dealing with a very well known and special quantum field theory. In the standard 3+1 dimensional terminology it is U(N) Yang-Mills theory with 4 supersymmetries and with all fields in the adjoint representation. This theory is very special in that, in addition to having electric/magnetic duality, it enjoys another property which makes it especially easy to analyze, namely it is exactly scale invariant.

Let us begin by considering it in the thermodynamic limit. The theory is characterized by a “moduli” space defined by the expectation values of the scalar fields φ. Since the φ also represent the positions of the original D0-branes in the noncompact directions, we choose them at the origin. This represents the fact that we are considering a single compact object – the black hole – and not several disconnected pieces.

The equation of state of the system is defined by giving the entropy S as a function of the temperature T. Since entropy is extensive, it is proportional to the volume Σ_3 of the dual torus. Furthermore, the scale invariance ensures that S has the form

S = constant T^3 Σ_3 —– (7)

The constant in this equation counts the number of degrees of freedom. For vanishing coupling constant, the theory is described by free quanta in the adjoint of U(N). This means that the number of degrees of freedom is ~ N^2.

From the standard thermodynamic relation,

dE = TdS —– (8)

the energy of the system is

E ~ N^2 T^4 Σ_3 —– (9)
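The step from (7) and (8) to (9) is just an integration: writing S = c N^2 Σ_3 T^3, one has dS = 3c N^2 Σ_3 T^2 dT, hence

dE = T dS = 3c N^2 Σ_3 T^3 dT  ⟹  E = (3/4) c N^2 Σ_3 T^4 ~ N^2 T^4 Σ_3.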

In order to relate entropy and mass of the black hole, let us eliminate temperature from (7) and (9).

S = N^2 Σ_3 (E/(N^2 Σ_3))^{3/4} —– (10)

Now the energy of the quantum field theory is identified with the light cone energy of the system of D0-branes forming the black hole. That is

E ≈ M^2 R/N —– (11)

Plugging (11) into (10)

S = N^2 Σ_3 (M^2 R/(N^3 Σ_3))^{3/4} —– (12)

This makes sense only when N << S; when N >> S, computing the equation of state is slightly trickier. At N ~ S, this is precisely the correct form for the black hole entropy in terms of the mass.
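To make the last claim explicit (using the standard matrix-theory identification of the dual-torus size, Σ ~ l_{11}^3/(L R) per compact direction; this relation is not stated above and is quoted here as an assumption): expanding (12),

S ~ N^2 Σ_3 (M^2 R/(N^3 Σ_3))^{3/4} = M^{3/2} R^{3/4} Σ_3^{1/4} N^{-1/4}.

Setting N = N_{min} = S gives S^{5/4} = M^{3/2} (R^3 Σ_3)^{1/4}, i.e.

S ~ M^{6/5} (R^3 Σ_3)^{1/5} ~ M^{6/5} G_8^{1/5},

since R^3 Σ_3 ~ l_{11}^9/L^3 = G_8 by (1). This is exactly (5) for D = 11 – 3 = 8.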

From God’s Perspective, There Are No Fields…Justified Newtonian, Unjustified Relativistic Claim. Note Quote.

Electromagnetism is a relativistic theory. Indeed, it had been relativistic, or Lorentz invariant, before Einstein and Minkowski understood that this somewhat peculiar symmetry of Maxwell’s equations was not accidental but expressive of a radically new structure of time and space. Minkowski spacetime, in contrast to Newtonian spacetime, doesn’t come with a preferred space-like foliation; its geometric structure is not one of ordered slices representing “objective” hyperplanes of absolute simultaneity. But Minkowski spacetime does have an objective (geometric) structure of light-cones, with one double-light-cone originating in every point. The most natural way to define a particle interaction in Minkowski spacetime is to have the particles interact directly, not along equal-time hyperplanes but along light-cones.


In other words, if z_i(τ_i) and z_j(τ_j) denote the trajectories of two charged particles, it wouldn’t make sense to say that the particles interact at “equal times”, as they do in Newtonian theory. It would however make perfect sense to say that the particles interact whenever

(z_i^μ – z_j^μ)(z_{iμ} – z_{jμ}) = (z_i – z_j)^2 = 0 —– (1)
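As a toy numerical illustration of condition (1) (a sketch invented for this post, not part of the original argument): take one charge at rest at the spatial origin and a second charge at rest a spatial distance d away; the points of the first worldline that satisfy (1) relative to a given point of the second are exactly the retarded and advanced times.

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, signature (+, -, -, -)

def interval2(a, b):
    d = a - b
    return d @ eta @ d                      # (z_i - z_j)^2

# two static worldlines a spatial distance d apart (units with c = 1)
d = 3.0
z_i = lambda tau: np.array([tau, d, 0.0, 0.0])
z_j = lambda s:   np.array([s, 0.0, 0.0, 0.0])

tau = 5.0
for s in (tau - d, tau, tau + d):
    print(f"s = {s:4.1f}   (z_i - z_j)^2 = {interval2(z_i(tau), z_j(s)):5.1f}")

# condition (1) picks out s = tau - d (retarded) and s = tau + d (advanced);
# the "equal time" point s = tau does not satisfy it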

For an observer finding himself in a universe guided by such laws it might then seem like the effects of particle interactions were propagating through space with the speed of light. And this observer may thus insist that there must be something in addition to the particles, something moving or evolving in spacetime and mediating interactions between charged particles. And all this would be a completely legitimate way of speaking, only that it would reflect more about how things appear from a local perspective in a particular frame of reference than about what is truly and objectively going on in the physical world. From “God’s perspective” there are no fields (or photons, or anything of that kind) – only particles in spacetime interacting with each other. This might sound hypothetical, but it is not entirely fictitious, for such a formulation of electrodynamics actually exists and is known as Wheeler-Feynman electrodynamics, or Wheeler-Feynman absorber theory. There is a formal property of field equations called “gauge invariance” which makes it possible to look at things in several different, but equivalent, ways. Because of gauge invariance, this theory says that when you push on something, it creates a disturbance in the gravitational field that propagates outward into the future. Out there in the distant future the disturbance interacts with chiefly the distant matter in the universe. It wiggles. When it wiggles it sends a gravitational disturbance backward in time (a so-called “advanced” wave). The effect of all of these “advanced” disturbances propagating backward in time is to create the inertial reaction force you experience at the instant you start to push (and to cancel the advanced wave that would otherwise be created by you pushing on the object). So, in this view fields do not have a real existence independent of the sources that emit and absorb them. The theory is defined by the principle of least action.

Wheeler–Feynman electrodynamics and Maxwell–Lorentz electrodynamics are for all practical purposes empirically equivalent, and it may seem that the choice between the two candidate theories is merely one of convenience and philosophical preference. But this is not really the case, since the sad truth is that the field theory, despite its phenomenal success in practical applications and the crucial role it played in the development of modern physics, is inconsistent. The reason is quite simple. The Maxwell–Lorentz theory for a system of N charged particles is defined, as it should be, by a set of mathematical equations. The equation of motion for the particles is given by the Lorentz force law.

The electromagnetic force F on a test charge at a given point and time is a certain function of its charge q and velocity v, which can be parameterized by exactly two vectors E and B, in the functional form:

F = q(E + v × B),

describing the acceleration of a charged particle in an electromagnetic field. The electromagnetic field, represented by the field-tensor F_{μν}, is described by Maxwell’s equations. The homogeneous Maxwell equations tell us that the antisymmetric tensor F_{μν} (a 2-form) can be written as the exterior derivative of a potential (a 1-form) A_μ(x), i.e. as

F_{μν} = ∂_μ A_ν – ∂_ν A_μ —– (2)

The inhomogeneous Maxwell equations couple the field degrees of freedom to matter, that is, they tell us how the charges determine the configuration of the electromagnetic field. Fixing the gauge-freedom contained in (2) by demanding ∂_μ A^μ(x) = 0 (Lorentz gauge), the remaining Maxwell equations take the particularly simple form:

□ A^μ = – 4π j^μ —– (3)

where

□ = ∂_μ ∂^μ

is the d’Alembert operator and j^μ the 4-current density.
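For orientation, the retarded solution of (3) – the field referred to below as the Liénard–Wiechert field – can be quoted in its standard textbook form (it is not derived in this post, and overall factors depend on the unit conventions):

A^μ(t, x) = ∫ d^3x′ j^μ(t – |x – x′|, x′)/|x – x′|,

and for a single point charge e on a worldline z(τ) this becomes

A^μ(x) = e u^μ(τ_r) / (u^ν(τ_r)(x – z(τ_r))_ν),

where τ_r is the retarded parameter fixed by (x – z(τ_r))^2 = 0 with z^0(τ_r) < x^0. The denominator vanishes as x approaches the worldline, which is precisely the self-interaction singularity discussed next.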

The light-cone structure of relativistic spacetime is reflected in the Lorentz-invariant equation (3). The Liénard–Wiechert field at spacetime point x depends on the trajectories of the particles at the points of intersection with the (past and future) light-cones originating in x. The Liénard–Wiechert field (the solution of (3)) is singular precisely at the points where it is needed, namely on the world-lines of the particles. This is the notorious problem of the electron self-interaction: a charged particle generates a field, the field acts back on the particle, the field-strength becomes infinite at the point of the particle and the interaction terms blow up. Hence, the simple truth is that the field concept for managing interactions between point-particles doesn’t work, unless one relies on formal manipulations like renormalization or modifies Maxwell’s laws on small scales. However, we don’t need the fields: by taking the idea of a relativistic interaction theory seriously, we can “cut out the middle man” and let the particles interact directly. The status of the Maxwell equations (3) in Wheeler–Feynman theory is then somewhat analogous to the status of Laplace’s equation in Newtonian gravity. We can get to the Galilean invariant theory by writing the force as the gradient of a potential and having that potential satisfy the simplest nontrivial Galilean invariant equation, which is the Laplace equation:

∆V(x, t) = ∑_i δ(x – x_i(t)) —– (4)

Similarly, we can get the (arguably) simplest Lorentz invariant theory by writing the force as the exterior derivative of a potential and having that potential satisfy the simplest nontrivial Lorentz invariant equation, which is (3). As for the equation of motion of the particles, if the trajectories are parametrized by proper time, then the Minkowski norm of the 4-velocity is a constant of motion. In Newtonian gravity, we can make sense of the gravitational potential at any point in space by conceiving its effect on a hypothetical test particle, feeling the gravitational force without gravitating itself. However, nothing in the theory suggests that we should take the potential seriously in that way and conceive of it as a physical field. Indeed, the gravitational potential is really a function on configuration space rather than a function on physical space, and it is really a useful mathematical tool rather than something corresponding to physical degrees of freedom. From the point of view of a direct interaction theory, an analogous reasoning would apply in the relativistic context. It may seem (and historically it has certainly been the usual understanding) that (3), in contrast to (4), is a dynamical equation, describing the temporal evolution of something. However, from a relativistic perspective, this conclusion seems unjustified.

Graviton Fields Under Helicity Rotations. Thought of the Day 156.0


Einstein described gravity as equivalent to curves in space and time, but physicists have long searched for a theory of gravitons, its putative quantum-scale source. Though gravitons are individually too weak to detect, most physicists believe the particles roam the quantum realm in droves, and that their behavior somehow collectively gives rise to the macroscopic force of gravity, just as light is a macroscopic effect of particles called photons. But every proposed theory of how gravity particles might behave faces the same problem: upon close inspection, it doesn’t make mathematical sense. Calculations of graviton interactions might seem to work at first, but when physicists attempt to make them more exact, they yield gibberish – an answer of “infinity.” This is the disease of quantized gravity. With regard to the exchange-particle concept of quantum electrodynamics and the existence of the graviton, let us consider a photon falling in a gravitational field and look again at its behavior there. When we define the graviton relative to the photon, it is necessary to explain the properties and behavior of the photon in the gravitational field. The fields around a “ray of light” are electromagnetic waves, not static fields. The electromagnetic field generated by a photon is much stronger than the associated gravitational field. When a photon falls in the gravitational field, it passes from a layer of lower graviton density to one of higher density. We should assume that the graviton is not simply a solid sphere without any considerable effect. The graviton carries the gravitational force, so it is absorbable by other gravitons; in general, gravitons absorb each other and combine. This new view of the graviton shows that the graviton’s identity changes; in fact, it has mass with a changeable spin.

When we derive the various supermultiplets of states at the noninteracting level, these states can easily be described in terms of local fields. But at the interacting level there are certain ambiguities that arise because different field representations can describe the same massless free states, so the proper choice of the field representation may be subtle. The supermultiplets can then be converted into supersymmetric actions, quadratic in the fields. For self-dual tensor fields, the action must be augmented by a duality constraint on the corresponding field strength. For the graviton field, the counting of states goes as follows.

The linearized Einstein equation for g_{μν} = η_{μν} + κh_{μν} implies that (for D ≥ 3)

R_{μν} ∝ ∂^2 h_{μν} + ∂_μ ∂_ν h – ∂_μ ∂^ρ h_{νρ} – ∂_ν ∂^ρ h_{ρμ} = 0 —– (1)

where h ≡ h^μ_μ and R_{μν} is the Ricci tensor. To analyze the number of states implied by this equation, one may count the number of plane-wave solutions with given momentum q^μ. It then turns out that there are D arbitrary solutions, corresponding to the linearized gauge invariance h_{μν} → h_{μν} + ∂_μ ξ_ν + ∂_ν ξ_μ, which can be discarded. Many other components vanish and the only nonvanishing ones require the momentum to be lightlike. These reside in the fields h_{ij}, where the components i, j are in the transverse (D-2)-dimensional subspace. In addition, the trace of h_{ij} must be zero. Hence, the relevant plane-wave solutions are massless and have polarizations (helicities) characterized by a symmetric traceless rank-2 tensor. This tensor comprises 1/2 D(D-3) components, which transform irreducibly under the SO(D-2) helicity group of transverse rotations. For the special case of D = 6 spacetime dimensions, the helicity group is SO(4), which factorizes into two SU(2) groups. The symmetric traceless representation then transforms as a triplet under each of the SU(2) factors and it is thus denoted by (3,3). As for D = 3, there are obviously no dynamic degrees of freedom associated with the gravitational field. When D = 2 there are again no dynamic degrees of freedom, but here (1) should be replaced by R_{μν} = 1/2 g_{μν} R.
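A quick way to check the 1/2 D(D-3) counting is to compare it against the dimension of a symmetric traceless rank-2 tensor of SO(D-2) computed directly (an illustrative script, not part of the text):

def graviton_states(D):
    n = D - 2                      # transverse dimensions
    return n * (n + 1) // 2 - 1    # symmetric rank-2 tensor minus its trace

for D in range(3, 12):
    assert graviton_states(D) == D * (D - 3) // 2   # the formula quoted above
    print(D, graviton_states(D))

For D = 6 this gives 9 states, which is the 3 × 3 of the two SU(2) factors; for D = 3 it gives 0, matching the remark above.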

Gauge Fixity Towards Hyperbolicity: General Theory of Relativity and Superpotentials. Part 1.

Untitled

The gravitational field is described by a pseudo-Riemannian metric g (with Lorentzian signature (1, m-1)) over the spacetime M of dimension dim(M) = m; in standard General Relativity, m = 4. The configuration bundle is thence the bundle of Lorentzian metrics over M, denoted by Lor(M). The Lagrangian is second order and it is usually chosen to be the so-called Hilbert Lagrangian:

L_H: J^2 Lor(M) → Λ_m(M)

L_H: L_H(g^{αβ}, R_{αβ}) ds = 1/(2κ) (R – 2Λ)√g ds —– (1)

where

R = g^{αβ} R_{αβ} denotes the scalar curvature, √g the square root of the absolute value of the metric determinant, and Λ is a real constant (called the cosmological constant). The coupling constant 1/(2κ), which is completely irrelevant until the gravitational field is coupled to some other field, depends on conventions; in natural units, i.e. c = 1, ℏ = 1, G = 1, in dimension 4 and with signature (+, –, –, –), one has κ = – 8π.

The field equations are the well known Einstein equations with cosmological constant

R_{αβ} – 1/2 R g_{αβ} = – Λ g_{αβ} —– (2)

The Lagrangian momenta are defined by:

p_{αβ} = ∂L_H/∂g^{αβ} = 1/(2κ) (R_{αβ} – 1/2(R – 2Λ)g_{αβ})√g

P^{αβ} = ∂L_H/∂R_{αβ} = 1/(2κ) g^{αβ}√g —– (3)

Thus the covariance identity is the following:

d_α(L_H ξ^α) = p_{αβ} £_ξ g^{αβ} + P^{αβ} £_ξ R_{αβ} —– (4)

or equivalently,

d_α(L_H ξ^α) = p_{αβ} £_ξ g^{αβ} + P^{αβ} ∇_ε(£_ξ Γ^ε_{αβ} – δ^ε_β £_ξ Γ^λ_{αλ}) —– (5)

where ∇_ε denotes the covariant derivative with respect to the Levi-Civita connection of g. Thence we have a weak conservation law for the Hilbert Lagrangian

Div ε(L_H, ξ) = W(L_H, ξ) —– (6)

Conserved currents and work forms have respectively the following expressions:

ε(L_H, ξ) = [P^{αβ} £_ξ Γ^ε_{αβ} – P^{αε} £_ξ Γ^λ_{αλ} – L_H ξ^ε] ds_ε = √g/(2κ) (g^{αβ}g^{εσ} – g^{σβ}g^{εα}) ∇_α £_ξ g_{βσ} ds_ε – √g/(2κ) ξ^ε R ds_ε = √g/(2κ) [(3/2 R^α_λ – (R – 2Λ)δ^α_λ)ξ^λ + (g^{βγ}δ^α_λ – g^{α(γ}δ^{β)}_λ)∇_{βγ}ξ^λ] ds_α —– (7)

W(L_H, ξ) = √g/κ (R^{αβ} – 1/2(R – 2Λ)g^{αβ}) ∇_{(α}ξ_{β)} ds —– (8)

Like any other natural theory, General Relativity admits superpotentials. In fact, the current can be recast into the form:

ε(L_H, ξ) = ε'(L_H, ξ) + Div U(L_H, ξ) —– (9)

where we set

ε'(L_H, ξ) = √g/κ (R^α_β – 1/2(R – 2Λ)δ^α_β) ξ^β ds_α

U(L_H, ξ) = 1/(2κ) ∇^{[β}ξ^{α]} √g ds_{αβ} —– (10)

The superpotential (10) generalizes to an arbitrary vector field ξ the well known Komar superpotential, which was originally derived for timelike Killing vectors. Whenever spacetime is assumed to be asymptotically flat, the superpotential of Komar is known to produce upon integration at spatial infinity ∞ the correct value for angular momentum (e.g. for Kerr-Newman solutions) but just one half of the expected value of the mass. The classical prescriptions are in fact:

m = 2∫_∞ U(L_H, ∂_t, g)

J = ∫_∞ U(L_H, ∂_φ, g) —– (11)

For an asymptotically flat solution (e.g. the Kerr-Newman black hole solution) m coincides with the so-called ADM mass and J is the so-called (ADM) angular momentum. For the Kerr-Newman solution in polar coordinates (t, r, θ, φ) the vector fields ∂_t and ∂_φ are the Killing vectors which generate stationarity and axial symmetry, respectively. Thence, according to this prescription, U(L_H, ∂_φ) is the superpotential for J while 2U(L_H, ∂_t) is the superpotential for m. This is known as the anomalous factor problem for the Komar potential. To obtain the expected values for all conserved quantities from the same superpotential, one has to correct the superpotential (10) by some ad hoc additional boundary term. Equivalently and alternatively, one can deduce a corrected superpotential as the canonical superpotential for a corrected Lagrangian, which is in fact the first order Lagrangian for standard General Relativity. This can be done covariantly, provided that one introduces an extra connection Γ′^μ_{αβ}. The need for a reference connection Γ′ should also be motivated by physical considerations, according to which the conserved quantities have no absolute meaning but are intrinsically relative to an arbitrarily fixed vacuum level. The simplest choice consists, in fact, in fixing a background metric g (not necessarily of the correct Lorentzian signature) and assuming Γ′ to be the Levi-Civita connection of g. This is rather similar to the gauge fixing à la Hawking which allows one to show that the Einstein equations form in fact an essentially hyperbolic PDE system. Nothing prevents us, however, from taking Γ′ to be any (in principle torsionless) connection on spacetime; this also corresponds to a gauge fixing towards hyperbolicity.
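As a small concrete check of the objects entering (10) and (11), here is an illustrative sympy sketch (not from the source; it assumes the Schwarzschild metric, in the signature (+, –, –, –) used above, as the simplest asymptotically flat example). It verifies that ξ = ∂_t is a Killing vector and that the only nonvanishing component of ∇_β ξ_α falls off as m/r², which is what makes the surface integral of the Komar superpotential at spatial infinity finite.

import sympy as sp

t, r, th, ph, m = sp.symbols('t r theta phi m', positive=True)
x = [t, r, th, ph]
f = 1 - 2*m/r

# Schwarzschild metric with signature (+, -, -, -)
g = sp.diag(f, -1/f, -r**2, -r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d])) for d in range(4))/2
           for c in range(4)] for b in range(4)] for a in range(4)]

# the stationarity Killing vector xi = d/dt, with its index lowered by g
xi_up = [1, 0, 0, 0]
xi_dn = [sum(g[a, b]*xi_up[b] for b in range(4)) for a in range(4)]

# covariant derivative nabla_b xi_a
nabla = sp.zeros(4, 4)
for b in range(4):
    for a in range(4):
        nabla[b, a] = sp.simplify(sp.diff(xi_dn[a], x[b])
                                  - sum(Gamma[l][b][a]*xi_dn[l] for l in range(4)))

print(sp.simplify(nabla + nabla.T))   # Killing equation: the symmetric part vanishes
print(nabla[1, 0])                    # nabla_r xi_t = m/r**2, an O(1/r^2) fall-off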

Now, using the term background for a field which enters a field theory in the same way as the metric enters Yang-Mills theory, we see that the background has to be fixed once and for all and thence preserved, e.g. by symmetries and deformations. A background has no field equations, since deformations fix it; it eventually destroys the naturality of a theory, since fixing the background results in allowing a smaller group of symmetries G ⊂ Diff(M). Accordingly, in truly natural field theories one should not consider background fields at all, whether they are endowed with a physical meaning (as the metric in Yang-Mills theory is) or not.

On the contrary, we shall use the expression reference or reference background to denote an extra dynamical field which is not endowed with a direct physical meaning. As far as variational calculus is concerned, reference backgrounds behave in exactly the same way as other dynamical fields do. They obey field equations and they can be dragged along deformations and symmetries. It is important to stress that such a behavior has nothing to do with a direct physical meaning: even if a reference background obeys field equations, this does not mean that it is observable, i.e. that it can be measured in a laboratory. Of course, not any dynamical field can be treated as a reference background in the above sense. The Lagrangian has in fact to depend on reference backgrounds in a quite peculiar way, so that a reference background cannot interact with any other physical field; otherwise its effect would be observable in a laboratory….

Knowledge Limited for Dummies….Didactics.


Bertrand Russell with Alfred North Whitehead, in the Principia Mathematica aimed to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (therefore fleeing to the Institute of Advanced Studies at Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, consider a statement like “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its converse is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that imbed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, the combined accuracy of a measurement of an electron’s location and its momentum cannot vary far from a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength the greater the amount of energy that hits the electron, and the greater the energy hitting the electron the greater the impact on its velocity.

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, propagated a slight error into the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
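The effect is easy to reproduce today. The sketch below is illustrative only (it uses the familiar three-variable Lorenz-63 system rather than the larger model Lorenz was actually running, and the initial values are invented): it integrates the same equations from a full-precision starting point and from the same point rounded to three decimals, and prints the growing separation.

import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # the classic three-equation Lorenz-63 system
    x, y, z = state
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

def integrate(state, dt=0.001, steps=30000):
    # plain fourth-order Runge-Kutta integration
    traj = [state]
    for _ in range(steps):
        k1 = lorenz(state)
        k2 = lorenz(state + 0.5*dt*k1)
        k3 = lorenz(state + 0.5*dt*k2)
        k4 = lorenz(state + dt*k3)
        state = state + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
        traj.append(state)
    return np.array(traj)

exact   = integrate(np.array([1.000127, 1.000347, 1.000219]))
rounded = integrate(np.array([1.000,    1.000,    1.000]))   # the "printout" rounded to 3 decimals

for frac in (0.1, 0.3, 0.5, 1.0):
    i = int(frac*30000)
    print(f"t = {i*0.001:5.1f}   separation = {np.linalg.norm(exact[i] - rounded[i]):.4f}")

The separation starts at roughly 4 × 10^-4 and ends up comparable to the size of the attractor itself: the two “forecasts” go their own separate ways, exactly as Lorenz described.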

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules, as the sketch below illustrates. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
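In that spirit, here is a deliberately crude sketch of such an artificial market (the rules and parameters are invented for illustration and are not taken from the Santa Fe models): three kinds of traders – trend followers, value traders, and random liquidity traders – submit orders, and their net demand moves the price.

import numpy as np

rng = np.random.default_rng(0)

T = 2000
price = np.empty(T)
price[0] = 100.0
value = 100.0                # the value traders' perceived fundamental value
trend_window = 20

for t in range(1, T):
    # trend followers: buy if the recent trend is up, sell if it is down
    trend = price[t-1] - price[t-1-trend_window] if t > trend_window else 0.0
    trend_demand = 0.5 * np.sign(trend)

    # value traders: buy below perceived value, sell above it
    value_demand = 0.05 * (value - price[t-1])

    # liquidity traders: orders at random times, unrelated to price
    noise_demand = rng.normal(0.0, 1.0)

    # price impact: net demand moves the price
    price[t] = price[t-1] + 0.5 * (trend_demand + value_demand + noise_demand)

print("price range over the run:", round(float(price.min()), 2), "to", round(float(price.max()), 2))
print("largest single-step move:", round(float(np.max(np.abs(np.diff(price)))), 2))

Even this toy version produces wandering prices with occasional run-ups and reversals, behavior that looks far richer than the three simple rules that generate it.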

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever the set of equations are that might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of the deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Kant and Non-Euclidean Geometries. Thought of the Day 94.0


The argument that non-Euclidean geometries contradict Kant’s doctrine on the nature of space apparently goes back to Hermann Helmholtz and was taken up again by several philosophers of science such as Hans Reichenbach (The Philosophy of Space and Time), who devoted much work to this subject. In an essay written in 1870, Helmholtz argued that the axioms of geometry are not a priori synthetic judgments (in the sense given by Kant), since they can be subjected to experiments. Given that Euclidean geometry is not the only possible geometry, as was believed in Kant’s time, it should be possible to determine by means of measurements whether, for instance, the sum of the three angles of a triangle is 180 degrees or whether two straight parallel lines always keep the same distance from each other. If it were not the case, then it would have been demonstrated experimentally that space is not Euclidean. Thus the possibility of verifying the axioms of geometry would prove that they are empirical and not given a priori.

Helmholtz developed his own version of a non-Euclidean geometry on the basis of what he believed to be the fundamental condition for all geometries: “the possibility of figures moving without change of form or size”; without this possibility, it would be impossible to define what a measurement is. According to Helmholtz:

the axioms of geometry are not concerned with space-relations only but also at the same time with the mechanical deportment of solidest bodies in motion.

Nevertheless, he was aware that a strict Kantian might argue that the rigidity of bodies is an a priori property, but

then we should have to maintain that the axioms of geometry are not synthetic propositions… they would merely define what qualities and deportment a body must have to be recognized as rigid.

At this point, it is worth noticing that Helmholtz’s formulation of geometry is a rudimentary version of what was later developed as the theory of Lie groups. As for the transport of rigid bodies, it is well known that rigid motion cannot be defined in the framework of the theory of relativity: since there is no absolute simultaneity of events, it is impossible to move all parts of a material body in a coordinated and simultaneous way. What is defined as the length of a body depends on the reference frame from which it is observed. Thus, it is meaningless to invoke the rigidity of bodies as the basis of a geometry that pretends to describe the real world; it is only in the mathematical realm that the rigid displacement of a figure can be defined in terms of what mathematicians call a congruence.

Arguments similar to those of Helmholtz were given by Reichenbach in his attempt to refute Kant’s doctrine on the nature of space and time. Essentially, the argument boils down to the following: Kant assumed that the axioms of geometry are given a priori and he only had classical geometry in mind; Einstein demonstrated that space is not Euclidean and that this could be verified empirically; ergo Kant was wrong. However, Kant did not state that space must be Euclidean; instead, he argued that it is a pure form of intuition. As such, space has no physical reality of its own, and therefore it is meaningless to ascribe physical properties to it. Actually, Kant never mentioned Euclid directly in his work, but he did refer many times to the physics of Newton, which is based on classical geometry. Kant had in mind the axioms of this geometry, which is a most powerful tool of Newtonian mechanics. Actually, he did not even exclude the possibility of other geometries, as can be seen in his early speculations on the dimensionality of space.

The important point missed by Reichenbach is that Riemannian geometry is necessarily based on Euclidean geometry. More precisely, a Riemannian space must be considered as locally Euclidean in order to be able to define basic concepts such as distance and parallel transport; this is achieved by defining a flat tangent space at every point, and then extending all properties of this flat space to the globally curved space (Luther Pfahler Eisenhart, Riemannian Geometry). To begin with, the structure of a Riemannian space is given by its metric tensor g_{μν}, from which the (differential) length is defined as ds^2 = g_{μν} dx^μ dx^ν; but this is nothing less than a generalization of the usual Pythagorean theorem in Euclidean space. As for the fundamental concept of parallel transport, it is taken directly from its analogue in Euclidean space: it refers to the transport of abstract (not material, as Helmholtz believed) figures in such a space. Thus Riemann’s geometry cannot be free of synthetic a priori propositions, because it is entirely based upon concepts such as length and congruence taken from Euclid. We may conclude that Euclid’s geometry is the condition of possibility for a more general geometry, such as Riemann’s, simply because it is the natural geometry adapted to our understanding; Kant would say that it is our form of grasping space intuitively. The possibility of constructing abstract spaces does not refute Kant’s thesis; on the contrary, it reinforces it.

Pluralist Mathematics, Minimalist Philosophy: Hans Reichenbach. Drunken Risibility.


Hans Reichenbach relativized the notion of the constitutive a priori. The key observation concerns the fundamental difference between definitions in pure geometry and definitions in physical geometry. In pure geometry there are two kinds of definition: first, there are the familiar explicit definitions; second, there are implicit definitions, that is the kind of definition whereby such fundamental terms as ‘point’, ‘line’, and ‘surface’ are to derive their meaning from the fundamental axioms governing them. But in physical geometry a new kind of definition emerges – that of a physical (or coordinative) definition:

The physical definition takes the meaning of the concept for granted and coordinates to it a physical thing; it is a coordinative definition. Physical definitions, therefore, consist in the coordination of a mathematical definition to a “piece of reality”; one might call them real definitions. (Reichenbach, 8)

Now there are two important points about physical definitions. First, some such correlation between a piece of mathematics and “a piece of physical reality” is necessary if one is to articulate the laws of physics (e.g. consider “force-free moving bodies travel in straight lines”). Second, given a piece of pure mathematics there is a great deal of freedom in choosing the coordinative definitions linking it to “a piece of physical reality”, since… coordinative definitions are arbitrary, and “truth” and “falsehood” are not applicable to them. So we have here a conception of the a priori which (by the first point) is constitutive (of the empirical significance of the laws of physics) and (by the second point) is relative. Moreover, on Reichenbach’s view, in choosing between two empirically equivalent theories that involve different coordinative definitions, there is no issue of “truth” – there is only the issue of simplicity. In his discussion of Einstein’s particular definition of simultaneity, after noting its simplicity, Reichenbach writes: “This simplicity has nothing to do with the truth of the theory. The truth of the axioms decides the empirical truth, and every theory compatible with them which does not add new empirical assumptions is equally true.” (p 11)

Now, Reichenbach went beyond this and he held a more radical thesis – in addition to advocating pluralism with respect to physical geometry (something made possible by the free element in coordinative definitions), he advocated pluralism with respect to pure mathematics (such as arithmetic and set theory). According to Reichenbach, this view is made possible by the axiomatic conception of Hilbert, wherein axioms are treated as “implicit definitions” of the fundamental terms:

The problem of the axioms of mathematics was solved by the discovery that they are definitions, that is, arbitrary stipulations which are neither true nor false, and that only the logical properties of a system – its consistency, independence, uniqueness, and completeness – can be subjects of critical investigation. (p 3)

It needs to be stressed here that Reichenbach is extending the Hilbertian thesis concerning implicit definitions since although Hilbert held this thesis with regard to formal geometry he did not hold it with regard to arithmetic.

On this view there is a plurality of consistent formal systems and the notions of “truth” and “falsehood” do not apply to these systems; the only issue in choosing one system over another is one of convenience for the purpose at hand, and this is brought out by investigating their metamathematical properties, something that falls within the provenance of “critical investigation”, where there is a question of truth and falsehood. This radical form of pluralism came to be challenged by Gödel’s discovery of the incompleteness theorems. To begin with, through the arithmetization of syntax, the metamathematical notions that Reichenbach takes to fall within the provenance of “critical investigation” were themselves seen to be a part of arithmetic. Thus, one cannot, on pain of inconsistency, say that there is a question of truth and falsehood with regard to the former but not the latter. More importantly, the incompleteness theorems buttressed the view that truth outstrips consistency. This is most clearly seen using Rosser’s strengthening of the first incompleteness theorem as follows: Let T be an axiom system of arithmetic that (a) falls within the provenance of “critical investigation” and (b) is sufficiently strong to prove the incompleteness theorem. A natural choice for such an axiom system is Primitive Recursive Arithmetic (PRA), but much weaker systems suffice, for example, IΔ_0 + exp. Either of these systems can be taken as T. Assuming that T is consistent (something which falls within the provenance of “critical investigation”), by Rosser’s strengthening of the first incompleteness theorem, there is a Π^0_1-sentence φ such that (provably within T + Con(T)) both T + φ and T + ¬φ are consistent. However, not both systems are equally legitimate. For it is easily seen that if a Π^0_1-sentence φ is independent of such a theory, then it must be true (the step is spelled out below). The point being that T is Σ^0_1-complete (provably so in T). So, although T + ¬φ is consistent, it proves a false arithmetical statement.
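The step “an independent Π^0_1-sentence must be true” deserves to be made explicit (this is just the standard argument, spelled out):

Write φ as ∀x ψ(x) with ψ a bounded (Δ_0) formula. If φ were false, there would be a numeral n such that ¬ψ(n) is true; since ¬ψ(n) is then a true Σ^0_1 (indeed Δ_0) sentence and T is Σ^0_1-complete, T ⊢ ¬ψ(n), hence T ⊢ ∃x ¬ψ(x), i.e. T ⊢ ¬φ – contradicting the consistency of T + φ. So φ is true, and the consistent theory T + ¬φ proves the false Σ^0_1-sentence ¬φ.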

The Mystery of Modality. Thought of the Day 78.0


The ‘metaphysical’ notion of what would have been no matter what (the necessary) was conflated with the epistemological notion of what independently of sense-experience can be known to be (the a priori), which in turn was identified with the semantical notion of what is true by virtue of meaning (the analytic), which in turn was reduced to a mere product of human convention. And what motivated these reductions?

The mystery of modality, for early modern philosophers, was how we can have any knowledge of it. Here is how the question arises. We think that when things are some way, in some cases they could have been otherwise, and in other cases they couldn’t. That is the modal distinction between the contingent and the necessary.

How do we know that the examples are examples of that of which they are supposed to be examples? And why should this question be considered a difficult problem, a kind of mystery? Well, that is because, on the one hand, when we ask about most other items of purported knowledge how it is we can know them, sense-experience seems to be the source, or anyhow the chief source of our knowledge, but, on the other hand, sense-experience seems able only to provide knowledge about what is or isn’t, not what could have been or couldn’t have been. How do we bridge the gap between ‘is’ and ‘could’? The classic statement of the problem was given by Immanuel Kant, in the introduction to the second or B edition of his first critique, The Critique of Pure Reason: ‘Experience teaches us that a thing is so, but not that it cannot be otherwise.’

Note that this formulation allows that experience can teach us that a necessary truth is true; what it is not supposed to be able to teach is that it is necessary. The problem becomes more vivid if one adopts the language that was once used by Leibniz, and much later re-popularized by Saul Kripke in his famous work on model theory for formal modal systems, the usage according to which the necessary is that which is ‘true in all possible worlds’. In these terms the problem is that the senses only show us this world, the world we live in, the actual world as it is called, whereas when we claim to know about what could or couldn’t have been, we are claiming knowledge of what is going on in some or all other worlds. For that kind of knowledge, it seems, we would need a kind of sixth sense, or extrasensory perception, or nonperceptual mode of apprehension, to see beyond the world in which we live to these various other worlds.

Kant concludes that our knowledge of necessity must be what he calls a priori knowledge, or knowledge that is ‘prior to’ or before or independent of experience, rather than what he calls a posteriori knowledge, or knowledge that is ‘posterior to’ or after or dependent on experience. And so the problem of the origin of our knowledge of necessity becomes for Kant the problem of the origin of our a priori knowledge.

Well, that is not quite the right way to describe Kant’s position, since there is one special class of cases where Kant thinks it isn’t really so hard to understand how we can have a priori knowledge. He doesn’t think all of our a priori knowledge is mysterious, but only most of it. He distinguishes what he calls analytic from what he calls synthetic judgments, and holds that a priori knowledge of the former is unproblematic, since it is not really knowledge of external objects, but only knowledge of the content of our own concepts, a form of self-knowledge.

We can generate any number of examples of analytic truths by the following three-step process. First, take a simple logical truth of the form ‘Anything that is both an A and a B is a B’, for instance, ‘Anyone who is both a man and unmarried is unmarried’. Second, find a synonym C for the phrase ‘thing that is both an A and a B’, for instance, ‘bachelor’ for ‘one who is both a man and unmarried’. Third, substitute the shorter synonym for the longer phrase in the original logical truth to get the truth ‘Any C is a B’, or in our example, the truth ‘Any bachelor is unmarried’. Our knowledge of such a truth seems unproblematic because it seems to reduce to our knowledge of the meanings of our own words.
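
To see the recipe with full explicitness, here is a minimal sketch in Lean 4; the identifiers (conj_right, Bachelor, and so on) are just illustrative labels, not anything from the philosophical literature. Step one is a one-line theorem, step two is an abbreviation playing the role of the synonym, and step three is the observation that the very same one-line proof establishes the abbreviated claim.

```lean
-- Step 1: the simple logical truth "anything that is both an A and a B is a B".
theorem conj_right {α : Type} (A B : α → Prop) : ∀ x, A x ∧ B x → B x :=
  fun _ h => h.2

-- Step 2: introduce the shorthand C ("bachelor") as a synonym for
-- "thing that is both an A (man) and a B (unmarried)".
abbrev Bachelor {α : Type} (Man Unmarried : α → Prop) (x : α) : Prop :=
  Man x ∧ Unmarried x

-- Step 3: substituting the synonym into the logical truth gives "any C is a B",
-- that is, "any bachelor is unmarried".  The proof is unchanged, which is the
-- point: the new truth reduces to the old one by meaning alone.
theorem bachelor_unmarried {α : Type} (Man Unmarried : α → Prop) :
    ∀ x, Bachelor Man Unmarried x → Unmarried x :=
  fun _ h => h.2
```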

So the problem for Kant is not exactly how knowledge a priori is possible, but more precisely how synthetic knowledge a priori is possible. Kant thought we do have examples of such knowledge. Arithmetic, according to Kant, was supposed to be synthetic a priori, and geometry, too – all of pure mathematics. In his Prolegomena to Any Future Metaphysics, Kant listed ‘How is pure mathematics possible?’ as the first question for metaphysics, the branch of philosophy concerned with space, time, substance, cause, and other grand general concepts – including modality.

Kant offered an elaborate explanation of how synthetic a priori knowledge is supposed to be possible, an explanation reducing it to a form of self-knowledge, but later philosophers questioned whether there really were any examples of the synthetic a priori. Geometry, so far as it is about the physical space in which we live and move – and that was the original conception, and the one still prevailing in Kant’s day – came to be seen as, not synthetic a priori, but rather a posteriori. The mathematician Carl Friedrich Gauß had already come to suspect that geometry is a posteriori, like the rest of physics. Since the time of Einstein in the early twentieth century the a posteriori character of physical geometry has been the received view (whence the need for border-crossing from mathematics into physics if one is to pursue the original aim of geometry).

As for arithmetic, the logician Gottlob Frege in the late nineteenth century claimed that it was not synthetic a priori, but analytic – of the same status as ‘Any bachelor is unmarried’, except that to obtain something like ‘29 is a prime number’ one needs to substitute synonyms in a logical truth of a form much more complicated than ‘Anything that is both an A and a B is a B’. This view was subsequently adopted by many philosophers in the analytic tradition of which Frege was a forerunner, whether or not they immersed themselves in the details of Frege’s program for the reduction of arithmetic to logic.

Once Kant’s synthetic a priori has been rejected, the question of how we have knowledge of necessity reduces to the question of how we have knowledge of analyticity, which in turn resolves into a pair of questions: On the one hand, how do we have knowledge of synonymy, which is to say, how do we have knowledge of meaning? On the other hand, how do we have knowledge of logical truths? As to the first question, presumably we acquire knowledge, explicit or implicit, conscious or unconscious, of meaning as we learn to speak; by the time we are able to ask the question whether this is a synonym of that, we have the answer. But what about knowledge of logic? That question didn’t loom large in Kant’s day, when only a very rudimentary logic existed, but after Frege vastly expanded the realm of logic – only by doing so could he find any prospect of reducing arithmetic to logic – the question loomed larger.

Many philosophers, however, convinced themselves that knowledge of logic also reduces to knowledge of meaning, namely, of the meanings of logical particles, words like ‘not’ and ‘and’ and ‘or’ and ‘all’ and ‘some’. To be sure, there are infinitely many logical truths in Frege’s expanded logic. But they all follow from or are generated by a finite list of logical rules, and philosophers were tempted to identify knowledge of the meanings of logical particles with knowledge of rules for using them: Knowing the meaning of ‘or’, for instance, would be knowing that ‘A or B’ follows from A and follows from B, and that anything that follows both from A and from B follows from ‘A or B’. So in the end, knowledge of necessity reduces to conscious or unconscious knowledge of explicit or implicit semantical rules or linguistic conventions or whatever.
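
The rules just described for ‘or’ are precisely the introduction and elimination rules of formal logic; a minimal Lean 4 sketch (purely illustrative, not drawn from any of the philosophers discussed) shows that they are all one needs in order to use the connective:

```lean
-- 'A or B' follows from A, and it follows from B (the two introduction rules).
example (A B : Prop) (a : A) : A ∨ B := Or.inl a
example (A B : Prop) (b : B) : A ∨ B := Or.inr b

-- Anything C that follows both from A and from B follows from 'A or B'
-- (the elimination rule).
example (A B C : Prop) (hA : A → C) (hB : B → C) : A ∨ B → C :=
  fun h => h.elim hA hB
```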

Such is the sort of picture that had become the received wisdom in philosophy departments in the English-speaking world by the middle decades of the last century. For instance, A. J. Ayer, the notorious logical positivist, and P. F. Strawson, the notorious ordinary-language philosopher, disagreed with each other across a whole range of issues, and for many mid-century analytic philosophers such disagreements were considered the main issues in philosophy (though some observers would speak of the ‘narcissism of small differences’ here). And philosophers like Ayer and Strawson, from the 1920s through the 1960s, would sometimes go on to speak as if linguistic convention were the source not only of our knowledge of modality, but of modality itself, and further still to locate the source of language in ourselves. Individually, as children growing up in a linguistic community, or foreigners seeking to enter one, we must consciously or unconsciously learn the explicit or implicit rules of the communal language as something with a source outside us to which we must conform. But by contrast, collectively, as a speech community, we do not so much learn as create the language with its rules. And so if the origin of modality, of necessity and its distinction from contingency, lies in language, it therefore lies in a creation of ours, and so in us. ‘We, the makers and users of language’ are the ground and source and origin of necessity. Well, this is not a literal quotation from any one philosophical writer of the last century, but a pastiche of paraphrases of several.

Quantum Energy Teleportation. Drunken Risibility.


Time is one of the most difficult concepts in physics. It enters the equations in a rather artificial way – as an external parameter. Although strictly speaking time is a quantity that we measure, it is not possible in quantum physics to define a time-observable in the same way as for the other quantities that we measure (position, momentum, etc.). The intuition that we have about time is that of a uniform flow, as suggested by the regular ticks of clocks. Time flows undisturbed by the variety of events that may occur in an irregular pattern in the world. Similarly, the quantum vacuum is the most regular state one can think of. For example, a persistent superconducting current flows at a constant speed – essentially forever. Can one then use the quantum vacuum as a clock? This is a fascinating dispute in condensed-matter physics, formulated as the problem of the existence of time crystals. A time crystal, by analogy with a crystal in space, is a system that displays a time-regularity under measurement, while being in the ground (vacuum) state.

Then, if there is an energy (the zero-point energy) associated with empty space, it follows via the special theory of relativity that this energy should correspond to an inertial mass. By the principle of equivalence of the general theory of relativity, inertial mass is identical with the gravitational mass. Thus, empty space must gravitate. So, how much does empty space weigh? This question brings us to the frontiers of our knowledge of vacuum – the famous problem of the cosmological constant, a problem that Einstein was wrestling with, and which is still an open issue in modern cosmology.
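
Written out schematically (a reminder of the standard chain of identifications, nothing more), the argument of this paragraph is:

```latex
E_{\mathrm{vac}} = \sum_{\mathbf{k}} \hbar\omega_{\mathbf{k}}/2
\quad\Rightarrow\quad
m_{\mathrm{inertial}} = \frac{E_{\mathrm{vac}}}{c^{2}}
\quad\Rightarrow\quad
m_{\mathrm{grav}} = m_{\mathrm{inertial}} \neq 0 ,
```

so empty space, carrying a nonzero energy, must also carry weight.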

Finally, although we cannot locally extract the zero-point energy of the vacuum fluctuations, the vacuum state of a field can be used to transfer energy from one place to another by using only information. This protocol has been called quantum energy teleportation and uses the fact that different spatial regions of a quantum field in the ground state are entangled. It then becomes possible to extract energy locally from the vacuum by making a measurement in one place, then communicating the result to an experimentalist in a spatially remote region, who would then be able to extract energy by making an appropriate measurement (depending on the result communicated) on his or her local vacuum. This suggests that the vacuum is the primordial essence, the ousia from which everything came into existence.
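
Since this paragraph is really describing a protocol, a toy calculation may make its structure concrete. The sketch below is a two-qubit caricature in the spirit of Hotta's minimal model of quantum energy teleportation, not the field-theoretic protocol itself; the Hamiltonian, the parameter values, Alice's choice of a sigma-x measurement, and Bob's family of conditional rotations are all illustrative assumptions made here. It only exhibits the shape of the argument: measure locally, send the classical outcome, act conditionally on the far side, and compare the energy on the receiver's side before and after.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

h, k = 1.0, 0.5                       # local field and coupling (arbitrary units)
HA = h * np.kron(sz, I2)              # Alice's local energy
HB = h * np.kron(I2, sz)              # Bob's local energy
V = 2 * k * np.kron(sx, sx)           # coupling between the two sites
H = HA + HB + V

# Global ground ("vacuum") state and its energy.
evals, evecs = np.linalg.eigh(H)
g, E0 = evecs[:, 0], evals[0]

# Alice's projective measurement of sigma_x: projectors for outcomes mu = +1, -1.
projectors = {+1: np.kron((I2 + sx) / 2, I2), -1: np.kron((I2 - sx) / 2, I2)}

def bob_side_energy_change(theta):
    """Average change of <H_B + V> after Alice measures sigma_x, announces the
    outcome mu, and Bob applies the conditional rotation exp(-i*mu*theta*sigma_y)."""
    HBV = HB + V
    before = np.real(g.conj() @ HBV @ g)    # unchanged by Alice's measurement,
                                            # which commutes with H_B and V
    after = 0.0
    for mu, P in projectors.items():
        psi = P @ g
        prob = np.real(psi.conj() @ psi)    # probability of outcome mu
        psi = psi / np.sqrt(prob)
        UB = np.kron(I2, np.cos(theta) * I2 - 1j * mu * np.sin(theta) * sy)
        phi = UB @ psi
        after += prob * np.real(phi.conj() @ HBV @ phi)
    return after - before

# Scan Bob's rotation angle.  A negative value means the energy on Bob's side of
# the Hamiltonian went down: energy was extracted locally after receiving only
# a classical message about Alice's result.
thetas = np.linspace(-np.pi / 2, np.pi / 2, 721)
changes = np.array([bob_side_energy_change(t) for t in thetas])
best = thetas[np.argmin(changes)]
print(f"ground energy: {E0:.4f}")
print(f"best theta: {best:.3f}, change in Bob-side energy: {changes.min():.4f}")
```

Whether the minimum actually comes out negative, i.e. whether Bob gains any energy at all, has to be read off the numbers for the chosen h and k; Hotta's analysis of models of this type indicates that, for suitable parameters, it does.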

Harmonies of the Orphic Mystery: Emanation of Music


As the Buddhist sage Nagarjuna states in his Seventy Verses on Sunyata, “Being does not arise, since it exists . . .” In similar fashion it can be said that mind exists, and if we human beings manifest its qualities, then the essence and characteristics of mind must be a component of our cosmic source. David Bohm’s theory of the “implicate order” within the operations of nature suggests that observed phenomena do not operate only when they become objective to our senses. Rather, they emerge out of a subjective state or condition that contains the potentials in a latent yet really existent state that is just awaiting the necessary conditions to manifest. Thus within the explicate order of things and beings in our familiar world there is the implicate order out of which all of these emerge in their own time.

Clearly, the sun and its family of planets function in accordance with natural laws. The precision of the orbital and other electromagnetic processes is awesome, drawing into one operation the functions of the smallest subparticles and the largest families of sun-stars in their galaxies, and beyond even them. These individual entities are bonded together in an evident unity that we may compare with the oceans of our planet: uncountable numbers of water molecules appear to us as a single mass of substance. In seeking the ultimate particle, the building block of the cosmos, some researchers have found themselves confronted with the mystery of what it is that holds units together in an organism — any organism!

As in music, where a harmony consists of many tones bearing an inherent relationship, so must there be a harmony embracing all the children of the cosmos. Longing for the Harmonies: Themes and Variations from Modern Physics is a book by Frank Wilczek, an eminent physicist, and his wife Betsy Devine, an engineering scientist and freelance writer. The theme of their book is set out in their first paragraph:

From Pythagoras measuring harmonies on a lyre string to R. P. Feynman beating out salsa on his bongos, many a scientist has fallen in love with music. This love is not always rewarded with perfect mastery. Albert Einstein, an ardent amateur of the violin, provoked a more competent player to bellow at him, “Einstein, can’t you count?”

Both music and scientific research, Einstein wrote, “are nourished by the same source of longing, and they complement one another in the release they offer.” It seems to us, too, that the mysterious longing behind a scientist’s search for meaning is the same that inspires creativity in music, art, or any other enterprise of the restless human spirit. And the release they offer is to inhabit, if only for a moment, some point of union between the lonely world of subjectivity and the shared universe of external reality.

In a very lucid text, Wilczek and Devine show us that the laws of nature, and the structure of the universe and all its contributing parts, can be presented in such a way that the whole compares with a musical composition comprising themes that are fused together. One of the early chapters begins with the famous lines of the great astronomer Johannes Kepler, who in 1619 referred to the music of the spheres:

The heavenly motions are nothing but a continuous song for several voices (perceived by the intellect, not by the ear); a music which, through discordant tensions, through sincopes [sic] and cadenzas, as it were (as men employ them in imitation of those natural discords) progresses towards certain pre-designed quasi six-voiced clausuras, and thereby sets landmarks in the immeasurable flow of time. — The Harmony of the World (Harmonice mundi)

Discarding the then current superstitions and misinformed speculation, through the cloud of which Kepler had to work for his insights, Wilczek and Devine note that Kepler’s obsession with the idea of the harmony of the world is actually rooted in Pythagoras’s theory that the universe is built upon number, a concept of the Orphic mystery-religions of Greece. The idea is that “the workings of the world are governed by relations of harmony and, in particular, that music is associated with the motion of the planets — the music of the spheres” (Wilczek and Devine). Arthur Koestler, in writing of Kepler and his work, claimed that the astronomer attempted

to bare the ultimate secret of the universe in an all-embracing synthesis of geometry, music, astrology, astronomy and epistemology. The Sleepwalkers

In Longing for the Harmonies the authors refer to the “music of the spheres” as a notion that in time past was “vague, mystical, and elastic.” As the foundations of music are rhythm and harmony, they remind us that Kepler saw the planets moving around the sun “to a single cosmic rhythm.” There is some evidence that he had an association with a “neo-Pythagorean” movement and that, owing to religiously fomented opposition to unorthodox beliefs, he kept his ideas hidden under allegory and metaphor.

Shakespeare, too, phrases the thought of tonal vibrations emitted by the planets and stars as the “music of the spheres,” the notes likened to those of the “heavenly choir” of cherubim. This calls to mind that Plato’s Cratylus terms the planets theoi, from theein meaning “to run, to move.” Motion does suggest animation, or beings imbued with life, and indeed the planets are living entities so much grander than human beings that the Greeks and other peoples called them “gods.” Not the physical bodies were meant, but the essence within them, in the same way that a human being is known by the inner qualities expressed through the personality.

When classical writers spoke of planets and starry entities as “animals” they did not refer to animals such as we know on Earth, but to the fact that the celestial bodies are “animated,” embodying energies received from the sun and cosmos and transmitted with their own inherent qualities added.

Many avenues open up for our reflection upon the nature of the cosmos and ourselves, and our interrelationship, as we consider the structure of natural laws as Wilczek and Devine present them. For example, the study of particles and their interactions, and of how they harmonize with those laws, is illuminating both in itself and because of its universal application. The processes involved occur here on earth, and evidently also within the solar system and beyond, explaining certain phenomena that had been awaiting clarification.

The study of atoms here on earth and their many particles and subparticles has enabled researchers to deduce how stars are born, how and why they shine, and how they die. Now some researchers are looking at what it is, whether a process or an energy, that unites the immeasurably small with the very large cosmic bodies we now know. If nature is infinite, it must be so in a qualitative sense, not merely a quantitative.

One of the questions occupying the minds of cosmologists is whether the universal energy is running down like the mechanism of an unwinding Swiss watch, or whether there is enough mass to slow the outward thrust caused by the big bang that is assumed to have started our cosmos going. In other words, is our universe winding down toward ever-increasing entropy, dying as its usable energy is spent, or will there be a “brake” put upon the expansion that could, conceivably, result in a return to the source of the initial explosion billions of years ago? Cosmologists have been looking for enough “dark mass” to serve as such a brake.

Among the topics treated by Wilczek and Devine in threading their way through many themes and variations in modern physics is what is known as the mass-generating Higgs field. This is a proposal formulated by Peter Higgs, a British physicist, who suggested that there is a field pervading the cosmos which universally endows particles such as the electron with mass.
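
For orientation (this is standard-model bookkeeping, not something spelled out in Wilczek and Devine's popular account), the claim that the electron gets its mass from the Higgs field is usually written as a Yukawa relation between the electron mass m_e, the field's uniform vacuum value v, and a dimensionless coupling y_e:

```latex
m_e = \frac{y_e\, v}{\sqrt{2}}, \qquad v \simeq 246\ \mathrm{GeV},
\qquad y_e = \frac{\sqrt{2}\, m_e}{v} \simeq 2.9 \times 10^{-6}.
```

Because v takes the same value everywhere, so does m_e, which is exactly the uniformity appealed to in the passage quoted next.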

The background Higgs field must have very accurately the same value throughout the universe. After all, we know — from the fact that the light from distant galaxies contains the same spectral lines we find on Earth — that electrons have the same mass throughout the universe. So if electrons are getting their mass from the Higgs field, this field had better have the same strength everywhere. What is the meaning of this all-pervasive field, which exists with no apparent source? Why is it there? (Wilczek and Devine).

What is the meaning? Why is it there? These are among the most important questions that can be asked. Though physicists may provide profound mathematical equations, they will thereby offer only more precise detail as to what is happening. We shall not receive an answer to the “What” and the “Why” without recourse to meta-physics, beyond the realm of brain-devised definitions.

The human mind is limited in its present stage of evolution. It may see the logical necessity of infinity with reference to space and time; for if not infinity, what then is on the other side of the “fence” that is our outermost limit? But, while able to perceive the logical necessity of infinity, the finite mind still cannot span the limitless ranges of space, time, and substance.

If we human beings are manifold in our composition, and since we draw our very existence and sustenance from the universe at large, our conjoint nature must be drawn from the sources of life, substance, and energy, in which our and all other cosmic lives are immersed.

As the authors conclude their fascinating work:

“The worlds opened to our view are graced with wonderful symmetry and uniformity. Learning to know them, to appreciate their many harmonies, is like deepening an acquaintance with some great and meaningful piece of music — surely one of the best things life has to offer.”