The Affinity of Mirror Symmetry to Algebraic Geometry: Going Beyond Formalism



Even though the formalism of homological mirror symmetry is well established, what of other formulations of mirror symmetry that lie closer to classical differential and algebraic geometry? One way to tackle this is the proposal of Strominger, Yau and Zaslow, SYZ mirror symmetry for short.

The central physical ingredient in this proposal is T-duality. To explain this, let us consider a superconformal sigma model with target space (M, g), and denote it (whether defined as a geometric functor or as a set of correlation functions) as

CFT(M, g)

In physics, a duality is an equivalence

CFT(M, g) ≅ CFT(M′, g′)

which holds despite the fact that the underlying geometries (M,g) and (M′, g′) are not classically diffeomorphic.

T-duality is a duality which relates two CFTs with toroidal target space, M ≅ M′ ≅ Td, but different metrics. In rough terms, the duality relates a “small” target space, with noncontractible cycles of length L < ls, with a “large” target space in which all such cycles have length L > ls.

This sort of relation is generic to dualities and follows from the following logic. If all length scales (lengths of cycles, curvature lengths, etc.) are greater than ls, string theory reduces to conventional geometry. Now, in conventional geometry, we know what it means for (M, g) and (M′, g′) to be non-isomorphic. Any modification of this notion must be associated with a breakdown of conventional geometry, which requires some length scale to be “sub-stringy,” with L < ls.

To state T-duality precisely, let us first consider M = M′ = S1. We parameterise this with a coordinate X ∈ R, making the identification X ∼ X + 2π. Consider a Euclidean metric gR given by ds2 = R2dX2. The real parameter R is usually called the “radius,” from the obvious embedding in R2. This manifold is Ricci-flat, and thus the sigma model with this target space is a conformal field theory, the “c = 1 boson.” Let us furthermore set the string scale ls = 1. With this, we obtain a complete physical equivalence:

CFT(S1, gR) ≅ CFT(S1, g1/R)

Thus these two target spaces are indistinguishable from the point of view of string theory.

Just to give a physical picture for what this means, suppose for the sake of discussion that superstring theory describes our universe, and thus that in some sense there must be six extra spatial dimensions. Suppose further that we had evidence that the extra dimensions factorized topologically and metrically as K5 × S1; then it would make sense to ask: what is the radius R of this S1 in our universe? In principle this could be measured by producing sufficiently energetic particles (so-called “Kaluza-Klein modes”), or perhaps by measuring deviations from Newton’s inverse square law of gravity at distances L ∼ R. In string theory, T-duality implies that R ≥ ls, because any theory with R < ls is equivalent to another theory with R > ls. Thus we have a nontrivial relation between two (in principle) observable quantities, R and ls, which one might imagine testing experimentally.

Let us now consider the theory CFT(Td, g), where Td is the d-dimensional torus, with coordinates Xi parameterising Rd/2πZd and a constant metric tensor gij. Then there is a complete physical equivalence

CFT(Td, g) ≅ CFT(Td, g−1)

In fact this is just one element of a discrete group of T-duality symmetries, generated by T-dualities along one-cycles, and large diffeomorphisms (those not continuously connected to the identity). The complete group is isomorphic to SO(d, d; Z).

While very different from conventional geometry, T-duality has a simple intuitive explanation. This starts with the observation that the possible embeddings of a string into the target space M can be classified by the fundamental group π1(M). Strings representing non-trivial homotopy classes are usually referred to as “winding states.” Furthermore, since strings interact by interconnecting at points, the group structure on π1 provided by concatenation of based loops is meaningful and is respected by interactions in the string theory. Now π1(Td) ≅ Zd, as an abelian group, referred to as the group of “winding numbers”.

Of course, there is another Zd we could bring into the discussion, the Pontryagin dual of the U(1)d of which Td is an affinization. An element of this group is referred to physically as a “momentum,” as it is the eigenvalue of a translation operator on Td. Again, this group structure is respected by the interactions. These two group structures, momentum and winding, can be summarized in the statement that the full closed string algebra contains the group algebra C[Zd ⊕ Zd].

In essence, the point of T-duality is that if we quantize the string on a sufficiently small target space, the roles of momentum and winding will be interchanged. But the main point can be seen by bringing in some elementary spectral geometry. Besides the algebra structure, another invariant of a conformal field theory is the spectrum of its Hamiltonian H (technically, the Virasoro operator L0 + L̄0). This Hamiltonian can be thought of as an analog of the standard Laplacian ∆g on functions on M, and its spectrum on Td with metric g is

Spec ∆g = {∑i,j=1d g^ij pi pj ; p ∈ Zd}

On the other hand, the energy of a winding string is (intuitively) a function of its length. On our torus, a geodesic with winding number w ∈ Zd has length squared

L2 = ∑i,j=1d gij wi wj

Now, the only string theory input we need to bring in is that the total Hamiltonian contains both terms,

H = ∆g + L2 + · · ·

where the extra terms … express the energy of excited (or “oscillator”) modes of the string. Then, the inversion g → g−1, combined with the interchange p ↔ w, leaves the spectrum of H invariant. This is T-duality.
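The invariance claimed here is easy to verify numerically. The sketch below (with an arbitrarily chosen metric on T2, purely for illustration) builds the multiset of values g^ij pi pj + gij wi wj over a finite box of momenta and windings, and checks that it is unchanged under g → g−1:

```python
import numpy as np
from itertools import product

def spectrum(g, n=3):
    """Multiset of H = g^{ij} p_i p_j + g_{ij} w^i w^j over a box of
    momenta p and windings w in [-n, n]^d."""
    g = np.asarray(g, dtype=float)
    g_inv = np.linalg.inv(g)
    d = g.shape[0]
    grid = list(range(-n, n + 1))
    vals = []
    for p in product(grid, repeat=d):
        for w in product(grid, repeat=d):
            p_, w_ = np.array(p), np.array(w)
            vals.append(p_ @ g_inv @ p_ + w_ @ g @ w_)
    return np.sort(vals)

# an arbitrary constant metric on T^2, chosen only for illustration
g = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# g -> g^{-1} exchanges the momentum and winding contributions; since we
# enumerate all (p, w) pairs, the p <-> w swap is automatic, and the
# spectrum is unchanged -- T-duality at the level of Spec H.
s1 = spectrum(g)
s2 = spectrum(np.linalg.inv(g))
print(np.allclose(s1, s2))   # True
```

The oscillator terms indicated by the dots are identical on both sides and so do not affect the comparison.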

There is a simple generalization of the above to the case of a non-zero B-field on the torus satisfying dB = 0. In this case, since B is a constant antisymmetric tensor, we can label CFTs by the matrix g + B. Now, the basic T-duality relation becomes

CFT(Td, g + B) ≅ CFT(Td, (g + B)−1)

Another generalization, which is considerably more subtle, is to do T-duality in families, or fiberwise T-duality. The same arguments can be made, and would become precise in the limit that the metric on the fibers varies on length scales far greater than ls, and has curvature lengths far greater than ls. This is sometimes called the “adiabatic limit” in physics. While this is a very restrictive assumption, there are more heuristic physical arguments that T-duality should hold more generally, with corrections to the relations proportional to curvatures ls2R and derivatives ls∂ of the fiber metric, both in perturbation theory and from world-sheet instantons.

Black Hole Analogue: Extreme Blue Shift Disturbance. Thought of the Day 141.0

One major contribution of the theoretical study of black hole analogues has been to help clarify the derivation of the Hawking effect, which leads to a study of Hawking radiation in a more general context, one that involves, among other features, two horizons. There is an apparent contradiction in Hawking’s semiclassical derivation of black hole evaporation, in that the radiated fields undergo arbitrarily large blue-shifting in the calculation, thus acquiring arbitrarily large energies, which contravenes the underlying assumption that the gravitational effects of the quantum fields may be ignored. This is known as the trans-Planckian problem. A similar issue arises in condensed matter analogues such as the sonic black hole.


Sonic horizons in a moving fluid, in which the speed of sound is 1. The velocity profile of the fluid, v(z), attains the value −1 at two values of z; these are horizons for sound waves that are right-moving with respect to the fluid. At the right-hand horizon right-moving waves are trapped, with waves just to the left of the horizon being swept into the supersonic flow region v < −1; no sound can emerge from this region through the horizon, so it is reminiscent of a black hole. At the left-hand horizon right-moving waves become frozen and cannot enter the supersonic flow region; this is reminiscent of a white hole.

Consider sonic horizons in a one-dimensional fluid flow with the velocity profile depicted in the figure above: the two horizons are formed for sound waves that propagate to the right with respect to the fluid. The horizon on the right of the supersonic flow region v < −1 behaves like a black hole horizon for right-moving waves, while the horizon on the left of the supersonic flow region behaves like a white hole horizon for these waves. In such a system, the equation for a small perturbation φ of the velocity potential is

(∂t + ∂zv)(∂t + v∂z)φ − ∂z2φ = 0 —– (1)

In terms of a new coordinate τ defined by

dτ := dt + v/(1 – v2) dz

(1) is the equation □φ = 0 of a scalar field in the black-hole-type metric

ds2 = (1 – v2)dτ2 – dz2/(1 – v2)
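This identification can be checked symbolically. Expanding dτ back into (t, z) coordinates turns the metric into ds2 = (1 − v2)dt2 + 2v dt dz − dz2, whose determinant is −1, so the d'Alembertian takes a simple divergence form. The following sympy sketch verifies that □φ = 0 is exactly equation (1):

```python
import sympy as sp

t, z = sp.symbols('t z')
v = sp.Function('v')(z)          # arbitrary velocity profile v(z)
phi = sp.Function('phi')(t, z)

# left-hand side of (1): (∂t + ∂z v)(∂t + v ∂z)φ − ∂z²φ
inner = sp.diff(phi, t) + v * sp.diff(phi, z)
lhs = sp.diff(inner, t) + sp.diff(v * inner, z) - sp.diff(phi, z, 2)

# □φ for ds² = (1 − v²)dt² + 2v dt dz − dz²: here det g = −1 and the
# inverse metric has components g^tt = 1, g^tz = v, g^zz = v² − 1,
# so □φ = ∂μ(g^{μν} ∂ν φ)
box = (sp.diff(sp.diff(phi, t) + v * sp.diff(phi, z), t)
       + sp.diff(v * sp.diff(phi, t) + (v**2 - 1) * sp.diff(phi, z), z))

print(sp.simplify(lhs - box))    # 0
```

The difference simplifies to zero identically, for any profile v(z).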

Each horizon will produce a thermal spectrum of phonons with a temperature determined by the quantity that corresponds to the surface gravity at the horizon, namely the absolute value of the slope of the velocity profile:

kBT = ħα/2π, α := |dv/dz| evaluated at v = −1 —– (2)
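To get a feeling for magnitudes, one can plug numbers into (2). The profile, steepness, and sound speed below are invented purely for illustration (the text works in units where the sound speed is 1; restoring dimensions, α carries units of inverse time):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K

def v(z, L=1e-6):
    # illustrative flow: subsonic for z > 0, supersonic for z < 0,
    # horizon (v = -1) at z = 0; L sets the steepness in metres
    return -1.0 + 0.5 * np.tanh(z / L)

# surface-gravity analogue: slope of v at the horizon, multiplied by
# the sound speed c_s to restore units of 1/s (an assumption of this
# sketch; c_s and L are made-up numbers)
c_s = 340.0              # m/s, sound speed, illustrative
dz = 1e-12
alpha = abs(v(dz) - v(-dz)) / (2 * dz) * c_s   # |dv/dz| * c_s, in 1/s

T = hbar * alpha / (2 * np.pi * kB)
print(f"alpha = {alpha:.3e} 1/s, T = {T:.3e} K")
```

Even with a profile this steep (varying over a micron), the analogue Hawking temperature comes out tiny, which is why detecting it is so challenging.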


Hawking phonons in the fluid flow: real phonons have positive frequency ω − vk in the fluid-element frame; for right-moving phonons ω − vk = k, i.e. k = ω/(1 + v). Since ω is conserved for each ray and k is positive for all real phonons, ω is positive in the subsonic-flow regions, whereas in the supersonic-flow region (where 1 + v < 0) it is negative. The frequency in the fluid-element frame diverges at the horizons – the trans-Planckian problem.

The trajectories of the created phonons are formally deduced from the dispersion relation of the sound equation (1). Geometrical acoustics applied to (1) gives the dispersion relation

ω − vk = ±k —– (3)

and Hamilton's equations

dz/dt = ∂ω/∂k = v ± 1 —– (4)

dk/dt = -∂ω/∂z = − v′k —– (5)

The left-hand side of (3) is the frequency in the frame co-moving with a fluid element, whereas ω is the frequency in the laboratory frame; the latter is constant for a time-independent fluid flow (“time-independent Hamiltonian” dω/dt = ∂ω/∂t = 0). Since the Hawking radiation is right-moving with respect to the fluid, we clearly must choose the positive sign in (3) and hence in (4) also. By approximating v(z) as a linear function near the horizons we obtain from (4) and (5) the ray trajectories. The disturbing feature of the rays is the behavior of the wave vector k: at the horizons the radiation is exponentially blue-shifted, leading to a diverging frequency in the fluid-element frame. These runaway frequencies are unphysical since (1) asserts that sound in a fluid element obeys the ordinary wave equation at all wavelengths, in contradiction with the atomic nature of fluids. Moreover the conclusion that this Hawking radiation is actually present in the fluid also assumes that (1) holds at all wavelengths, as exponential blue-shifting of wave packets at the horizon is a feature of the derivation. Similarly, in the black-hole case the equation does not hold at arbitrarily high frequencies because it ignores the gravity of the fields. For the black hole, a complete resolution of this difficulty will require inputs from the gravitational physics of quantum fields, i.e. quantum gravity, but for the dumb hole the physics is available for a more realistic treatment.
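The exponential blue-shift can be made concrete by integrating the ray equations (4) and (5) with a linearised profile v(z) ≈ −1 + αz near the black-hole-type horizon: traced backwards in time, a right-moving ray freezes at z = 0 while k grows like e^{α|t|}. A small Euler-integration sketch (all numbers illustrative):

```python
import numpy as np

alpha = 1.0                      # slope |dv/dz| at the horizon

def v(z):
    return -1.0 + alpha * z      # linearised profile near the horizon

def vprime(z):
    return alpha

# start just outside the horizon and integrate backwards in time
z, k = 0.5, 1.0
dt = -1e-4                       # negative step = backwards in time
ks = [k]
for _ in range(100000):          # 10 units of time backwards
    z += dt * (v(z) + 1.0)       # eq. (4), right-moving branch
    k += dt * (-vprime(z) * k)   # eq. (5)
    ks.append(k)

T_total = 10.0
print(f"z -> {z:.2e} (frozen at the horizon)")
print(f"k grew by factor {ks[-1]/ks[0]:.3e}, cf. e^(alpha*T) = {np.exp(alpha*T_total):.3e}")
```

The wave vector grows by a factor close to e^{αT}, exactly the runaway behaviour that the text identifies as unphysical at sub-atomic wavelengths.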


How Black Holes Emitting Hawking Radiation At Best Give Non-Trivial Information About Planckian Physics: Towards Entanglement Entropy.

The analogy between quantised sound waves in fluids and quantum fields in curved space-times facilitates an interdisciplinary know-how transfer in both directions. On the one hand, one may use the microscopic structure of the fluid as a toy model for unknown high-energy (Planckian) effects in quantum gravity, for example, and investigate the influence of the corresponding cut-off. Examining the derivation of the Hawking effect for various dispersion relations, one reproduces Hawking radiation for a rather large class of scenarios, but there are also counter-examples, which do not appear to be unphysical or artificial, displaying strong deviations from Hawking's result. Therefore, whether real black holes emit Hawking radiation remains an open question and could give non-trivial information about Planckian physics.


On the other hand, the emergence of an effective geometry/metric allows us to apply the vast amount of universal tools and concepts developed for general relativity (such as horizons), which provide a unified description and better understanding of (classical and quantum) non-equilibrium phenomena (e.g., freezing and amplification of quantum fluctuations) in condensed matter systems. As an example of such a universal mechanism, the Kibble-Zurek effect describes the generation of topological defects due to the amplification of classical/thermal fluctuations in non-equilibrium thermal phase transitions. The loss of causal connection underlying the Kibble-Zurek mechanism can be understood in terms of an effective horizon – which clearly indicates the departure from equilibrium. The associated breakdown of adiabaticity leads to an amplification of thermal fluctuations (as in the Kibble-Zurek mechanism) as well as quantum fluctuations (at zero temperature). The zero-temperature version of this amplification mechanism is completely analogous to the early universe and becomes particularly important for the new and rapidly developing field of quantum phase transitions.

Furthermore, these analogue models might provide the exciting opportunity of measuring the analogues of these exotic effects – such as Hawking radiation or the generation of the seeds for structure formation during inflation – in actual laboratory experiments, i.e., experimental quantum simulations of black hole physics or the early universe. Even though the detection of these exotic quantum effects is in part very hard and requires ultra-low temperatures etc., there is no (known) objection in principle against it. The analogue models range from black and/or white hole event horizons in flowing fluids and other laboratory systems, over apparent horizons in expanding Bose–Einstein condensates, to particle horizons in quantum phase transitions etc.

However, one should stress that the analogy reproduces the kinematics (quantum fields in curved space-times with horizons etc.) but not the dynamics, i.e., the effective geometry/metric is not described by the Einstein equations in general. An important and strongly related problem is the correct description of the back-reaction of the quantum fluctuations (e.g., phonons) onto the background (e.g., fluid flow). In gravity, the impact of the (classical or quantum) matter is usually incorporated via the (expectation value of the) energy-momentum tensor. Since this quantity can be introduced at a purely kinematic level, one may use the same construction for phonons in flowing fluids, namely the pseudo energy-momentum tensor. The relevant component of this tensor describing the energy density (which is conserved for stationary flows) may become negative as soon as the flow velocity exceeds the sound speed. These negative contributions explain the energy balance of the Hawking radiation in black hole analogues as well as super-radiant scattering. However, the (expectation value of the) pseudo energy-momentum tensor does not determine the quantum back-reaction correctly.

One should not neglect to mention the occurrence of a horizon in the laboratory – the Unruh effect. A uniformly accelerated observer cannot see half of the (1+1-dimensional) space-time; the two Rindler wedges are completely causally disconnected by the horizon(s). In each wedge, one may introduce a set of observables corresponding to the measurements made by the observers confined to this wedge – thereby obtaining two equivalent copies of the observables, one for each wedge. In terms of these two copies, the Minkowski vacuum is an entangled state which yields the usual phenomena (thermo-field formalism) including the Unruh effect – i.e., the uniformly accelerated observer experiences the Minkowski vacuum as a thermal bath: for rather general quantum fields (Bisognano-Wichmann theorem), the quantum state ρ obtained by restricting the Minkowski vacuum to one of the Rindler wedges behaves as a mixed state ρ = exp{−2πĤτ/κ}/Z, where Ĥτ corresponds to the Hamiltonian generating the proper (co-moving wristwatch) time τ measured by the accelerated observer and κ is the analogue of the surface gravity and determines the acceleration.
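In conventional units, the temperature of this restricted thermal state is given by the standard Unruh formula kBT = ħa/(2πc), with the proper acceleration a playing the role of κ; this formula is the textbook statement, not derived in the text above. A quick numerical sketch shows why the effect is so hard to observe:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
kB = 1.380649e-23        # J/K

def unruh_temperature(a):
    """Unruh temperature (K) for proper acceleration a (m/s^2)."""
    return hbar * a / (2 * np.pi * c * kB)

# at Earth-surface acceleration the thermal bath is absurdly cold
print(f"T(a = g) = {unruh_temperature(9.81):.2e} K")   # ~4e-20 K

# acceleration needed for a 1 K bath
a_1K = 1.0 / unruh_temperature(1.0)
print(f"a for T = 1 K: {a_1K:.2e} m/s^2")              # ~2.5e20 m/s^2
```

This is why the accelerated-scatterer proposal mentioned below is attractive: it trades direct detector excitation for particle creation, which may be easier to observe.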


Space-time diagram with a trajectory of a uniformly accelerated observer and the resulting particle horizons. The observer is confined to the right Rindler wedge (the region x > |ct| between the two horizons) and cannot influence or be influenced by events in the left Rindler wedge (x < −|ct|), which is completely causally disconnected.

The thermal character of this restricted state ρ arises from the quantum correlations of the Minkowski vacuum in the two Rindler wedges, i.e., the Minkowski vacuum is a multi-mode squeezed state with respect to the two equivalent copies of observables in each wedge. This is a quite general phenomenon associated with doubling the degrees of freedom and describes the underlying idea of the thermo-field formalism, for example. The entropy of the thermal radiation in the Unruh and the Hawking effect can be understood as an entanglement entropy: for the Unruh effect, it is caused by averaging over the quantum correlations between the two Rindler wedges. In the black hole case, each particle of the outgoing Hawking radiation has its infalling partner particle (with a negative energy with respect to spatial infinity), and the entanglement between the two generates the entropy flux of the Hawking radiation. Instead of accelerating a detector and measuring its excitations, one could replace the accelerated observer by an accelerated scatterer. This device would scatter (virtual) particles from the thermal bath and thereby create real particles – which can be interpreted as a signature of the Unruh effect.

Philosophizing Loops – Why Spin Foam Constraints to 3D Dynamics Evolution?


The philosophy of loops is canonical, i.e., an analysis of the evolution of variables defined classically through a foliation of spacetime by a family of space-like three-surfaces Σt. The standard choice is the three-dimensional metric gij and its canonical conjugate, related to the extrinsic curvature. If the system is reparametrization invariant, the total hamiltonian vanishes, and this hamiltonian constraint is usually called the Wheeler-DeWitt equation. Choosing the canonical variables is fundamental, to say the least.

Abhay Ashtekar’s insight stems from the definition of an original set of variables, derived from the Einstein-Hilbert Lagrangian written in the form,

S = ∫ea ∧ eb ∧ Rcdεabcd —– (1)

where ea are the one-forms associated to the tetrad,

ea ≡ eaμdxμ —– (2)

The associated SO(1, 3) connection one-form ϖab is called the spin connection. Its field strength is the curvature expressed as a two form:

Rab ≡ dϖab + ϖac ∧ ϖcb —– (3)

Ashtekar’s variables are actually based on the SU(2) self-dual connection

A = ϖ − i ∗ ϖ —– (4)

Its field strength is

F ≡ dA + A ∧ A —– (5)

The dynamical variables are then (Ai, Ei ≡ F0i). The main virtue of these variables is that constraints are then linearized. One of them is exactly analogous to Gauss’ law:

DiEi = 0 —– (6)

There is another one related to three-dimensional diffeomorphisms invariance,

trFijEi = 0 —– (7)

and, finally, there is the Hamiltonian constraint,

trFijEiEj = 0 —– (8)

On a purely mathematical basis, there is no doubt that Ashtekar’s variables are of great ingenuity. As a physical tool to describe the metric of space, however, they are not real in general. This forces a reality condition to be imposed, which is awkward. For this reason it is usually preferred to use the Barbero-Immirzi formalism, in which the connection depends on a free parameter γ,

Aia = ϖia + γKia —– (9)

ϖ being the spin connection and K the extrinsic curvature. When γ = i, Ashtekar’s formalism is recovered; for other values of γ, the explicit form of the constraints is more complicated. Even if there is a Hamiltonian constraint that seems promising, what isn’t particularly clear is whether the quantum constraint algebra is isomorphic to the classical algebra.

Some states which satisfy the Ashtekar constraints are given by the loop representation, which can be introduced from the construct (depending both on the gauge field A and on a parametrized loop γ)

W(γ, A) ≡ tr P exp ∮γ A —– (10)

and a functional transform mapping functionals of the gauge field ψ(A) into functionals of loops, ψ(γ):

ψ(γ) ≡ ∫DAW(γ, A) ψ(A) —– (11)

When one divides by diffeomorphisms, it is found that functions of knot classes (diffeomorphism classes of smooth, non-self-intersecting loops) satisfy all the constraints. Particular states constructed to reproduce smooth spaces under coarse graining are the weaves. It is not clear to what extent they also approximate the conjugate variables (that is, the extrinsic curvature).

In the presence of a cosmological constant the hamiltonian constraint reads:

εijkEaiEbj(Fkab + λ/3εabcEck) = 0 —– (12)

A particular class of solutions of the constraint, expounded by Lee Smolin, are self-dual solutions of the form

Fiab = -λ/3εabcEci —– (13)

Loop states in general (suitably symmetrized) can be represented as spin network states: colored lines (carrying some SU(2) representation) meeting at nodes where intertwining SU(2) operators act. There is also a path integral representation, known as spin foam, a topological theory of colored surfaces representing the evolution of a spin network. Spin foams can also be considered as an independent approach to the quantization of the gravitational field. In addition to its specific problems, the hamiltonian constraint does not say in what sense (with respect to what) the three-dimensional dynamics evolve.

US Stock Market Interaction Network as Learned by the Boltzmann Machine


Price formation on a financial market is a complex problem: it reflects the opinions of investors about the true value of the asset in question, the policies of the producers, external regulation, and many other factors. Given the large number of factors influencing price, many of which are unknown to us, describing price formation essentially requires probabilistic approaches. In the last decades, the synergy of methods from various scientific areas has opened new horizons in understanding the mechanisms that underlie related problems. One popular approach is to consider a financial market as a complex system, where not only the great number of constituents plays a crucial role but also the non-trivial interaction properties between them. For example, related interdisciplinary studies of complex financial systems have revealed their enhanced sensitivity to fluctuations and external factors near critical events, with an overall change of internal structure. This can be complemented by research devoted to equilibrium and non-equilibrium phase transitions.

In general, statistical modeling of the state space of a complex system requires writing down the probability distribution over this space using real data. In a simple version of modeling, the probability of an observable configuration (state of a system) described by a vector of variables s can be given in the exponential form

p(s) = Z−1 exp {−βH(s)} —– (1)

where H is the Hamiltonian of the system, β is the inverse temperature (β ≡ 1 is assumed in what follows) and Z is the partition function. The physical meaning of the model’s components depends on the context; for instance, in the case of financial systems, s can represent a vector of stock returns and H can be interpreted as the inverse utility function. Generally, H has parameters defined by its series expansion in s. Based on the maximum entropy principle, an expansion up to quadratic terms is usually used, leading to pairwise interaction models. In the equilibrium case, the Hamiltonian has the form

H(s) = −hTs − sTJs —– (2)

where h is a vector of size N of external fields and J is a symmetric N × N matrix of couplings (T denotes transpose). The energy-based models represented by (1) play an essential role not only in statistical physics but also in neuroscience (models of neural networks) and machine learning (generative models, also known as Boltzmann machines). Given the topological similarities between neural and financial networks, these systems can be considered examples of complex adaptive systems, which are characterized by the ability to adapt to a changing environment while trying to stay in equilibrium with it. From this point of view, market structural properties, e.g. clustering and networks, play an important role in modeling the distribution of stock prices. Adaptation (or learning) in these systems implies a change of the parameters of H as financial and economic systems evolve. Using statistical inference for the model’s parameters, the main goal is to have a model capable of reproducing the same statistical observables, given time series for a particular historical period. In the pairwise case, the objective is to have

⟨si⟩data = ⟨si⟩model —– (3a)

⟨sisj⟩data = ⟨sisj⟩model —– (3b)

where angular brackets denote statistical averaging over time. Having specified the general mathematical model, one can also discuss similarities between financial and infinite-range magnetic systems in terms of related phenomena, e.g. extensivity, order parameters, phase transitions, etc. These features can be captured even in the simplified case where si is a binary variable taking only two discrete values. Consider the effect of mapping to a binarized system, where the values si = +1 and si = −1 correspond to profit and loss respectively. In this case, the diagonal elements of the coupling matrix, Jii, are zero, because the si2 = 1 terms do not contribute to the Hamiltonian….
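The moment-matching conditions (3a)-(3b) can be realised with a tiny Boltzmann-machine fit. The sketch below is illustrative only: the "data" are synthetic ±1 series rather than real stock returns, and for small N the model averages are computed exactly by enumerating all 2^N configurations instead of by sampling:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N, T = 5, 4000
# synthetic binarised "profit/loss" series s_i = ±1 (made-up data)
data = rng.choice([-1, 1], size=(T, N), p=[0.45, 0.55])

m_data = data.mean(axis=0)                 # <s_i>_data
C_data = (data.T @ data) / T               # <s_i s_j>_data

states = np.array(list(product([-1, 1], repeat=N)))  # all 2^N states

def model_moments(h, J):
    # H(s) = -h.s - s.J.s as in (2); p(s) ∝ exp(-H) with beta = 1
    E = -states @ h - np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(-E)
    p /= p.sum()
    m = p @ states
    C = states.T @ (p[:, None] * states)
    return m, C

h = np.zeros(N)
J = np.zeros((N, N))                       # J_ii stays 0 since s_i^2 = 1
lr = 0.1
for _ in range(2000):                      # gradient ascent on log-likelihood
    m, C = model_moments(h, J)
    h += lr * (m_data - m)
    dJ = lr * (C_data - C)
    np.fill_diagonal(dJ, 0.0)
    J += 0.5 * (dJ + dJ.T)                 # keep J symmetric

m, C = model_moments(h, J)
print(np.abs(m - m_data).max(), np.abs(C - C_data).max())  # residuals should be small
```

The gradient of the log-likelihood is exactly the moment mismatch in (3a)-(3b), so the fit drives both differences to zero; for realistic N one would replace the exact enumeration with Monte Carlo sampling or mean-field approximations.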



Theories of Fields: Gravitational Field as “the More Equal Among Equals”


Descartes, in Le Monde, gave a fully relational definition of localization (space) and motion. According to Descartes, there is no “empty space”. There are only objects, and it makes sense to say that an object A is contiguous to an object B. The “location” of an object A is the set of the objects to which A is contiguous. “Motion” is change in location. That is, when we say that A moves we mean that A goes from the contiguity of an object B to the contiguity of an object C. A consequence of this relationalism is that there is no meaning in saying “A moves”, except if we specify with respect to which other objects (B, C, . . . ) it is moving. Thus, there is no “absolute” motion. This is the same definition of space, location, and motion that we find in Aristotle. Aristotle insists on this point, using the example of the river that moves with respect to the ground, in which there is a boat that moves with respect to the water, on which there is a man that walks with respect to the boat . . . .

Aristotle’s relationalism is tempered by the fact that there is, after all, a preferred set of objects that we can use as universal reference: the Earth at the center of the universe, the celestial spheres, the fixed stars. Thus, we can say, if we so desire, that something is moving “in absolute terms”, if it moves with respect to the Earth. Of course, there are two preferred frames in ancient cosmology: that of the Earth and that of the fixed stars; the two rotate with respect to each other. It is interesting to notice that the thinkers of the middle ages did not miss this point, and discussed whether we can say that the stars rotate around the Earth, rather than it being the Earth that rotates under the fixed stars. Buridan concluded that, on grounds of reason, in no way is one view more defensible than the other.
For Descartes, who writes, of course, after the great Copernican divide, the Earth is not anymore the center of the Universe and cannot offer a naturally preferred definition of stillness. According to malignants, Descartes, fearing the Church and scared by what happened to Galileo’s stubborn defense of the idea that “the Earth moves”, resorted to relationalism, in Le Monde, precisely to be able to hold Copernicanism without having to commit himself to the absolute motion of the Earth!

Relationalism, namely the idea that motion can be defined only in relation to other objects, should not be confused with Galilean relativity. Galilean relativity is the statement that “rectilinear uniform motion” is a priori indistinguishable from stasis. Namely that velocity (but just velocity!), is relative to other bodies. Relationalism holds that any motion (however zigzagging) is a priori indistinguishable from stasis. The very formulation of Galilean relativity requires a nonrelational definition of motion (“rectilinear and uniform” with respect to what?).

Newton took a fully different course. He devotes much energy to criticising Descartes’ relationalism and to introducing a different view. According to him, space exists. It exists even if there are no bodies in it. The location of an object is the part of space that the object occupies. Motion is change of location. Thus, we can say whether an object moves or not, irrespective of surrounding objects. Newton argues that the notion of absolute motion is necessary for constructing mechanics. His famous discussion of the experiment of the rotating bucket in the Principia is one of the arguments to prove that motion is absolute.

This point has often raised confusion because one of the corollaries of Newtonian mechanics is that there is no detectable preferred referential frame. Therefore the notion of absolute velocity is, actually, meaningless in Newtonian mechanics. The important point, however, is that in Newtonian mechanics velocity is relative, but any other feature of motion is not relative: it is absolute. In particular, acceleration is absolute. It is acceleration that Newton needs to construct his mechanics; it is acceleration that the bucket experiment is supposed to prove to be absolute, against Descartes. In a sense, Newton overdid it a bit, introducing the notion of absolute position and velocity (perhaps even just for explanatory purposes?). Many people have later criticised Newton for his unnecessary use of absolute position. But this is irrelevant for the present discussion. The important point here is that Newtonian mechanics requires absolute acceleration, against Aristotle and against Descartes. Precisely the same holds for special relativistic mechanics.

Similarly, Newton introduced absolute time. Newtonian space and time or, in modern terms, spacetime, are like a stage over which the action of physics takes place, the various dynamical entities being the actors. The key feature of this stage, Newtonian spacetime, is its metrical structure. Curves have length, surfaces have area, regions of spacetime have volume. Spacetime points are at a fixed distance from one another. Revealing, or measuring, this distance is very simple. It is sufficient to take a rod and put it between two points. Any two points which are one rod apart are at the same distance. Using modern terminology, physical space is a linear three-dimensional (3d) space, with a preferred metric. On this space there exist preferred coordinates xi, i = 1, 2, 3, in terms of which the metric is just δij. Time is described by a single variable t. The metric δij determines lengths, areas and volumes and defines what we mean by straight lines in space. If a particle deviates with respect to this straight line, it is, according to Newton, accelerating. It is not accelerating with respect to this or that dynamical object: it is accelerating in absolute terms.

Special relativity changes this picture only marginally, loosening the strict distinction between the “space” and the “time” components of spacetime. In Newtonian spacetime, space is given by fixed 3d planes. In special relativistic spacetime, which 3d plane you call space depends on your state of motion. Spacetime is now a 4d manifold M with a flat Lorentzian metric ημν. Again, there are preferred coordinates xμ, μ = 0, 1, 2, 3, in terms of which ημν = diag[1, −1, −1, −1]. This tensor, ημν, enters all physical equations, representing the determining influence of the stage and of its metrical properties on the motion of anything. Absolute acceleration is deviation of the world line of a particle from the straight lines defined by ημν. The only essential novelty with special relativity is that the “dynamical objects”, or “bodies”, moving over spacetime now include the fields as well. For example: a violent burst of electromagnetic waves coming from a distant supernova has traveled across space and has reached our instruments. For the rest, the Newtonian construct of a fixed background stage over which physics happens is not altered by special relativity.

The profound change comes with general relativity (GTR). The central discovery of GTR can be enunciated in three points. One of these is conceptually simple; the other two are tremendous. First, the gravitational force is mediated by a field, very much like the electromagnetic field: the gravitational field. Second, Newton’s spacetime, the background stage that Newton introduced against most of the earlier European tradition, and the gravitational field are the same thing. Third, the dynamics of the gravitational field, of the other fields such as the electromagnetic field, and of any other dynamical object is fully relational, in the Aristotelian-Cartesian sense. Let me illustrate these three points.

First, the gravitational field is represented by a field on spacetime, gμν(x), just like the electromagnetic field Aμ(x). They are both very concrete entities: a strong electromagnetic wave can hit you and knock you down; and so can a strong gravitational wave. The gravitational field has independent degrees of freedom, and is governed by dynamical equations, the Einstein equations.

Second, the spacetime metric ημν disappears from all equations of physics (recall that it was ubiquitous). In its place – we are instructed by GTR – we must insert the gravitational field gμν(x). This is a spectacular step: Newton’s background spacetime was nothing but the gravitational field! The stage is promoted to be one of the actors. Thus, in all physical equations one now sees the direct influence of the gravitational field. How can the gravitational field determine the metrical properties of things, which are revealed, say, by rods and clocks? Simply, the inter-atomic separation of the rods’ atoms, and the frequency of the clock’s pendulum, are determined by explicit couplings of the rod’s and clock’s variables with the gravitational field gμν(x), which enters the equations of motion of these variables. Thus, any measurement of length, area or volume is, in reality, a measurement of features of the gravitational field.

But what is really formidable in GTR, the truly momentous novelty, is the third point: the Einstein equations, as well as all other equations of physics appropriately modified according to GTR’s instructions, are fully relational in the Aristotelian-Cartesian sense. This point is independent of the previous one. Let me give first a conceptual, then a technical account of it.

The point is that the only definition of location that makes physical sense within GTR is relational. GTR describes the world as a set of interacting fields and, possibly, other objects. One of these interacting fields is gμν(x). Motion can be defined only as positioning and displacement of these dynamical objects relative to each other.

To describe the motion of a dynamical object, Newton had to assume that acceleration is absolute, namely it is not relative to this or that other dynamical object. Rather, it is relative to a background space. Faraday, Maxwell and Einstein extended the notion of “dynamical object”: the stuff of the world is fields, not just bodies. Finally, GTR tells us that the background space is itself one of these fields. Thus, the circle is closed, and we are back to relationalism: Newton’s motion with respect to space is indeed motion with respect to a dynamical object: the gravitational field.

All this is coded in the active diffeomorphism invariance (diff invariance) of GTR. Active diff invariance should not be confused with passive diff invariance, or invariance under change of coordinates. GTR can be formulated in a coordinate-free manner, where there are no coordinates and no changes of coordinates; in this formulation, the field equations are still invariant under active diffs. Passive diff invariance is a property of a formulation of a dynamical theory, while active diff invariance is a property of the dynamical theory itself. A field theory is formulated in a manner invariant under passive diffs (or changes of coordinates) if we can change the coordinates of the manifold, re-express all the geometric quantities (dynamical and non-dynamical) in the new coordinates, and the form of the equations of motion does not change. A theory is invariant under active diffs when a smooth displacement of the dynamical fields (the dynamical fields alone) over the manifold sends solutions of the equations of motion into solutions of the equations of motion. Distinguishing a truly dynamical field, namely a field with independent degrees of freedom, from a non-dynamical field disguised as dynamical (such as a metric field g with the equation of motion Riemann[g] = 0) might require a detailed analysis (for instance, Hamiltonian) of the theory. Because active diff invariance is a gauge symmetry, the physical content of GTR is expressed only by those quantities, derived from the basic dynamical variables, which are fully independent of the points of the manifold.

In introducing the background stage, Newton introduced two structures: a spacetime manifold, and its non-dynamical metric structure. GTR gets rid of the non-dynamical metric, by replacing it with the gravitational field. More importantly, it gets rid of the manifold, by means of active diff invariance. In GTR, the objects of which the world is made do not live over a stage and do not live on spacetime: they live, so to say, over each other’s shoulders.

Of course, nothing prevents us, if we wish to do so, from singling out the gravitational field as “the more equal among equals”, and declaring that location is absolute in GTR, because it can be defined with respect to it. But this can be done within any relationalism: we can always single out a set of objects, and declare them as not-moving by definition. The problem with this attitude is that it fully misses the great Einsteinian insight: that Newtonian spacetime is just one field among the others. More seriously, this attitude sends us into a nightmare when we have to deal with the motion of the gravitational field itself (which certainly “moves”: we are spending millions for constructing gravity wave detectors to detect its tiny vibrations). There is no absolute referent of motion in GTR: the dynamical fields “move” with respect to each other.

Notice that the third step was not easy for Einstein, and came later than the previous two. Having well understood the first two, but still missing the third, Einstein actively searched for non-generally covariant equations of motion for the gravitational field between 1912 and 1915. With his famous “hole argument” he had convinced himself that generally covariant equations of motion (and therefore, in this context, active diffeomorphism invariance) would imply a truly dramatic revolution with respect to the Newtonian notions of space and time. In 1912 he was not able to take this profoundly revolutionary step, but in 1915 he took this step, and found what Landau calls “the most beautiful of the physical theories”.

Loop Quantum Gravity and the Nature of Reality. Briefer.


To some, “loop quantum gravity is an attempt to define a quantization of gravity paying special attention to the conceptual lessons of general relativity”, while to others it does not have to be about the quantization of gravity: it should be “at least conceivable that such a theory marries a classical understanding of gravity with a quantum understanding of matter”.

The term ‘loop’ comes from the solutions of the theory: a solution can be written for every line that closes on itself in the proposed structure of the quanta’s interactions. John Archibald Wheeler was one of the pioneers in constructing a representation of space with a granular structure on a very small scale. With Bryce DeWitt he produced a mathematical formula known as the Wheeler-DeWitt equation, “an equation which should determine the probability of one or another curved space”. The starting point was the spacetime of general relativity having “loop-like states”. In a quantum approach to gravity built on closed loops, which are threads of the Faraday lines of the quantum field, the gravitational field looks like a spiderweb. A solution can be written for every line closed on itself; each such line determines a solution of the Wheeler-DeWitt equation and describes one of the threads of the spiderweb woven by the Faraday force lines of the quantum field, the threads with which space itself is woven. The physical Hilbert space is the space of all quantum states of the theory that solve all the constraints; these, and only these, ought to be considered the physical states. Since not all the constraints have been solved, the physical Hilbert space of Loop Quantum Gravity is not yet known. The larger space of states satisfying the first two families of constraints is often termed the kinematical Hilbert space. The one constraint that has so far resisted resolution is the Hamiltonian constraint, with the seemingly simple form Ĥ|ψ⟩ = 0, the Wheeler-DeWitt equation, where Ĥ is the Hamiltonian operator, usually interpreted as generating the dynamical evolution, and |ψ⟩ is a quantum state in the kinematical Hilbert space. Of course, the Hamiltonian operator Ĥ is a complicated function(al) of the basic operators corresponding to the basic canonical variables. In fact, the very functional form of Ĥ is debated, as several inequivalent candidates are on the table.
Insofar as the physical Hilbert space has not yet been constructed, Loop Quantum Gravity remains incomplete.

Space, then, is defined by the nodes of this spiderweb, which is called a spin network; and time, which already lost its fundamental status with special and general relativity, vanishes from the picture of the universe altogether.

Loop quantum gravity combines the dynamical-spacetime approach of general relativity with the quantum nature of the gravitational field. Accordingly, the space that bends and stretches is made up of very small grains, the quanta of space. If one had eyes capable of zooming into space down to this scale, one would first witness the quantum field, and would end up seeing the quanta, which are extremely small and granular.

Conjuncted: Forward, Futures Contracts and Options: Top Down or Bottom Up Modeling?


In the top-down description of theoretical finance, a security S(t) follows a random walk described by an Itô-Wiener process (or Langevin equation) as

dS(t)/S(t) = φdt + σ R(t) dt —– (1)

where R(t) is a Gaussian white noise with zero mean and uncorrelated values at times t and t′: ⟨R(t)R(t′)⟩ = δ(t − t′). φ is the drift term, or expected return, while σ, a constant factor multiplying the random source R(t), is termed the volatility.
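As a sanity check on eq. (1), the random walk can be discretised and simulated directly: over a finite step dt the white noise R(t) contributes a Gaussian increment of standard deviation √dt, and the drift fixes the expected return, E[S_T] = S0 e^{φT}. A minimal sketch in Python (function names and parameter values are illustrative, not from the text):

```python
import math
import random

def simulate_final_prices(S0, phi, sigma, T, n_steps=50, n_paths=10000, seed=0):
    # Euler-Maruyama discretisation of eq. (1): over a step dt the white
    # noise R(t) contributes sigma * sqrt(dt) * z with z ~ N(0, 1).
    rng = random.Random(seed)
    dt = T / n_steps
    finals = []
    for _ in range(n_paths):
        S = S0
        for _ in range(n_steps):
            S *= 1.0 + phi * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        finals.append(S)
    return finals

finals = simulate_final_prices(S0=100.0, phi=0.05, sigma=0.2, T=1.0)
mean_ST = sum(finals) / len(finals)   # should approach S0 * exp(phi * T)
```

Averaged over many paths, the randomness washes out and only the drift survives, which is exactly what makes the hedging construction below necessary for any single path.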

As a consequence of Itô calculus, differentials of functions of random variables, say f(S, t), do not satisfy Leibniz’s rule, and for an Itô-Wiener process with drift (1) one easily obtains for the time derivative of f(S, t)

df/dt = ∂f/∂t + 1/2 σ2 S2 ∂2f/∂S2 + φS∂f/∂S + σS∂f/∂S R —– (2)

The Black-Scholes model is obtained by removing the randomness of the stochastic process shown above. This operation, termed hedging, allows one to remove the dependence on the white noise function R(t) by constructing a portfolio Π whose evolution is given by the short-term risk-free interest rate r

dΠ/dt = rΠ —– (3)

A possibility is to choose Π = f – S∂f/∂S. This is a portfolio in which an investor holds an option f and short sells an amount of the underlying security S proportional to ∂f/∂S. A combination of equations 2 and 3 yields the Black-Scholes equation

∂f/∂t + 1/2 σ2 S2 ∂2f/∂S2 + rS∂f/∂S = rf —– (4)

There are some assumptions underlying this result. We have assumed absence of arbitrage, a constant spot rate r, continuous rebalancing of the portfolio, no transaction costs and infinite divisibility of the stock. The quantum mechanical version of this equation is obtained by the change of variable S = ex, with x a real variable. This yields

∂f/∂t = HBSf —– (5)

with a Hamiltonian HBS given by

HBS = – σ2/2 ∂2/∂x2 + (1/2 σ2 – r) ∂/∂x + r —– (6)

Notice that one can introduce a quantum mechanical formalism and interpret the option price as a ket |f⟩ in the basis of |x⟩, the underlying security price. Using Dirac notation, we can formally reinterpret f(x, t) = ⟨x|f(t)⟩ as the projection of an abstract quantum state |f(t)⟩ on the chosen basis.

In this notation, the evolution of the option price can be formally written as |f, t⟩ = etH |f, 0⟩, for an appropriate Hamiltonian H.
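The formal evolution can be realised numerically: discretise x = log S on a grid, apply HBS from eq. (6) by finite differences, and step the terminal payoff backward from maturity. The sketch below (a European call under illustrative parameters, with hypothetical function names; the explicit Euler scheme requires dt ≲ dx²/σ² for stability) compares the result with the closed-form Black-Scholes price:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call_closed_form(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bs_call_fd(K, r, sigma, T, x_lo, x_hi, n_x=241, n_t=400):
    # Grid in x = log S; step backward via f(t - dt) = f - dt * H_BS f, with
    # H_BS = -sigma^2/2 d^2/dx^2 + (sigma^2/2 - r) d/dx + r, as in eq. (6).
    dx = (x_hi - x_lo) / (n_x - 1)
    dt = T / n_t
    xs = [x_lo + i * dx for i in range(n_x)]
    f = [max(math.exp(x) - K, 0.0) for x in xs]   # call payoff at maturity
    for step in range(n_t):
        tau = (step + 1) * dt                     # time to maturity after step
        g = f[:]
        for i in range(1, n_x - 1):
            d2f = (f[i + 1] - 2.0 * f[i] + f[i - 1]) / dx ** 2
            d1f = (f[i + 1] - f[i - 1]) / (2.0 * dx)
            Hf = -0.5 * sigma ** 2 * d2f + (0.5 * sigma ** 2 - r) * d1f + r * f[i]
            g[i] = f[i] - dt * Hf
        g[0] = 0.0                                          # deep out of the money
        g[-1] = math.exp(xs[-1]) - K * math.exp(-r * tau)   # deep in the money
        f = g
    return xs, f

x0 = math.log(100.0)
xs, f = bs_call_fd(K=100.0, r=0.05, sigma=0.2, T=1.0, x_lo=x0 - 3.0, x_hi=x0 + 3.0)
fd_price = f[120]   # grid point sitting exactly at x0 = log(100)
exact = bs_call_closed_form(100.0, 100.0, 0.05, 0.2, 1.0)
```

The backward stepping is just the discrete version of applying etH to the payoff state; an implicit (Crank-Nicolson) scheme would relax the stability restriction on dt.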

In general, when the volatility is itself a stochastic process, the description is driven by two correlated white noise functions R1 and R2

dS/dt = φS + S√V R1

dV/dt = λ + μV + ζVα R2 —– (7)

with V = σ2 and ⟨R1(t)R2(t′)⟩ = ρ δ(t − t′)

ρ being the correlation parameter. However, since the volatility is not traded in the market (the market is said to be incomplete), perfect hedging is not possible, and an additional term, the market price of volatility risk β(S, V, t, r), is introduced in this case. β can be modeled appropriately. In some models, a redefinition of the drift term μ in (7) for the evolution of the volatility is sufficient to hedge such more complex portfolios, which amounts to an implicit choice of β(S, V, t, r). We just quote the result for the evolution of an option price in the presence of stochastic volatility, which, in the Hamiltonian formulation, is given by

∂f/∂t = HMGf —– (8),


HMG = -(r – ey/2) ∂/∂x – (λe-y + μ – ζ2/2 e2y(α – 1)) ∂/∂y – ey/2 ∂2/∂x2 – ρζ ey(α – 1/2) ∂2/∂x∂y – ζ2/2 e2y(α – 1) ∂2/∂y2 + r —– (9)

which is nonlinear in the variables x = log(S) and y = log(V). For general values of the parameters, the best way to obtain the pricing of the options in this model is by a simulation of the path integral.
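Such a simulation can be sketched as a Monte Carlo evaluation of eqs. (7): an Euler-Maruyama discretisation of the two correlated processes, with the drift of S set to the risk-free rate r for pricing (a standard risk-neutral choice, assumed here rather than stated in the text). As a consistency check, switching off the volatility noise (ζ = 0, λ = μ = 0) must reproduce the Black-Scholes price with σ = √V0. All function names are illustrative:

```python
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d1 - sigma * math.sqrt(T))

def sv_call_mc(S0, K, r, T, V0, lam, mu, zeta, alpha, rho,
               n_paths=10000, n_steps=50, seed=7):
    # Euler-Maruyama for eqs. (7) in x = log S, with correlated Gaussians:
    # z2 = rho * z1 + sqrt(1 - rho^2) * z, reproducing <R1 R2> = rho * delta.
    rng = random.Random(seed)
    dt = T / n_steps
    payoff_sum = 0.0
    for _ in range(n_paths):
        x, V = math.log(S0), V0
        for _ in range(n_steps):
            z1 = rng.gauss(0.0, 1.0)
            z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
            x += (r - 0.5 * V) * dt + math.sqrt(max(V, 0.0) * dt) * z1
            V += (lam + mu * V) * dt + zeta * max(V, 0.0) ** alpha * math.sqrt(dt) * z2
        payoff_sum += max(math.exp(x) - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

mc_price = sv_call_mc(S0=100.0, K=100.0, r=0.05, T=1.0, V0=0.04,
                      lam=0.0, mu=0.0, zeta=0.0, alpha=0.5, rho=0.0)
ref = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

With ζ > 0 the same routine prices options under stochastic volatility, at the cost of Monte Carlo error decreasing only as 1/√n_paths.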

Comment on Purely Random Correlations of the Matrix, or Studying Noise in Neural Networks


In the presence of two-body interactions, the many-body Hamiltonian matrix elements vJαα′ of good total angular momentum J, in the shell-model basis |α⟩ generated by the mean field, can be expressed as follows:

vJαα′ = ∑J′ii′ cJαα′J′ii′ gJ′ii′ —– (4)

The summation runs over all combinations of the two-particle states |i⟩ coupled to the angular momentum J′ and connected by the two-body interaction g. The analogy of this structure to the one schematically captured by eq. (2) is evident. gJ′ii′ denote here the radial parts of the corresponding two-body matrix elements, while cJαα′J′ii′ globally represent elements of the angular-momentum recoupling geometry. gJ′ii′ are drawn from a Gaussian distribution, while the geometry expressed by cJαα′J′ii′ enters explicitly. Quasi-random coupling of individual spins results in the so-called geometric chaoticity, and thus the cJαα′J′ii′ coefficients are also Gaussian distributed. In this case, these two essentially random ingredients (gJ′ii′ and c) nevertheless lead to an order-of-magnitude larger separation of the ground state from the remaining states, as compared to the pure Random Matrix Theory (RMT) limit. Due to more severe selection rules, the effect of geometric chaoticity does not apply for J = 0. Consistently, the ground-state energy gap, measured relative to the average level spacing characteristic for a given J, is larger for J > 0 than for J = 0, and J > 0 ground states are also more orderly than those for J = 0, as can be quantified in terms of the information entropy.

Interestingly, such reductions of the dimensionality of the Hamiltonian matrix can also be seen locally in explicit calculations with realistic (non-random) nuclear interactions. A collective state, one which turns out to be coherent with some operator representing a physical external field, is always surrounded by a reduced density of states, i.e., it repels the other states. In all those cases, however, the global fluctuation characteristics remain largely consistent with the corresponding version of the random matrix ensemble.

Recently, a broad arena of applicability of random matrix theory has opened in connection with the most complex systems known to exist in the universe. Without doubt, the most complex is the human brain, together with those phenomena that result from its activity. From the physics point of view, the financial world, reflecting such activity, is of particular interest because its characteristics are quantified directly in terms of numbers, and a huge amount of electronically stored financial data is readily available. Access to the activity of a single brain is also possible, by detecting the electric or magnetic fields generated by the neuronal currents. With present-day techniques of electro- or magnetoencephalography, it is possible in this way to generate time series which resolve neuronal activity down to the scale of 1 ms.

One may debate over what is more complex, the human brain or the financial world, and there is no unique answer. It seems to us, however, that it is the financial world that is even more complex. After all, it involves the activity of many human brains, and it seems even less predictable due to more frequent changes between different modes of action. Noise is of course overwhelming in either of these systems, as can be inferred from the structure of the eigenspectra of the correlation matrices taken across different space areas at the same time, or across different time intervals. There always exist, however, several well-identifiable deviations which, with the help of the universal characteristics of random matrix theory and the methodology briefly reviewed above, can be classified as real correlations or collectivity. An easily identifiable gap between the corresponding eigenvalues of the correlation matrix and the bulk of its eigenspectrum plays the central role in this connection. The brain, when responding to sensory stimulation, develops larger gaps than the brain at rest. The correlation matrix formalism in its most general, asymmetric form also allows one to study time-delayed correlations, like those between the opposite hemispheres. The time delay at which the correlation is maximal (the time needed for information to be transmitted between the different sensory areas in the brain) is also associated with the appearance of one significantly larger eigenvalue. Similar effects appear to govern the formation of heteropolymeric biomolecules: the ones that nature makes use of are separated by an energy gap from the purely random sequences.
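The eigenvalue gap described above is easy to reproduce in a toy model: generate N time series sharing one collective mode, build their correlation matrix, and extract the largest eigenvalue by power iteration. For a common pairwise correlation a, it separates from the bulk at roughly 1 + (N − 1)a. A minimal sketch (all names and parameters are illustrative):

```python
import math
import random

def correlation_matrix(series):
    # Pearson correlation matrix of a list of equal-length time series.
    T = len(series[0])
    means = [sum(s) / T for s in series]
    stds = [math.sqrt(sum((v - m) ** 2 for v in s) / T) for s, m in zip(series, means)]
    N = len(series)
    return [[sum((series[i][t] - means[i]) * (series[j][t] - means[j]) for t in range(T))
             / (T * stds[i] * stds[j]) for j in range(N)] for i in range(N)]

def largest_eigenvalue(C, iters=200):
    # Power iteration; C is symmetric positive semi-definite.
    N = len(C)
    v = [1.0] * N
    lam = 0.0
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(N)) for i in range(N)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

rng = random.Random(3)
N, T, a = 20, 500, 0.3
# Each series mixes one shared "collective" signal with independent noise.
common = [rng.gauss(0, 1) for _ in range(T)]
series = [[math.sqrt(a) * common[t] + math.sqrt(1 - a) * rng.gauss(0, 1)
           for t in range(T)] for _ in range(N)]
C = correlation_matrix(series)
lam1 = largest_eigenvalue(C)   # roughly 1 + (N - 1) * a for these parameters
```

The remaining N − 1 eigenvalues stay clustered in a noise bulk, so the single collective mode is read off directly from the gap, just as for the market mode or the stimulated brain.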