# Black Hole Entropy in terms of Mass. Note Quote.

If M-theory is compactified on a d-torus it becomes a D = 11 – d dimensional theory with Newton constant

G_D = G_11/L^d = l_11^9/L^d —– (1)

A Schwarzschild black hole of mass M has a radius

R_s ~ M^(1/(D-3)) G_D^(1/(D-3)) —– (2)

According to Bekenstein and Hawking the entropy of such a black hole is

S = Area/(4G_D) —– (3)

where Area refers to the D – 2 dimensional hypervolume of the horizon:

Area ~ R_s^(D-2) —– (4)

Thus

S ~ (1/G_D) (M G_D)^((D-2)/(D-3)) ~ M^((D-2)/(D-3)) G_D^(1/(D-3)) —– (5)
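As a quick consistency check, the scaling in (5) follows from (2)–(4) by pure exponent algebra. A sketch with sympy, using the symbols of the text and dropping all numerical prefactors beyond the 1/4:

```python
import sympy as sp

# Symbols from (2)-(5): mass M, Newton constant G_D, spacetime dimension D
M, G, D = sp.symbols('M G D', positive=True)

# (2): R_s ~ (M G_D)^(1/(D-3))
Rs = (M * G) ** (1 / (D - 3))

# (3)-(4): S = Area/(4 G_D), with Area ~ R_s^(D-2)
S = Rs ** (D - 2) / (4 * G)

# (5): S ~ M^((D-2)/(D-3)) G_D^(1/(D-3))
S_claimed = M ** ((D - 2) / (D - 3)) * G ** (1 / (D - 3)) / 4

# check the exponent algebra in a few dimensions
for Dval in (5, 8, 11):
    assert sp.simplify((S - S_claimed).subs(D, Dval)) == 0
```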

From the traditional relativist's point of view, black holes are extremely mysterious objects. They are described by unique classical solutions of Einstein's equations. All perturbations quickly die away, leaving a featureless "bald" black hole with "no hair". On the other hand, Bekenstein and Hawking have given persuasive arguments that black holes possess thermodynamic entropy and temperature, which point to the existence of a hidden microstructure. In particular, entropy generally represents the counting of hidden microstates which are invisible in a coarse-grained description. An ultimate exact treatment of objects in matrix theory requires a passage to the infinite-N limit. Unfortunately this limit is extremely difficult. For the study of Schwarzschild black holes, the optimal value of N (the value which is large enough to obtain an adequate description without involving many redundant variables) is of order the entropy, S, of the black hole.

Considering the minimum such value for N, we have

N_min(S) = M R_s = M (M G_D)^(1/(D-3)) = S —– (6)
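That N_min in (6) reproduces the entropy scaling of (5) is again a one-line exponent identity, M · M^(1/(D-3)) = M^((D-2)/(D-3)). A sympy sketch, prefactors dropped:

```python
import sympy as sp

M, G, D = sp.symbols('M G D', positive=True)

# (6): N_min = M * R_s, with R_s from (2)
N_min = M * (M * G) ** (1 / (D - 3))

# entropy scaling from (5), numerical prefactor dropped
S = M ** ((D - 2) / (D - 3)) * G ** (1 / (D - 3))

# N_min and S scale identically in every dimension
for Dval in (5, 8, 11):
    assert sp.simplify((N_min - S).subs(D, Dval)) == 0
```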

We see that the value of N_min in every dimension is proportional to the entropy of the black hole. The thermodynamic properties of super Yang–Mills theory can be estimated by standard arguments only if S ≥ N. Thus we are caught between conflicting requirements. For N >> S we do not have the tools to compute. For N << S the black hole will not fit into the compact geometry. Therefore we are forced to study the black hole using N = N_min = S.

Matrix theory compactified on a d-torus is described by a d + 1 dimensional super Yang–Mills theory with 16 real supercharges. For d = 3 we are dealing with a very well known and special quantum field theory. In the standard 3+1 dimensional terminology it is U(N) Yang–Mills theory with 4 supersymmetries and with all fields in the adjoint representation. This theory is very special in that, in addition to having electric/magnetic duality, it enjoys another property which makes it especially easy to analyze: it is exactly scale invariant.

Let us begin by considering it in the thermodynamic limit. The theory is characterized by a "moduli" space defined by the expectation values of the scalar fields φ. Since the φ also represent the positions of the original D0-branes in the noncompact directions, we choose them at the origin. This represents the fact that we are considering a single compact object – the black hole – and not several disconnected pieces.

The equation of state of the system is defined by giving the entropy S as a function of temperature. Since entropy is extensive, it is proportional to the volume Σ_3 of the dual torus. Furthermore, the scale invariance ensures that S has the form

S = constant T^3 Σ_3 —– (7)

The constant in this equation counts the number of degrees of freedom. For vanishing coupling constant, the theory is described by free quanta in the adjoint of U(N). This means that the number of degrees of freedom is ~ N^2.

From the standard thermodynamic relation,

dE = TdS —– (8)

the energy of the system is

E ~ N^2 T^4 Σ_3 —– (9)

In order to relate entropy and mass of the black hole, let us eliminate temperature from (7) and (9).

S = N^2 Σ_3 (E/(N^2 Σ_3))^(3/4) —– (10)
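Equation (10) follows by inverting (9) for the temperature and substituting into (7). A sympy sketch, with all numerical constants set to one:

```python
import sympy as sp

# Symbols from (7) and (9): temperature T, dual-torus volume Sigma_3,
# number of D0-branes N, energy E (numerical constants set to one)
T, Sigma, N, E = sp.symbols('T Sigma N E', positive=True)

S_of_T = N**2 * T**3 * Sigma            # (7)
E_of_T = N**2 * T**4 * Sigma            # (9), from integrating dE = T dS

# invert (9): T = (E/(N^2 Sigma_3))^(1/4), then substitute into (7)
T_of_E = (E / (N**2 * Sigma)) ** sp.Rational(1, 4)
assert sp.simplify(E_of_T.subs(T, T_of_E) - E) == 0   # really inverts (9)

S_of_E = S_of_T.subs(T, T_of_E)

# (10): S = N^2 Sigma_3 (E/(N^2 Sigma_3))^(3/4)
S_expected = N**2 * Sigma * (E / (N**2 * Sigma)) ** sp.Rational(3, 4)
assert sp.simplify(S_of_E - S_expected) == 0
```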

Now the energy of the quantum field theory is identified with the light cone energy of the system of D0-branes forming the black hole. That is

E ≈ M^2 R/N —– (11)

Plugging (11) into (10)

S = N^2 Σ_3 (M^2 R/(N^2 Σ_3))^(3/4) —– (12)

This makes sense only when N << S; when N >> S, computing the equation of state is slightly trickier. At N ~ S, (12) is precisely the correct form for the black hole entropy in terms of the mass.

# Superfluid He-3. Thought of the Day 130.0

At higher temperatures 3He is a gas, while below a temperature of 3 K – due to van der Waals forces – 3He is a normal liquid with all the symmetries which a condensed matter system can have: translation, gauge symmetry U(1) and two SO(3) symmetries, for the spin (SO_S(3)) and orbital (SO_L(3)) rotations. At temperatures below 100 mK, 3He behaves as a strongly interacting Fermi liquid. Its physical properties are well described by Landau's theory. The quasiparticles of 3He (i.e. 3He atoms "dressed" in their mutual interactions) have spin equal to 1/2 and, similarly to electrons, they can create Cooper pairs as well. However, unlike electrons in a metal, 3He is a liquid without a lattice, and the electron-phonon interaction responsible for superconductivity cannot be invoked here. As the 3He quasiparticles have spin, the magnetic interaction between spins grows as the temperature falls until, at a certain temperature, Cooper pairs are created – coupled pairs of 3He quasiparticles – and the normal 3He liquid becomes a superfluid. The Cooper pairs produce the superfluid component and the rest, the unpaired 3He quasiparticles, generate the normal component (N-phase).

The physical picture of superfluid 3He is more complicated than that of superconducting electrons. First, the 3He quasiparticles are bare atoms and, on creating a Cooper pair, they have to rotate around their common center of mass, generating an orbital angular momentum of the pair (L = 1). Secondly, the spin of the Cooper pair is equal to one (S = 1), thus superfluid 3He has magnetic properties. Thirdly, the orbital and spin angular momenta of the pair are coupled via a dipole-dipole interaction.

It is evident that the phase transition of 3He into the superfluid state is accompanied by spontaneous symmetry breaking: orbital, spin and gauge, SO_L(3) × SO_S(3) × U(1), except for the translational symmetry, as superfluid 3He is still a liquid. Finally, an energy gap ∆ appears in the energy spectrum, separating the Cooper pairs (the ground state) from the unpaired quasiparticles – the Fermi excitations.

In superfluid 3He the density of Fermi excitations decreases upon further cooling. For temperatures below about 0.25 T_c (where T_c is the superfluid transition temperature), the density of the Fermi excitations is so low that the excitations can be regarded as a non-interacting gas, because almost all of them are paired and occupy the ground state. Therefore, at these very low temperatures, the superfluid phases of helium-3 represent well-defined models of quantum vacua, which allows us to study the influence of various external forces on the ground state, and on excitations from this state as well.

The ground state of superfluid 3He is formed by the Cooper pairs having both spin (S = 1) and orbital momentum (L = 1). As a consequence of this spin-triplet, orbital p-wave pairing, the order parameter (or wave function) is far more complicated than that of conventional superconductors and of superfluid 4He. The order parameter of superfluid 3He joins two spaces, the orbital (or k) space and the spin space, and can be expressed as:

Ψ(k) = Ψ↑↑(kˆ)|↑↑⟩ + Ψ↓↓(kˆ)|↓↓⟩ + √2Ψ↑↓(kˆ)(|↑↓⟩ + |↓↑⟩) —– (1)

where kˆ is a unit vector in k space defining a position on the Fermi surface, and Ψ↑↑(kˆ), Ψ↓↓(kˆ) and Ψ↑↓(kˆ) are the amplitudes of the spin substates determined by their projections |↑↑⟩, |↓↓⟩ and (|↑↓⟩ + |↓↑⟩) on a quantization axis z.

The order parameter is more often written in a vector representation, as a vector d(k) in spin space. For any orientation of k on the Fermi surface, d(k) points in the direction for which the Cooper pairs have zero spin projection. Moreover, the amplitude of the superfluid condensate at the same point is defined by |d(k)|^2 = (1/2) tr(ΨΨ†). The vector form of the order parameter d(k), in components, can be written as:

d_ν(k) = ∑_μ A_νμ k_μ —– (2)

where ν (1, 2, 3) are orthogonal directions in spin space and μ (x, y, z) are those in orbital space. The matrix components A_νμ are complex and, theoretically, each of them represents a possible superfluid phase of 3He. Experimentally, however, only three are stable. Looking at the phase diagram of 3He we can see the presence of two main superfluid phases: the A-phase and the B-phase. While the B-phase consists of all three spin components, the A-phase does not have the component (|↑↓⟩ + |↓↑⟩). There is also a narrow region of the A1 superfluid phase, which exists only at higher pressures and temperatures and in nonzero magnetic field. The A1-phase has only one spin component, |↑↑⟩. The phase transition from the N-phase to the A- or B-phase is a second order transition, while the phase transition between the superfluid A- and B-phases is of first order.

The B-phase occupies the low field region and is stable down to the lowest temperatures. In zero field, the B-phase is a pure manifestation of p-wave superfluidity. Having equal numbers of all possible spin and angular momentum projections, the energy gap separating the ground state from the excitations is isotropic in k space.

The A-phase is preferable at higher pressures and temperatures in zero field. In the limit T → 0 K, the A-phase can exist at higher magnetic fields (above 340 mT) at zero pressure, and the critical field needed for creation of the A-phase rises as the pressure increases. In this phase, all Cooper pairs have orbital momenta oriented in a common direction defined by the vector lˆ, which is the direction in which the energy gap is reduced to zero. This results in a remarkable difference between these superfluid phases. The B-phase has an isotropic gap, while the A-phase energy spectrum contains two Fermi points, i.e. points with zero energy gap. The difference in gap structure leads to different thermodynamic properties of the quasiparticle excitations in the limit T → 0 K. The density of excitations in the B-phase falls exponentially with temperature as exp(−∆/k_B T), where k_B is the Boltzmann constant. At the lowest temperatures their density is so low that the excitations can be regarded as a non-interacting gas with a mean free path of the order of kilometers. On the other hand, in the A-phase the Fermi points (or nodes) are far more populated with quasiparticle excitations. The orientation of the nodes along the lˆ direction makes the A-phase excitations almost perfectly one-dimensional. The presence of the nodes in the energy spectrum leads to a T^3 temperature dependence of the density of excitations and of the entropy. As a result, as T → 0 K, the specific heat of the A-phase is far greater than that of the B-phase. In this limit, the A-phase represents a model system for the vacuum of the Standard Model and the B-phase a model system for the Dirac vacuum.
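The isotropic B-phase gap and the two A-phase nodes along ±lˆ can be made concrete with the order parameter matrix A_νμ of (2). Below is a minimal numerical sketch (numpy), with fixed illustrative orientations – dˆ = ẑ, mˆ = x̂, nˆ = ŷ, hence lˆ = mˆ × nˆ = ẑ – and the gap amplitude set to Δ = 1:

```python
import numpy as np

Delta = 1.0

# B-phase, one fixed orientation: A_numu = Delta * delta_numu, so d(k) = Delta*k
A_B = Delta * np.eye(3)

# A-phase: A_numu = Delta * dhat_nu * (mhat + i*nhat)_mu with dhat = z,
# mhat = x, nhat = y, hence lhat = mhat x nhat = z
dhat = np.array([0.0, 0.0, 1.0])
m_plus_in = np.array([1.0, 1j, 0.0])
A_A = Delta * np.outer(dhat, m_plus_in)

def gap(A, k):
    """|d(k)| for a unit vector k on the Fermi surface, d_nu = sum_mu A_numu k_mu."""
    d = A @ k
    return float(np.sqrt(np.sum(np.abs(d) ** 2)))

# sample unit vectors on the Fermi sphere
rng = np.random.default_rng(0)
ks = rng.normal(size=(1000, 3))
ks /= np.linalg.norm(ks, axis=1, keepdims=True)

gaps_B = np.array([gap(A_B, k) for k in ks])
gaps_A = np.array([gap(A_A, k) for k in ks])

assert np.allclose(gaps_B, Delta)                    # B-phase: isotropic gap
assert gap(A_A, np.array([0.0, 0.0, 1.0])) < 1e-12   # A-phase: node along lhat
```

For the A-phase the gap comes out as Δ·sin θ relative to lˆ, which is what produces the T^3 density of excitations quoted above.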

In experiments with the superfluid 3He phases, the application of different external forces can excite the collective modes of the order parameter, representing so-called Bose excitations, while the Fermi excitations are responsible for the energy dissipation. The coexistence and mutual interactions of these excitations in the limit T → 0 K (i.e. in the limit of low energies) can be described by quantum field theory, where the Bose and Fermi excitations represent Bose and Fermi quantum fields. Thus, 3He has a much broader impact, offering the possibility of experimentally investigating quantum field and cosmological theories via their analogies with the superfluid phases of 3He.

# Philosophy of Dimensions: M-Theory. Thought of the Day 85.0

Superstrings provided a perturbatively finite theory of gravity which, after compactification down to 3+1 dimensions, seemed potentially capable of explaining the strong, weak and electromagnetic forces of the Standard Model, including the required chiral representations of quarks and leptons. However, there appeared to be not one but five seemingly different but mathematically consistent superstring theories: the E8 × E8 heterotic string, the SO(32) heterotic string, the SO(32) Type I string, and the Type IIA and IIB strings. Each of these theories corresponded to a different way in which fermionic degrees of freedom could be added to the string worldsheet.

Supersymmetry constrains the upper limit on the number of spacetime dimensions to be eleven. Why, then, do superstring theories stop at ten? In fact, before the "first string revolution" of the mid-1980s, many physicists sought superunification in eleven-dimensional supergravity. Solutions to this most primitive supergravity theory include the elementary supermembrane and its dual partner, the solitonic superfivebrane. These are supersymmetric objects extended over two and five spatial dimensions, respectively. This brings to mind another question: why do superstring theories generalize zero-dimensional point particles only to one-dimensional strings, rather than to p-dimensional objects?

During the “second superstring revolution” of the mid-nineties it was found that, in addition to the 1+1-dimensional string solutions, string theory contains soliton-like Dirichlet branes. These Dp-branes have p + 1-dimensional worldvolumes, which are hyperplanes in 9 + 1-dimensional spacetime on which strings are allowed to end. If a closed string collides with a D-brane, it can turn into an open string whose ends move along the D-brane. The end points of such an open string satisfy conventional free boundary conditions along the worldvolume of the D-brane, and fixed (Dirichlet) boundary conditions are obeyed in the 9 − p dimensions transverse to the D-brane.

D-branes make it possible to probe string theories non-perturbatively, i.e., when the interactions are no longer assumed to be weak. This more complete picture makes it evident that the different string theories are actually related via a network of “dualities.” T-dualities relate two different string theories by interchanging winding modes and Kaluza-Klein states, via R → α′/R. For example, Type IIA string theory compactified on a circle of radius R is equivalent to Type IIB string theory compactified on a circle of radius 1/R. We have a similar relation between E8 × E8 and SO(32) heterotic string theories. While T-dualities remain manifest at weak-coupling, S-dualities are less well-established strong/weak-coupling relationships. For example, the SO(32) heterotic string is believed to be S-dual to the SO(32) Type I string, while the Type IIB string is self-S-dual. There is a duality of dualities, in which the T-dual of one theory is the S-dual of another. Compactification on various manifolds often leads to dualities. The heterotic string compactified on a six-dimensional torus T6 is believed to be self-S-dual. Also, the heterotic string on T4 is dual to the type II string on four-dimensional K3. The heterotic string on T6 is dual to the Type II string on a Calabi-Yau manifold. The Type IIA string on a Calabi-Yau manifold is dual to the Type IIB string on the mirror Calabi-Yau manifold.
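The winding/momentum interchange can be checked directly on the closed-string spectrum. With oscillator and level-matching contributions dropped, the compactified mass formula is M² = (n/R)² + (wR/α′)², and the sketch below (α′ set to 1) verifies its invariance under R → α′/R together with n ↔ w:

```python
# Closed-string mass-squared on a circle of radius R (oscillators dropped):
#     M^2 = (n/R)^2 + (w*R/alpha')^2
# n = Kaluza-Klein momentum mode, w = winding mode. T-duality sends
# R -> alpha'/R while interchanging n <-> w.

alpha_p = 1.0  # alpha' in string units

def mass_sq(n: int, w: int, R: float) -> float:
    return (n / R) ** 2 + (w * R / alpha_p) ** 2

R = 2.7
for n in range(-3, 4):
    for w in range(-3, 4):
        assert abs(mass_sq(n, w, R) - mass_sq(w, n, alpha_p / R)) < 1e-12
```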

This led to the discovery that all five string theories are actually different sectors of an eleven-dimensional non-perturbative theory, known as M-theory. When M-theory is compactified on a circle S^1 of radius R_11, it leads to the Type IIA string, with string coupling constant g_s = R_11^(3/2) (in eleven-dimensional Planck units). Thus, the illusion that this string theory is ten-dimensional is a remnant of weak-coupling perturbative methods. Similarly, if M-theory is compactified on a line segment S^1/Z_2, then the E8 × E8 heterotic string is recovered.

Just as a given string theory has a corresponding supergravity in its low-energy limit, eleven-dimensional supergravity is the low-energy limit of M-theory. Since we do not yet know what the full M-theory actually is, many different names have been attributed to the “M,” including Magical, Mystery, Matrix, and Membrane! Whenever we refer to “M-theory,” we mean the theory which subsumes all five string theories and whose low-energy limit is eleven-dimensional supergravity. We now have an adequate framework with which to understand a wealth of non-perturbative phenomena. For example, electric-magnetic duality in D = 4 is a consequence of string-string duality in D = 6, which in turn is the result of membrane-fivebrane duality in D = 11. Furthermore, the exact electric-magnetic duality has been extended to an effective duality of non-conformal N = 2 Seiberg-Witten theory, which can be derived from M-theory. In fact, it seems that all supersymmetric quantum field theories with any gauge group could have a geometrical interpretation through M-theory, as worldvolume fields propagating on a common intersection of stacks of p-branes wrapped around various cycles of compactified manifolds.

In addition, while perturbative string theory has vacuum degeneracy problems due to the billions of Calabi-Yau vacua, the non-perturbative effects of M-theory lead to smooth transitions from one Calabi-Yau manifold to another. The question to ask is then not why we live in one topology, but rather why we live in a particular corner of the unique topology. M-theory might offer a dynamical explanation of this. While supersymmetry ensures that the high-energy values of the Standard Model coupling constants meet at a common value, consistent with the idea of grand unification, the gravitational coupling constant just misses this meeting point. In fact, M-theory may resolve long-standing cosmological and quantum gravitational problems. For example, M-theory accounts for a microscopic description of black holes by supplying the necessary non-perturbative components, namely p-branes. This solves the problem of counting black hole entropy by internal degrees of freedom.

# Something Out of Almost Nothing. Drunken Risibility.

Kant’s first antinomy makes the error of the excluded third option, i.e. it is not impossible that the universe could have both a beginning and an eternal past. If some kind of metaphysical realism is true, including an observer-independent and relational time, then a solution of the antinomy is conceivable. It is based on the distinction between a microscopic and a macroscopic time scale. Only the latter is characterized by an asymmetry of nature under a reversal of time, i.e. the property of having a global (coarse-grained) evolution – an arrow of time – or many arrows, if they are independent from each other. Thus, the macroscopic scale is by definition temporally directed – otherwise it would not exist.

On the microscopic scale, however, only local, statistically distributed events without dynamical trends, i.e. a global time-evolution or an increase of entropy density, exist. This is the case if one or both of the following conditions are satisfied: First, if the system is in thermodynamic equilibrium (e.g. there is degeneracy). And/or second, if the system is in an extremely simple ground state or meta-stable state. (Meta-stable states have a local, but not a global minimum in their potential landscape and, hence, they can decay; ground states might also change due to quantum uncertainty, i.e. due to local tunneling events.) Some still speculative theories of quantum gravity permit the assumption of such a global, macroscopically time-less ground state (e.g. quantum or string vacuum, spin networks, twistors). Due to accidental fluctuations, which exceed a certain threshold value, universes can emerge out of that state. Due to some also speculative physical mechanism (like cosmic inflation) they acquire – and, thus, are characterized by – directed non-equilibrium dynamics, specific initial conditions, and, hence, an arrow of time.

It is a matter of debate whether such an arrow of time is

1) irreducible, i.e. an essential property of time,

2) governed by some unknown fundamental and not only phenomenological law,

3) the effect of specific initial conditions or

4) of consciousness (if time is in some sense subjective), or

5) even an illusion.

Many physicists favour special initial conditions, though there is no consensus about their nature and form. But in the context at issue it is sufficient to note that such a macroscopic global time-direction is the main ingredient of Kant’s first antinomy, for the question is whether this arrow has a beginning or not.

If time's arrow were inevitably subjective, ontologically irreducible, fundamental and not merely a kind of illusion – as it is if some form of metaphysical idealism is true – then physical cosmology about a time before time would be mistaken or quite irrelevant. However, if we do not want to neglect an observer-independent physical reality and adopt solipsism or other forms of idealism – and there are strong arguments in favor of some form of metaphysical realism – Kant's rejection seems hasty. Furthermore, if a Kantian is not willing to give up some kind of metaphysical realism, namely the belief in a "Ding an sich", a thing in itself – and some philosophers, the German idealists for instance, actually insisted that this is superfluous – he has to admit that time is a subjective illusion or that there is a dualism between an objective timeless world and a subjective arrow of time. Contrary to Kant's thoughts, there are reasons to believe that it is possible, at least conceptually, that time both has a beginning – in the macroscopic sense, with an arrow – and is eternal – in the microscopic notion of a steady state with statistical fluctuations.

Is there also some physical support for this proposal?

Surprisingly, quantum cosmology offers a possibility that the arrow has a beginning and that it nevertheless emerged out of an eternal state without any macroscopic time-direction. (Note that there are some parallels to a theistic conception of the creation of the world here, e.g. in the Augustinian tradition which claims that time emerged together with the universe out of a timeless God; but such a cosmological argument is quite controversial, especially in a modern form.) So this possible overcoming of the first antinomy is not only a philosophical conceivability but is already motivated by modern physics. At least some scenarios of quantum cosmology, quantum geometry/loop quantum gravity, and string cosmology can be interpreted as examples of such a local beginning of our macroscopic time out of a state with microscopic time but an eternal, global macroscopic timelessness.

To put it in a more general, but abstract framework and get a sketchy illustration, consider the figure. Physical dynamics can be described using “potential landscapes” of fields. For simplicity, here only the variable potential (or energy density) of a single field is shown. To illustrate the dynamics, one can imagine a ball moving along the potential landscape. Depressions stand for states which are stable, at least temporarily. Due to quantum effects, the ball can “jump over” or “tunnel through” the hills. The deepest depression represents the ground state.

In the common theories the state of the universe – the product of all its matter and energy fields, roughly speaking – evolves out of a metastable "false vacuum" into a "true vacuum", which is a state of lower energy (potential). There might exist many (perhaps even infinitely many) true vacua, which would correspond to universes with different constants or laws of nature. It is more plausible to start with a ground state which is the minimum of what physically can exist. According to this view an absolute nothingness is impossible. There is something rather than nothing because something cannot come out of absolutely nothing, and something does obviously exist. Thus, something can only change, and this change might be described with physical laws. Hence, the ground state is almost "nothing", but can become thoroughly "something". Possibly, our universe – and, independently of this, many others, probably most of them having different physical properties – arose from such a phase transition out of a quasi-atemporal quantum vacuum (and perhaps became completely disconnected). Tunneling back might be prevented by the exponential expansion of this brand-new space. Because of this cosmic inflation the universe not only became gigantic; simultaneously the potential hill broadened enormously and became (almost) impassable. This preserves the universe from relapsing into its non-existence. On the other hand, if there is no physical mechanism to prevent the tunneling back, or at least to make it very improbable, there is still another option: if infinitely many universes originated, some of them could be long-lived for statistical reasons alone. But this possibility is less predictive and is therefore an inferior kind of explanation for the absence of tunneling back.
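The "ball in a potential landscape" picture can be made concrete with a toy one-dimensional potential (the tilted quartic below is an arbitrary illustrative choice, not taken from any specific theory): a tilted double well has a shallow depression (the metastable false vacuum) and a deeper one (the true vacuum), separated by a hill.

```python
import numpy as np

# Toy landscape: a tilted double well. The tilt (+0.3*x) makes one depression
# shallow (metastable "false vacuum") and the other deeper ("true vacuum").
def V(x):
    return x**4 - 2 * x**2 + 0.3 * x

x = np.linspace(-2.0, 2.0, 40001)
v = V(x)

# interior local minima: grid points lower than both neighbours
is_min = (v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])
minima = x[1:-1][is_min]
assert len(minima) == 2                      # two depressions in the landscape

false_vac = minima[np.argmax(V(minima))]     # shallower minimum: can decay
true_vac = minima[np.argmin(V(minima))]      # deepest minimum: ground state

# barrier the "ball" must jump over or tunnel through, from the false vacuum
between = (x > minima.min()) & (x < minima.max())
barrier = v[between].max() - V(false_vac)
assert barrier > 0 and V(true_vac) < V(false_vac)
```

Inflation, in this cartoon, corresponds to the barrier between the two depressions becoming enormously broad after the transition.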

Another crucial question remains even if universes could come into being out of fluctuations of (or in) a primitive substrate, i.e. some patterns of superposition of fields with local overdensities of energy: Is spacetime part of this primordial stuff or is it also a product of it? Or, more specifically: Does such a primordial quantum vacuum have a semi-classical spacetime structure, or is it made up of more fundamental entities? Unique-universe accounts, especially the modified Eddington models – the soft bang/emergent universe – presuppose some kind of semi-classical spacetime. The same is true for some multiverse accounts describing our universe, where Minkowski space, a tiny closed, finite space or the infinite de Sitter space is assumed. The same goes for string-theory-inspired models like the pre-big bang account, because string and M-theory are still formulated in a background-dependent way, i.e. they require the existence of a semi-classical spacetime. A different approach is the assumption of "building blocks" of spacetime, a kind of pregeometry; examples are the twistor approach of Roger Penrose and the cellular automata approach of Stephen Wolfram. The most elaborated account in this line of reasoning is quantum geometry (loop quantum gravity), where "atoms of space and time" underlie everything.

Though the question whether semi-classical spacetime is fundamental or not is crucial, an answer might nevertheless be neutral with respect to the micro-/macrotime distinction. In both kinds of quantum vacuum accounts the macroscopic time scale is not present. And the microscopic time scale in some respect has to be there, because fluctuations represent change (or are manifestations of change). This change, reversible and relationally conceived, does not occur "within" microtime but constitutes it. Out of a total stasis nothing new and different could emerge, because an uncertainty principle – fundamental for all quantum fluctuations – would not be realized. In an almost, but not completely, static quantum vacuum, however, macroscopically nothing changes either, but there are microscopic fluctuations.

The pseudo-beginning of our universe (and probably infinitely many others) is a viable alternative both to initial and past-eternal cosmologies and philosophically very significant. Note that this kind of solution bears some resemblance to a possibility of avoiding the spatial part of Kant’s first antinomy, i.e. his claimed proof of both an infinite space without limits and a finite, limited space: The theory of general relativity describes what was considered logically inconceivable before, namely that there could be universes with finite, but unlimited space, i.e. this part of the antinomy also makes the error of the excluded third option. This offers a middle course between the Scylla of a mysterious, secularized creatio ex nihilo, and the Charybdis of an equally inexplicable eternity of the world.

In this context it is also possible to defuse some explanatory problems of the origin of “something” (or “everything”) out of “nothing” as well as a – merely assumable, but never provable – eternal cosmos or even an infinitely often recurring universe. But that does not offer a final explanation or a sufficient reason, and it cannot eliminate the ultimate contingency of the world.

# Universal Turing Machine: Algorithmic Halting

A natural number x will be identified with the x'th binary string in lexicographic order (Λ, 0, 1, 00, 01, 10, 11, 000, …), and a set X of natural numbers will be identified with its characteristic sequence, and with the real number between 0 and 1 having that sequence as its dyadic expansion. The length of a string x will be denoted |x|, the n'th bit of an infinite sequence X will be denoted X(n), and the initial n bits of X will be denoted X_n. Concatenation of strings p and q will be denoted pq.
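The identification of natural numbers with binary strings in the order Λ, 0, 1, 00, 01, … is the standard bijection x ↦ (binary expansion of x + 1 with the leading 1 removed). A short sketch:

```python
def nat_to_string(x: int) -> str:
    """The x'th binary string in the order Lambda, 0, 1, 00, 01, 10, 11, 000, ...

    Standard bijection: write x + 1 in binary and drop the leading 1.
    """
    return bin(x + 1)[3:]          # bin() yields '0b1...'; strip '0b' and the 1

def string_to_nat(s: str) -> int:
    """Inverse map: the position of string s in the enumeration."""
    return int('1' + s, 2) - 1

assert [nat_to_string(x) for x in range(8)] == ['', '0', '1', '00', '01', '10', '11', '000']
assert all(string_to_nat(nat_to_string(x)) == x for x in range(1000))
```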

We now define the information content (and later the depth) of finite strings using a universal Turing machine U. A universal Turing machine may be viewed as a partial recursive function of two arguments. It is universal in the sense that by varying one argument ("program") any partial recursive function of the other argument ("data") can be obtained. In the usual machine formats, program, data and output are all finite strings, or, equivalently, natural numbers. However, it is not possible to take a uniformly weighted average over a countably infinite set of programs. Chaitin's universal machine instead has two tapes: a read-only one-way tape containing the infinite program; and an ordinary two-way read/write tape, which is used for data input, intermediate work, and output, all of which are finite strings. Our machine differs from Chaitin's in having some additional auxiliary storage (e.g. another read/write tape) which is needed only to improve the time efficiency of simulations.

We consider only terminating computations, during which, of course, only a finite portion of the program tape can be read. Therefore, the machine's behavior can still be described by a partial recursive function of two string arguments U(p, w), if we use the first argument to represent that portion of the program that is actually read in the course of a particular computation. The expression U(p, w) = x will be used to indicate that the U machine, started with any infinite sequence beginning with p on its program tape and the finite string w on its data tape, performs a halting computation which reads exactly the initial portion p of the program, and leaves output data x on the data tape at the end of the computation. In all other cases (reading less than p, more than p, or failing to halt), the function U(p, w) is undefined. Wherever U(p, w) is defined, we say that p is a self-delimiting program to compute x from w, and we use T(p, w) to represent the time (machine cycles) of the computation. Often we will consider computations without input data; in that case we abbreviate U(p, Λ) and T(p, Λ) as U(p) and T(p) respectively.

The self-delimiting convention for the program tape forces the domain of U and T, for each data input w, to be a prefix set, that is, a set of strings no member of which is the extension of any other member. Any prefix set S obeys the Kraft inequality

∑_{p∈S} 2^(−|p|) ≤ 1
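The Kraft inequality can be checked mechanically for any finite prefix set. A small sketch, using as example the leaf set of a complete binary code tree, which saturates the bound:

```python
def is_prefix_set(S):
    """True iff no member of S is a proper extension of another member."""
    return not any(a != b and b.startswith(a) for a in S for b in S)

def kraft_sum(S):
    """Sum over p in S of 2^(-|p|)."""
    return sum(2.0 ** (-len(p)) for p in S)

# leaves of a complete binary code tree: a prefix set saturating the bound
S = {'0', '10', '110', '111'}
assert is_prefix_set(S) and kraft_sum(S) == 1.0

# adding an extension of a member destroys the prefix property
assert not is_prefix_set(S | {'101'})   # '10' is a prefix of '101'
```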

Besides being self-delimiting with regard to its program tape, the U machine must be efficiently universal in the sense of being able to simulate any other machine of its kind (Turing machines with self-delimiting program tape) with at most an additive constant increase in program size and a linear increase in execution time.

Without loss of generality we assume that there exists for the U machine a constant prefix r which has the effect of stacking an instruction to restart the computation when it would otherwise end. This gives the machine the ability to concatenate programs to run consecutively: if U(p, w) = x and U(q, x) = y, then U(rpq, w) = y. Moreover, this concatenation should be efficient in the sense that T (rpq, w) should exceed T (p, w) + T (q, x) by at most O(1). This efficiency of running concatenated programs can be realized with the help of the auxiliary storage to stack the restart instructions.
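The role of self-delimitation in running concatenated programs can be illustrated with a toy encoding (the unary-length header below is an illustrative convention, not the machine's actual format): because the reader always knows where one program ends, the tape rpq can be consumed one program at a time, which is exactly what the restart prefix r exploits.

```python
def encode(payload: str) -> str:
    # Self-delimiting toy format: '1'*k + '0' followed by k payload bits.
    # No encoding is a proper prefix of another, so the set is a prefix set.
    return '1' * len(payload) + '0' + payload

def read_one(tape: str):
    """Read exactly one self-delimiting program from the front of the tape."""
    k = tape.index('0')                     # k leading ones announce the length
    return tape[k + 1 : 2 * k + 1], tape[2 * k + 1 :]

p, q = encode('101'), encode('01')
tape = p + q + '0110'                        # two programs, then unread bits

pay1, rest = read_one(tape)
pay2, rest = read_one(rest)
assert (pay1, pay2, rest) == ('101', '01', '0110')
```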

Sometimes we will generalize U to have access to an “oracle” A, i.e. an infinite look-up table which the machine can consult in the course of its computation. The oracle may be thought of as an arbitrary 0/1-valued function A(x) which the machine can cause to be evaluated by writing the argument x on a special tape and entering a special state of the finite control unit. In the next machine cycle the oracle responds by sending back the value A(x). The time required to evaluate the function is thus linear in the length of its argument. In particular we consider the case in which the information in the oracle is random, each location of the look-up table having been filled by an independent coin toss. Such a random oracle is a function whose values are reproducible, but otherwise unpredictable and uncorrelated.

Let {φAi(p, w) : i = 0, 1, 2, …} be an acceptable Gödel numbering of the A-partial recursive functions of two arguments and {ΦAi(p, w)} an associated Blum complexity measure, henceforth referred to as time. An index j is called self-delimiting iff, for all oracles A and all values w of the second argument, the set {p : φAj(p, w) is defined} is a prefix set. A self-delimiting index j has efficient concatenation if there exists a string r such that for all oracles A and all strings w, x, y, p, and q: if φAj(p, w) = x and φAj(q, x) = y, then φAj(rpq, w) = y and ΦAj(rpq, w) = ΦAj(p, w) + ΦAj(q, x) + O(1). A self-delimiting index u with efficient concatenation is called efficiently universal iff, for every self-delimiting index j with efficient concatenation, there exists a simulation program s and a linear polynomial L such that, for all oracles A and all strings p and w,

φAu(sp, w) = φAj (p, w)

and

ΦAu(sp, w) ≤ L(ΦAj (p, w))

The functions UA(p,w) and TA(p,w) are defined respectively as φAu(p, w) and ΦAu(p, w), where u is an efficiently universal index.

For any string x, the minimal program, denoted x∗, is min{p : U(p) = x}, the least self-delimiting program to compute x. For any two strings x and w, the minimal program of x relative to w, denoted (x/w)∗, is defined similarly as min{p : U(p,w) = x}.

By contrast to its minimal program, any string x also has a print program, of length |x| + O(log|x|), which simply transcribes the string x from a verbatim description of x contained within the program. The print program is logarithmically longer than x because, being self-delimiting, it must indicate the length as well as the contents of x. Because it makes no effort to exploit redundancies to achieve efficient coding, the print program can be made to run quickly (e.g. linear time in |x|, in the present formalism). Extra information w may help, but cannot significantly hinder, the computation of x, since a finite subprogram would suffice to tell U to simply erase w before proceeding. Therefore, a relative minimal program (x/w)∗ may be much shorter than the corresponding absolute minimal program x∗, but can only be longer by O(1), independent of x and w.

A string is compressible by s bits if its minimal program is shorter by at least s bits than the string itself, i.e. if |x∗| ≤ |x| − s. Similarly, a string x is said to be compressible by s bits relative to a string w if |(x/w)∗| ≤ |x| − s. Regardless of how compressible a string x may be, its minimal program x∗ is compressible by at most an additive constant depending on the universal computer but independent of x. [If (x∗)∗ were much smaller than x∗, then the role of x∗ as minimal program for x would be undercut by a program of the form “execute the result of executing (x∗)∗.”] Similarly, a relative minimal program (x/w)∗ is compressible relative to w by at most a constant number of bits independent of x or w.

The algorithmic probability of a string x, denoted P(x), is defined as ∑{2−|p| : U(p) = x}. This is the probability that the U machine, with a random program chosen by coin tossing and an initially blank data tape, will halt with output x. The time-bounded algorithmic probability, Pt(x), is defined similarly, except that the sum is taken only over programs which halt within time t. We use P(x/w) and Pt(x/w) to denote the analogous algorithmic probabilities of one string x relative to another w, i.e. for computations that begin with w on the data tape and halt with x on the data tape.
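The definitions of P(x) and H(x) can be illustrated on a hypothetical finite machine, represented here simply as a table from self-delimiting programs to outputs (a real U ranges over all programs; `TOY_U` and the helper names are our own invention):

```python
import math

# Hypothetical finite stand-in for U: a prefix-free table, program -> output.
TOY_U = {"0": "x", "10": "x", "110": "y", "1110": "y", "1111": "z"}

def algorithmic_probability(x):
    """P(x) = sum of 2^-|p| over programs p with U(p) = x."""
    return sum(2.0 ** -len(p) for p, out in TOY_U.items() if out == x)

def algorithmic_entropy(x):
    """H(x) = least integer greater than -log2 P(x)."""
    return math.floor(-math.log2(algorithmic_probability(x))) + 1

# "x" draws probability 1/2 + 1/4 from its two programs, so H("x") = 1;
# "y" draws 1/8 + 1/16, so H("y") = 3, which equals |y*| here.
```

Consistent with the relation displayed below, H("y") in this toy coincides with the length of its minimal program "110".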

The algorithmic entropy H(x) is defined as the least integer greater than −log2P(x), and the conditional entropy H(x/w) is defined similarly as the least integer greater than − log2P(x/w). Among the most important properties of the algorithmic entropy is its equality, to within O(1), with the size of the minimal program:

∃c ∀x ∀w : H(x/w) ≤ |(x/w)∗| ≤ H(x/w) + c

The first part of the relation, viz. that algorithmic entropy should be no greater than minimal program size, is obvious, because of the minimal program’s own contribution to the algorithmic probability. The second half of the relation is less obvious. The approximate equality of algorithmic entropy and minimal program size means that there are few near-minimal programs for any given input/output pair (x/w), and that every string gets an O(1) fraction of its algorithmic probability from its minimal program.

Finite strings, such as minimal programs, which are incompressible or nearly so are called algorithmically random. The definition of randomness for finite strings is necessarily a little vague because of the ±O(1) machine-dependence of H(x) and, in the case of strings other than self-delimiting programs, because of the question of how to count the information encoded in the string’s length, as opposed to its bit sequence. Roughly speaking, an n-bit self-delimiting program is considered random (and therefore not ad-hoc as a hypothesis) iff its information content is about n bits, i.e. iff it is incompressible; while an externally delimited n-bit string is considered random iff its information content is about n + H(n) bits, enough to specify both its length and its contents.

For infinite binary sequences (which may be viewed also as real numbers in the unit interval, or as characteristic sequences of sets of natural numbers) randomness can be defined sharply: a sequence X is incompressible, or algorithmically random, if there is an O(1) bound to the compressibility of its initial segments Xn. Intuitively, an infinite sequence is random if it is typical in every way of sequences that might be produced by tossing a fair coin; in other words, if it belongs to no informally definable set of measure zero. Algorithmically random sequences constitute a larger class, including sequences such as Ω which can be specified by ineffective definitions.

The busy beaver function B(n) is the greatest number computable by a self-delimiting program of n bits or fewer. The halting set K is {x : φx(x) converges}. This is the standard representation of the halting problem.

The self-delimiting halting set K0 is the (prefix) set of all self-delimiting programs for the U machine that halt: {p : U(p) converges}.

K and K0 are readily computed from one another (e.g. by regarding the self-delimiting programs as a subset of ordinary programs, the first 2n bits of K0 can be recovered from the first 2n+O(1) bits of K; by encoding each n-bit ordinary program as a self-delimiting program of length n + O(log n), the first 2n bits of K can be recovered from the first 2n+O(log n) bits of K0.)

The halting probability Ω is defined as ∑{2−|p| : U(p) converges}, the probability that the U machine would halt on an infinite input supplied by coin tossing. Ω is thus a real number between 0 and 1.

The first 2n bits of K0 can be computed from the first n bits of Ω, by enumerating halting programs until enough have halted to account for all but 2−n of the total halting probability. The time required for this decoding (between B(n − O(1)) and B(n + H(n) + O(1))) grows faster than any computable function of n. Although K0 is only slowly computable from Ω, the first n bits of Ω can be rapidly computed from the first 2n+H(n)+O(1) bits of K0, by asking about the halting of programs of the form “enumerate halting programs until (if ever) their cumulative weight exceeds q, then halt”, where q is an n-bit rational number…
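The decoding of K0 from the first bits of Ω can be simulated on a toy finite machine in which each program carries its halting time, or is marked as never halting (the table and all names are hypothetical, for illustration only):

```python
from fractions import Fraction

# Hypothetical prefix-free machine: program -> halting time, or None if it never halts.
TOY = {"00": 3, "01": None, "100": 5, "101": 2, "110": None, "111": 7}

# Omega: total halting probability, sum of 2^-|p| over halting programs.
OMEGA = sum(Fraction(1, 2 ** len(p)) for p, t in TOY.items() if t is not None)

def decode_halting(n):
    """Decide halting for all programs of length <= n, given only the first
    n bits of Omega, by dovetailing until enough weight has halted."""
    omega_n = Fraction(int(OMEGA * 2 ** n), 2 ** n)  # Omega truncated to n bits
    accounted, halted, t = Fraction(0), set(), 0
    while accounted < omega_n:                       # run every program one more step
        t += 1
        for p, steps in TOY.items():
            if p not in halted and steps is not None and steps <= t:
                halted.add(p)
                accounted += Fraction(1, 2 ** len(p))
    # Remaining halting weight is < 2^-n, so any still-unhalted program of
    # length <= n (weight >= 2^-n) can never halt.
    return {p: (p in halted) for p in TOY if len(p) <= n}
```

Here Ω = 5/8 = 0.101 in binary, and three bits of Ω already settle the halting status of the whole table.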

# Ricci-flow as an “intrinsic-Ricci-flat” Space-time.

A Ricci flow solution {(Mm, g(t)), t ∈ I ⊂ R} is a smooth family of metrics satisfying the evolution equation

∂/∂t g = −2Rc —– (1)

where Mm is a complete manifold of dimension m. We assume that supM |Rm|g(t) < ∞ for each time t ∈ I. This condition holds automatically if M is a closed manifold. One very often adds an extra term to the right-hand side of (1), obtaining the following rescaled Ricci flow

∂/∂t g = −2 {Rc + λ(t)g} —– (2)

where λ(t) is a function depending only on time. Typically, λ(t) is chosen as the average of the scalar curvature, i.e., 1/m ⨍M R dv, or some fixed constant independent of time. In the case that M is closed and λ(t) = 1/m ⨍M R dv, the flow is called the normalized Ricci flow. Starting from a positive Ricci curvature metric on a 3-manifold, Richard Hamilton showed that the normalized Ricci flow exists forever and converges to a space form metric. Hamilton developed the maximum principle for tensors to study the Ricci flow initiated from some metric with positive curvature conditions. For metrics without a positive curvature condition, the study of the Ricci flow was profoundly affected by the celebrated work of Grisha Perelman. He introduced new tools, i.e., the entropy functionals μ and ν, the reduced distance and the reduced volume, to investigate the behavior of the Ricci flow. Perelman’s new input enabled him to revive Hamilton’s program of Ricci flow with surgery, leading to solutions of the Poincaré conjecture and Thurston’s geometrization conjecture.

In the general theory of the Ricci flow developed by Perelman, the entropy functionals μ and ν are of essential importance. Perelman discovered the monotonicity of these functionals and applied them to prove the no-local-collapsing theorem, which removed the stumbling block for Hamilton’s program of Ricci flow with surgery. By delicately using this monotonicity, he further proved the pseudo-locality theorem, which asserts that the Ricci flow cannot quickly turn an almost Euclidean region into a very curved one, no matter what happens far away. Besides the functionals, Perelman also introduced the reduced distance and the reduced volume. In terms of them, the Ricci flow space-time admits a remarkable comparison-geometry picture, which is the foundation of his “local” version of the no-local-collapsing theorem. Each of these tools has its own advantages and shortcomings. The functionals μ and ν have the advantage that their definitions require only the information of each time slice (M, g(t)) of the flow. However, they are global invariants of the underlying manifold (M, g(t)), so it is not convenient to apply them to study the local behavior around a given point x. Correspondingly, the reduced volume and the reduced distance reflect the natural comparison-geometry picture of the space-time: around a base point (x, t), they are closely related to the “local” geometry of (x, t). Unfortunately, it is locality in the space-time, rather than locality in the Riemannian geometry of a single time slice, that the reduced volume and reduced geodesics capture. To apply them, some extra conditions on a space-time neighborhood of (x, t) are usually required, and such strong space-time requirements are hard to fulfill. Therefore, it is desirable to have new tools that balance the advantages of the reduced volume, the reduced distance and the entropy functionals.

Let (Mm, g) be a complete Ricci-flat manifold and x0 a point of M. Suppose the ball B(x0, r0) is A−1-non-collapsed, i.e., r0−m |B(x0, r0)| ≥ A−1; can we obtain uniform non-collapsing for the ball B(x, r), whenever 0 < r < r0 and d(x, x0) < Ar0? This question can be answered easily by applying triangle inequalities and the Bishop-Gromov volume comparison theorem. In particular, there exists a κ = κ(m, A) ≥ 3−m A−m−1 such that B(x, r) is κ-non-collapsed, i.e., r−m |B(x, r)| ≥ κ. Consequently, there is an estimate of the propagation speed of the non-collapsing constant on the manifold M, as illustrated by the figure. We now regard (M, g) as a trivial space-time {(M, g(t)), −∞ < t < ∞} such that g(t) ≡ g. Clearly, g(t) is a static Ricci flow solution by the Ricci-flatness of g. Then the above estimate can be read as the propagation of the volume non-collapsing constant on the space-time. However, in a more intrinsic way, it can also be interpreted as the propagation of the non-collapsing constant of Perelman’s reduced volume. On a Ricci-flat space-time, Perelman’s reduced volume has the special formula

V((x, t), r2) = (4π)−m/2 r−m ∫M e−d2(y, x)/4r2 dvy —– (3)

which is almost the volume ratio of Bg(t)(x, r). On a general Ricci flow solution, the reduced volume is also well-defined and is monotone with respect to the parameter r2, provided one replaces d2(y, x)/4r2 in the above formula by the reduced distance l((x, t), (y, t − r2)). Therefore, via comparison geometry of Bishop-Gromov type, one can regard a Ricci flow as an “intrinsic-Ricci-flat” space-time. However, the disadvantage of the reduced-volume explanation is also clear: it requires a curvature estimate in a whole space-time neighborhood around the point (x, t), rather than a scalar curvature estimate on the single time slice t.
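As a sanity check on formula (3), a short numerical sketch (our own, not from the text): on flat R^m, which is Ricci-flat with d(y, x) = |y − x|, the integral reduces to its radial part and the reduced volume equals 1 for every r, which is the model case behind the “intrinsic-Ricci-flat” picture.

```python
import math
import numpy as np

def reduced_volume_flat(m, r, R=50.0, N=200_000):
    """Evaluate formula (3) on flat R^m by integrating the radial profile
    e^{-rho^2 / 4r^2} against the area of the (m-1)-sphere of radius rho."""
    rho, drho = np.linspace(0.0, R, N, retstep=True)
    sphere_area = 2 * math.pi ** (m / 2) / math.gamma(m / 2)  # unit (m-1)-sphere
    integrand = sphere_area * rho ** (m - 1) * np.exp(-rho ** 2 / (4 * r ** 2))
    integral = np.sum(integrand) * drho                       # Riemann sum
    return (4 * math.pi) ** (-m / 2) * r ** (-m) * integral

# The result is (numerically) 1 in every dimension m and at every scale r.
```

The Gaussian normalization (4π)−m/2 r−m is exactly what makes the flat-space value scale-invariant and equal to one.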

# US Stock Market Interaction Network as Learned by the Boltzmann Machine

Price formation on a financial market is a complex problem: it reflects the opinions of investors about the true value of the asset in question, the policies of the producers, external regulation and many other factors. Given the large number of factors influencing price, many of which are unknown to us, describing price formation essentially requires probabilistic approaches. In the last decades, the synergy of methods from various scientific areas has opened new horizons in understanding the mechanisms that underlie related problems. One popular approach is to consider a financial market as a complex system, in which not only the great number of constituents plays a crucial role but also the non-trivial interactions between them. For example, interdisciplinary studies of complex financial systems have revealed their enhanced sensitivity to fluctuations and external factors near critical events, with an overall change of internal structure. This is complemented by research devoted to equilibrium and non-equilibrium phase transitions.

In general, statistical modeling of the state space of a complex system requires writing down the probability distribution over this space using real data. In a simple version of modeling, the probability of an observable configuration (state of a system) described by a vector of variables s can be given in the exponential form

p(s) = Z−1 exp {−βH(s)} —– (1)

where H is the Hamiltonian of the system, β is the inverse temperature (β ≡ 1 is assumed in what follows) and Z is the partition function (statistical sum). The physical meaning of the model’s components depends on the context; in the case of financial systems, for instance, s can represent a vector of stock returns and H can be interpreted as the inverse utility function. Generally, H has parameters defined by its series expansion in s. Based on the maximum entropy principle, an expansion up to quadratic terms is usually used, leading to pairwise interaction models. In the equilibrium case, the Hamiltonian has the form

H(s) = −hTs − sTJs —– (2)

where h is a vector of N external fields and J is a symmetric N × N matrix of couplings (T denotes transpose). The energy-based models represented by (1) play an essential role not only in statistical physics but also in neuroscience (models of neural networks) and machine learning (generative models, also known as Boltzmann machines). Given the topological similarities between neural and financial networks, these systems can be considered examples of complex adaptive systems, characterized by the ability to adapt to a changing environment while trying to stay in equilibrium with it. From this point of view, market structural properties, e.g. clustering and networks, play an important role in modeling the distribution of stock prices. Adaptation (or learning) in these systems implies a change of the parameters of H as financial and economic systems evolve. Using statistical inference for the model’s parameters, the main goal is to have a model capable of reproducing the same statistical observables, given time series for a particular historical period. In the pairwise case, the objective is to have

⟨si⟩data = ⟨si⟩model —– (3a)

⟨sisj⟩data = ⟨sisj⟩model —– (3b)

where the angular brackets denote statistical averaging over time. Having specified the general mathematical model, one can also discuss similarities between financial and infinite-range magnetic systems in terms of related phenomena, e.g. extensivity, order parameters, phase transitions, etc. These features can be captured even in the simplified case when si is a binary variable taking only two discrete values. Consider the effect of mapping to a binarized system, in which the values si = +1 and si = −1 correspond to profit and loss respectively. In this case, the diagonal elements of the coupling matrix, Jii, are zero, because the si2 = 1 terms do not contribute to the Hamiltonian….
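For small N, the moment-matching conditions (3a)-(3b) can be solved by gradient ascent on the log-likelihood, whose gradient is exactly the gap between data and model moments. A self-contained Python sketch (an exhaustive-enumeration toy for illustration, not a scalable learning algorithm; the function names are ours):

```python
import itertools
import numpy as np

def moments(h, J):
    """Exact <s_i> and <s_i s_j> under p(s) = Z^-1 exp(-H(s)), H as in (2)."""
    states = np.array(list(itertools.product([-1, 1], repeat=len(h))), dtype=float)
    energy = -(states @ h) - np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(-(energy - energy.min()))    # shift for numerical stability
    p /= p.sum()
    return p @ states, np.einsum('k,ki,kj->ij', p, states, states)

def fit(m1_data, m2_data, lr=0.1, steps=10_000):
    """Match model moments to data moments, i.e. enforce (3a) and (3b)."""
    N = len(m1_data)
    h, J = np.zeros(N), np.zeros((N, N))
    for _ in range(steps):
        m1, m2 = moments(h, J)
        h += lr * (m1_data - m1)            # gradient step for (3a)
        dJ = lr * (m2_data - m2)            # gradient step for (3b)
        np.fill_diagonal(dJ, 0.0)           # s_i^2 = 1: diagonal is uninformative
        J += (dJ + dJ.T) / 2                # keep J symmetric
    return h, J
```

Generating moments from a known (h, J) and re-fitting recovers those moments to high accuracy, since the log-likelihood of this exponential family is concave in the parameters.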


# Thermodynamics of Creation. Note Quote.

Just like the early-time cosmic acceleration associated with inflation, a negative pressure can be seen as a possible driving mechanism for the late-time accelerated expansion of the Universe as well. One of the earliest alternatives that could provide such an accelerating phase of the Universe is a negative pressure produced by viscous or particle-production effects. The viscous pressure contributions can be seen as small nonequilibrium contributions to the energy-momentum tensor of nonideal fluids.

Let us posit the thermodynamics of matter creation for a single fluid. To describe the thermodynamic states of a relativistic simple fluid we use the following macroscopic variables: the energy-momentum tensor Tαβ; the particle flux vector Nα; and the entropy flux vector sα. The energy-momentum tensor satisfies the conservation law Tαβ;β = 0, and here we consider situations in which it has the perfect-fluid form

Tαβ = (ρ+P)uαuβ − P gαβ

In the above equation ρ is the energy density, P is the isotropic dynamical pressure, gαβ is the metric tensor and uα is the fluid four-velocity (with normalization uαuα = 1).

The dynamical pressure P is decomposed as

P = p + Π

where p is the equilibrium (thermostatic) pressure and Π is a term present in scalar dissipative processes. Usually, it is associated with the so-called bulk pressure. In the cosmological context, besides this meaning, Π can also be relevant when particle number is not conserved. In this case, Π ≡ pc is called the “creation pressure”. The bulk pressure Π can be seen as a correction to the thermostatic pressure near equilibrium; thus, it should always be smaller than the thermostatic pressure, |Π| < p. This restriction, however, does not apply to the creation pressure. So, when we have matter creation, the total pressure P may become negative and, in principle, drive an accelerated expansion.

The particle flux vector is assumed to have the following form

Nα = nuα

where n is the particle number density. Nα satisfies the balance equation Nα;α = nΓ, where Γ is the particle production rate. If Γ > 0 we have particle creation, if Γ < 0 particle destruction, and if Γ = 0 particle number is conserved.

The entropy flux vector is given by

sα = nσuα

where σ is the specific (per particle) entropy. Note that the entropy must satisfy the second law of thermodynamics, sα;α ≥ 0. Here we consider adiabatic matter creation, that is, we analyze situations in which σ is constant. Under this condition, using the Gibbs relation, it follows that the creation pressure is related to Γ by

pc = − (ρ+p)/3H Γ

where H = ȧ/a is the Hubble parameter, a is the scale factor of the Friedmann-Robertson-Walker (FRW) metric and the overdot means differentiation with respect to the cosmic time. If σ is constant, the second law of thermodynamics implies that Γ ≥ 0 and, as a consequence, particle destruction (Γ < 0) is thermodynamically forbidden. Since Γ ≥ 0, it follows that, in an expanding universe (H > 0), the creation pressure pc cannot be positive.
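The quoted relation for pc follows in three steps from the Gibbs relation, the particle balance, and energy conservation in the FRW background; a sketch of the derivation:

```latex
\text{Gibbs relation with } \sigma \text{ constant:}\quad
nT\dot{\sigma} = \dot{\rho} - \frac{\rho + p}{n}\,\dot{n} = 0
\;\Longrightarrow\;
\dot{\rho} = (\rho + p)\,\frac{\dot{n}}{n},
\\[4pt]
\text{particle balance } N^{\alpha}{}_{;\alpha} = n\Gamma:\quad
\dot{n} + 3Hn = n\Gamma
\;\Longrightarrow\;
\frac{\dot{n}}{n} = \Gamma - 3H,
\\[4pt]
\text{energy conservation with } P = p + p_c:\quad
\dot{\rho} + 3H(\rho + p + p_c) = 0.
\\[4pt]
\text{Combining the three lines:}\quad
(\rho + p)(\Gamma - 3H) + 3H(\rho + p) + 3Hp_c = 0
\;\Longrightarrow\;
p_c = -\,\frac{\rho + p}{3H}\,\Gamma.
```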

# Harmonies of the Orphic Mystery: Emanation of Music

As the Buddhist sage Nagarjuna states in his Seventy Verses on Sunyata, “Being does not arise, since it exists . . .” In similar fashion it can be said that mind exists, and if we human beings manifest its qualities, then the essence and characteristics of mind must be a component of our cosmic source. David Bohm’s theory of the “implicate order” within the operations of nature suggests that observed phenomena do not operate only when they become objective to our senses. Rather, they emerge out of a subjective state or condition that contains the potentials in a latent yet really existent state that is just awaiting the necessary conditions to manifest. Thus within the explicate order of things and beings in our familiar world there is the implicate order out of which all of these emerge in their own time.

Clearly, the sun and its family of planets function in accordance with natural laws. The precision of the orbital and other electromagnetic processes is awesome, drawing into one operation the functions of the smallest subparticles and the largest families of sun-stars in their galaxies, and beyond even them. These individual entities are bonded together in an evident unity that we may compare with the oceans of our planet: uncountable numbers of water molecules appear to us as a single mass of substance. In seeking the ultimate particle, the building block of the cosmos, some researchers have found themselves confronted with the mystery of what it is that holds units together in an organism — any organism!

As in music where a harmony consists of many tones bearing an inherent relationship, so must there be harmony embracing all the children of cosmos. Longing for the Harmonies: Themes and Variations from Modern Physics is a book by Frank Wilczek, an eminent physicist, and his wife Betsy Devine, an engineering scientist and freelance writer. The theme of their book is set out in their first paragraph:

From Pythagoras measuring harmonies on a lyre string to R. P. Feynman beating out salsa on his bongos, many a scientist has fallen in love with music. This love is not always rewarded with perfect mastery. Albert Einstein, an ardent amateur of the violin, provoked a more competent player to bellow at him, “Einstein, can’t you count?”

Both music and scientific research, Einstein wrote, “are nourished by the same source of longing, and they complement one another in the release they offer.” It seems to us, too, that the mysterious longing behind a scientist’s search for meaning is the same that inspires creativity in music, art, or any other enterprise of the restless human spirit. And the release they offer is to inhabit, if only for a moment, some point of union between the lonely world of subjectivity and the shared universe of external reality.

In a very lucid text, Wilczek and Devine show us that the laws of nature, and the structure of the universe and all its contributing parts, can be presented in such a way that the whole compares with a musical composition comprising themes that are fused together. One of the early chapters begins with the famous lines of the great astronomer Johannes Kepler, who in 1619 referred to the music of the spheres:

The heavenly motions are nothing but a continuous song for several voices (perceived by the intellect, not by the ear); a music which, through discordant tensions, through sincopes [sic] and cadenzas, as it were (as men employ them in imitation of those natural discords) progresses towards certain pre-designed quasi six-voiced clausuras, and thereby sets landmarks in the immeasurable flow of time. — The Harmony of the World (Harmonice mundi)

Discarding the then current superstitions and misinformed speculation, through the cloud of which Kepler had to work for his insights, Wilczek and Devine note that Kepler’s obsession with the idea of the harmony of the world is actually rooted in Pythagoras’s theory that the universe is built upon number, a concept of the Orphic mystery-religions of Greece. The idea is that “the workings of the world are governed by relations of harmony and, in particular, that music is associated with the motion of the planets — the music of the spheres” (Wilczek and Devine). Arthur Koestler, in writing of Kepler and his work, claimed that the astronomer attempted

to bare the ultimate secret of the universe in an all-embracing synthesis of geometry, music, astrology, astronomy and epistemology. The Sleepwalkers

In Longing for the Harmonies the authors refer to the “music of the spheres” as a notion that in time past was “vague, mystical, and elastic.” As the foundations of music are rhythm and harmony, they remind us that Kepler saw the planets moving around the sun “to a single cosmic rhythm.” There is some evidence that he had association with a “neo-Pythagorean” movement and that, owing to the religious-fomented opposition to unorthodox beliefs, he kept his ideas hidden under allegory and metaphor.

Shakespeare, too, phrases the thought of tonal vibrations emitted by the planets and stars as the “music of the spheres,” the notes likened to those of the “heavenly choir” of cherubim. This calls to mind that Plato’s Cratylus terms the planets theoi, from theein meaning “to run, to move.” Motion does suggest animation, or beings imbued with life, and indeed the planets are living entities so much grander than human beings that the Greeks and other peoples called them “gods.” Not the physical bodies were meant, but the essence within them, in the same way that a human being is known by the inner qualities expressed through the personality.

When classical writers spoke of planets and starry entities as “animals” they did not refer to animals such as we know on Earth, but to the fact that the celestial bodies are “animated,” embodying energies received from the sun and cosmos and transmitted with their own inherent qualities added.

Many avenues open up for our reflection upon the nature of the cosmos and ourselves, and our interrelationship, as we consider the structure of natural laws as Wilczek and Devine present them. For example, the study of particles, their interactions, their harmonizing with those laws, is illuminating intrinsically and, additionally, because of their universal application. The processes involved occur here on earth, and evidently also within the solar system and beyond, explaining certain phenomena that had been awaiting clarification.

The study of atoms here on earth and their many particles and subparticles has enabled researchers to deduce how stars are born, how and why they shine, and how they die. Now some researchers are looking at what it is, whether a process or an energy, that unites the immeasurably small with the very large cosmic bodies we now know. If nature is infinite, it must be so in a qualitative sense, not merely a quantitative.

One of the questions occupying the minds of cosmologists is whether the universal energy is running down like the mechanism of an unwinding Swiss watch, or whether there is enough mass to slow the outward thrust caused by the big bang that has been assumed to have started our cosmos going. In other words, is our universe experiencing entropy — dying as its energy is being used up — or will there be a “brake” put upon the expansion that could, conceivably, result in a return to the source of the initial explosion billions of years ago? Cosmologists have been looking for enough “dark mass” to serve as such a brake.

Among the topics treated by Wilczek and Devine in threading their way through many themes and variations in modern physics, is what is known as the mass-generating Higgs field. This is a proposition formulated by Peter Higgs, a Scottish physicist, who suggests there is an electromagnetic field that pervades the cosmos and universally provides the electron particles with mass.

The background Higgs field must have very accurately the same value throughout the universe. After all, we know — from the fact that the light from distant galaxies contains the same spectral lines we find on Earth — that electrons have the same mass throughout the universe. So if electrons are getting their mass from the Higgs field, this field had better have the same strength everywhere. What is the meaning of this all-pervasive field, which exists with no apparent source? Why is it there? (Wilczek and Devine).

What is the meaning? Why is it there? These are among the most important questions that can be asked. Though physicists may provide profound mathematical equations, they will thereby offer only more precise detail as to what is happening. We shall not receive an answer to the “What” and the “Why” without recourse to meta-physics, beyond the realm of brain-devised definitions.

The human mind is limited in its present stage of evolution. It may see the logical necessity of infinity referent to space and time; for if not infinity, what then is on the other side of the “fence” that is our outermost limit? But, being able to perceive the logical necessity of infinity, the finite mind still cannot span the limitless ranges of space, time, and substance.

If we human beings are manifold in our composition, and since we draw our very existence and sustenance from the universe at large, our conjoint nature must be drawn from the sources of life, substance, and energy, in which our and all other cosmic lives are immersed.

As the authors conclude their fascinating work:

“The worlds opened to our view are graced with wonderful symmetry and uniformity. Learning to know them, to appreciate their many harmonies, is like deepening an acquaintance with some great and meaningful piece of music — surely one of the best things life has to offer.”