The Affinity of Mirror Symmetry to Algebraic Geometry: Going Beyond Formalism



Even though the formalism of homological mirror symmetry is well established, what of other formulations of mirror symmetry that lie closer to classical differential and algebraic geometry? One way to tackle this question is the proposal of Strominger, Yau and Zaslow, SYZ mirror symmetry for short.

The central physical ingredient in this proposal is T-duality. To explain it, let us consider a superconformal sigma model with target space (M, g), and denote it (defined either as a geometric functor or as a set of correlation functions) by

CFT(M, g)

In physics, a duality is an equivalence

CFT(M, g) ≅ CFT(M′, g′)

which holds despite the fact that the underlying geometries (M,g) and (M′, g′) are not classically diffeomorphic.

T-duality is a duality which relates two CFTs with toroidal target space, M ≅ M′ ≅ Td, but different metrics. In rough terms, the duality relates a “small” target space, with noncontractible cycles of length L < ls, to a “large” target space in which all such cycles have length L > ls.

This sort of relation is generic to dualities and follows from the following logic. If all length scales (lengths of cycles, curvature lengths, etc.) are greater than ls, string theory reduces to conventional geometry. Now, in conventional geometry, we know what it means for (M, g) and (M′, g′) to be non-isomorphic. Any modification of this notion must be associated with a breakdown of conventional geometry, which requires some length scale to be “sub-stringy,” with L < ls.

To state T-duality precisely, let us first consider M = M′ = S1. We parameterise this with a coordinate X ∈ R, making the identification X ∼ X + 2π, and consider the Euclidean metric gR given by ds2 = R2dX2. The real parameter R is usually called the “radius,” from the obvious embedding in R2. This manifold is Ricci-flat, so the sigma model with this target space is a conformal field theory, the “c = 1 boson.” Setting the string scale ls = 1, we obtain a complete physical equivalence

CFT(S1, gR) ≅ CFT(S1, g1/R)

Thus these two target spaces are indistinguishable from the point of view of string theory.
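To make the R → 1/R equivalence concrete, here is a minimal numeric sketch (my illustration, not from the text) of the momentum and winding contributions to the closed-string spectrum on a circle; the radius value and the truncation range of the charges are arbitrary choices:

```python
# A minimal numerical sketch (not from the text): with ls = 1, the momentum and
# winding contributions to the closed-string spectrum on a circle of radius R are
#     E(n, w) = n**2 / R**2 + w**2 * R**2,  with n, w integers.
# T-duality R -> 1/R, combined with n <-> w, should leave the spectrum invariant.

def level(n, w, R):
    return n ** 2 / R ** 2 + w ** 2 * R ** 2

R = 0.37  # an arbitrary "sub-stringy" radius, R < ls = 1

levels_R = sorted(level(n, w, R) for n in range(-3, 4) for w in range(-3, 4))
levels_dual = sorted(level(n, w, 1 / R) for n in range(-3, 4) for w in range(-3, 4))

# identical spectra, up to floating-point noise
assert all(abs(a - b) < 1e-9 for a, b in zip(levels_R, levels_dual))
```

The sorted level lists agree because level(n, w, 1/R) = level(w, n, R), so the two theories' spectra are the same multiset.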

Just to give a physical picture of what this means, suppose for the sake of discussion that superstring theory describes our universe, so that in some sense there must be six extra spatial dimensions. Suppose further that we had evidence that the extra dimensions factorized topologically and metrically as K5 × S1; then it would make sense to ask: what is the radius R of this S1 in our universe? In principle this could be measured by producing sufficiently energetic particles (so-called “Kaluza-Klein modes”), or perhaps by measuring deviations from Newton’s inverse square law of gravity at distances L ∼ R. In string theory, T-duality implies that R ≥ ls, because any theory with R < ls is equivalent to another theory with R > ls. Thus we have a nontrivial relation between two (in principle) observable quantities, R and ls, which one might imagine testing experimentally.

Let us now consider the theory CFT(Td, g), where Td is the d-dimensional torus, with coordinates Xi parameterising Rd/2πZd and a constant metric tensor gij. Then there is a complete physical equivalence

CFT(Td, g) ≅ CFT(Td, g−1)

In fact this is just one element of a discrete group of T-duality symmetries, generated by T-dualities along one-cycles, and large diffeomorphisms (those not continuously connected to the identity). The complete group is isomorphic to SO(d, d; Z).
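The statement that the duality group preserves the split pairing between momenta and windings can be checked on generators. The following sketch (my illustration; the matrix A is an arbitrarily chosen large diffeomorphism, and d = 2 is an arbitrary choice) verifies that the inversion element and a GL(d, Z) diffeomorphism both preserve the pairing, consistent with the text's SO(d, d; Z):

```python
# Sketch (illustrative, not from the text): T-duality and large diffeomorphisms
# act on (momentum, winding) charges by integer matrices preserving the
# split pairing eta, i.e. they lie in the group O(d, d; Z) of that pairing.
import numpy as np

d = 2
I, O = np.eye(d), np.zeros((d, d))
eta = np.block([[O, I],
                [I, O]])  # the split pairing between momenta and windings

inversion = np.block([[O, I],
                      [I, O]])  # g -> g^{-1}: exchanges momentum and winding
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # an arbitrary large diffeomorphism in GL(2, Z)
diffeo = np.block([[A, O],
                   [O, np.linalg.inv(A).T]])  # momenta and windings transform dually

for M in (inversion, diffeo):
    assert np.allclose(M.T @ eta @ M, eta)  # M preserves the pairing
```

Both generators here have determinant one, so they sit in the special orthogonal part of the pairing's symmetry group.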

While very different from conventional geometry, T-duality has a simple intuitive explanation. This starts with the observation that the possible embeddings of a string into the target space can be classified by the fundamental group π1(M). Strings representing non-trivial homotopy classes are usually referred to as “winding states.” Furthermore, since strings interact by interconnecting at points, the group structure on π1 provided by concatenation of based loops is meaningful and is respected by interactions in the string theory. Now π1(Td) ≅ Zd as an abelian group, referred to as the group of “winding numbers.”

Of course, there is another Zd we could bring into the discussion, the Pontryagin dual of the U(1)d of which Td is an affinization. An element of this group is referred to physically as a “momentum,” as it is the eigenvalue of a translation operator on Td. Again, this group structure is respected by the interactions. These two group structures, momentum and winding, can be summarized in the statement that the full closed string algebra contains the group algebra C[Zd] ⊕ C[Zd].

In essence, the point of T-duality is that if we quantize the string on a sufficiently small target space, the roles of momentum and winding will be interchanged. But the main point can be seen by bringing in some elementary spectral geometry. Besides the algebra structure, another invariant of a conformal field theory is the spectrum of its Hamiltonian H (technically, the Virasoro operator L0 + L̄0). This Hamiltonian can be thought of as an analog of the standard Laplacian ∆g on functions on the target space, and its spectrum on Td with metric g is

Spec ∆g = { ∑i,j=1,…,d (g−1)ij pi pj : p ∈ Zd }

On the other hand, the energy of a winding string is (intuitively) a function of its length. On our torus, a geodesic with winding number w ∈ Zd has length squared

L2 = ∑i,j=1,…,d gij wi wj

Now, the only string theory input we need to bring in is that the total Hamiltonian contains both terms,

H = ∆g + L2 + · · ·

where the extra terms … express the energy of excited (or “oscillator”) modes of the string. Then, the inversion g → g−1, combined with the interchange p ↔ w, leaves the spectrum of H invariant. This is T-duality.
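The invariance just stated can be checked numerically; the following sketch (my illustration, with an arbitrarily chosen constant metric on T² and an arbitrary truncation of the charge lattice) verifies term by term that H is unchanged under g → g⁻¹ with p ↔ w:

```python
# Numerical sketch (an illustration, not the text's derivation): on T^d the
# momentum/winding part of the Hamiltonian is
#     H(p, w) = p . g^{-1} p + w . g w
# and the claim is that H is invariant under g -> g^{-1} with p <-> w.
import itertools
import numpy as np

def H(g, p, w):
    return p @ np.linalg.inv(g) @ p + w @ g @ w

g = np.array([[2.0, 0.3],
              [0.3, 1.5]])  # an arbitrary constant metric on T^2
charges = [np.array(c) for c in itertools.product(range(-2, 3), repeat=2)]

for p in charges:
    for w in charges:
        assert abs(H(g, p, w) - H(np.linalg.inv(g), w, p)) < 1e-9
```

The check works because the momentum term of the dual metric g⁻¹ is contracted with (g⁻¹)⁻¹ = g, so H(g⁻¹, w, p) reproduces H(g, p, w) exactly.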

There is a simple generalization of the above to the case of a non-zero B-field on the torus satisfying dB = 0. In this case, since B is a constant antisymmetric tensor, we can label CFTs by the matrix g + B. Now, the basic T-duality relation becomes

CFT(Td, g + B) ≅ CFT(Td, (g + B)−1)

Another generalization, which is considerably more subtle, is to do T-duality in families, or fiberwise T-duality. The same arguments can be made, and would become precise in the limit that the metric on the fibers varies on length scales far greater than ls, and has curvature lengths far greater than ls. This is sometimes called the “adiabatic limit” in physics. While this is a very restrictive assumption, there are more heuristic physical arguments that T-duality should hold more generally, with corrections to the relations proportional to curvatures ls2R and derivatives ls∂ of the fiber metric, both in perturbation theory and from world-sheet instantons.

Something Out of Almost Nothing. Drunken Risibility.

Kant’s first antinomy makes the error of the excluded third option: it is not impossible that the universe could have both a beginning and an eternal past. If some kind of metaphysical realism is true, including an observer-independent and relational time, then a solution of the antinomy is conceivable. It is based on the distinction between a microscopic and a macroscopic time scale. Only the latter is characterized by an asymmetry of nature under a reversal of time, i.e. the property of having a global (coarse-grained) evolution – an arrow of time – or many arrows, if they are independent of each other. Thus, the macroscopic scale is by definition temporally directed – otherwise it would not exist.

On the microscopic scale, however, only local, statistically distributed events without dynamical trends, i.e. a global time-evolution or an increase of entropy density, exist. This is the case if one or both of the following conditions are satisfied: First, if the system is in thermodynamic equilibrium (e.g. there is degeneracy). And/or second, if the system is in an extremely simple ground state or meta-stable state. (Meta-stable states have a local, but not a global minimum in their potential landscape and, hence, they can decay; ground states might also change due to quantum uncertainty, i.e. due to local tunneling events.) Some still speculative theories of quantum gravity permit the assumption of such a global, macroscopically time-less ground state (e.g. quantum or string vacuum, spin networks, twistors). Due to accidental fluctuations, which exceed a certain threshold value, universes can emerge out of that state. Due to some also speculative physical mechanism (like cosmic inflation) they acquire – and, thus, are characterized by – directed non-equilibrium dynamics, specific initial conditions, and, hence, an arrow of time.

It is a matter of debate whether such an arrow of time is

1) irreducible, i.e. an essential property of time,

2) governed by some unknown fundamental and not only phenomenological law,

3) the effect of specific initial conditions or

4) of consciousness (if time is in some sense subjective), or

5) even an illusion.

Many physicists favour special initial conditions, though there is no consensus about their nature and form. But in the context at issue it is sufficient to note that such a macroscopic global time-direction is the main ingredient of Kant’s first antinomy, for the question is whether this arrow has a beginning or not.

Were time’s arrow inevitably subjective, ontologically irreducible, fundamental and not merely a kind of illusion – if, for instance, some form of metaphysical idealism were true – then physical cosmology about a time before time would be mistaken or quite irrelevant. However, if we do not want to neglect an observer-independent physical reality and adopt solipsism or other forms of idealism – and there are strong arguments in favor of some form of metaphysical realism – Kant’s rejection seems hasty. Furthermore, if a Kantian is not willing to give up some kind of metaphysical realism, namely the belief in a “Ding an sich,” a thing in itself – and some philosophers, the German idealists for instance, actually insisted that this is superfluous – he has to admit that time is a subjective illusion or that there is a dualism between an objective timeless world and a subjective arrow of time. Contrary to Kant’s thoughts: there are reasons to believe that it is possible, at least conceptually, that time both has a beginning – in the macroscopic sense, with an arrow – and is eternal – in the microscopic notion of a steady state with statistical fluctuations.

Is there also some physical support for this proposal?

Surprisingly, quantum cosmology offers a possibility that the arrow has a beginning and that it nevertheless emerged out of an eternal state without any macroscopic time-direction. (Note that there are some parallels to a theistic conception of the creation of the world here, e.g. in the Augustinian tradition which claims that time together with the universe emerged out of a time-less God; but such a cosmological argument is quite controversial, especially in a modern form.) So this possible overcoming of the first antinomy is not only a philosophical conceivability but is already motivated by modern physics. At least some scenarios of quantum cosmology, quantum geometry/loop quantum gravity, and string cosmology can be interpreted as examples of such a local beginning of our macroscopic time out of a state with microscopic time but eternal, global macroscopic timelessness.

To put this in a more general but abstract framework and get a sketchy illustration, consider the potential landscape described below.


Physical dynamics can be described using “potential landscapes” of fields. For simplicity, here only the variable potential (or energy density) of a single field is shown. To illustrate the dynamics, one can imagine a ball moving along the potential landscape. Depressions stand for states which are stable, at least temporarily. Due to quantum effects, the ball can “jump over” or “tunnel through” the hills. The deepest depression represents the ground state.

In the common theories the state of the universe – the product of all its matter and energy fields, roughly speaking – evolves out of a metastable “false vacuum” into a “true vacuum” which has a lower energy (potential). There might exist many (perhaps even infinitely many) true vacua, which would correspond to universes with different constants or laws of nature. It is more plausible to start with a ground state which is the minimum of what can physically exist. According to this view an absolute nothingness is impossible. There is something rather than nothing because something cannot come out of absolutely nothing, and something does obviously exist. Thus, something can only change, and this change might be described with physical laws. Hence, the ground state is almost “nothing”, but can become thoroughly “something”.

Possibly, our universe – and, independently of it, many others, probably most of them having different physical properties – arose from such a phase transition out of a quasi-atemporal quantum vacuum (and, perhaps, got completely disconnected). Tunneling back might be prevented by the exponential expansion of this brand-new space. Because of this cosmic inflation the universe not only became gigantic; simultaneously the potential hill broadened enormously and became (almost) impassable. This preserves the universe from relapsing into its non-existence. On the other hand, if there is no physical mechanism to prevent tunneling back, or at least to make it very improbable, there is still another option: if infinitely many universes originated, some of them could be long-lived for statistical reasons alone. But this possibility is less predictive and therefore an inferior kind of explanation for the absence of tunneling back.

Another crucial question remains even if universes could come into being out of fluctuations of (or in) a primitive substrate, i.e. some patterns of superposition of fields with local overdensities of energy: is spacetime part of this primordial stuff, or is it also a product of it? Or, more specifically: does such a primordial quantum vacuum have a semi-classical spacetime structure, or is it made up of more fundamental entities? Unique-universe accounts, especially the modified Eddington models – the soft bang/emergent universe – presuppose some kind of semi-classical spacetime. The same is true for some multiverse accounts describing our universe, where Minkowski space, a tiny closed finite space, or the infinite de Sitter space is assumed. The same goes for string-theory-inspired models like the pre-big bang account, because string and M-theory are still formulated in a background-dependent way, i.e. they require the existence of a semi-classical spacetime. A different approach assumes “building blocks” of spacetime, a kind of pregeometry; examples are the twistor approach of Roger Penrose and the cellular automata approach of Stephen Wolfram. The most elaborated account in this line of reasoning is quantum geometry (loop quantum gravity), where “atoms of space and time” underlie everything.

Though the question whether semiclassical spacetime is fundamental or not is crucial, an answer might nevertheless be neutral with respect to the micro-/macrotime distinction. In both kinds of quantum vacuum accounts the macroscopic time scale is not present. And the microscopic time scale in some respect has to be there, because fluctuations represent change (or are manifestations of change). This change, reversible and relationally conceived, does not occur “within” microtime but constitutes it. Out of a total stasis nothing new and different can emerge, because an uncertainty principle – fundamental for all quantum fluctuations – would not be realized. In an almost, but not completely static quantum vacuum, however, macroscopically nothing changes either, but there are microscopic fluctuations.

The pseudo-beginning of our universe (and probably infinitely many others) is a viable alternative both to initial and past-eternal cosmologies and philosophically very significant. Note that this kind of solution bears some resemblance to a possibility of avoiding the spatial part of Kant’s first antinomy, i.e. his claimed proof of both an infinite space without limits and a finite, limited space: The theory of general relativity describes what was considered logically inconceivable before, namely that there could be universes with finite, but unlimited space, i.e. this part of the antinomy also makes the error of the excluded third option. This offers a middle course between the Scylla of a mysterious, secularized creatio ex nihilo, and the Charybdis of an equally inexplicable eternity of the world.

In this context it is also possible to defuse some explanatory problems of the origin of “something” (or “everything”) out of “nothing” as well as a – merely assumable, but never provable – eternal cosmos or even an infinitely often recurring universe. But that does not offer a final explanation or a sufficient reason, and it cannot eliminate the ultimate contingency of the world.

Disjointed Regularity in Open Classes of Elementary Topology


Let x, y, … denote first-order structures in St𝜏; x ≈ y will denote isomorphism.

x ∼n,𝜏 y means that there is a sequence ∅ ≠ I0 ⊆ … ⊆ In of sets of partial 𝜏-isomorphisms of finite domain such that, for i < j ≤ n, f ∈ Ii and a ∈ x (respectively, b ∈ y), there is g ∈ Ij such that g ⊇ f and a ∈ Dom(g) (respectively, b ∈ Im(g)). The latter is called the extension property.

x ∼𝜏 y means the above holds for an infinite chain ∅ ≠ I0 ⊆ … ⊆ In ⊆ …

Fraïssé’s characterization of elementary equivalence says that, for finite relational vocabularies, x ≡ y iff x ∼n,𝜏 y for every n. To have it available for vocabularies containing function symbols, add the complexity of terms in atomic formulas to the quantifier rank. It is well known that for countable x, y: x ∼𝜏 y implies x ≈ y.
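The finitary back-and-forth condition can be made concrete for finite structures. The following sketch (my illustration; finite linear orders are my choice of example, not the text's topic) decides the n-round back-and-forth equivalence directly from the extension property:

```python
# Sketch (illustrative; finite linear orders are an arbitrary choice): decide
# whether the duplicator can survive n rounds of extending a partial
# isomorphism, i.e. the finitary back-and-forth equivalence underlying x ~n y.

def partial_iso(pa, pb):
    # the pairing pa[k] -> pb[k] preserves < and = (a partial isomorphism)
    return all(
        (pa[i] < pa[j]) == (pb[i] < pb[j]) and (pa[i] == pa[j]) == (pb[i] == pb[j])
        for i in range(len(pa)) for j in range(len(pa))
    )

def equiv(A, B, n, pa=(), pb=()):
    if not partial_iso(pa, pb):
        return False
    if n == 0:
        return True
    # extension property: every choice on one side can be matched on the other
    return (all(any(equiv(A, B, n - 1, pa + (a,), pb + (b,)) for b in B) for a in A)
            and all(any(equiv(A, B, n - 1, pa + (a,), pb + (b,)) for a in A) for b in B))

L4, L5 = list(range(4)), list(range(5))
assert equiv(L4, L5, 2)      # 4- and 5-element orders agree up to rank 2
assert not equiv(L4, L5, 3)  # a rank-3 sentence distinguishes them
```

The assertions instantiate the classical fact that finite linear orders of sizes m and k are n-equivalent iff m = k or both are at least 2^n − 1.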

Given a vocabulary 𝜏, let 𝜏* be a disjoint renaming of 𝜏. If x, y ∈ St𝜏 have the same power, let y* be an isomorphic copy of y sharing its universe with x and renamed to be of type 𝜏*. In this context, (x, y*) will denote the 𝜏 ∪ 𝜏*-structure that results from expanding x with the relations of y*.

Lemma: There is a vocabulary 𝜏+ ⊇ 𝜏 ∪ 𝜏* such that for each finite vocabulary 𝜏0 ⊆ 𝜏 there is a sequence of elementary classes 𝛥1 ⊇ 𝛥2 ⊇ 𝛥3 ⊇ … in St𝜏+ such that, with 𝜋 = 𝜋𝜏+,𝜏∪𝜏*: (1) 𝜋(𝛥n) = {(x, y*) : |x| = |y| ≥ 𝜔, x ≡n,𝜏0 y}, (2) 𝜋(⋂n 𝛥n) = {(x, y*) : |x| = |y| ≥ 𝜔, x ∼𝜏0 y}. Moreover, ⋂n 𝛥n is the reduct of an elementary class.

Proof. Let 𝛥 be the class of structures (x, y*, <, a, I) where < is a discrete linear order with minimum but no maximum and I codes, for each c ≤ a, a family Ic = {I(c, i, −, −)}i∈x of partial 𝜏0-𝜏0*-isomorphisms from x into y*, such that for c < c′ ≤ a: Ic ⊆ Ic′ and the extension property holds. Describe this by a first-order sentence 𝜃𝛥 of type 𝜏+ ⊇ 𝜏0 ∪ 𝜏0* and set 𝛥n = Mod(𝜃𝛥 ∧ ∃≥n x (x ≤ a)). Condition (1) in the Lemma is then granted by Fraïssé’s characterization, and (2) is granted because (x, y*, <, a, I) ∈ ⋂n 𝛥n iff < contains an infinite increasing 𝜔-chain below a, a Σ11 condition.

A topology on St𝜏 is invariant if its open (closed) classes are closed under isomorphism. Of course, the notion is superfluous if we identify isomorphic structures.

Theorem: Let Γ assign to each class St𝜏 a regular compact topology Γ𝜏 finer than the elementary topology, such that the countable structures are dense in St𝜏 and reducts and renamings are continuous for these topologies. Then Γ𝜏 is the elementary topology for every 𝜏.

Proof: We show that any pair of disjoint closed classes C1, C2 of Γ𝜏 may be separated by an elementary class. Assume this is not the case. Since the Ci are compact in the topology Γ𝜏, they are compact for the elementary topology and, as they cannot be separated by an elementary class, there exist xi ∈ Ci such that x1 ≡ x2 in L𝜔𝜔(𝜏). The xi must be infinite, otherwise they would be isomorphic, contradicting the disjointedness of the Ci. By normality of Γ𝜏, there are towers Ci ⊆ Ui ⊆ C′i, i = 1, 2, separating the Ci, with the Ui open and the C′i closed and disjoint in Γ𝜏. Let I be a first-order sentence of some type 𝜏′ ⊇ 𝜏 such that (z, …) |= I ⇔ z is infinite, and let 𝜋 be the corresponding reduct operation. For fixed n ∈ 𝜔 and finite 𝜏0 ⊆ 𝜏, let t be a first-order sentence describing the common ≡n,𝜏0-equivalence class of x1, x2. As

(xi, …) ∈ Mod(I) ∩ 𝜋−1(Mod(t)) ∩ 𝜋−1(Ui), i = 1, 2

and this class is open in Γ𝜏′ by continuity of 𝜋, by the density hypothesis there are countable xi ∈ Ui, i = 1, 2, such that x1 ≡n,𝜏0 x2. Thus, for some expansion of (x1, x2*),

(x1, x2*, …) ∈ 𝛥n,𝜏0 ∩ 𝜋1−1(C1) ∩ (𝜌𝜋2)−1(C2) —– (1)

where 𝛥n,𝜏0 is the class of the Lemma, 𝜋1, 𝜋2 are reducts, and 𝜌 is a renaming:

𝜋1(x1, x2*, …) = x1,  𝜋1 : St𝜏+ → St𝜏∪𝜏* → St𝜏

𝜋2(x1, x2*, …) = x2*,  𝜋2 : St𝜏+ → St𝜏∪𝜏* → St𝜏*

𝜌(x2*) = x2,  𝜌 : St𝜏* → St𝜏

Since the classes in (1) are closed, by continuity of the above functors, ⋂n 𝛥n,𝜏0 ∩ 𝜋1−1(C1) ∩ (𝜌𝜋2)−1(C2) is non-empty by compactness of Γ𝜏+. But ⋂n 𝛥n,𝜏0 = 𝜋(V) with V elementary of type 𝜏++ ⊇ 𝜏+. Then

V ∩ 𝜋−1𝜋1−1(U1) ∩ 𝜋−1(𝜌𝜋2)−1(U2) ≠ ∅

is open for Γ𝜏++ and, by the density condition, it must contain a countable structure (x1, x2*, …). Thus (x1, x2*, …) ∈ ⋂n 𝛥n,𝜏0, with xi ∈ Ui. It follows that x1 ∼𝜏0 x2 and thus x1|𝜏0 ≈ x2|𝜏0. Let δ𝜏0 be a first-order sentence of type 𝜏 ∪ 𝜏* ∪ {h} such that (x, y*, h) |= δ𝜏0 ⇔ h : x|𝜏0 ≈ y|𝜏0. By compactness,

(⋂𝜏0 ⊆fin 𝜏 Mod𝜏∪𝜏*∪{h}(δ𝜏0)) ∩ 𝜋1−1(C1) ∩ (𝜌𝜋2)−1(C2) ≠ ∅

and we have h : x1 ≈ x2 with xi ∈ Ci, contradicting the disjointedness of the Ci. Finally, if C is a closed class of Γ𝜏 and x ∉ C, then clΓ𝜏{x} is disjoint from C by regularity of Γ𝜏. Hence clΓ𝜏{x} and C may be separated by open classes of the elementary topology, which implies that C is closed in this topology.

Grothendieck’s Universes and Wiles Proof (Fermat’s Last Theorem). Thought of the Day 77.0


In formulating the general theory of cohomology Grothendieck developed the concept of a universe – a collection of sets large enough to be closed under any operation that arose. Grothendieck proved that the existence of a single universe is equivalent over ZFC to the existence of a strongly inaccessible cardinal. More precisely, a universe 𝑈 is 𝑉𝛼, the set of all sets of rank below 𝛼, for some uncountable strongly inaccessible cardinal 𝛼.

Colin McLarty summarised the general situation:

Large cardinals as such were neither interesting nor problematic to Grothendieck and this paper shares his view. For him they were merely legitimate means to something else. He wanted to organize explicit calculational arithmetic into a geometric conceptual order. He found ways to do this in cohomology and used them to produce calculations which had eluded a decade of top mathematicians pursuing the Weil conjectures. He thereby produced the basis of most current algebraic geometry and not only the parts bearing on arithmetic. His cohomology rests on universes but weaker foundations also suffice at the loss of some of the desired conceptual order.

The applications of cohomology theory implicitly rely on universes. Most number theorists regard the applications as requiring much less than their ‘on their face’ strength and in particular believe the large cardinal appeals are ‘easily eliminable’. There are in fact two issues. McLarty writes:

Wiles’s proof uses hard arithmetic some of which is on its face one or two orders above PA, and it uses functorial organizing tools some of which are on their face stronger than ZFC.

There are two current programs for verifying in detail the intuition that the formal requirements for Wiles’s proof of Fermat’s Last Theorem can be substantially reduced. On the one hand, McLarty’s current work aims to reduce the ‘on their face’ strength of the results in cohomology from large cardinal hypotheses to finite order Peano arithmetic. On the other hand, Macintyre aims to reduce the ‘on their face’ strength of results in hard arithmetic to Peano arithmetic. These programs may be complementary, or a full implementation of Macintyre’s might render the first unnecessary.

McLarty reduces

  1. ‘all of SGA (Revêtements Étales et Groupe Fondamental)’ to Bounded Zermelo plus a Universe.
  2. ‘the currently existing applications’ to Bounded Zermelo itself, thus the consistency strength of simple type theory. The Grothendieck duality theorem and others like it become theorem schema.

The essential insight of McLarty’s papers on cohomology is the role of replacement in giving strength to the universe hypothesis. A 𝑍𝐶-universe is defined to be a transitive set 𝑈 modeling 𝑍𝐶 such that every subset of an element of 𝑈 is itself an element of 𝑈. He remarks that any 𝑉𝛼, for 𝛼 a limit ordinal, is provably in 𝑍𝐹𝐶 a 𝑍𝐶-universe. McLarty then asserts that the essential use of replacement in the original Grothendieck formulation is to prove: for an arbitrary ring 𝑅, every module over 𝑅 embeds in an injective 𝑅-module, and thus injective resolutions exist for all 𝑅-modules. But he gives a proof, in a system with the proof-theoretic strength of finite order arithmetic, that every sheaf of modules on any small site has an injective resolution.

Angus Macintyre dismisses with little comment the worries about the use of ‘large-structure’ tools in Wiles’s proof. He begins his appendix,

At present, all roads to a proof of Fermat’s Last Theorem pass through some version of a Modularity Theorem (generically MT) about elliptic curves defined over Q . . . A casual look at the literature may suggest that in the formulation of MT (or in some of the arguments proving whatever version of MT is required) there is essential appeal to higher-order quantification, over one of the following.

He then lists such objects as C, modular forms, Galois representations, … and summarises that a superficial formulation of MT would be 𝛱1m for some small 𝑚. But he continues,

I hope nevertheless that the present account will convince all except professional sceptics that MT is really 𝛱01.

There then follows a 13 page highly technical sketch of an argument for the proposition that MT can be expressed by a sentence in 𝛱01 along with a less-detailed strategy for proving MT in PA.

Macintyre’s complexity analysis is in traditional proof-theoretic terms. But his remark that ‘genus’ is a more useful geometric classification of curves than the syntactic notion of degree suggests that other criteria may be relevant. McLarty’s approach is not really a meta-theorem, but a statement that there was only one essential use of replacement and it can be eliminated. In contrast, Macintyre argues that ‘apparent second order quantification’ can be replaced by first order quantification. But the argument requires deep understanding of the number theory for each replacement in a large number of situations. Again, there is no general theorem that this type of result is provable in PA.

Orgies of the Atheistic Materialism: Barthes Contra Sade. Drunken Risibility.

The language and style of Justine are inextricably tied to sexual pleasure. Sade makes it impossible for the reader to ignore this aspect of the text. Roland Barthes, whose essays in Sade, Fourier, Loyola describe the innovative language of each author, underscores the importance of pleasure when discussing the Sadian voyage:

If the Sadian novel is excluded from our literature, it is because in it novelistic peregrination is never a quest for the Unique (temporal essence, truth, happiness), but a repetition of pleasure; Sadian errancy is unseemly, not because it is vicious and criminal, but because it is dull and somehow insignificant, withdrawn from transcendency, void of term: it does not reveal, does not transform, does not develop, does not educate, does not sublimate, does not accomplish, recuperates nothing, save for the present itself, cut up, glittering, repeated; no patience, no experience; everything is carried immediately to the acme of knowledge, of power, of ejaculation; time does not arrange or derange it, it repeats, recalls, recommences, there is no scansion other than that which alternates the formation and the expenditure of sperm.

Barthes’s observation reflects La Mettrie’s influence on Sade, whose libertine characters parrot in both speech and action the philosopher’s view that the pursuit of pleasure is man’s raison d’être. Sexuality permeates a great many linguistic and stylistic features of Justine, for example, names of characters (onomastics), literal and figurative language, grammatical structures, cultural and class references, dramatic effects, repetition and exaggeration, and use of parody and caricature. Justine is traditionally the name of a female domestic (soubrette), connoting a person of the lower classes, who falls prey to promiscuous behavior. Near the beginning of Justine, Sade renames the heroine the moment she accepts employment at the home of the miserly Monsieur Du Harpin, a surname evocative of Molière’s Harpagon. Sophie, the wise example of womanly Christian virtue in the first version, becomes Thérèse, the anti-philosophe in the second, who chooses to ignore the brutally realistic counsel of her libertine persecutors. Sade’s Thérèse recalls the heroine of Thérèse philosophe who, unlike his protagonist, profited from an erotic lifestyle.

Sade may manipulate language to enhance erotic description but he also relies upon his observation of everyday life and the class divisions of the ancien régime to provide him with models for his libertine characters, their mores, and their lifestyles. In Justine, he presents a socio-cultural microcosm of France during the reign of Louis XV. The power brokers of Sade’s youth who, for the most part, enriched themselves in his Majesty’s wars by means of corruption and influence, resurface in print as Justine’s exploiters. The noblemen, the financiers, the legal and medical professionals, the clergymen, and the thieves – robber barons representative of each social class – sexually maneuver their subjects to establish control. While we learn what the classes of mid-eighteenth-century France ate, how they dressed, and where they lived, we also witness the ongoing struggle between victim and victimizer, the former personified by Justine, an ordinary bourgeois individual who can never vanquish the tyrant who maintains authority through sexual prowess rather than through wealth.

Barthes tells us that Sade’s passion was not erotic but theatrical. The marquis’s infatuation with the theater was inspired early on by the lavish productions staged by the Jesuits during his three and a half years at the Collège Louis-le-Grand. Later, his romantic dalliances with actresses and his own involvement in acting, writing, and production attest to his enormous attraction to the theater. In his libertine works, Sade incorporates theatricality, especially in his orgiastic scenes; in his own way, he creates the necessary horror and suspense first to seduce the reader and then to maintain his or her attention. Like a spectator in the audience, the reader observes well-rehearsed productions whose decor, script, and players have been predetermined, and is shown various props in the form of “sadistic” paraphernalia.

Sade makes certain that the lesson given to Justine by her libertine victimizers following her forced participation in their orgies is not forgotten. Once again, Sade relies on man’s innate need for sexual pleasure to intellectualize the universe in a manner similar to his own. By using sexual desire as a ploy, Sade inculcates the atheistic materialism he so strongly proclaims into both an attentive Justine and an attentive reader. Justine cooperates with her depraved persecutors but refuses to adopt their way of thinking and thus continues to suffer at the hands of society’s exploiters. Sade, however, seizes the opportunity to convince his invisible readership that his concept of the universe is the right one. No matter how monotonous it may seem, repetition, whether in the form of licentious behavior or pseudo-philosophical diatribe, serves as a time-tested, powerful didactic tool.

Conjuncted: Operations of Truth. Thought of the Day 47.1


Conjuncted here.

Let us consider the power set p(N) of the set of all natural numbers, the latter being the smallest infinite set – the countable infinity. By a model of set theory we understand a set in which – if we restrict ourselves to its elements only – all axioms of set theory are satisfied. It follows from Gödel’s completeness theorem that as long as set theory is consistent, no statement which is true in some model of set theory can contradict logical consequences of its axioms. If the cardinality of p(N) were such a consequence, there would exist a cardinal number κ such that the sentence “the cardinality of p(N) is κ” would be true in all the models. However, for every cardinal κ the technique of forcing allows for finding a model M where the cardinality of p(N) is not equal to κ. Thus, for no κ is the sentence “the cardinality of p(N) is κ” a consequence of the axioms of set theory, i.e. they do not decide the cardinality of p(N).

The starting point of forcing is a model M of set theory – called the ground model – which is countably infinite and transitive. As a matter of fact, the existence of such a model cannot be proved, but it is known that there exists a countable and transitive model for every finite subset of the axioms.

A characteristic subtlety can be observed here. From the perspective of an inhabitant of the universe, that is, if all the sets are considered, the model M is only a small part of this universe. It is deficient in almost every respect; for example all of its elements are countable, even though the existence of uncountable sets is a consequence of the axioms of set theory. However, from the point of view of an inhabitant of M, that is, if elements outside of M are disregarded, everything is in order. Some sets in M appear uncountable to its inhabitants, because in this model there are no functions establishing a one-to-one correspondence between them and ω0. One could say that M simulates the properties of the whole universe.

The main objective of forcing is to build a new model M[G] based on M, which contains M, and satisfies certain additional properties. The model M[G] is called the generic extension of M. In order to accomplish this goal, a particular set is distinguished in M and its elements are referred to as conditions which will be used to determine basic properties of the generic extension. In case of the forcing that proves the undecidability of the cardinality of p(N), the set of conditions codes finite fragments of a function witnessing the correspondence between p(N) and a fixed cardinal κ.

In the next step, an appropriately chosen set G is added to M as well as other sets that are indispensable in order for M[G] to satisfy the axioms of set theory. This set – called generic – is a subset of the set of conditions that always lies outside of M. The construction of M[G] is exceptional in the sense that its key properties can be described and proved using M only, or just the conditions, thus without referring to the generic set. This is possible for three reasons. First of all, every element x of M[G] has a name existing already in M (that is, an element of M that codes x in some particular way). Secondly, based on these names, one can design a language called the forcing language or – as Badiou terms it – the subject language, which is powerful enough to express every sentence of set theory referring to the generic extension. Finally, it turns out that the validity of sentences of the forcing language in the extension M[G] depends on the set of conditions: the conditions force validity of sentences of the forcing language in a precisely specified sense. As has already been said, the generic set G consists of some of the conditions, so even though G is outside of M, its elements are in M. Recognizing which of them will end up in G is not possible for an inhabitant of M; however, in some cases the following can be proved: provided that the condition p is an element of G, the sentence S is true in the generic extension constructed using this generic set G. We say then that p forces S.
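To make the combinatorics of conditions concrete, here is a small Python sketch (the names and the specific coding are my own illustration, not Cohen’s or Badiou’s notation): conditions are finite partial functions from N to {0, 1}, ordered by extension, and a Rasiowa–Sikorski-style construction meets any countable list of dense sets. A truly generic set must meet every dense set definable in M, which no construction carried out inside M can achieve; the sketch only illustrates how conditions extend one another.

```python
# Cohen-style conditions: finite partial functions from N to {0, 1},
# represented as dicts. A condition q extends p when q agrees with p
# on all of p's domain (q decides at least as much as p).

def extends(q, p):
    """q is a stronger condition than p."""
    return all(n in q and q[n] == v for n, v in p.items())

def compatible(p, q):
    """p and q have a common extension (they never disagree)."""
    return all(q[n] == v for n, v in p.items() if n in q)

def meet_dense_sets(dense_sets):
    """Rasiowa-Sikorski style: build a decreasing chain of conditions
    meeting each dense set in a countable list."""
    p = {}          # the weakest condition: the empty function
    chain = []
    for dense in dense_sets:
        p = dense(p)            # dense(p) returns an extension of p inside the set
        chain.append(dict(p))
    return chain

# Dense set D_n = {p : n is in the domain of p}: every condition can be
# extended to decide the value at n, so D_n is dense.
def decide_at(n):
    def d(p):
        q = dict(p)
        if n not in q:
            q[n] = 0            # any choice works; density only needs existence
        return q
    return d

chain = meet_dense_sets([decide_at(n) for n in range(5)])
generic_fragment = chain[-1]    # a finite approximation to the "generic" object
print(generic_fragment)         # {0: 0, 1: 0, 2: 0, 3: 0, 4: 0}
```

The filter generated by such a chain meets only the dense sets we enumerated; genuine genericity over M requires meeting all of them at once, which is exactly what forces G outside of M.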

In this way, with the aid of the forcing language, one can prove that every generic set of the Cohen forcing codes a total function defining a one-to-one correspondence between elements of p(N) and a fixed (uncountable) cardinal number – it turns out that all the conditions force the sentence stating this property of G, so regardless of which conditions end up in the generic set, it is always true in the generic extension. On the other hand, the existence of a generic set in the model M cannot follow from the axioms of set theory, otherwise they would decide the cardinality of p(N).

The method of forcing is of fundamental importance for Badiou’s philosophy. The event escapes ontology; it is “that-which-is-not-being-qua-being”, so it has no place in set theory or the forcing construction. However, the post-evental truth that enters and modifies the situation is presented by forcing in the form of a generic set leading to an extension of the ground model. In other words, the situation, understood as the ground model M, is transformed by a post-evental truth identified with a generic set G, and becomes the generic model M[G]. Moreover, the knowledge of the situation is interpreted as the language of set theory, serving to discern elements of the situation, and as the axioms of set theory, deciding the validity of statements about the situation. Knowledge, understood in this way, neither decides the existence of a generic set in the situation nor can it point to its elements. A generic set is always undecidable and indiscernible.

Therefore, from the perspective of knowledge, it is not possible to establish whether a situation is still the ground model or whether it has undergone a generic extension resulting from the occurrence of an event; only the subject can interventionally decide this. And it is only the subject who decides about the belonging of particular elements to the generic set (i.e. the truth). A procedure of truth or procedure of fidelity (Alain Badiou – Being and Event) supported in this way gives rise to the subject language. It consists of sentences of set theory, so in this respect it is a part of knowledge, although the veridicity of the subject language originates from the decisions of the faithful subject. Consequently, a procedure of fidelity forces statements about the situation as it is after being extended and modified by the operation of truth.

Categorial Logic – Paracompleteness versus Paraconsistency. Thought of the Day 46.2


The fact that logic is content-dependent opens a new horizon concerning the relationship of logic to ontology (or objectology). Although the classical concepts of a priori and a posteriori propositions (or judgments) have lately become rather blurred, one fact is undeniable: the distant origin of mathematics lies in empirical practical knowledge, yet nobody can claim that higher mathematics is empirical.

Thanks to category theory, it is an established fact that some very important logical systems – the classical and the intuitionistic (with all its axiomatically enriched subsystems) – can be interpreted through topoi. And this possibility permits us to consider topoi, be it in a Noneist or in a Platonist way, as universes, that is, as ontologies or as objectologies. Now, the association of a topos with its corresponding ontology (or objectology) is quite different from the association of theoretical terms with empirical concepts. Within the frame of a physical theory, if a new fact is discovered in the laboratory, it must be explained through logical deduction (with the due initial conditions and some other details). If a logical conclusion is inferred from the fundamental hypotheses, it must be corroborated through empirical observation. And if the corroboration fails, the theory must be readjusted or even rejected.

In the case of categorial logic, the situation has some similarity with the former case; but we must be careful not to be influenced by apparent coincidences. If we add, as an axiom, the tertium non datur to formalized intuitionistic logic we obtain classical logic. That is, we can formally pass from the one to the other just by adding or suppressing the tertium. This fact could induce us to think that, just as in physics, if a logical theory, let’s say intuitionistic logic, cannot include a true proposition, then its axioms must be readjusted to make it possible to include it among its theorems. But there is a radical difference: in the semantics of intuitionistic logic, and of any logic, the point of departure is not a set of hypothetical propositions that must be corroborated through experiment; it is a set of propositions that are true under some interpretation. This set can be axiomatic or it can consist in rules of inference, but the theorems of the system are not submitted to verification. The derived propositions are just true, and nothing more. The logician surely tries to find new true propositions but, when they are found (through some effective method, which can be intuitive, as in Gödel’s theorem), there are only three possible cases: they can be formally derivable, their negations can be formally derivable, or they can be neither – that is, undecidable. But undecidability does not induce the logician to readjust or to reject the theory. Nobody tries to add axioms or to diminish them. In physics, when we are handling a theory T, and a new describable phenomenon is found that cannot be deduced from the axioms (plus initial or some other conditions), T must be readjusted or even rejected. A classical logician will never think of changing the axioms or rules of inference of classical logic because some proposition turns out to be undecidable.
And an intuitionist logician would never think of adding the tertium to the axioms of Heyting’s system merely because it cannot be derived within it.
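The status of the tertium can be made concrete with truth tables. In the three-element Heyting algebra 0 < 1/2 < 1 (a standard countermodel for intuitionistic logic; the choice of illustration is mine, not the text’s), the tertium non datur is not valid, while two-valued semantics validates it:

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def neg(x):
    # Heyting negation: the pseudo-complement x -> 0
    return 1 if x == 0 else 0

def disj(x, y):
    return max(x, y)

# Two-valued (classical) semantics: A or not-A always takes the value 1.
print([disj(x, neg(x)) for x in (0, 1)])   # [1, 1]

# Three-valued Heyting semantics: at the middle value the tertium fails.
print(disj(HALF, neg(HALF)))               # 1/2, not 1
```

Adding the tertium as an axiom collapses the middle value, which is the truth-table shadow of the formal passage from Heyting’s system to classical logic described above.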

The foregoing considerations sufficiently show that in logic and mathematics there is something that, with full right, can be called “a priori”. And although, as we have said, we must acknowledge that the concepts of a priori and a posteriori are not clear-cut, in some cases we can rightly speak of synthetical a priori knowledge. For instance, Gödel’s proposition that affirms its own underivability is synthetical and a priori. But there are other propositions, for instance mathematical induction, that can also be considered as synthetical and a priori. And a great many mathematical definitions that are not abbreviations are synthetical. For instance, the definition of a monoid action is synthetical (and, of course, a priori) because the concept of a monoid does not have among its characterizing traits the concept of an action, and vice versa.

Categorial logic is the deepest knowledge of logic that has ever been achieved. But its scope does not encompass the whole field of logic. There are other kinds of logic that are also important and, if we intend to know as much as possible what logic is and how it is related to mathematics and ontology (or objectology), we must pay attention to them. From a mathematical and a philosophical point of view, the most important logical systems beyond the paracomplete ones are the paraconsistent ones. These systems are something like a dual to paracomplete logics. They are employed in inconsistent theories without producing triviality (in this sense relevant logics are also paraconsistent). In intuitionistic logic there are interpretations that, with respect to some topoi, include two contradictory propositions that are both false; whereas in paraconsistent systems we can find interpretations in which there are two contradictory propositions that are both true.

There is, though, a difference between paracompleteness and paraconsistency. Insofar as mathematics is concerned, paracomplete systems had to be coined to cope with very deep problems. The paraconsistent ones, on the other hand, although they have been applied with success to mathematical theories, were conceived for purely philosophical and, in some cases, even for political and ideological motivations. The common point of them all was the need to construct a logical system able to cope with contradictions. That means: to have at one’s disposal a deductive method which offers the possibility of deducing consistent conclusions from inconsistent premisses. Of course, the inconsistency of the premisses had to comply with some (although very wide) conditions to avoid triviality. But these conditions made it possible to cope with paradoxes or antinomies with precision and mathematical sense.
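A minimal executable illustration (my own sketch, using Priest’s three-valued Logic of Paradox, one standard paraconsistent system; the text does not commit to a particular one): a contradiction can count as true without entailing an arbitrary proposition, i.e. ex falso quodlibet fails:

```python
# A tiny model of the Logic of Paradox (LP).
# Truth values: 0 = false, 1 = "both true and false", 2 = true.
# Designated values (those counting as true): {1, 2}.

DESIGNATED = {1, 2}

def neg(x):
    return 2 - x          # swaps true/false, fixes the glut value

def conj(x, y):
    return min(x, y)

def entails(premise_vals, conclusion_vals):
    """Valid iff every valuation making the premise designated
    also makes the conclusion designated."""
    return all(c in DESIGNATED
               for p, c in zip(premise_vals, conclusion_vals)
               if p in DESIGNATED)

# One valuation: A has the glut value "both", Q is plainly false.
A, Q = 1, 0
contradiction = conj(A, neg(A))       # value 1: a designated contradiction
print(contradiction in DESIGNATED)    # True: two contradictory "true" propositions
print(entails([contradiction], [Q]))  # False: explosion (A and not-A entails Q) fails
```

This is exactly the wide condition mentioned above: the contradiction is admitted, but the consequence relation is weakened just enough that it no longer trivializes the theory.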

But, philosophically, paraconsistent logic has another very important property: it is used in a spontaneous way to formalize the naive set theory, that is, the kind of theory that pre-Zermelian mathematicians had always employed. And it is, no doubt, important to try to develop mathematics within the frame of naive, spontaneous, mathematical thought, without falling into the artificiality of modern set theory. The formalization of the naive way of mathematical thinking, although every formalization is unavoidably artificial, has opened the possibility of coping with dialectical thought.

The Sibyl’s Prophecy/Nordic Creation. Note Quote.



The Prophecy of the Tenth Sibyl, a medieval best-seller, surviving in over 100 manuscripts from the 11th to the 16th century, predicts, among other things, the reign of evil despots, the return of the Antichrist and the sun turning to blood.

The Tenth or Tiburtine Sibyl was a pagan prophetess perhaps of Etruscan origin. To quote Lactantius in his general account of the ten sibyls in the introduction, ‘The Tiburtine Sibyl, by name Albunea, is worshiped at Tibur as a goddess, near the banks of the Anio in which stream her image is said to have been found, holding a book in her hand’.

The work interprets the Sibyl’s dream in which she foresees the downfall and apocalyptic end of the world; 9 suns appear in the sky, each one more ugly and bloodstained than the last, representing the 9 generations of mankind and ending with Judgment Day. The original Greek version dates from the end of the 4th century and the earliest surviving manuscript in Latin is dated 1047. The Tiburtine Sibyl is often depicted with Emperor Augustus, who asks her if he should be worshipped as a god.

The foremost lay of the Elder Edda is called Voluspa (The Sibyl’s Prophecy). The volva, or sibyl, represents the indelible imprint of the past, wherein lie the seeds of the future. Odin, the Allfather, consults this record to learn of the beginning, life, and end of the world. In her response, she addresses Odin as a plurality of “holy beings,” indicating the omnipresence of the divine principle in all forms of life. This also hints at the growth of awareness gained by all living, learning entities during their evolutionary pilgrimage through spheres of existence.

Hear me, all ye holy beings, greater as lesser sons of Heimdal! You wish me to tell of Allfather’s works, tales of the origin, the oldest I know. Giants I remember, born in the foretime, they who long ago nurtured me. Nine worlds I remember, nine trees of life, before this world tree grew from the ground.

Paraphrased, this could be rendered as:

Learn, all ye living entities, imbued with the divine essence of Odin, ye more and less evolved sons of the solar divinity (Heimdal) who stands as guardian between the manifest worlds of the solar system and the realm of divine consciousness. You wish to learn of what has gone before. I am the record of long ages past (giants), that imprinted their experience on me. I remember nine periods of manifestation that preceded the present system of worlds.

Time being inextricably a phenomenon of manifestation, the giant ages refer to the matter-side of creation. Giants represent ages of such vast duration that, although their extent in space and time is limited, it is of a scope that can only be illustrated as gigantic. Smaller cycles within the greater are referred to in the Norse myths as daughters of their father-giant. Heimdal is the solar deity in the sign of Aries – of beginnings for our system – whose “sons” inhabit, in fact compose, his domain.

Before a new manifestation of a world, whether a cosmos or a lesser system, all its matter is frozen in a state of immobility, referred to in the Edda as a frost giant. The gods – consciousnesses – are withdrawn into their supernal, unimaginable abstraction of Nonbeing, called in Sanskrit “paranirvana.” Without a divine activating principle, space itself – the great container – is a purely theoretical abstraction where, for lack of any organizing energic impulse of consciousness, matter cannot exist.

This was the origin of ages when Ymer built. No soil was there, no sea, no cool waves. Earth was not, nor heaven above; Gaping Void alone, no growth. Until the sons of Bur raised the tables; they who created beautiful Midgard. The sun shone southerly on the stones of the court; then grew green herbs in fertile soil.

To paraphrase again:

Before time began, the frost giant (Ymer) prevailed. No elements existed for there were ‘no waves,’ no motion, hence no organized form nor any temporal events, until the creative divine forces emanated from Space (Bur — a principle, not a locality) and organized latent protosubstance into the celestial bodies (tables at which the gods feast on the mead of life-experience). Among these tables is Middle Court (Midgard), our own beautiful planet. The life-giving sun sheds its radiant energies to activate into life all the kingdoms of nature which compose it.

The Gaping Void (Ginnungagap) holds “no cool waves” throughout illimitable depths during the age of the frost giant. Substance has yet to be created. Utter wavelessness negates it, for all matter is the effect of organized, undulating motion. As the cosmic hour strikes for a new manifestation, the ice of Home of Nebulosity (Niflhem) is melted by the heat from Home of Fire (Muspellshem), resulting in vapor in the void. This is Ymer, protosubstance as yet unformed, the nebulae from which will evolve the matter components of a new universe, as the vital heat of the gods melts and vivifies the formless immobile “ice.”

When the great age of Ymer has run its course, the cow Audhumla, symbol of fertility, “licking the salt from the ice blocks,” uncovers the head of Buri, first divine principle. From this infinite, primal source emanates Bur, whose “sons” are the creative trinity: Divine Allfather, Will, and Sanctity (Odin, Vile, and Vi). This triune power “kills” the frost giant by transforming it into First Sound (Orgalmer), or keynote, whose overtones vibrate through the planes of sleeping space and organize latent protosubstance into the multifarious forms which will be used by all “holy beings” as vehicles for gaining experience in worlds of matter.

Beautiful Midgard, our physical globe earth, is but one of the “tables” raised by the creative trinity, whereat the gods shall feast. The name Middle Court is suggestive, for the ancient traditions place our globe in a central position in the series of spheres that comprise the terrestrial being’s totality. All living entities, man included, comprise besides the visible body a number of principles and characteristics not cognized by the gross physical senses. In the Lay of Grimner (Grimnismal), wherein Odin in the guise of a tormented prisoner on earth instructs a human disciple, he enumerates twelve spheres or worlds, all but one of which are unseen by our organs of sight. As to the formation of Midgard, he relates:

Of Ymer’s flesh was the earth formed, the billows of his blood, the mountains of his bones, bushes of his hair, and of his brainpan heaven. With his eyebrows beneficent powers enclosed Midgard for man; but of his brain were surely all dark skies created.

The trinity of immanent powers organize Ymer into the forms wherein they dwell, shaping the chaos or frost giant into living globes on many planes of being. The “eyebrows” that gird the earth and protect it suggest the Van Allen belts that shield the planet from inimical radiation. The brain of Ymer – material thinking – is surely all too evident in the thought atmosphere wherein man participates.

The formation of the physical globe is described as the creation of “dwarfs” – elemental forces which shape the body of the earth-being and which include the mineral, vegetable, and animal kingdoms.

The mighty drew to their judgment seats, all holy gods to hold counsel: who should create a host of dwarfs from the blood of Brimer and the limbs of the dead. Modsogne there was, mightiest of all the dwarfs, Durin the next; there were created many humanoid dwarfs from the earth, as Durin said.

Brimer is the slain Ymer, a kenning for the waters of space. Modsogne is the Force-sucker, Durin the Sleeper, and later comes Dvalin the Entranced. They are “dwarf”-consciousnesses, beings that are miðr than human – the Icelandic miðr meaning both “smaller” and “less.” By selecting the former meaning, popular concepts have come to regard them as undersized mannikins, rather than as less evolved natural species that have not yet reached the human condition of intelligence and self-consciousness.

During the life period or manifestation of a universe, the governing giant or age is named Sound of Thor (Trudgalmer), the vital force which sustains activity throughout the cycle of existence. At the end of this age the worlds become Sound of Fruition (Bargalmer). This giant is “placed on a boat-keel and saved,” or “ground on the mill.” Either version suggests the karmic end product as the seed of future manifestation, which remains dormant throughout the ensuing frost giant of universal dissolution, when cosmic matter is ground into a formless condition of wavelessness, dissolved in the waters of space.

There is an inescapable duality of gods-giants in all phases of manifestation: gods seek experience in worlds of substance and feast on the mead at stellar and planetary tables; giants, formed into vehicles inspired with the divine impetus, rise through cycles of this association on the ladder of conscious awareness. All states being relative and bipolar, there is in endless evolution an inescapable link between the subjective and objective progress of beings. Odin as the “Opener” is paired with Orgalmer, the keynote on which a cosmos is constructed; Odin as the “Closer” is equally linked with Bargalmer, the fruitage of a life cycle. During the manifesting universe, Odin-Allfather corresponds to Trudgalmer, the sustainer of life.

A creative trinity plays an analogical part in the appearance of humanity. Odin remains the all-permeant divine essence, while on this level his brother-creators are named Honer and Lodur, divine counterparts of water or liquidity, and fire or vital heat and motion. They “find by the shore, of little power” the Ash and the Elm and infuse into these earth-beings their respective characteristics, making a human image or reflection of themselves. These protohumans, miniatures of the world tree, the cosmic Ash, Yggdrasil, in addition to their earth-born qualities of growth force and substance, receive the divine attributes of the gods. By Odin man is endowed with spirit, from Honer comes his mind, while Lodur gives him will and godlike form. The essentially human qualities are thus potentially divine. Man is capable of blending with the earth, whose substances form his body, yet is able to encompass in his consciousness the vision native to his divine source. He is in fact a minor world tree, part of the universal tree of life, Yggdrasil.

Ygg in conjunction with other words has been variously translated as Eternal, Awesome or Terrible, and Old. Sometimes Odin is named Yggjung, meaning the Ever-Young, or Old-Young. Like the biblical “Ancient of Days” it is a concept that mind can grasp only in the wake of intuition. Yggdrasil is the “steed” or the “gallows” of Ygg, whereon Odin is mounted or crucified during any period of manifested life. The world tree is rooted in Nonbeing and ramifies through the planes of space, its branches adorned with globes wherein the gods imbody. The sibyl spoke of ours as the tenth in a series of such world trees, and Odin confirms this in The Song of the High One (Den Hoges Sang):

I know that I hung in the windtorn tree nine whole nights, spear-pierced, given to Odin, my self to my Self above me in the tree, whose root none knows whence it sprang. None brought me bread, none served me drink; I searched the depths, spied runes of wisdom, raised them with song, and fell once more from the tree. Nine powerful songs I learned from the wise son of Boltorn, Bestla’s father; a draught I drank of precious mead ladled from Odrorer. I began to grow, to grow wise, to grow greater and enjoy; for me words from words led to new words, for me deeds from deeds led to new deeds.

Numerous ancient tales relate the divine sacrifice and crucifixion of the Silent Watcher whose realm or protectorate is a world in manifestation. Each tree of life, of whatever scope, constitutes the cross whereon the compassionate deity inherent in that hierarchy remains transfixed for the duration of the cycle of life in matter. The pattern of repeated imbodiments for the purpose of gaining the precious mead is clear, as also the karmic law of cause and effect as words and deeds bring their results in new words and deeds.

Yggdrasil is said to have three roots. One extends into the land of the frost giants, whence flow twelve rivers of lives or twelve classes of beings; another springs from and is watered by the well of Origin (Urd), where the three Norns, or fates, spin the threads of destiny for all lives. “One is named Origin, the second Becoming. These two fashion the third, named Debt.” They represent the inescapable law of cause and effect. Though they have usually been roughly translated as Past, Present, and Future, the dynamic concept in the Edda is more complete and philosophically exact. The third root of the world tree reaches to the well of the “wise giant Mimer,” owner of the well of wisdom. Mimer represents material existence and supplies the wisdom gained from experience of life. Odin forfeited one eye for the privilege of partaking of these waters of life, hence he is represented in manifestation as one-eyed and named Half-Blind. Mimer, the matter-counterpart, at the same time receives partial access to divine vision.

The lays make it very clear that the purpose of existence is for the consciousness-aspect of all beings to gain wisdom through life, while inspiring the substantial side of itself to growth in inward awareness and spirituality. At the human level, self-consciousness and will are aroused, making it possible for man to progress willingly and purposefully toward his divine potential, aided by the gods who have passed that way before him, rather than to drift by slow degrees and many detours along the road of inevitable evolution. Odin’s instructions to a disciple, Loddfafner, the dwarf-nature in man, conclude with:

Now is sung the High One’s song in the High One’s hall. Useful to sons of men, useless to sons of giants. Hail Him who sang! Hail him who kens! Rejoice they who understand! Happy they who heed!

Without Explosions, WE Would NOT Exist!


The matter and radiation in the universe get hotter and hotter as we go back in time towards the initial quantum state, because they were compressed into a smaller volume. In this Hot Big Bang epoch in the early universe, we can use standard physical laws to examine the processes going on in the expanding mixture of matter and radiation. A key feature is that about 300,000 years after the start of the Hot Big Bang epoch, nuclei and electrons combined to form atoms. At earlier times when the temperature was higher, atoms could not exist, as the radiation then had so much energy it disrupted any atoms that tried to form into their constituent parts (nuclei and electrons). Thus at earlier times matter was ionized, consisting of negatively charged electrons moving independently of positively charged atomic nuclei. Under these conditions, the free electrons interact strongly with radiation by Thomson scattering. Consequently matter and radiation were tightly coupled in equilibrium at those times, and the Universe was opaque to radiation. When the temperature dropped through the ionization temperature of about 4000K, atoms formed from the nuclei and electrons, and this scattering ceased: the Universe became very transparent. The time when this transition took place is known as the time of decoupling – it was the time when matter and radiation ceased to be tightly coupled to each other, at a redshift zdec ≃ 1100 (Scott Dodelson, Modern Cosmology, Academic Press). The matter and radiation densities and the radiation temperature scale with the expansion as

μmat ∝ S−3, μrad ∝ S−4, Trad ∝ S−1 —– (1)

The scale factor S(t) obeys the Raychaudhuri equation

3S̈/S = −(1/2)κ(μ + 3p/c2) + Λ —– (2)

where κ is the gravitational constant and Λ the cosmological constant.

By equation (1), the universe was radiation dominated (μrad ≫ μmat) at early times and matter dominated (μrad ≪ μmat) at late times; matter-radiation density equality occurred significantly before decoupling (the temperature Teq when this equality occurred was Teq ≃ 104K; at that time the scale factor was Seq ≃ 10−4S0, where S0 is the present-day value). The dynamics of both the background model and of perturbations about that model differ significantly before and after Seq.
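The scaling laws (1) and the equality epoch can be checked in a few lines. A hedged sketch, with illustrative numbers: I take the present radiation-to-matter density ratio to be 3 × 10−4, an assumption chosen to reproduce Teq ≃ 104 K, not a fitted value:

```python
# Sketch of the scaling relations (1), with present-day values normalized:
# matter density set to 1 at S = S0 = 1, radiation set by an assumed ratio.

def mu_mat(S, S0=1.0):
    return (S / S0) ** -3            # matter dilutes with volume

def mu_rad(S, S0=1.0, ratio0=3e-4):
    return ratio0 * (S / S0) ** -4   # radiation dilutes and also redshifts

def T_rad(S, S0=1.0, T0=2.75):
    return T0 * (S0 / S)             # temperature scales as 1/S

# Matter-radiation equality: mu_mat(S) = mu_rad(S) at S_eq/S0 = ratio0.
S_eq = 3e-4
print(abs(mu_mat(S_eq) - mu_rad(S_eq)) / mu_mat(S_eq) < 1e-12)  # True
print(round(T_rad(S_eq)))   # ~9000 K, i.e. roughly Teq = 10^4 K
```

The extra power of S in the radiation law is what guarantees radiation dominance at small S and matter dominance at large S, as stated above.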

Radiation was emitted by matter at the time of decoupling, thereafter travelling freely to us through the intervening space. When it was emitted, it had the form of blackbody radiation, because this is a consequence of matter and radiation being in thermodynamic equilibrium at earlier times. Thus the matter at z = zdec forms the Last Scattering Surface (LSS) in the early universe, emitting Cosmic Blackbody Background Radiation (‘CBR’) at 4000K, that since then has travelled freely with its temperature T scaling inversely with the scale function of the universe. As the radiation travelled towards us, the universe expanded by a factor of about 1100; consequently by the time it reaches us, it has cooled to 2.75 K (that is, about 3 degrees above absolute zero, with a spectrum peaking in the microwave region), and so is extremely hard to observe. It was however detected in 1965, and its spectrum has since been intensively investigated, its blackbody nature being confirmed to high accuracy (R. B. Partridge, 3K: The Cosmic Microwave Background Radiation). Its existence is now taken as solid proof both that the Universe has indeed expanded from a hot early phase, and that standard physics applied unchanged at that era in the early universe.

The thermal capacity of the radiation is hugely greater than that of the matter. At very early times before decoupling, the temperatures of the matter and radiation were the same (because they were in equilibrium with each other), scaling as 1/S(t) (Equation 1 above). The early universe exceeded any temperature that can ever be attained on Earth or even in the centre of the Sun; as it dropped towards its present value of 3 K, successive physical reactions took place that determined the nature of the matter we see around us today. At very early times and high temperatures, only elementary particles can survive and even neutrinos had a very small mean free path; as the universe cooled down, neutrinos decoupled from the matter and streamed freely through space. At these times the expansion of the universe was radiation dominated, and we can approximate the universe then by models with {k = 0, w = 1/3, Λ = 0}, the resulting simple solution of

3Ṡ2/S2 = A/S3 + B/S4 + Λ − 3k/S2 —– (3)

uniquely relates time to temperature:

S(t) = S0 t1/2, t = 1.92 sec [T/1010K]−2 —– (4)

(There are no free constants in the latter equation).
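Relation (4) can be evaluated directly; the two benchmark temperatures below (neutrino decoupling near 1010 K, and the 109 K scale below which nucleosynthesis proceeds) come from the surrounding text:

```python
# Direct evaluation of relation (4): t = 1.92 s * (T / 10^10 K)^-2.

def age_at_temperature(T_kelvin):
    """Age of the radiation-dominated universe when its temperature was T."""
    return 1.92 * (T_kelvin / 1e10) ** -2   # seconds

print(age_at_temperature(1e10))        # 1.92 s: around neutrino decoupling
print(round(age_at_temperature(1e9)))  # ~192 s, i.e. "the first three minutes"
```

The quadratic dependence on 1/T is why the whole of nucleosynthesis fits inside a few minutes: dropping one order of magnitude in temperature costs two orders of magnitude in time.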

At very early times, even neutrinos were tightly coupled and in equilibrium with the radiation; they decoupled at about 10¹⁰ K, resulting in a relic neutrino background density in the universe today of about Ων0 ≃ 10⁻⁵ if they are massless (but it could be higher depending on their masses). Key events in the early universe are associated with out-of-equilibrium phenomena. An important event was the era of nucleosynthesis, the time when the light elements were formed. Above about 10⁹ K, nuclei could not exist because the radiation was so energetic that as fast as they formed, they were disrupted into their constituent parts (protons and neutrons). Below this temperature, however, if particles collided with each other with sufficient energy for nuclear reactions to take place, the resultant nuclei remained intact (the radiation being less energetic than their binding energy and hence unable to disrupt them). Thus the nuclei of the light elements – deuterium, tritium, helium, and lithium – were created by neutron capture. This process ceased when the temperature dropped below about 10⁸ K (the nuclear reaction threshold). In this way, the proportions of these light elements at the end of nucleosynthesis were determined; they have remained virtually unchanged since. The rate of reaction was extremely high; all this took place within the first three minutes of the expansion of the Universe. One of the major triumphs of Big Bang theory is that theory and observation are in excellent agreement provided the density of baryons is low: Ωbar0 ≃ 0.044. Then the predicted abundances of these elements (25% helium by weight, 75% hydrogen, the others being less than 1%) agree very closely with the observed abundances. Thus the standard model explains the origin of the light elements in terms of known nuclear reactions taking place in the early Universe. However, heavier elements cannot form in the time available (about three minutes).
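The quoted 25% helium fraction follows from simple counting, assuming (a standard figure, not stated in the text) that the neutron-to-proton ratio at nucleosynthesis is about 1/7 and that essentially all neutrons end up bound in helium-4. A hedged Python sketch:

```python
# Assumption (standard value, not from the text): neutron-to-proton ratio ~1/7
# at the time the light elements form.
n_over_p = 1 / 7

# Each helium-4 nucleus binds 2 neutrons and 2 protons, so if all neutrons
# are captured, the helium mass fraction is 2(n/p) / (1 + n/p),
# treating neutron and proton masses as equal.
Y_helium = 2 * n_over_p / (1 + n_over_p)
print(f"Helium mass fraction: {Y_helium:.2%}")  # 25.00%
```

The remaining ~75% of the baryonic mass stays as hydrogen, matching the abundances quoted above.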

In a similar way, physical processes in the very early Universe (before nucleosynthesis) can be invoked to explain the ratio of matter to anti-matter in the present-day Universe: a small excess of matter over anti-matter must have been created then in the process of baryosynthesis, without which we could not exist today (if there were no such excess, matter and antimatter would all have annihilated to give just radiation). However, other quantities (such as electric charge) are believed to have been conserved even in the extreme conditions of the early Universe, so their present values result from given initial conditions at the origin of the Universe, rather than from physical processes taking place as it evolved. In the case of electric charge, the total conserved quantity appears to be zero: after quarks formed protons and neutrons at the time of baryosynthesis, there were equal numbers of positively charged protons and negatively charged electrons, so that at the time of decoupling there were just enough electrons to combine with the nuclei and form uncharged atoms (it seems there is no net electrical charge on astronomical bodies such as our galaxy; were this not true, electromagnetic forces would dominate cosmology, rather than gravity).

After decoupling, matter formed large scale structures through gravitational instability which eventually led to the formation of the first generation of stars and is probably associated with the re-ionization of matter. However at that time planets could not form for a very important reason: there were no heavy elements present in the Universe. The first stars aggregated matter together by gravitational attraction, the matter heating up as it became more and more concentrated, until its temperature exceeded the thermonuclear ignition point and nuclear reactions started burning hydrogen to form helium. Eventually more complex nuclear reactions started in concentric spheres around the centre, leading to a build-up of heavy elements (carbon, nitrogen, oxygen for example), up to iron. These elements can form in stars because there is a long time available (millions of years) for the reactions to take place. Massive stars burn relatively rapidly, and eventually run out of nuclear fuel. The star becomes unstable, and its core rapidly collapses because of gravitational attraction. The consequent rise in temperature blows it apart in a giant explosion, during which time new reactions take place that generate elements heavier than iron; this explosion is seen by us as a Supernova (“New Star”) suddenly blazing in the sky, where previously there was just an ordinary star. Such explosions blow into space the heavy elements that had been accumulating in the star’s interior, forming vast filaments of dust around the remnant of the star. It is this material that can later be accumulated, during the formation of second generation stars, to form planetary systems around those stars. Thus the elements of which we are made (the carbon, nitrogen, oxygen and iron nuclei for example) were created in the extreme heat of stellar interiors, and made available for our use by supernova explosions. Without these explosions, we could not exist.

Beginning of Matter, Start to Existence Itself


When the inequality

μ + 3p/c² > 0 ⇔ w > −1/3

is satisfied, one obtains directly from the Raychaudhuri equation

3S̈/S = −(1/2)κ(μ + 3p/c²) + Λ

the Friedmann-Lemaître (FL) Universe Singularity Theorem, which states that:

In a FL universe with Λ ≤ 0 and μ + 3p/c² > 0 at all times, at any instant t₀ when H₀ ≡ (Ṡ/S)₀ > 0 there is a finite time t∗, with t₀ − (1/H₀) < t∗ < t₀, such that S(t) → 0 as t → t∗; the universe starts at a space-time singularity there, with μ → ∞ and T → ∞ if μ + p/c² > 0.
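The hypothesis μ + 3p/c² > 0 in the theorem is equivalent to w > −1/3 (for positive μ), as one sees by substituting the equation of state p = wμc², giving μ + 3p/c² = μ(1 + 3w). A small numerical check in Python:

```python
def active_gravitational_density(mu, w, c=1.0):
    """mu + 3p/c^2 with the equation of state p = w * mu * c^2."""
    p = w * mu * c**2
    return mu + 3 * p / c**2

mu = 1.0  # any positive energy density
assert active_gravitational_density(mu, 1/3) > 0    # radiation: w = 1/3
assert active_gravitational_density(mu, 0.0) > 0    # dust: w = 0
assert active_gravitational_density(mu, -1/3) == 0  # boundary case
assert active_gravitational_density(mu, -1.0) < 0   # cosmological-constant-like
```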

This is not merely a start to matter – it is a start to space, to time, to physics itself. It is the most dramatic event in the history of the universe: it is the start of existence of everything. The underlying physical feature is the non-linear nature of Einstein's Field Equations (EFE): going back into the past, the more the universe contracts, the higher the active gravitational density, causing it to contract even more. The pressure p that one might have hoped would help stave off the collapse makes it even worse because (consequent on the form of the EFE) p enters algebraically into the Raychaudhuri equation with the same sign as the energy density μ. Note that the Hubble constant gives an estimate of the age of the universe: the time τ₀ = t₀ − t∗ since the start of the universe is less than 1/H₀.
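The bound τ₀ < 1/H₀ can be made concrete; a minimal sketch assuming the commonly used value H₀ ≈ 70 km/s/Mpc (an assumption, not a figure from the text):

```python
# Assumption (not from the text): H0 ~ 70 km/s/Mpc.
H0 = 70.0                   # km/s/Mpc
km_per_Mpc = 3.086e19       # kilometres in one megaparsec
seconds_per_Gyr = 3.156e16  # seconds in 10^9 years

H0_per_sec = H0 / km_per_Mpc                   # H0 in units of 1/s
hubble_time_Gyr = (1 / H0_per_sec) / seconds_per_Gyr
print(f"Upper bound on age: {hubble_time_Gyr:.1f} Gyr")  # ~14 Gyr
```

The Hubble time of roughly 14 billion years is only an upper bound here; the actual age depends on the expansion history.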

This conclusion can in principle be avoided by a cosmological constant, but in practice this cannot work. We know the universe has expanded by at least a ratio of 11, since we have seen objects at a redshift of 10; for the cosmological constant to dominate and cause a turn-around then, or at any earlier time, it would have to have an effective magnitude at least 11³ = 1331 times the present matter density, much bigger than its observed present upper limit (which is of the same order as the present matter density). Accordingly, no turnaround is possible while classical physics holds. However, energy-condition-violating matter components such as a scalar field can avoid this conclusion if they dominate at early enough times; but this can only be when quantum fields are significant, when the universe was at least 10¹² times smaller than at present.
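The factor quoted above is just (1 + z)³ for z = 10, since matter density scales as the cube of the expansion ratio; a one-line check:

```python
# Matter density scales as (1+z)^3; for objects seen at redshift z = 10
# the universe has expanded by a factor 1 + z = 11 since their light was emitted.
z = 10
expansion_ratio = 1 + z
density_ratio = expansion_ratio ** 3
print(expansion_ratio, density_ratio)  # 11 1331
```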

Because Trad ∝ S⁻¹, a major conclusion is that a Hot Big Bang must have occurred; densities and temperatures must have risen at least to energies high enough that quantum fields were significant, at something like the GUT energy. The universe must have reached those extreme temperatures and energies at which classical theory breaks down.