Ergodic Theory. Thought of the Day 51.0


Classical dynamical systems have a particularly rich set of time symmetries. A classical dynamical system consists of a set X (the state space) and a function φ from X into itself that determines how the state changes over time (the dynamics). Let (X, φ) be such a system, and let T = {0, 1, 2, 3, …} be the set of times. Given any state x in X (the initial condition), the orbit of x is the history h defined by h(0) = x, h(1) = φ(x), h(2) = φ(φ(x)), and so on. Let Ω be the set of all orbits determined by (X, φ) in this way.

Let {Pr’E}E⊆X be any conditional probability structure on X. For any events E and D in Ω, we define PrE(D) = Pr’E’(D’), where E’ is the set of all states x in X whose orbits lie in E, and D’ is the set of all states x in X whose orbits lie in D. Then {PrE}E⊆Ω is a conditional probability structure on Ω, and Ω together with {PrE}E⊆Ω forms a temporally evolving system. Not every temporally evolving system arises in this way, however.

Suppose the function φ (which maps from X into itself) is surjective, i.e., for every x in X there is some y in X such that φ(y) = x. Then the set Ω of orbits is invariant under all time-shifts. Let {Pr’E}E⊆X be a conditional probability structure on X, and let {PrE}E⊆Ω be the conditional probability structure it induces on Ω. Suppose that {Pr’E}E⊆X is φ-invariant, i.e., for any subsets E and D of X, if E’ = φ–1(E) and D’ = φ–1(D), then Pr’E’(D’) = Pr’E(D). Then every time-shift is a temporal symmetry of the resulting temporally evolving system. The study of dynamical systems equipped with invariant probability measures is the purview of ergodic theory.
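The orbit construction can be sketched concretely. A minimal sketch, assuming a toy finite state space X = {0, …, 9} and an invented surjective map φ(x) = (x + 3) mod 10; the names `phi` and `orbit` are illustrative, not from the text:

```python
# Sketch of a classical dynamical system (X, phi) and its orbits.
# X and phi here are illustrative assumptions, not taken from the text.

def orbit(phi, x, n):
    """First n+1 points of the history h: h(0)=x, h(t+1)=phi(h(t))."""
    h = [x]
    for _ in range(n):
        h.append(phi(h[-1]))
    return h

phi = lambda x: (x + 3) % 10   # a bijection, hence surjective, on {0,...,9}

h = orbit(phi, 0, 4)           # [0, 3, 6, 9, 2]

# Time-shift: the orbit of phi(x) is the orbit of x shifted by one step.
assert orbit(phi, phi(0), 3) == h[1:]
```

Surjectivity of φ is what guarantees that every orbit is the time-shift of some other orbit (each state has a predecessor), so the set Ω of orbits is closed under time-shifts.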

Dialectics: Mathematico-Philosophical Sequential Quantification. Drunken Risibility.


Figure: Graphical representation of the quantification of dialectics.

A sequence S of P philosophers along a given period of time would incorporate the P most prominent and visible philosophers in that interval. Building the time-sequence of philosophers by this criterion implies not necessarily uniform time intervals between successive entries.

The set of C measurements used to characterize the philosophers defines a C-dimensional feature space which will be henceforth referred to as the philosophical space. The characteristic vector v⃗i of each philosopher i defines a respective philosophical state in the philosophical space. Given a set of P philosophers, the average state at time i, i ≤ P, is defined as

a⃗i = (1/i) ∑k=1,…,i v⃗k

The opposite state of a given philosophical state v⃗i is defined as:

r⃗i = v⃗i + 2(a⃗i − v⃗i) = 2a⃗i − v⃗i

The opposition vector of philosophical state v⃗i is given by D⃗i = r⃗i − v⃗i. The opposition amplitude of that same state is defined as ||D⃗i||.
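These definitions translate directly into code. A minimal sketch, assuming NumPy and an invented 2-D philosophical space with three sample state vectors (the vectors in `V` are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical philosophical states v_1, v_2, v_3 in a 2-D feature space.
V = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [1.0, 1.0]])

def average_state(V, i):
    """a_i = (1/i) * sum_{k=1..i} v_k  (i is 1-based)."""
    return V[:i].mean(axis=0)

def opposite_state(V, i):
    """r_i = 2 a_i - v_i."""
    return 2.0 * average_state(V, i) - V[i - 1]

def opposition_vector(V, i):
    """D_i = r_i - v_i."""
    return opposite_state(V, i) - V[i - 1]

D3 = opposition_vector(V, 3)
amplitude = np.linalg.norm(D3)   # opposition amplitude ||D_3||
```

For the sample data, a⃗3 = (2/3, 1/3), so r⃗3 = (1/3, −1/3) and D⃗3 = (−2/3, −4/3): the opposite state sits across the running average from the current state.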

An emphasis move from the philosophical state v⃗i is any displacement from v⃗i away from the opposite state r⃗i, i.e. along −D⃗i. A contrary move from the philosophical state v⃗i is any displacement from v⃗i toward r⃗i, i.e. along D⃗i.

Given a time-sequence S of P philosophers, the philosophical move implied by two successive philosophers i and j corresponds to the vector M⃗i,j extending from v⃗i to v⃗j, i.e.

M⃗i,j = v⃗j − v⃗i

In principle, an innovative or differentiated philosophical move is one that departs substantially from the current philosophical state v⃗i. Innovation moves can be decomposed into two main subtypes: opposition and skewness.

The opposition index Wi,j of a given philosophical move M⃗i,j is defined as

Wi,j = 〈M⃗i,j, D⃗i〉 / ||D⃗i||²

This index quantifies the intensity of opposition of the respective philosophical move, in the sense of having a large projection along the vector D⃗i. Note that mere repetition of opposition moves leads to little innovation, as it would imply an oscillation around the average state. The skewness index si,j of that same philosophical move is the distance between v⃗j and the line Li defined by the vector D⃗i, and therefore quantifies how much the new philosophical state departs from the respective opposition move. A sequence of moves with zero skewness would represent rather trivial oscillations along the opposition line Li.

We also suggest an index to quantify the dialectics between a triple of successive philosophers i, j and k. More specifically, the philosophical state v⃗i is understood as the thesis, the state v⃗j is taken as the antithesis, and the synthesis is associated with the state v⃗k. The hypothesis that k is the consequence, among other forces, of a dialectics between the views v⃗i and v⃗j can be expressed by the requirement that the philosophical state v⃗k be located near the middle line MLi,j defined by the thesis and antithesis (i.e. the set of points equidistant from v⃗i and v⃗j), relative to the opposition amplitude ||D⃗i||.

Therefore, the counter-dialectic index is defined as

ρi→k = di→k / ||M⃗i,j||

where di→k is the distance between the philosophical state v⃗k and the middle-line MLi,j between v⃗i and v⃗j. Note that di→k, and hence ρi→k, is non-negative. The choice of counter-dialectics instead of dialectics is justified to maintain compatibility with the use of a point-to-line distance, as adopted for the definition of skewness….
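The three indices above can be sketched as follows. This is a minimal NumPy sketch with invented sample vectors; it assumes the mid-line MLi,j is the hyperplane of points equidistant from v⃗i and v⃗j (whose normal is M⃗i,j):

```python
import numpy as np

def opposition_index(vi, vj, Di):
    """W_{i,j} = <M_{i,j}, D_i> / ||D_i||^2, with M_{i,j} = v_j - v_i."""
    M = vj - vi
    return float(M @ Di) / float(Di @ Di)

def skewness_index(vi, vj, Di):
    """s_{i,j}: distance from v_j to the line through v_i along D_i."""
    M = vj - vi
    proj = (float(M @ Di) / float(Di @ Di)) * Di
    return float(np.linalg.norm(M - proj))

def counter_dialectic_index(vi, vj, vk):
    """rho_{i->k} = d_{i->k} / ||M_{i,j}||, where d_{i->k} is the distance
    from v_k to the mid-line of points equidistant from v_i and v_j."""
    M = vj - vi
    midpoint = (vi + vj) / 2.0
    normal = M / np.linalg.norm(M)     # the mid-line is normal to M
    d = abs(float((vk - midpoint) @ normal))
    return d / float(np.linalg.norm(M))

# Invented example: a move with both an opposition and a skew component.
vi, vj = np.array([0.0, 0.0]), np.array([2.0, 1.0])
Di = np.array([1.0, 0.0])              # hypothetical opposition vector
W = opposition_index(vi, vj, Di)       # projection of the move onto D_i
s = skewness_index(vi, vj, Di)         # departure from the opposition line
```

A synthesis lying exactly on the mid-line (e.g. the midpoint of v⃗i and v⃗j) yields ρ = 0, i.e. maximal dialectics under this index.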

Infinitesimal and Differential Philosophy. Note Quote.


If difference is the ground of being qua becoming, it is not difference as contradiction (Hegel), but as infinitesimal difference (Leibniz). Accordingly, the world is an ideal continuum or transfinite totality (Fold: Leibniz and the Baroque) of compossibilities and incompossibilities analyzable into an infinity of differential relations (Desert Islands and Other Texts). As the physical world is merely composed of contiguous parts that actually divide until infinity, it finds its sufficient reason in the reciprocal determination of evanescent differences (dy/dx, i.e. the perfectly determinable ratio or intensive magnitude between indeterminate and unassignable differences that relate virtually but never actually). But what is an evanescent difference if not a speculation or fiction? Leibniz refuses to make a distinction between the ontological nature and the practical effectiveness of infinitesimals. For even if they have no actuality of their own, they are nonetheless the genetic requisites of actual things.

Moreover, infinitesimals are precisely those paradoxical means through which the finite understanding is capable of probing into the infinite. They are the elements of a logic of sense, that great logical dream of a combinatory or calculus of problems (Difference and Repetition). On the one hand, intensive magnitudes are entities that cannot be determined logically, i.e. in extension, even if they appear or are determined in sensation only in connection with already extended physical bodies. This is because in themselves they are determined at infinite speed. Is not the differential precisely this problematic entity at the limit of sensibility that exists only virtually, formally, in the realm of thought? Isn’t the differential precisely a minimum of time, which refers only to the swiftness of its fictional apprehension in thought, since it is synthesized in Aion, i.e. in a time smaller than the minimum of continuous time and hence in the interstitial realm where time takes thought instead of thought taking time?

Contrary to the Kantian critique that seeks to eliminate the duality between finite understanding and infinite understanding in order to avoid the contradictions of reason, Deleuze thus agrees with Maïmon that we shouldn’t speak of differentials as mere fictions unless they require the status of a fully actual reality in that infinite understanding. The alternative between mere fictions and actual reality is a false problem that hides the paradoxical reality of the virtual as such: real but not actual, ideal but not abstract. If Deleuze is interested in the esoteric history of differential philosophy, this is as a speculative alternative to the exoteric history of the extensional science of actual differences and to Kantian critical philosophy. It is precisely through conceptualizing intensive, differential relations that finite thought is capable of acquiring consistency without losing the infinite in which it plunges. This brings us back to Leibniz and Spinoza. As Deleuze writes about the former: no one has gone further than Leibniz in the exploration of sufficient reason [and] the element of difference and therefore [o]nly Leibniz approached the conditions of a logic of thought. Or as he argues of the latter, fictional abstractions are only a preliminary stage for thought to become more real, i.e. to produce an expressive or progressive synthesis: The introduction of a fiction may indeed help us to reach the idea of God as quickly as possible without falling into the traps of infinite regression. In Maïmon’s reinvention of the Kantian schematism as well as in the Deleuzian system of nature, the differentials are the immanent noumena that are dramatized by reciprocal determination in the complete determination of the phenomenal. Even the Kantian concept of the straight line, Deleuze emphasizes, is a dramatic synthesis or integration of an infinity of differential relations. 
In this way, infinitesimals constitute the distinct but obscure grounds enveloped by clear but confused effects. They are not empirical objects but objects of thought. Even if they are only known as already developed within the extensional becomings of the sensible and covered over by representational qualities, as differences they are problems that do not resemble their solutions and as such continue to insist in an enveloped, quasi-causal state.

Some content on this page was disabled on May 4, 2020 as a result of a DMCA takedown notice from Columbia University Press.

Deleuzian Grounds. Thought of the Day 42.0


With difference or intensity instead of identity as the ultimate philosophical principle, one arrives at the crux of Deleuze’s use of the Principle of Sufficient Reason in Difference and Repetition. At the beginning of the first chapter, he defines the quadruple yoke of conceptual representation (identity, analogy, opposition, resemblance) in correspondence with the four principal aspects of the Principle of Sufficient Reason: the form of the undetermined concept, the relation between ultimate determinable concepts, the relation between determinations within concepts, and the determined object of the concept itself. In other words, sufficient reason according to Deleuze is the very medium of representation, the element in which identity is conceptually determined. In itself, however, this medium or element remains different or unformed (albeit not formless): difference is the state in which one can speak of determination as such, i.e. determination in its occurrent quality of a difference being made, or rather making itself, in the sense of a unilateral distinction. It is with the event of difference that what appears to be a breakdown of representational reason is also a breakthrough of the rumbling ground as the differential element of determination (or individuation). Deleuze illustrates this with an example borrowed from Nietzsche:

Instead of something distinguished from something else, imagine something which distinguishes itself, and yet that from which it distinguishes itself does not distinguish itself from it. Lightning, for example, distinguishes itself from the black sky but must also trail behind it. It is as if the ground rose to the surface without ceasing to be the ground.

Between the abyss of the indeterminate and the superficiality of the determined, there thus appears an intermediate element, a field potential or intensive depth, which perhaps in a way exceeds sufficient reason itself. This is a depth which Deleuze finds prefigured in Schelling’s and Schopenhauer’s differend conceptualization of the ground (Grund) as both ground (fond) and grounding (fondement). The ground attains an autonomous power that exceeds classical sufficient reason by including the grounding moment of sufficient reason for itself. Because this self-grounding ground remains groundless (sans-fond) in itself, however, Hegel famously ridiculed Schelling’s ground as the indeterminate night in which all cows are black. He opposed it to the surface of determined identities that are only negatively correlated to each other. By contrast, Deleuze interprets the self-grounding ground through Nietzsche’s eternal return of the same. Whereas the passive syntheses of habit (connective series) and memory (conjunctions of connective series) are the processes by which representational reason grounds itself in time, the eternal return (disjunctive synthesis of series) ungrounds (effonde) this ground by introducing the necessity of future becomings, i.e. of difference as ongoing differentiation. Far from being a denial of the Principle of Sufficient Reason, this threefold process of self-(un)grounding constitutes the positive, relational system that brings difference out of the night of the Identical, and with finer, more varied and more terrifying flashes of lightning than those of contradiction: progressivity.

The breakthrough of the ground in the process of ungrounding itself in sheer distinction-production of the multiple against the indistinguishable is what Deleuze calls violence or cruelty, as it determines being or nature in a necessary system of asymmetric relations of intensity by the acausal action of chance, like an ontological game in which the throw of the dice is the only rule or principle. But it is also the vigil, the insomnia of thought, since it is here that reason or thought achieves its highest power of determination. It becomes a pure creativity or virtuality in which no well-founded identity (God, World, Self) remains: [T]hought is that moment in which determination makes itself one, by virtue of maintaining a unilateral and precise relation to the indeterminate. Since it produces differential events without subjective or objective remainder, however, Deleuze argues that thought belongs to the pure and empty form of time, a time that is no longer subordinate to (cosmological, psychological, eternal) movement in space. Time qua form of transcendental synthesis is the ultimate ground of everything that is, reasons and acts. It is the formal element of multiple becoming, no longer in the sense of finite a priori conditioning, but in the sense of a transfinite a posteriori synthesizer: an empty interiority in ongoing formation and materialization. As Deleuze and Guattari define synthesizer in A Thousand Plateaus: The synthesizer, with its operation of consistency, has taken the place of the ground in a priori synthetic judgment: its synthesis is of the molecular and the cosmic, material and force, not form and matter, Grund and territory.




Let us introduce the concept of space using the notion of reflexive action (or reflex action) between two things. Intuitively, a thing x acts on another thing y if the presence of x disturbs the history of y. Events in the real world seem to happen in such a way that it takes some time for the action of x to propagate up to y. This fact can be used to construct a relational theory of space à la Leibniz, that is, by taking space as a set of equitemporal things. It is necessary then to define the relation of simultaneity between states of things.

Let x and y be two things with histories h(xτ) and h(yτ), respectively, and let us suppose that the action of x on y starts at τx0. The history of y will be modified starting from τy0. The proper times are still not related but we can introduce the reflex action to define the notion of simultaneity. The action of y on x, started at τy0, will modify x from τx1 on. The relation “the action of x on y is reflected to x” is the reflex action. Historically, Galileo introduced the reflection of a light pulse on a mirror to measure the speed of light. With this relation we will define the concept of simultaneity of events that happen on different basic things.


Besides we have a second important fact: observation and experiment suggest that gravitation, whose source is energy, is a universal interaction, carried by the gravitational field.

Let us now state the above hypothesis axiomatically.

Axiom 1 (Universal interaction): Any pair of basic things interact. This extremely strong axiom states not only that there exist no completely isolated things but that all things are interconnected.

This universal interconnection of things should not be confused with the “universal interconnection” claimed by several mystical schools. The present interconnection is possible only through physical agents, with no mystical content. It is possible to model two noninteracting things in Minkowski space by assuming they are accelerated during an infinite proper time. It is easy to see that an infinite energy is necessary to keep a constant acceleration, so the model does not represent real things, which have a limited energy supply.

Now consider the time interval (τx1 − τx0). Special Relativity suggests that it is nonzero, since any action propagates with a finite speed. We then state

Axiom 2 (Finite speed axiom): Given two different and separated basic things x and y, such as in the above figure, there exists a minimum positive bound for the interval (τx1 − τx0) defined by the reflex action.

Now we can define simultaneity: the instant τy0 is simultaneous with τx1/2 =Df (1/2)(τx0 + τx1)

The local times on x and y can be synchronized by the simultaneity relation. However, as we know from General Relativity, the simultaneity relation is transitive only in special reference frames called synchronous, thus prompting us to include the following axiom:

Axiom 3 (Synchronizability): Given a set of separated basic things {xi} there is an assignment of proper times τi such that the relation of simultaneity is transitive.

With this axiom, the simultaneity relation is an equivalence relation. We can now define a first approximation to physical space: the ontic space EO is the set of equivalence classes of states defined by the relation of simultaneity on the set of things.

The notion of simultaneity allows the analysis of the notion of clock. A thing y ∈ Θ is a clock for the thing x if there exists an injective function ψ : SL(y) → SL(x) such that τ < τ′ ⇒ ψ(τ) < ψ(τ′), i.e. the proper time of the clock grows in the same way as the time of things. The name universal time applies to the proper time of a reference thing that is also a clock. From this we see that “universal time” is frame dependent, in agreement with the results of Special Relativity.
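The simultaneity rule and the clock condition can be sketched in a few lines. All numerical proper times below are invented for illustration, and the monotonicity check is over a finite sample only (a sketch, not a proof):

```python
# Reflex-action simultaneity: tau_y0 is simultaneous with the midpoint
# of the emission/return proper times on x.

def simultaneous_instant(tau_x0, tau_x1):
    """tau_x_1/2 = (1/2)(tau_x0 + tau_x1)."""
    return 0.5 * (tau_x0 + tau_x1)

def is_clock(psi, samples):
    """Check the clock condition tau < tau' => psi(tau) < psi(tau')
    on a finite sample of proper times."""
    ts = sorted(samples)
    return all(psi(a) < psi(b) for a, b in zip(ts, ts[1:]))

# Invented example: action sent at tau_x0 = 1.0, reflex returns at tau_x1 = 3.0.
tau_half = simultaneous_instant(1.0, 3.0)   # 2.0

# A hypothetical linear rescaling of proper time satisfies the clock condition.
good_clock = is_clock(lambda t: 2.0 * t + 1.0, [0.0, 0.5, 1.0, 2.0])
```

The midpoint rule is the same convention Einstein used for radar synchronization: in the absence of a shared time, the reflex interval on x is the only thing both parties can agree on.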

The Locus of Renormalization. Note Quote.


Since symmetries and the related conservation properties have a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken or new ones are obtained after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is the passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to sudden common orientation of spins, up to the complete ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is the iteration of a statistically invariant motif. Similarly for the formation of a snowflake…. In all these circumstances, a “new physical object of observation” is formed. Most of the current analyses deal with transitions at equilibrium; the less studied and more challenging case of far-from-equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions, that is, mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. relevant infinity or divergences) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interaction of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.

In this situation, the intelligibility still has an “upward” flavor, since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories, the parts are not really separable from the whole (this would mean separating an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot be summed over (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained by the scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking only at molecules as the only pertinent observable of life is worse than reductionist, it is against the history of physics and its audacious unifications and invention of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system (a cell, genetic regulatory networks, the brain and brain slices…) is “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done for the profound mathematical frame for physics. This does not forbid a scientific analysis of life. This may just be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as in physics. The point is that these are better understood and correlated, since the work of Noether and Weyl in the last century, as symmetries in the intended equations, where they express the underlying invariants and invariant preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expressions, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections…. And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions in an organism, in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure, that is it changes the internal symmetries: the fish with the new bladder will swim differently, its heart-vascular system will relevantly change ….

Organisms transform the ecosystem while transforming themselves, and they can do so because they have an internal preserved universe. Its stability is maintained also by slightly, yet constantly, changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into the permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to the biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human. It has some 150 different microbial species in the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not understand yet, the “community” in the intestines co-creates their worlds together, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained even if it may change by immigration of new microbial species into the community and extinction of old species in the community. With such community membership turnover, or community assembly, the phase space of the system is undergoing continual and open ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new unprestatable functions, where functions and their names make sense only in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One finds out a feature in a niche because one observes that if it goes away the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing these spaces and their internal relations over time. Mathematically, no symmetry (groups) nor (partial) order (semigroups) organizes the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical or logical problems in contemporary logic (this is so also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own border, presents technical difficulties for a careful equational description of its dynamics, but no mathematical or logical impossibility: one has to optimize a highly non-linear and large action/reaction system, yielding a dynamically constructed geodesic, the river path, in perfectly known phase spaces (momentum and space, or energy and time, say, as pertinent observables and variables).

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not just within prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but in the ever changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.

The Sibyl’s Prophecy/Nordic Creation. Note Quote.



The Prophecy of the Tenth Sibyl, a medieval best-seller, surviving in over 100 manuscripts from the 11th to the 16th century, predicts, among other things, the reign of evil despots, the return of the Antichrist and the sun turning to blood.

The Tenth or Tiburtine Sibyl was a pagan prophetess, perhaps of Etruscan origin. To quote Lactantius in his general account of the ten sibyls in the introduction: ‘The Tiburtine Sibyl, by name Albunea, is worshiped at Tibur as a goddess, near the banks of the Anio, in which stream her image is said to have been found, holding a book in her hand’.

The work interprets the Sibyl’s dream in which she foresees the downfall and apocalyptic end of the world; 9 suns appear in the sky, each one more ugly and bloodstained than the last, representing the 9 generations of mankind and ending with Judgment Day. The original Greek version dates from the end of the 4th century and the earliest surviving manuscript in Latin is dated 1047. The Tiburtine Sibyl is often depicted with Emperor Augustus, who asks her if he should be worshipped as a god.

The foremost lay of the Elder Edda is called Voluspa (The Sibyl’s Prophecy). The volva, or sibyl, represents the indelible imprint of the past, wherein lie the seeds of the future. Odin, the Allfather, consults this record to learn of the beginning, life, and end of the world. In her response, she addresses Odin as a plurality of “holy beings,” indicating the omnipresence of the divine principle in all forms of life. This also hints at the growth of awareness gained by all living, learning entities during their evolutionary pilgrimage through spheres of existence.

Hear me, all ye holy beings, greater as lesser sons of Heimdal! You wish me to tell of Allfather’s works, tales of the origin, the oldest I know. Giants I remember, born in the foretime, they who long ago nurtured me. Nine worlds I remember, nine trees of life, before this world tree grew from the ground.

Paraphrased, this could be rendered as:

Learn, all ye living entities, imbued with the divine essence of Odin, ye more and less evolved sons of the solar divinity (Heimdal) who stands as guardian between the manifest worlds of the solar system and the realm of divine consciousness. You wish to learn of what has gone before. I am the record of long ages past (giants), that imprinted their experience on me. I remember nine periods of manifestation that preceded the present system of worlds.

Time being inextricably a phenomenon of manifestation, the giant ages refer to the matter-side of creation. Giants represent ages of such vast duration that, although their extent in space and time is limited, it is of a scope that can only be illustrated as gigantic. Smaller cycles within the greater are referred to in the Norse myths as daughters of their father-giant. Heimdal is the solar deity in the sign of Aries – of beginnings for our system – whose “sons” inhabit, in fact compose, his domain.

Before a new manifestation of a world, whether a cosmos or a lesser system, all its matter is frozen in a state of immobility, referred to in the Edda as a frost giant. The gods – consciousnesses – are withdrawn into their supernal, unimaginable abstraction of Nonbeing, called in Sanskrit “paranirvana.” Without a divine activating principle, space itself – the great container – is a purely theoretical abstraction where, for lack of any organizing energic impulse of consciousness, matter cannot exist.

This was the origin of ages when Ymer built. No soil was there, no sea, no cool waves. Earth was not, nor heaven above; Gaping Void alone, no growth. Until the sons of Bur raised the tables; they who created beautiful Midgard. The sun shone southerly on the stones of the court; then grew green herbs in fertile soil.

To paraphrase again:

Before time began, the frost giant (Ymer) prevailed. No elements existed for there were ‘no waves,’ no motion, hence no organized form nor any temporal events, until the creative divine forces emanated from Space (Bur — a principle, not a locality) and organized latent protosubstance into the celestial bodies (tables at which the gods feast on the mead of life-experience). Among these tables is Middle Court (Midgard), our own beautiful planet. The life-giving sun sheds its radiant energies to activate into life all the kingdoms of nature which compose it.

The Gaping Void (Ginnungagap) holds “no cool waves” throughout illimitable depths during the age of the frost giant. Substance has yet to be created. Utter wavelessness negates it, for all matter is the effect of organized, undulating motion. As the cosmic hour strikes for a new manifestation, the ice of Home of Nebulosity (Niflhem) is melted by the heat from Home of Fire (Muspellshem), resulting in vapor in the void. This is Ymer, protosubstance as yet unformed, the nebulae from which will evolve the matter components of a new universe, as the vital heat of the gods melts and vivifies the formless immobile “ice.”

When the great age of Ymer has run its course, the cow Audhumla, symbol of fertility, “licking the salt from the ice blocks,” uncovers the head of Buri, first divine principle. From this infinite, primal source emanates Bur, whose “sons” are the creative trinity: Divine Allfather, Will, and Sanctity (Odin, Vile, and Vi). This triune power “kills” the frost giant by transforming it into First Sound (Orgalmer), or keynote, whose overtones vibrate through the planes of sleeping space and organize latent protosubstance into the multifarious forms which will be used by all “holy beings” as vehicles for gaining experience in worlds of matter.

Beautiful Midgard, our physical globe earth, is but one of the “tables” raised by the creative trinity, whereat the gods shall feast. The name Middle Court is suggestive, for the ancient traditions place our globe in a central position in the series of spheres that comprise the terrestrial being’s totality. All living entities, man included, comprise besides the visible body a number of principles and characteristics not cognized by the gross physical senses. In the Lay of Grimner (Grimnismal), wherein Odin in the guise of a tormented prisoner on earth instructs a human disciple, he enumerates twelve spheres or worlds, all but one of which are unseen by our organs of sight. As to the formation of Midgard, he relates:

Of Ymer’s flesh was the earth formed, the billows of his blood, the mountains of his bones, bushes of his hair, and of his brainpan heaven. With his eyebrows beneficent powers enclosed Midgard for man; but of his brain were surely all dark skies created.

The trinity of immanent powers organizes Ymer into the forms wherein they dwell, shaping the chaos or frost giant into living globes on many planes of being. The “eyebrows” that gird the earth and protect it suggest the Van Allen belts that shield the planet from inimical radiation. The brain of Ymer – material thinking – is surely all too evident in the thought atmosphere wherein man participates.

The formation of the physical globe is described as the creation of “dwarfs” – elemental forces which shape the body of the earth-being and which include the mineral, vegetable, and animal kingdoms.

The mighty drew to their judgment seats, all holy gods to hold counsel: who should create a host of dwarfs from the blood of Brimer and the limbs of the dead. Modsogne there was, mightiest of all the dwarfs, Durin the next; there were created many humanoid dwarfs from the earth, as Durin said.

Brimer is the slain Ymer, a kenning for the waters of space. Modsogne is the Force-sucker, Durin the Sleeper, and later comes Dvalin the Entranced. They are “dwarf”-consciousnesses, beings that are miðr than human – the Icelandic miðr meaning both “smaller” and “less.” By selecting the former meaning, popular concepts have come to regard them as undersized mannikins, rather than as less evolved natural species that have not yet reached the human condition of intelligence and self-consciousness.

During the life period or manifestation of a universe, the governing giant or age is named Sound of Thor (Trudgalmer), the vital force which sustains activity throughout the cycle of existence. At the end of this age the worlds become Sound of Fruition (Bargalmer). This giant is “placed on a boat-keel and saved,” or “ground on the mill.” Either version suggests the karmic end product as the seed of future manifestation, which remains dormant throughout the ensuing frost giant of universal dissolution, when cosmic matter is ground into a formless condition of wavelessness, dissolved in the waters of space.

There is an inescapable duality of gods-giants in all phases of manifestation: gods seek experience in worlds of substance and feast on the mead at stellar and planetary tables; giants, formed into vehicles inspired with the divine impetus, rise through cycles of this association on the ladder of conscious awareness. All states being relative and bipolar, there is in endless evolution an inescapable link between the subjective and objective progress of beings. Odin as the “Opener” is paired with Orgalmer, the keynote on which a cosmos is constructed; Odin as the “Closer” is equally linked with Bargalmer, the fruitage of a life cycle. During the manifesting universe, Odin-Allfather corresponds to Trudgalmer, the sustainer of life.

A creative trinity plays an analogical part in the appearance of humanity. Odin remains the all-permeant divine essence, while on this level his brother-creators are named Honer and Lodur, divine counterparts of water or liquidity, and fire or vital heat and motion. They “find by the shore, of little power” the Ash and the Elm and infuse into these earth-beings their respective characteristics, making a human image or reflection of themselves. These protohumans, miniatures of the world tree, the cosmic Ash, Yggdrasil, in addition to their earth-born qualities of growth force and substance, receive the divine attributes of the gods. By Odin man is endowed with spirit, from Honer comes his mind, while Lodur gives him will and godlike form. The essentially human qualities are thus potentially divine. Man is capable of blending with the earth, whose substances form his body, yet is able to encompass in his consciousness the vision native to his divine source. He is in fact a minor world tree, part of the universal tree of life, Yggdrasil.

Ygg in conjunction with other words has been variously translated as Eternal, Awesome or Terrible, and Old. Sometimes Odin is named Yggjung, meaning the Ever-Young, or Old-Young. Like the biblical “Ancient of Days” it is a concept that mind can grasp only in the wake of intuition. Yggdrasil is the “steed” or the “gallows” of Ygg, whereon Odin is mounted or crucified during any period of manifested life. The world tree is rooted in Nonbeing and ramifies through the planes of space, its branches adorned with globes wherein the gods imbody. The sibyl spoke of ours as the tenth in a series of such world trees, and Odin confirms this in The Song of the High One (Den Hoges Sang):

I know that I hung in the windtorn tree nine whole nights, spear-pierced, given to Odin, my self to my Self above me in the tree, whose root none knows whence it sprang. None brought me bread, none served me drink; I searched the depths, spied runes of wisdom, raised them with song, and fell once more from the tree. Nine powerful songs I learned from the wise son of Boltorn, Bestla’s father; a draught I drank of precious mead ladled from Odrorer. I began to grow, to grow wise, to grow greater and enjoy; for me words from words led to new words, for me deeds from deeds led to new deeds.

Numerous ancient tales relate the divine sacrifice and crucifixion of the Silent Watcher whose realm or protectorate is a world in manifestation. Each tree of life, of whatever scope, constitutes the cross whereon the compassionate deity inherent in that hierarchy remains transfixed for the duration of the cycle of life in matter. The pattern of repeated imbodiments for the purpose of gaining the precious mead is clear, as also the karmic law of cause and effect as words and deeds bring their results in new words and deeds.

Yggdrasil is said to have three roots. One extends into the land of the frost giants, whence flow twelve rivers of lives or twelve classes of beings; another springs from and is watered by the well of Origin (Urd), where the three Norns, or fates, spin the threads of destiny for all lives. “One is named Origin, the second Becoming. These two fashion the third, named Debt.” They represent the inescapable law of cause and effect. Though they have usually been roughly translated as Past, Present, and Future, the dynamic concept in the Edda is more complete and philosophically exact. The third root of the world tree reaches to the well of the “wise giant Mimer,” owner of the well of wisdom. Mimer represents material existence and supplies the wisdom gained from experience of life. Odin forfeited one eye for the privilege of partaking of these waters of life, hence he is represented in manifestation as one-eyed and named Half-Blind. Mimer, the matter-counterpart, at the same time receives partial access to divine vision.

The lays make it very clear that the purpose of existence is for the consciousness-aspect of all beings to gain wisdom through life, while inspiring the substantial side of itself to growth in inward awareness and spirituality. At the human level, self-consciousness and will are aroused, making it possible for man to progress willingly and purposefully toward his divine potential, aided by the gods who have passed that way before him, rather than to drift by slow degrees and many detours along the road of inevitable evolution. Odin’s instructions to a disciple, Loddfafner, the dwarf-nature in man, conclude with:

Now is sung the High One’s song in the High One’s hall. Useful to sons of men, useless to sons of giants. Hail Him who sang! Hail him who kens! Rejoice they who understand! Happy they who heed!

The Womb of Cosmogony. Thought of the Day 30.0

Nowhere and by no people was speculation allowed to range beyond those manifested gods. The boundless and infinite UNITY remained with every nation a virgin forbidden soil, untrodden by man’s thought, untouched by fruitless speculation. The only reference made to it was the brief conception of its diastolic and systolic property, of its periodical expansion or dilatation, and contraction. In the Universe with all its incalculable myriads of systems and worlds disappearing and re-appearing in eternity, the anthropomorphised powers, or gods, their Souls, had to disappear from view with their bodies: — “The breath returning to the eternal bosom which exhales and inhales them,” says our Catechism. . . . In every Cosmogony, behind and higher than the creative deity, there is a superior deity, a planner, an Architect, of whom the Creator is but the executive agent. And still higher, over and around, within and without, there is the UNKNOWABLE and the unknown, the Source and Cause of all these Emanations. – The Secret Doctrine


Many are the names in the ancient literatures which have been given to the Womb of Being from which all issues, in which all forever is, and into the spiritual and divine reaches of which all ultimately returns, whether infinitesimal entity or macrocosmic spacial unit.

The Tibetans called this ineffable mystery Tong-pa-nnid, the unfathomable Abyss of the spiritual realms. The Buddhists of the Mahayana school describe it as Sunyata or the Emptiness, simply because no human imagination can figurate to itself the incomprehensible Fullness which it is. In the Eddas of ancient Scandinavia the Boundless was called by the suggestive term Ginnungagap – a word meaning yawning or uncircumscribed void. The Hebrew Bible states that the earth was formless and void, and darkness was upon the face of Tehom, the Deep, the Abyss of Waters, and therefore the great Deep of kosmic Space. It has the identical significance of the Womb of Space as envisioned by other peoples. In the Chaldaeo-Jewish Qabbalah the same idea is conveyed by the term ‘Eyn (or Ain) Soph, without bounds. In the Babylonian accounts of Genesis, it is Mummu Tiamatu which stands for the Great Sea or Deep. The archaic Chaldaean cosmology speaks of the Abyss under the name of Ab Soo, the Father or source of knowledge, and in primitive Magianism it was Zervan Akarana — in its original meaning of Boundless Spirit instead of the later connotation of Boundless Time.

In the Chinese cosmogony, Tsi-tsai, the Self-Existent, is the Unknown Darkness, the root of the Wuliang-sheu, Boundless Age. The wu wei of Lao-tse, often mistranslated as passivity and nonaction, imbodies a similar conception. In the sacred scriptures of the Quiches of Guatemala, the Popol Vuh or “Book of the Azure Veil,” reference is made to the “void which was the immensity of the Heavens,” and to the “Great Sea of Space.” The ancient Egyptians spoke of the Endless Deep; the same idea also is imbodied in the Celi-Ced of archaic Druidism, Ced being spoken of as the “Black Virgin” — Chaos — a state of matter prior to manvantaric differentiation.

The Orphic Mysteries taught of the Thrice-Unknown Darkness or Chronos, about which nothing could be predicated except its timeless Duration. With the Gnostic schools, as for instance with Valentinus, it was Bythos, the Deep. In Greece, the school of Democritus and Epicurus postulated To Kenon, the Void; the same idea was later voiced by Leucippus and Diagoras. But the two most common terms in Greek philosophy for the Boundless were Apeiron, as used by Plato, Anaximander and Anaximenes, and Apeiria, as used by Anaxagoras and Aristotle. Both words had the significance of frontierless expansion, that which has no circumscribing bounds.

The earliest conception of Chaos was that almost unthinkable condition of kosmic space or kosmic expanse, which to human minds is infinite and vacant extension of primordial Aether, a stage before the formation of manifested worlds, and out of which everything that later existed was born, including gods and men and all the celestial hosts. We see here a faithful echo of the archaic esoteric philosophy, because among the Greeks Chaos was the kosmic mother of Erebos and Nyx, Darkness and Night — two aspects of the same primordial kosmic stage. Erebos was the spiritual or active side corresponding to Brahman in Hindu philosophy, and Nyx the passive side corresponding to pradhana or mulaprakriti, both meaning root-nature. Then from Erebos and Nyx as dual were born Aether and Hemera, Spirit and Day — Spirit being here again in this succeeding stage the active side, and Day the passive aspect, the substantial or vehicular side. The idea was that just as in the Day of Brahma of Hindu cosmogony things spring into active manifested existence, so in the kosmic Day of the Greeks things spring from elemental substance into manifested light and activity, because of the indwelling urge of the kosmic Spirit.

Biogrammatic Vir(Ac)tuality. Note Quote.

In Foucault’s most famous example, the prison acts as the confluence of content (prisoners) and expression (law, penal code) (Gilles Deleuze, Foucault, trans. Sean Hand). Informal diagrams proliferate. As abstract machines they contain the transversal vectors that cut across a panoply of features (such as institutions, classes, persons, economic formations, etc.), mapping from point to relational point the generalized features of power economies. The disciplinary diagram explored by Foucault imposes “a particular conduct upon a particular human multiplicity”. The imposition of force upon force affects and effectuates the felt experience of a life, a living. Deleuze has called the abstract machine “pure matter/function” in which relations between forces are nonetheless very real.

[…] the diagram acts as a non-unifying immanent cause that is co-extensive with the whole social field: the abstract machine is like the cause of the concrete assemblages that execute its relations; and these relations between forces take place ‘not above’ but within the very tissue of the assemblages they produce.

The processual conjunction of content and expression; the cutting edge of deterritorialization:

The relations of power and resistance between theory and practice resonate – becoming-form; diagrammatics as praxis integrates and differentiates the immanent cause and quasi-cause of the actualized occasions of research/creation. What do we mean by immanent cause? It is a cause which is realized, integrated and distinguished in its effect. Or rather, the immanent cause is realized, integrated and distinguished by its effect. In this way there is a correlation or mutual presupposition between cause and effect, between abstract machine and concrete assemblages.

Memory is the real name of the relation to oneself, or the affect of self by self […] Time becomes a subject because it is the folding of the outside…forces every present into forgetting but preserves the whole of the past within memory: forgetting is the impossibility of return and memory is the necessity of renewal.


The figure on the left is Henri Bergson’s diagram of an infinitely contracted past that directly intersects with the body at point S – a mobile, sensorimotor present where memory is closest to action. Plane P represents the actual present, the plane of contact with objects. The AB segments represent repetitive compressions of memory. As memory contracts it gets closer to action; in its more expanded forms it is closer to dreams. The figure on the right extrapolates from Bergson’s memory model to describe the Biogrammatic ontological vector of the Diagram as it moves from the abstract (informal) machine in the most expanded form “A”, through the cone “tissue”, to the phase-shifting (formal), arriving at the Strata of the P plane to become artefact. The ontological vector passes through the stratified, through the interval of difference created in the phase shift (the same phase shift that separates and folds content and expression to move vertically, transversally, back through to the abstract diagram).

A spatio-temporal-material contracting-expanding of the abstract machine is the processual thinking-feeling-articulating of the diagram becoming-cartographic; synaesthetic conceptual mapping. A play of forces, a series of relays, affecting a tendency toward an inflection of the informal diagram becoming-form. The inflected diagram/biogram folds and unfolds perception, appearances; rides in the gap of becoming between content and expression; intuitively transduces the actualizing (thinking, drawing, marking, erasing) of matter-movement, of expressivity-movement. “To follow the flow of matter… is intuition in action.” A processual stage that prehends the process of the virtual actualizing: the creative construction of a new reality. The biogrammatic stage of the diagrammatic is paradoxically double in that it is both the actualizing of the abstract machine (contraction) and the recursive counter-actualization of the formal diagram (détournement); virtual and actual.

It is the event-dimension of potential – that is the effective dimension of the interrelating of elements, of their belonging to each other. That belonging is a dynamic corporeal “abstraction” – the “drawing off” (transductive conversion) of the corporeal into its dynamism (yielding the event) […] In direct channeling. That is, in a directional channeling: ontological vector. The transductive conversion is an ontological vector that in-gathers a heterogeneity of substantial elements along with the already-constituted abstractions of language (“meaning”) and delivers them together to change. (Brian Massumi, Parables for the Virtual: Movement, Affect, Sensation)

Skin is the space of the body, the BwO, that is both interior and exterior. Interstitial matter of the space of the body.


The material markings and traces of a diagrammatic process, a ‘capturing’ becoming-form. A diagrammatic capturing involves a transductive process between a biogrammatic form of content and a form of expression. The formal diagram is thus an individuating phase-shift as Simondon would have it, always out-of-phase with itself. A becoming-form that inhabits the gap, the difference, between the wave phase of the biogrammatic that synaesthetically draws off the intermix of substance and language in the event-dimension and the drawing of wave phase in which partial capture is formalized. The phase shift difference never acquires a vectorial intention. A pre-decisive, pre-emptive drawing of phase-shifting with a “drawing off” the biogram.


If effects realize something this is because the relations between forces or power relations are merely virtual, potential, unstable, vanishing and molecular, and define only possibilities of interaction so long as they do not enter a macroscopic whole capable of giving form to their fluid matter and diffuse function. But realization is equally an integration, a collection of progressive integrations that are initially local and then become or tend to become global, aligning, homogenizing and summarizing relations between forces: here law is the integration of illegalisms.


Without Explosions, WE Would NOT Exist!


The matter and radiation in the universe get hotter and hotter as we go back in time towards the initial quantum state, because they were compressed into a smaller volume. In this Hot Big Bang epoch in the early universe, we can use standard physical laws to examine the processes going on in the expanding mixture of matter and radiation. A key feature is that about 300,000 years after the start of the Hot Big Bang epoch, nuclei and electrons combined to form atoms. At earlier times, when the temperature was higher, atoms could not exist, as the radiation then had so much energy that it disrupted any atoms that tried to form into their constituent parts (nuclei and electrons). Thus at earlier times matter was ionized, consisting of negatively charged electrons moving independently of positively charged atomic nuclei. Under these conditions, the free electrons interact strongly with radiation by Thomson scattering. Consequently matter and radiation were tightly coupled in equilibrium at those times, and the Universe was opaque to radiation. When the temperature dropped through the ionization temperature of about 4000 K, atoms formed from the nuclei and electrons, and this scattering ceased: the Universe became very transparent. The time when this transition took place is known as the time of decoupling – it was the time when matter and radiation ceased to be tightly coupled to each other, at a redshift z_dec ≃ 1100 (Scott Dodelson, Modern Cosmology, Academic Press). The matter and radiation densities and the radiation temperature scale with the scale factor S(t) as

μ_bar ∝ S⁻³ , μ_rad ∝ S⁻⁴ , T_rad ∝ S⁻¹ —– (1)

The scale factor S(t) obeys the Raychaudhuri equation

3S̈/S = −½ κ(μ + 3p/c²) + Λ —– (2)

where κ is the gravitational constant and Λ the cosmological constant.

By equation (1), the universe was radiation dominated (μ_rad ≫ μ_mat) at early times and matter dominated (μ_rad ≪ μ_mat) at late times; matter–radiation density equality occurred significantly before decoupling (the temperature T_eq when this equality occurred was T_eq ≃ 10⁴ K; at that time the scale factor was S_eq ≃ 10⁻⁴ S₀, where S₀ is the present-day value). The dynamics of both the background model and of perturbations about that model differ significantly before and after S_eq.
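As a small numerical sketch (my addition, not from the source), the scalings μ_rad ∝ S⁻⁴ and μ_mat ∝ S⁻³ make the radiation-to-matter density ratio fall as 1/S, so radiation necessarily dominates at early times; and since T ∝ 1/S, the scale factor at equality follows from the temperature ratio:

```python
def rad_to_mat_ratio(S, S_eq=1.0):
    """Radiation-to-matter density ratio, normalized to 1 at equality:
    mu_rad/mu_mat ~ (S**-4)/(S**-3) = 1/S."""
    return S_eq / S

# Radiation dominates for S << S_eq, matter dominates for S >> S_eq.
print(rad_to_mat_ratio(1e-3))  # 1000.0 -> radiation dominated
print(rad_to_mat_ratio(1e3))   # 0.001  -> matter dominated

# T ~ 1/S fixes the scale factor at equality from the temperature ratio,
# using T_eq ~ 1e4 K and a present radiation temperature ~ 2.75 K.
T_eq, T_now = 1.0e4, 2.75
print(T_now / T_eq)  # ~2.75e-4: S_eq is of order 1e-4 of today's value
```

The last line is why the equality scale factor is of order 10⁻⁴ of the present one.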

Radiation was emitted by matter at the time of decoupling, thereafter travelling freely to us through the intervening space. When it was emitted, it had the form of blackbody radiation, because this is a consequence of matter and radiation being in thermodynamic equilibrium at earlier times. Thus the matter at z = z_dec forms the Last Scattering Surface (LSS) in the early universe, emitting Cosmic Blackbody Background Radiation (‘CBR’) at 4000 K, which has since travelled freely with its temperature T scaling inversely with the scale function of the universe. As the radiation travelled towards us, the universe expanded by a factor of about 1100; consequently by the time it reaches us, it has cooled to 2.75 K (that is, about 3 degrees above absolute zero, with a spectrum peaking in the microwave region), and so is extremely hard to observe. It was however detected in 1965, and its spectrum has since been intensively investigated, its blackbody nature being confirmed to high accuracy (R. B. Partridge, 3K: The Cosmic Microwave Background Radiation). Its existence is now taken as solid proof both that the Universe has indeed expanded from a hot early phase, and that standard physics applied unchanged at that era in the early universe.
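A quick check of the cooling factor (a sketch I am adding, using the text's round figures): blackbody temperature scales as 1/(1 + z), so emission at about 4000 K redshifted by z_dec ≃ 1100 arrives at a few kelvin:

```python
T_emit = 4000.0  # K, decoupling temperature quoted in the text
z_dec = 1100.0   # redshift of the last scattering surface

# Blackbody temperature scales inversely with expansion: T_now = T_emit / (1 + z).
T_now = T_emit / (1.0 + z_dec)
print(round(T_now, 2))  # 3.63 K -- "about 3 degrees above absolute zero";
# the text's figures (4000 K, 1100, 2.75 K) are round numbers and only
# mutually approximate, so the exact result differs slightly from 2.75 K.
```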

The thermal capacity of the radiation is hugely greater than that of the matter. At very early times before decoupling, the temperatures of the matter and radiation were the same (because they were in equilibrium with each other), scaling as 1/S(t) (Equation 1 above). The early universe exceeded any temperature that can ever be attained on Earth or even in the centre of the Sun; as it dropped towards its present value of 3 K, successive physical reactions took place that determined the nature of the matter we see around us today. At very early times and high temperatures, only elementary particles can survive and even neutrinos had a very small mean free path; as the universe cooled down, neutrinos decoupled from the matter and streamed freely through space. At these times the expansion of the universe was radiation dominated, and we can approximate the universe then by models with {k = 0, w = 1/3, Λ = 0}, the resulting simple solution of

3Ṡ²/S² = A/S³ + B/S⁴ + Λ/3 − 3k/S² —– (3)

uniquely relating time to temperature:

S(t) = S₀ t^(1/2) , t = 1.92 sec [T/10¹⁰ K]⁻² —– (4)

(There are no free constants in the latter equation).
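Equation (4) can be applied directly; as an illustration I am adding (not in the source), the nucleosynthesis threshold temperature of about 10⁹ K translates into an age of roughly three minutes:

```python
def age_at_temperature(T_kelvin):
    """Radiation-era age of the universe from equation (4):
    t = 1.92 s * (T / 1e10 K)**-2."""
    return 1.92 * (T_kelvin / 1.0e10) ** -2

print(age_at_temperature(1.0e10))  # 1.92 s, by construction of equation (4)
print(age_at_temperature(1.0e9))   # 192 s: the ~1e9 K nucleosynthesis
                                   # threshold is reached after about 3 minutes
```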

At very early times, even neutrinos were tightly coupled and in equilibrium with the radiation; they decoupled at about 10¹⁰ K, resulting in a relic neutrino background density in the universe today of about Ω_ν0 ≃ 10⁻⁵ if they are massless (but it could be higher depending on their masses). Key events in the early universe are associated with out-of-equilibrium phenomena. An important event was the era of nucleosynthesis, the time when the light elements were formed. Above about 10⁹ K, nuclei could not exist because the radiation was so energetic that as fast as they formed, they were disrupted into their constituent parts (protons and neutrons). However below this temperature, if particles collided with each other with sufficient energy for nuclear reactions to take place, the resultant nuclei remained intact (the radiation being less energetic than their binding energy and hence unable to disrupt them). Thus the nuclei of the light elements – deuterium, tritium, helium, and lithium – were created by neutron capture. This process ceased when the temperature dropped below about 10⁸ K (the nuclear reaction threshold). In this way, the proportions of these light elements at the end of nucleosynthesis were determined; they have remained virtually unchanged since. The rate of reaction was extremely high; all this took place within the first three minutes of the expansion of the Universe. One of the major triumphs of Big Bang theory is that theory and observation are in excellent agreement provided the density of baryons is low: Ω_bar0 ≃ 0.044. Then the predicted abundances of these elements (25% helium by weight, 75% hydrogen, the others being less than 1%) agree very closely with the observed abundances. Thus the standard model explains the origin of the light elements in terms of known nuclear reactions taking place in the early Universe. However heavier elements cannot form in the time available (about 3 minutes).
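The 25% helium-by-weight figure can be recovered from a standard back-of-envelope argument (my addition, not spelled out in the text): if the neutron-to-proton ratio at nucleosynthesis has frozen out near 1/7 and essentially every neutron ends up bound in helium-4, the helium mass fraction is Y = 2(n/p)/(1 + n/p):

```python
def helium_mass_fraction(n_over_p):
    """He-4 mass fraction assuming all neutrons end up in He-4.
    Per proton there are n_over_p neutrons; each He-4 nucleus binds
    2 neutrons + 2 protons, so helium carries 4*(n/2) of the (n + p)
    nucleons, i.e. Y = 2*(n/p) / (1 + n/p)."""
    return 2.0 * n_over_p / (1.0 + n_over_p)

print(helium_mass_fraction(1.0 / 7.0))  # 0.25 -> the 25% by weight in the text
```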

In a similar way, physical processes in the very early Universe (before nucleosynthesis) can be invoked to explain the ratio of matter to anti-matter in the present-day Universe: a small excess of matter over anti-matter must be created then in the process of baryosynthesis, without which we could not exist today (if there were no such excess, matter and antimatter would have all annihilated to give just radiation). However other quantities (such as electric charge) are believed to have been conserved even in the extreme conditions of the early Universe, so their present values result from given initial conditions at the origin of the Universe, rather than from physical processes taking place as it evolved. In the case of electric charge, the total conserved quantity appears to be zero: after quarks form protons and neutrons at the time of baryosynthesis, there are equal numbers of positively charged protons and negatively charged electrons, so that at the time of decoupling there were just enough electrons to combine with the nuclei and form uncharged atoms (it seems there is no net electrical charge on astronomical bodies such as our galaxy; were this not true, electromagnetic forces would dominate cosmology, rather than gravity).

After decoupling, matter formed large scale structures through gravitational instability which eventually led to the formation of the first generation of stars and is probably associated with the re-ionization of matter. However at that time planets could not form for a very important reason: there were no heavy elements present in the Universe. The first stars aggregated matter together by gravitational attraction, the matter heating up as it became more and more concentrated, until its temperature exceeded the thermonuclear ignition point and nuclear reactions started burning hydrogen to form helium. Eventually more complex nuclear reactions started in concentric spheres around the centre, leading to a build-up of heavy elements (carbon, nitrogen, oxygen for example), up to iron. These elements can form in stars because there is a long time available (millions of years) for the reactions to take place. Massive stars burn relatively rapidly, and eventually run out of nuclear fuel. The star becomes unstable, and its core rapidly collapses because of gravitational attraction. The consequent rise in temperature blows it apart in a giant explosion, during which time new reactions take place that generate elements heavier than iron; this explosion is seen by us as a Supernova (“New Star”) suddenly blazing in the sky, where previously there was just an ordinary star. Such explosions blow into space the heavy elements that had been accumulating in the star’s interior, forming vast filaments of dust around the remnant of the star. It is this material that can later be accumulated, during the formation of second generation stars, to form planetary systems around those stars. Thus the elements of which we are made (the carbon, nitrogen, oxygen and iron nuclei for example) were created in the extreme heat of stellar interiors, and made available for our use by supernova explosions. Without these explosions, we could not exist.