The Affinity of Mirror Symmetry to Algebraic Geometry: Going Beyond Formalism



Even though the formalism of homological mirror symmetry is well established, what of other formulations of mirror symmetry that lie closer to classical differential and algebraic geometry? One way to tackle this question is the proposal of Strominger, Yau and Zaslow, SYZ mirror symmetry for short.

The central physical ingredient in this proposal is T-duality. To explain this, let us consider a superconformal sigma model with target space (M, g), and denote it (defined either as a geometric functor or as a set of correlation functions) as

CFT(M, g)

In physics, a duality is an equivalence

CFT(M, g) ≅ CFT(M′, g′)

which holds despite the fact that the underlying geometries (M,g) and (M′, g′) are not classically diffeomorphic.

T-duality is a duality which relates two CFTs with toroidal target space, M ≅ M′ ≅ T^d, but different metrics. In rough terms, the duality relates a “small” target space, with noncontractible cycles of length L < l_s, to a “large” target space in which all such cycles have length L > l_s.

This sort of relation is generic to dualities and follows from the following logic. If all length scales (lengths of cycles, curvature lengths, etc.) are greater than l_s, string theory reduces to conventional geometry. Now, in conventional geometry, we know what it means for (M, g) and (M′, g′) to be non-isomorphic. Any modification to this notion must be associated with a breakdown of conventional geometry, which requires some length scale to be “sub-stringy,” with L < l_s.

To state T-duality precisely, let us first consider M = M′ = S^1. We parameterise this with a coordinate X ∈ R, making the identification X ∼ X + 2π. Consider a Euclidean metric g_R given by ds² = R² dX². The real parameter R is usually called the “radius,” from the obvious embedding in R². This manifold is Ricci-flat, and thus the sigma model with this target space is a conformal field theory, the “c = 1 boson.” Let us furthermore set the string scale l_s = 1. With this, we attain a complete physical equivalence

CFT(S^1, g_R) ≅ CFT(S^1, g_{1/R})

Thus these two target spaces are indistinguishable from the point of view of string theory.
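As a quick numerical sanity check (a sketch in Python, assuming only the zero-mode energies E(n, w) = n²/R² + w²R² for momentum n and winding w, with l_s = 1 and oscillator contributions omitted), one can tabulate the spectrum at radius R and at 1/R and see that the two coincide once n and w are interchanged:

# Sketch: zero-mode energies of the c = 1 boson at radius R (l_s = 1),
# E(n, w) = n^2 / R^2 + w^2 * R^2 for momentum n and winding w.
def spectrum(R, nmax=5):
    return sorted(n**2 / R**2 + w**2 * R**2
                  for n in range(-nmax, nmax + 1)
                  for w in range(-nmax, nmax + 1))

R = 1.7
# T-duality: R -> 1/R with n <-> w; summing over all (n, w) makes the
# interchange implicit in the multiset of energies.
assert all(abs(a - b) < 1e-9 for a, b in zip(spectrum(R), spectrum(1.0 / R)))
print("zero-mode spectra at R and 1/R coincide")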

Just to give a physical picture of what this means, suppose for the sake of discussion that superstring theory describes our universe, and thus that in some sense there must be six extra spatial dimensions. Suppose further that we had evidence that the extra dimensions factorized topologically and metrically as K5 × S^1; then it would make sense to ask: What is the radius R of this S^1 in our universe? In principle this could be measured by producing sufficiently energetic particles (so-called “Kaluza-Klein modes”), or perhaps by measuring deviations from Newton’s inverse square law of gravity at distances L ∼ R. In string theory, T-duality implies that R ≥ l_s, because any theory with R < l_s is equivalent to another theory with R > l_s. Thus we have a nontrivial relation between two (in principle) observable quantities, R and l_s, which one might imagine testing experimentally.

Let us now consider the theory CFT(T^d, g), where T^d is the d-dimensional torus, with coordinates X^i parameterising R^d/2πZ^d, and a constant metric tensor g_{ij}. Then there is a complete physical equivalence

CFT(T^d, g) ≅ CFT(T^d, g^{−1})

In fact this is just one element of a discrete group of T-duality symmetries, generated by T-dualities along one-cycles, and large diffeomorphisms (those not continuously connected to the identity). The complete group is isomorphic to SO(d, d; Z).

While very different from conventional geometry, T-duality has a simple intuitive explanation. This starts with the observation that the possible embeddings of a string into X can be classified by the fundamental group π_1(X). Strings representing non-trivial homotopy classes are usually referred to as “winding states.” Furthermore, since strings interact by interconnecting at points, the group structure on π_1 provided by concatenation of based loops is meaningful and is respected by interactions in the string theory. Now π_1(T^d) ≅ Z^d, as an abelian group, referred to as the group of “winding numbers”.

Of course, there is another Z^d we could bring into the discussion, the Pontryagin dual of the U(1)^d of which T^d is an affinization. An element of this group is referred to physically as a “momentum,” as it is the eigenvalue of a translation operator on T^d. Again, this group structure is respected by the interactions. These two group structures, momentum and winding, can be summarized in the statement that the full closed string algebra contains the group algebra C[Z^d] ⊕ C[Z^d].

In essence, the point of T-duality is that if we quantize the string on a sufficiently small target space, the roles of momentum and winding will be interchanged. But the main point can be seen by bringing in some elementary spectral geometry. Besides the algebra structure, another invariant of a conformal field theory is the spectrum of its Hamiltonian H (technically, the Virasoro operator L_0 + L̄_0). This Hamiltonian can be thought of as an analog of the standard Laplacian ∆_g on functions on X, and its spectrum on T^d with metric g is

Spec ∆_g = {∑_{i,j=1}^d g^{ij} p_i p_j ; p ∈ Z^d}

On the other hand, the energy of a winding string is (intuitively) a function of its length. On our torus, a geodesic with winding number w ∈ Z^d has length squared

L² = ∑_{i,j=1}^d g_{ij} w^i w^j

Now, the only string theory input we need to bring in is that the total Hamiltonian contains both terms,

H = ∆_g + L² + ···

where the extra terms ··· express the energy of excited (or “oscillator”) modes of the string. Then, the inversion g → g^{−1}, combined with the interchange p ↔ w, leaves the spectrum of H invariant. This is T-duality.
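The same invariance can be checked for the general torus (again a hedged sketch: zero modes only, l_s = 1, oscillator terms dropped, and a randomly chosen positive-definite metric; assumes NumPy). The momentum contribution uses the inverse metric, the winding contribution the metric itself, so g → g^{−1} together with p ↔ w permutes the energies:

# Sketch: zero-mode torus spectrum H(p, w) = p.g^{-1}.p + w.g.w,
# compared with the T-dual assignment g -> g^{-1}, p <-> w.
import itertools
import numpy as np

def zero_mode_spectrum(g, pmax=2):
    d = g.shape[0]
    ginv = np.linalg.inv(g)
    spec = []
    for m in itertools.product(range(-pmax, pmax + 1), repeat=2 * d):
        p, w = np.array(m[:d]), np.array(m[d:])
        spec.append(p @ ginv @ p + w @ g @ w)
    return np.sort(spec)

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
g = A @ A.T + np.eye(2)                  # a positive-definite metric on T^2
assert np.allclose(zero_mode_spectrum(g),
                   zero_mode_spectrum(np.linalg.inv(g)))
print("zero-mode spectra of (T^2, g) and (T^2, g^{-1}) coincide")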

There is a simple generalization of the above to the case with a non-zero B-field on the torus satisfying dB = 0. In this case, since B is a constant antisymmetric tensor, we can label CFTs by the matrix g + B. Now, the basic T-duality relation becomes

CFT(T^d, g + B) ≅ CFT(T^d, (g + B)^{−1})

Another generalization, which is considerably more subtle, is to do T-duality in families, or fiberwise T-duality. The same arguments can be made, and would become precise in the limit that the metric on the fibers varies on length scales far greater than l_s, and has curvature lengths far greater than l_s. This is sometimes called the “adiabatic limit” in physics. While this is a very restrictive assumption, there are more heuristic physical arguments that T-duality should hold more generally, with corrections to the relations proportional to curvatures l_s² R and derivatives l_s ∂ of the fiber metric, both in perturbation theory and from world-sheet instantons.

Fortune of the Individuals Restricted to Integers: Random Economic Exchange Between Populations of Traders.


Consider a population of traders, each of whom possesses a certain amount of capital, assumed to be quantized in units of a minimal capital. Taking this latter quantity as the basic unit, the fortune of an individual is restricted to the integers. The wealth of the population evolves by the repeated interaction of random pairs of traders. In each interaction, one unit of capital is transferred between the trading partners. To complete the description, we specify that if a poorest individual (with one unit of capital) loses this last unit by virtue of a “loss”, the bankrupt individual is considered economically dead and no longer participates in economic activity.

In the following, we consider a specific realization of additive capital exchange, the “random” exchange, where the direction of the capital exchange is independent of the relative capital of the traders. While this rule has little economic basis, the model is completely soluble and thus provides a useful pedagogical example.

In a random exchange, one unit of capital is exchanged between trading partners, as represented by the reaction scheme (j, k) → (j ± 1, k ∓ 1). Let c_k(t) be the density of individuals with capital k. Within a mean-field description, c_k(t) evolves according to

dc_k(t)/dt = N(t) [c_{k+1}(t) + c_{k−1}(t) − 2c_k(t)] —– (1)

with N(t) ≡ M_0(t) = ∑_{k=1}^∞ c_k(t), the population density. The first two terms account for the gain in c_k(t) due to the interactions (j, k + 1) → (j + 1, k) and (j, k − 1) → (j − 1, k), respectively, while the last term accounts for the loss in c_k(t) due to the interactions (j, k) → (j ± 1, k ∓ 1).

By defining a modified time variable,

T = ∫_0^t dt′ N(t′) —– (2)

equation (1) is reduced to the discrete diffusion equation

dc_k(T)/dT = c_{k+1}(T) + c_{k−1}(T) − 2c_k(T) —– (3)

The rate equation for the poorest density has the slightly different form dc_1/dT = c_2 − 2c_1, but may be written in the same form as equation (3) if we impose the boundary condition c_0(T) = 0.

For illustrative purposes, let us assume that initially all individuals have one unit of capital, c_k(0) = δ_{k,1}. The solution to equation (3) subject to these initial and boundary conditions is

c_k(T) = e^{−2T} [I_{k−1}(2T) − I_{k+1}(2T)] —– (4)

where I_n denotes the modified Bessel function of order n. Consequently, the total density N(T) is

N(T) = e^{−2T} [I_0(2T) + I_1(2T)] —– (5)
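One way to check equations (3)-(5) numerically (a sketch assuming NumPy and SciPy; the truncation kmax and the final time are arbitrary choices) is to integrate the discrete diffusion equation directly with the absorbing boundary c_0 = 0 and compare against the Bessel-function solution:

# Sketch: compare c_k(T) = e^{-2T} [I_{k-1}(2T) - I_{k+1}(2T)] with a direct
# integration of dc_k/dT = c_{k+1} + c_{k-1} - 2 c_k, c_0 = 0, c_k(0) = delta_{k,1}.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ive   # scaled Bessel: ive(n, x) = I_n(x) e^{-x}

kmax, T_final = 200, 5.0        # truncation is safe: c_k is negligible there

def rhs(T, c):
    dc = np.empty_like(c)
    dc[0] = c[1] - 2 * c[0]                  # k = 1 row, with c_0 = 0
    dc[1:-1] = c[2:] + c[:-2] - 2 * c[1:-1]
    dc[-1] = c[-2] - 2 * c[-1]               # crude closure at k = kmax
    return dc

c0 = np.zeros(kmax); c0[0] = 1.0             # c_k(0) = delta_{k,1}
sol = solve_ivp(rhs, (0, T_final), c0, rtol=1e-10, atol=1e-12)

k = np.arange(1, kmax + 1)
exact = ive(k - 1, 2 * T_final) - ive(k + 1, 2 * T_final)   # eq. (4)
print("max deviation from eq. (4):", np.max(np.abs(sol.y[:, -1] - exact)))
print("N(T):", sol.y[:, -1].sum(),
      "vs eq. (5):", ive(0, 2 * T_final) + ive(1, 2 * T_final))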

To re-express this exact solution in terms of the physical time t, we first invert equation (2) to obtain t(T) = ∫_0^T dT′/N(T′), and then eliminate T in favor of t in the solution for c_k(T). For simplicity and concreteness, let us consider the long-time limit. From equation (4),

c_k(T) ≅ (k/√(4πT³)) exp(−k²/4T) —– (6)

and from equation (5),

N(T) ≅ (πT)^{−1/2} —– (7)

Equation (7) also implies t ≅ (2/3)√(πT³), which gives

N(t) ≅ (2/(3πt))^{1/3} —– (8)


and

c_k(t) ≅ (k/3t) exp[−(π/144)^{1/3} k²/t^{2/3}] —– (9)

Note that this latter expression may be written in the scaling form c_k(t) ∝ N² x e^{−x²}, with the scaling variable x ∝ kN. One can also confirm that the scaling solution represents the basin of attraction for almost all exact solutions. Indeed, for any initial condition with c_k(0) decaying faster than k^{−2}, the system reaches the scaling limit c_k(t) ∝ N² x e^{−x²}. On the other hand, if c_k(0) ∼ k^{−1−α}, with 0 < α < 1, such an initial state converges to an alternative scaling limit which depends on α. These solutions exhibit a slower decay of the total density, N ∼ t^{−α/(2+α)}, while the scaling form of the wealth distribution is

c_k(t) ∼ N^{(1+α)/α} C_α(x), x ∝ k N^{1/α} —– (10)

with the scaling function

C_α(x) = e^{−x²} ∫_0^∞ du e^{−u²} sinh(2ux)/u^{1+α} —– (11)

Evaluating the integral by the Laplace method gives an asymptotic distribution which exhibits the same x^{−1−α} tail as the initial distribution. This anomalous scaling in the solution to the diffusion equation is a direct consequence of the extended initial condition. This latter case is not physically relevant, however, since the extended initial distribution leads to a divergent initial wealth density.
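The Laplace-method tail can be checked numerically as well (a sketch assuming SciPy; the rewriting e^{−x²} e^{−u²} sinh(2ux) = (e^{−(u−x)²} − e^{−(u+x)²})/2 keeps the integrand well behaved):

# Sketch: evaluate C_alpha(x) of eq. (11) and compare with the Laplace
# estimate C_alpha(x) ~ (sqrt(pi)/2) x^{-1-alpha} for large x.
import numpy as np
from scipy.integrate import quad

def C(alpha, x):
    f = lambda u: 0.5 * (np.exp(-(u - x)**2) - np.exp(-(u + x)**2)) / u**(1 + alpha)
    # split at u = 1 (integrable u^{-alpha} singularity at 0) and hint the
    # quadrature about the Gaussian bump at u = x
    return quad(f, 0, 1)[0] + quad(f, 1, x + 30.0, points=[x])[0]

alpha = 0.5
for x in (5.0, 10.0, 20.0):
    ratio = C(alpha, x) / ((np.sqrt(np.pi) / 2) * x**(-1 - alpha))
    print(x, ratio)    # ratio approaches 1 as x grows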

Evolutionary Game Theory. Note Quote


In classical evolutionary biology the fitness landscape for possible strategies is considered static. Therefore optimization theory is the usual tool for analyzing the evolution of strategies, which consequently tend to climb the peaks of the static landscape. However, in more realistic scenarios the evolution of populations modifies the environment, so that the fitness landscape becomes dynamic. In other words, the maxima of the fitness landscape depend on the number of specimens that adopt each strategy (a frequency-dependent landscape). In this case, when the evolution depends on agents’ actions, game theory is the adequate mathematical tool to describe the process. But this is precisely the scheme in which the evolving physical laws (i.e. algorithms or strategies) are generated from agent-agent interactions (a bottom-up process) subject to natural selection.

The concept of evolutionarily stable strategy (ESS) is central to evolutionary game theory. An ESS is defined as a strategy that cannot be displaced by any alternative strategy when it is followed by the great majority – almost all – of the systems in a population. In general, an ESS is not necessarily optimal; however, it might be assumed that in the last stages of evolution – before achieving the quantum equilibrium – the fitness landscape of possible strategies could be considered static, or at least slowly varying. In this simplified case an ESS would be the strategy with the highest payoff, therefore satisfying an optimizing criterion. Different ESSs could exist in other regions of the fitness landscape.
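To make the standard ESS condition concrete (a textbook Hawk-Dove sketch in Python, independent of the information-theoretic proposal discussed here; the payoff values V and C are arbitrary choices):

# Sketch: Maynard Smith's ESS test in the Hawk-Dove game with resource
# V = 2 and fighting cost C = 4; the mixed strategy p = V/C is the ESS.
import numpy as np

V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],       # row: Hawk vs (Hawk, Dove)
              [0.0,         V / 2]])  # row: Dove vs (Hawk, Dove)

def E(p, q):
    # expected payoff of mixed strategy p against q (p = prob. of Hawk)
    return np.array([p, 1 - p]) @ A @ np.array([q, 1 - q])

sigma = V / C
for tau in np.linspace(0, 1, 101):
    if abs(tau - sigma) < 1e-9:
        continue
    # ESS: E(s,s) > E(t,s), or equal payoffs against s and E(s,t) > E(t,t)
    assert (E(sigma, sigma) > E(tau, sigma) + 1e-12 or
            (abs(E(sigma, sigma) - E(tau, sigma)) < 1e-9
             and E(sigma, tau) > E(tau, tau)))
print("p = V/C cannot be displaced by any alternative strategy")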

In the information-theoretic Darwinian approach it seems plausible to adopt the optimization of information flows for the system as the optimization criterion. A set of three regulating principles could be:

Structure: The complexity of the system is optimized (maximized). The definition adopted for complexity is Bennett’s logical depth, which for a binary string is the time needed to execute the minimal program that generates that string. There is no generally accepted definition of complexity, nor is there a consensus on the relation between the increase of complexity – for a given definition – and Darwinian evolution. However, there seems to be some agreement on the fact that, in the long term, Darwinian evolution should drive an increase in complexity in the biological realm, for an adequate natural definition of this concept. The complexity of a system at time t in this theory would then be the Bennett logical depth of the program stored at time t in its Turing machine. The increase of complexity is a characteristic of Lamarckian evolution, and it is also admitted that the trend of evolution in the Darwinian theory is in the direction in which complexity grows, although whether this tendency depends on the timescale – or some other factors – is still not very clear.

Dynamics: The information outflow of the system is optimized (minimized). The information is the Fisher information measure for the probability density function of the position of the system. According to S. A. Frank, natural selection acts to maximize the Fisher information within a Darwinian system. As a consequence, assuming that the flow of information between a system and its surroundings can be modeled as a zero-sum game, Darwinian systems would follow Dynamics.

Interaction: The interaction between two subsystems optimizes (maximizes) the complexity of the total system. The complexity is again equated to Bennett’s logical depth. The role of Interaction is central in the generation of composite systems, and therefore in the structure of the information processor of composite systems, which results from the logical interconnections among the processors of the constituents. There is an enticing option of defining the complexity of a system in contextual terms, as the capacity of a system for anticipating the behavior at t + ∆t of the surrounding systems included in the sphere of radius r centered at the position X(t) occupied by the system. This definition would directly lead to the maximization of the predictive power of the systems that maximized their complexity. However, this magnitude would definitely be very difficult even to estimate, in principle much more so than the usual definitions of complexity.

Quantum behavior of microscopic systems should now emerge from the ESS. In other terms, the postulates of quantum mechanics should be deduced from the application of the three regulating principles to our physical systems endowed with an information processor.

Let us apply Structure. It is reasonable to consider that the maximization of the complexity of a system would in turn maximize the predictive power of that system. And this optimal statistical inference capacity would plausibly induce the complex Hilbert space structure for the system’s space of states. Let us now consider Dynamics. This is basically the application of the principle of minimum Fisher information, or maximum Cramér-Rao bound, to the probability distribution function for the position of the system. The concept of entanglement seems to be determinant for studying the generation of composite systems, in particular in this theory through applying Interaction. The theory admits a simple model that characterizes the entanglement between two subsystems as the mutual exchange of randomizers (R1, R2), programs (P1, P2) – with their respective anticipation modules (A1, A2) – and wave functions (Ψ1, Ψ2). In this way, both subsystems can anticipate not only the behavior of their corresponding surrounding systems, but also that of the environment of their partner entangled subsystem. In addition, entanglement can be considered a natural phenomenon in this theory, a consequence of the tendency to increase complexity, and therefore, in a certain sense, an experimental support for the theory.

In addition, the information-theoretic Darwinian approach is a minimalist realist theory: every system follows a continuous trajectory in time, as in Bohmian mechanics, and the theory is local in physical space. Apparent nonlocality, as in Bell’s inequality violations, would be an artifact of the anticipation module in the information space, although randomness would necessarily be intrinsic to nature through the random number generator methodologically associated with every fundamental system at t = 0, an essential ingredient to start and fuel – through variation – Darwinian evolution. As time increases, random events determined by the random number generators would progressively be replaced by causal events determined by the evolving programs that gradually take control of the elementary systems. Randomness would be displaced by causality as physical Darwinian evolution gave rise to the quantum equilibrium regime, but not completely, since randomness would play a crucial role in the optimization of strategies – thus, of information flows – as game theory states.

Mappings, Manifolds and Kantian Abstract Properties of Synthesis


An inverse system is a collection of sets which are connected by mappings. We start off with the definitions before relating these to abstract properties of synthesis.

Definition: A directed set is a set T together with an ordering relation ≤ such that

(1) ≤ is a partial order, i.e. transitive, reflexive, anti-symmetric

(2) ≤ is directed, i.e. for any s, t ∈ T there is r ∈ T with s, t ≤ r

Definition: An inverse system indexed by T is a set D = {D_s | s ∈ T} together with a family of mappings F = {h_{st} | s ≥ t, h_{st} : D_s → D_t}. The mappings in F must satisfy the coherence requirement that if s ≥ t ≥ r, then h_{tr} ◦ h_{st} = h_{sr}.
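A concrete sketch may help fix the definitions before the Kantian reading (an illustration of the formal structure only, in Python): take T to be the positive integers directed by divisibility, D_s = Z/sZ, and h_{st} reduction mod t whenever t divides s; least common multiples witness directedness, and compatibility of successive reductions gives coherence:

# Sketch: the inverse system D_s = Z/sZ over the positive integers
# ordered by divisibility, with h_{st}(x) = x mod t whenever t | s.
from math import lcm

def leq(t, s):             # t <= s  iff  t divides s
    return s % t == 0

def h(s, t, x):            # h_{st} : Z/sZ -> Z/tZ, defined when t <= s
    assert leq(t, s)
    return x % t

# Directedness: any s, t have the upper bound r = lcm(s, t).
s, t = 6, 10
r = lcm(s, t)
assert leq(s, r) and leq(t, r)

# Coherence: for 60 >= 6 >= 3, h_{6,3} o h_{60,6} = h_{60,3}.
for x in range(60):        # x ranges over D_60 = Z/60Z
    assert h(6, 3, h(60, 6, x)) == h(60, 3, x)
print("directedness and coherence hold in this example")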

Interpretation of the index set: The index set represents some abstract properties of synthesis. The ‘synthesis of apprehension in intuition’ proceeds by a ‘running through and holding together of the manifold’ and is thus a process that takes place in time. We may now think of an index s ∈ T as an interval of time available for the process of ‘running through and holding together’. More formally, s can be taken to be a set of instants or events, ordered by a ‘precedes’ relation; the relation t ≤ s then stands for: t is a substructure of s. It is immediate that on this interpretation ≤ is a partial order. The directedness is related to what Kant called ‘the formal unity of the consciousness in the synthesis of the manifold of representations’ or ‘the necessary unity of self-consciousness, thus also of the synthesis of the manifold, through a common function of the mind for combining it in one representation’ – the requirement that ‘for any s, t ∈ T there is r ∈ T with s, t ≤ r’ creates the formal conditions for combining the syntheses executed during s and t in one representation, coded by r.

Interpretation of the D_s and the mappings h_{st} : D_s → D_t. An object in D_s can be thought of as a possible ‘indeterminate object of empirical intuition’ synthesised in the interval s. If s ≥ t, the mapping h_{st} : D_s → D_t expresses a consistency requirement: if d ∈ D_s represents an indeterminate object of empirical intuition synthesised in interval s, so that a particular manifold of features can be ‘run through and held together’ during s, some indeterminate object of empirical intuition must already be synthesisable by ‘running through and holding together’ in interval t, e.g. by combining a subset of the features characterising d. This interpretation justifies the coherence condition (if s ≥ t ≥ r, then h_{tr} ◦ h_{st} = h_{sr}): the synthesis obtained from first restricting the interval available for ‘running through and holding together’ to interval t, and then to interval r, should not differ from the synthesis obtained by restricting to r directly.

We do not put any further requirements on the mappings h_{st} : D_s → D_t, such as surjectivity or injectivity. Some indeterminate object of experience in D_t may have disappeared in D_s: more time for ‘running through and holding together’ may actually yield fewer features that can be combined. Thus we do not require the mappings to be surjective. It may also happen that an indeterminate object of experience in D_t corresponds to two or more such objects in D_s, as when a building viewed from afar upon closer inspection turns out to be composed of two spatially separated buildings; thus the mappings need not be injective.

The interaction of the directedness of the index set and the mappings h_{st} is of some interest. If r ≥ s, t, there are mappings h_{rs} : D_r → D_s and h_{rt} : D_r → D_t. Each ‘indeterminate object of empirical intuition’ d ∈ D_r can be seen as a synthesis of such objects h_{rs}(d) ∈ D_s and h_{rt}(d) ∈ D_t. For example, the ‘manifold of a house’ can be viewed as synthesised from a ‘manifold of the front’ and a ‘manifold of the back’. The operation just described has some of the characteristics of the synthesis of reproduction in imagination: the fact that the front of the house can be unified with the back to produce a coherent object presupposes that the front can be reproduced as it is while we are staring at the back. The mappings h_{rs} : D_r → D_s and h_{rt} : D_r → D_t capture the idea that d ∈ D_r arises from reproductions of h_{rs}(d) and h_{rt}(d) in r.

Deleuzian Speculative Philosophy. Thought of the Day 44.0


Deleuze’s version of speculative philosophy is the procedure of counter-effectuation or counter-actualization. In defiance of the causal laws of an actual situation, speculation experiments with the quasi-causal intensities capable of bringing about effects that have their own retro-active power. This is its political import. Leibniz already argued that all things are effects or consequences, even though they do not necessarily have a cause, since the sufficient reason of what exists always lies outside of any actual series and remains virtual. (Leibniz) With the Principle of Sufficient Reason, he thus reinvented the Stoic disjunction between the series of corporeal causes and the series of incorporeal effects. Not because he anticipated the modern bifurcation of given necessary causes (How?) and metaphysically constructed reasons (Why?), but because for him the virtuality of effects is no less real than the interaction of causes. The effect always includes its own cause, since divergent series of events (incompossible worlds) enter into relation with any particular event (in this world), while these interpenetrating series are prior to, and not limited by, actual relations of causality per se. In terms of Deleuze, cause and effect do not share the same temporality. Whereas causes relate to one another in an eternal present (Chronos), effects relate to one another in a past-future purified of the present (Aion). Taken together, these temporalities form the double structure of every event (Logic of Sense). When the night is lit up by a sudden flash of lightning, this is the effect of an intensive, metaphysical becoming that contains its own destiny, integrating a differential potentiality that is irreducible to the physical series of necessary efficient causes that nonetheless participate in it. In order for such a contingent conjugation of events to be actualized (i.e. for effects to influence causes and become individuated in a materially extended state of affairs), however, its impersonal and pre-individual presence must be trusted upon. This takes a speculative investment or amor fati that forms its precursive reason/ground. In Difference and Repetition, Deleuze refers to this will to speculate as the dark precursor which determines the path of a thunderbolt in advance but in reverse, as though intagliated by setting up a communication of difference with difference. It is therefore the differenciator of these differences or the in-itself of difference. We find a paradigmatic example of this will to make a difference in William James’ The Will to Believe when he writes: We can and we may, as it were, jump with both feet off the ground into a world of which we trust the other parts to meet our jump and only so can the making of a perfected world of the pluralistic pattern ever take place. Only through our precursive trust in it can it come into being. (James) As Stengers explains, we can and do speculate each time we precursively trust in the possibility of connecting, of entering into a (partial) rapport that cannot be derived from the ground of our current, dominant premises. Or as Deleuze writes: the dark precursor is not the friend (Difference and Repetition) but rather the bad will of a traitor or enemy, since the will does not precede the presubjective cruelty of the event in its involuntariness. At the same time, however, we never jump into a vacuum.
We always speculate by the milieu, since a jump in general could never be trusted: If a jump is always situated, it is because its aim is not to escape the ground in order to get access to a higher realm. The jump, connecting this ground, always this ground, with what it was alien to, has the necessity of a response. In other words, the ground must have been given the power to make itself felt as calling for new dimensions. (Stengers) Indeed, if speculative thought cannot be detached from a practical concern, Deleuze at the same time states that [t]here is no other ethic than the amor fati of philosophy. (What is Philosophy?) Speculative reasoning is thus an art of pure expression or efficacy, an art of precipitating events: an art that detects and affirms the possibility of other reasons insisting as so many virtual forces that have not yet had the chance to emerge but whose presence can be trusted upon to make a difference.

In Deleuze’s own terms, there is no such thing as pure reason, only heterogeneous processes of rationalization, of actualizing an irrational potential: There is no metaphysics, but rather a politics of being. (Deleuze) For this reason, the method of speculative philosophy is the method of dramatization. It is a method that distributes events according to a logic that conditions the order of their intelligibility. As such it belongs to what in Difference and Repetition is referred to as the proper order of reasons: differentiation-individuation-dramatisation-differenciation. A book of philosophy, Deleuze famously writes in the preface, should be in part a very particular species of detective novel, in part a kind of science fiction. On the one hand, the creation of concepts cannot be separated from a problematic milieu or stage that matters practically; on the other hand, it seeks to deterritorialize this milieu by speculating on the quasi-causal intensity of its becoming-other.




Let us introduce the concept of space using the notion of reflexive action (or reflex action) between two things. Intuitively, a thing x acts on another thing y if the presence of x disturbs the history of y. Events in the real world seem to happen in such a way that it takes some time for the action of x to propagate up to y. This fact can be used to construct a relational theory of space à la Leibniz, that is, by taking space as a set of equitemporal things. It is necessary then to define the relation of simultaneity between states of things.

Let x and y be two things with histories h(x_τ) and h(y_τ), respectively, and let us suppose that the action of x on y starts at τ_{x0}. The history of y will be modified starting from τ_{y0}. The proper times are still not related, but we can introduce the reflex action to define the notion of simultaneity. The action of y on x, started at τ_{y0}, will modify x from τ_{x1} on. The relation “the action of x on y is reflected to x” is the reflex action. Historically, Galileo introduced the reflection of a light pulse on a mirror to measure the speed of light. With this relation we will define the concept of simultaneity of events that happen on different basic things.


Besides, we have a second important fact: observation and experiment suggest that gravitation, whose source is energy, is a universal interaction, carried by the gravitational field.

Let us now state the above hypothesis axiomatically.

Axiom 1 (Universal interaction): Any pair of basic things interact. This extremely strong axiom states not only that there exist no completely isolated things but also that all things are interconnected.

This universal interconnection of things should not be confused with “universal interconnection” claimed by several mystical schools. The present interconnection is possible only through physical agents, with no mystical content. It is possible to model two noninteracting things in Minkowski space assuming they are accelerated during an infinite proper time. It is easy to see that an infinite energy is necessary to keep a constant acceleration, so the model does not represent real things, with limited energy supply.

Now consider the time interval (τ_{x1} − τ_{x0}). Special Relativity suggests that it is nonzero, since any action propagates with a finite speed. We then state

Axiom 2 (Finite speed axiom): Given two different and separated basic things x and y, there exists a minimum positive bound for the interval (τ_{x1} − τ_{x0}) defined by the reflex action.

Now we can define simultaneity: τ_{y0} is simultaneous with τ_{x1/2} =_Df (1/2)(τ_{x1} + τ_{x0})
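As a minimal numerical illustration (a sketch assuming flat spacetime with c = 1 and both things at rest, in which case the definition reduces to Einstein’s radar simultaneity; the numbers are arbitrary):

# Sketch: the simultaneity definition as radar simultaneity in 1+1
# Minkowski spacetime; x sits at position 0, y at position d, both at
# rest, so proper time equals coordinate time for each of them.
d = 3.0                       # separation (arbitrary)
tau_x0 = 2.0                  # x starts acting on y (signal emitted)
tau_y0 = tau_x0 + d           # signal reaches y; reflex action starts
tau_x1 = tau_y0 + d           # reflex action arrives back at x

tau_x_half = 0.5 * (tau_x1 + tau_x0)   # event on x simultaneous with tau_y0
assert tau_x_half == tau_y0            # agrees with Einstein simultaneity
print("tau_{x 1/2} =", tau_x_half)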

The local times on x and y can be synchronized by the simultaneity relation. However, as we know from General Relativity, the simultaneity relation is transitive only in special reference frames called synchronous, thus prompting us to include the following axiom:

Axiom 3 (Synchronizability): Given a set of separated basic things {x_i}, there is an assignment of proper times τ_i such that the relation of simultaneity is transitive.

With this axiom, the simultaneity relation is an equivalence relation. Now we can define a first approximation to physical space: the ontic space E_O is the set of equivalence classes of states defined by the relation of simultaneity on the set of things.

The notion of simultaneity allows the analysis of the notion of clock. A thing y ∈ Θ is a clock for the thing x if there exists an injective function ψ : SL(y) → SL(x) such that τ < τ′ ⇒ ψ(τ) < ψ(τ′), i.e. the proper time of the clock grows in the same way as the time of things. The name Universal time applies to the proper time of a reference thing that is also a clock. From this we see that “universal time” is frame dependent, in agreement with the results of Special Relativity.

US Stock Market Interaction Network as Learned by the Boltzmann Machine


Price formation on a financial market is a complex problem: it reflects the opinions of investors about the true value of the asset in question, the policies of the producers, external regulation and many other factors. Given the large number of factors influencing price, many of which are unknown to us, describing price formation essentially requires probabilistic approaches. In the last decades, the synergy of methods from various scientific areas has opened new horizons in understanding the mechanisms that underlie related problems. One popular approach is to consider a financial market as a complex system, in which not only the great number of constituents plays a crucial role but also the non-trivial interactions between them. For example, interdisciplinary studies of complex financial systems have revealed their enhanced sensitivity to fluctuations and external factors near critical events, with an overall change of internal structure. This can be complemented by research devoted to equilibrium and non-equilibrium phase transitions.

In general, statistical modeling of the state space of a complex system requires writing down the probability distribution over this space using real data. In a simple version of modeling, the probability of an observable configuration (state of a system) described by a vector of variables s can be given in the exponential form

p(s) = Z^{−1} exp{−βH(s)} —– (1)

where H is the Hamiltonian of the system, β is the inverse temperature (β ≡ 1 is assumed in what follows) and Z is the statistical sum (partition function). The physical meaning of the model’s components depends on the context; for instance, in the case of financial systems, s can represent a vector of stock returns and H can be interpreted as the inverse utility function. Generally, H has parameters defined by its series expansion in s. Based on the maximum entropy principle, an expansion up to the quadratic terms is usually used, leading to pairwise interaction models. In the equilibrium case, the Hamiltonian has the form

H(s) = −h^T s − s^T J s —– (2)

where h is a vector of size N of external fields and J is a symmetric N × N matrix of couplings (T denotes transpose). The energy-based models represented by (1) play an essential role not only in statistical physics but also in neuroscience (models of neural networks) and machine learning (generative models, also known as Boltzmann machines). Given the topological similarities between neural and financial networks, these systems can be considered examples of complex adaptive systems, which are characterized by the ability to adapt to a changing environment, trying to stay in equilibrium with it. From this point of view, market structural properties, e.g. clustering and networks, play an important role in modeling the distribution of stock prices. Adaptation (or learning) in these systems implies a change of the parameters of H as financial and economic systems evolve. Using statistical inference for the model’s parameters, the main goal is to have a model capable of reproducing the same statistical observables, given time series for a particular historical period. In the pairwise case, the objective is to have

⟨s_i⟩_data = ⟨s_i⟩_model —– (3a)

⟨s_i s_j⟩_data = ⟨s_i s_j⟩_model —– (3b)

where the angular brackets denote statistical averaging over time. Having specified the general mathematical model, one can also discuss similarities between financial and infinite-range magnetic systems in terms of related phenomena, e.g. extensivity, order parameters, phase transitions, etc. These features can be captured even in the simplified case when s_i is a binary variable taking only two discrete values. One can then consider the effect of mapping to a binarized system, where the values s_i = +1 and s_i = −1 correspond to profit and loss, respectively. In this case, the diagonal elements of the coupling matrix, J_ii, are zero because the s_i² = 1 terms do not contribute to the Hamiltonian….
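As a hedged sketch of how conditions (3a)-(3b) can be enforced in practice (maximum-likelihood gradient ascent with exact enumeration of states, feasible only for small N; the ±1 “return” series below is synthetic, a stand-in for real stock data; assumes NumPy):

# Sketch: fit h, J of H(s) = -h.s - s.J.s by matching first and second
# moments of binarized data (exact enumeration over all 2^N states).
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, T_obs = 5, 4000
data = np.sign(rng.normal(size=(T_obs, N)) + 0.3)   # synthetic +/-1 series
m_data = data.mean(axis=0)                          # <s_i>_data
C_data = data.T @ data / T_obs                      # <s_i s_j>_data

states = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)

def model_moments(h, J):
    E = -(states @ h) - np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(-E); p /= p.sum()                    # Boltzmann weights, beta = 1
    return p @ states, states.T @ (states * p[:, None])

h, J = np.zeros(N), np.zeros((N, N))
for _ in range(2000):                               # plain gradient ascent
    m, Cm = model_moments(h, J)
    h += 0.1 * (m_data - m)
    J += 0.1 * (C_data - Cm)
    np.fill_diagonal(J, 0.0)                        # s_i^2 = 1: J_ii dropped
m, Cm = model_moments(h, J)
print("moment residuals:", np.max(np.abs(m - m_data)), np.max(np.abs(Cm - C_data)))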


The Locus of Renormalization. Note Quote.


Since symmetries and the related conservation properties have a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken or new ones are obtained after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is the passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to a sudden common orientation of spins, up to the completely ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is, the iteration of a statistically invariant motif. Similarly for the formation of a snowflake… In all these circumstances, a “new physical object of observation” is formed. Most of the current analyses deal with transitions at equilibrium; the less studied and more challenging case of far from equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions; that is, mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. relevant infinity or divergences) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interactions of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.
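A textbook illustration of summing part of the interactions exactly (swapped in here for concreteness, not drawn from the text above): decimating every second spin of a one-dimensional Ising chain renormalizes the coupling K to K′ = ½ ln cosh 2K, and the identity behind the step can be checked directly:

# Sketch: one decimation step of the 1D Ising chain. Tracing out the
# middle spin s2 in exp[K (s1 s2 + s2 s3)] gives A * exp[K' s1 s3]
# with K' = (1/2) ln cosh(2K) and A = 2 sqrt(cosh(2K)).
import numpy as np

K = 0.8
K_new = 0.5 * np.log(np.cosh(2 * K))
A = 2.0 * np.sqrt(np.cosh(2 * K))

for s1 in (-1, 1):
    for s3 in (-1, 1):
        lhs = sum(np.exp(K * (s1 * s2 + s2 * s3)) for s2 in (-1, 1))
        assert np.isclose(lhs, A * np.exp(K_new * s1 * s3))
print("K =", K, "->  K' =", K_new)   # K' < K: couplings flow to 0 in 1D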

In this situation, the intelligibility still has an “upward” flavor since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories, the parts are not really separable from the whole (this would mean separating an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot be summed over (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained by the scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking at molecules as the only pertinent observables of life is worse than reductionist; it is against the history of physics and its audacious unifications and inventions of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system, a cell’s genetic regulatory network, brains and brain slices … are “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done for the profound mathematical frame for physics. This does not forbid a scientific analysis of life. This may just be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as in physics. The point is that, since the work of Noether and Weyl in the last century, these are better understood and correlated as symmetries in the intended equations, where they express the underlying invariants and invariant-preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and in cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expression, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections… And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions in an organism, in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure; that is, it changes the internal symmetries: the fish with the new bladder will swim differently, its heart-vascular system will relevantly change….

Organisms transform the ecosystem while transforming themselves, and they can stand it because they have an internal preserved universe. Its stability is maintained also by slightly, yet constantly, changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into the permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to the biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human. It has some 150 different microbial species in the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not understand yet, the “community” in the intestines co-creates their worlds together, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained even if it may change by immigration of new microbial species into the community and extinction of old species in the community. With such community membership turnover, or community assembly, the phase space of the system is undergoing continual and open ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new unprestatable functions, where functions and their names make sense only in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One finds out a feature of a niche by observing that if it goes away the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing these spaces and their internal relations over time. Mathematically, no symmetry (groups) nor (partial) order (semigroups) organize the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical nor logical problems in contemporary logic (this is so also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own border, presents technical difficulties for a careful equational description of its dynamics, but no mathematical nor logical impossibility: one has to optimize a highly non-linear and large action/reaction system, yielding a dynamically constructed geodesic, the river path, in perfectly known phase spaces (momentum and space, or energy and time, say, as pertinent observables and variables).

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so that the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not just within prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but in the ever changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.