Revisiting Catastrophes. Thought of the Day 134.0

The most explicit influence from mathematics in semiotics is probably René Thom’s controversial theory of catastrophes, with philosophical and semiotic support from Jean Petitot. Catastrophe theory is but one of several formalisms in the broad field of qualitative dynamics (comprising also chaos theory, complexity theory, self-organized criticality, etc.). In all these cases, the theories in question are in a certain sense phenomenological, because the focus is on different types of qualitative behavior of dynamic systems, grasped on a purely formal level while bracketing their causal determination on the deeper level. A widespread tool in these disciplines is phase space – a space defined by the variables governing the development of the system, so that this development may be mapped as a trajectory through phase space, each point on the trajectory mapping one global state of the system. This space may be inhabited by different types of attractors (attracting trajectories), repellors (repelling them), attractor basins around attractors, and borders between such basins characterized by different types of topological saddles which may have a complicated topology.

Catastrophe theory has its basis in differential topology, that is, the branch of topology that keeps various differential properties of a function invariant under transformation. More specifically, it relies on the so-called Whitney topology, whose invariants are the points where the nth derivative of a function takes the value 0, graphically corresponding to minima, maxima, turning tangents and, in higher dimensions, various complicated saddles. Catastrophe theory takes its point of departure in singularity theory, whose object is the shift between types of such functions. It thus erects a distinction between an inner space – where the function varies – and an outer space of control variables charting the variation of that function, including where it changes type – where, e.g., it goes from having one minimum to having two minima, via a singular case with a turning tangent. The continuous variation of the control parameters thus corresponds to a continuous variation within one subtype of the function, until it reaches a singular point where it discontinuously, ‘catastrophically’, changes subtype. The philosophy-of-science interpretation of this formalism now conceives the stable subtype of the function as representing the stable state of a system, and the passage of the critical point as the sudden shift to a new stable state. The configuration of control parameters thus provides a sort of map of the shift between continuous development and discontinuous ‘jump’. Thom’s semiotic interpretation of this formalism entails that typical catastrophic trajectories of this kind may be interpreted as stable process types, phenomenologically salient for perception and giving rise to basic verbal categories.

[Figure: diagrams (a)–(c) of the cusp catastrophe discussed below]

One of the simpler catastrophes is the so-called cusp (a). It constitutes a meta-diagram, namely a diagram of the possible type-shifts of a simpler diagram (b), that of the equation ax^4 + bx^2 + cx = 0. The upper part of (a) shows the so-called fold, charting the manifold of solutions to the equation in the three dimensions a, b and c. By projecting the fold onto the a, b-plane, the pointed figure of the cusp (lower part of (a)) is obtained. The cusp now charts the type-shift of the function: inside the cusp, the function has two minima; outside it, only one minimum. Different paths through the cusp thus correspond to different variations of the equation under the variation of the external variables a and b. One typical path is the one indicated by the left-right arrow on all four diagrams, which crosses the cusp from inside out, giving rise to a diagram at a further level (c) – depending on the interpretation of the minima as simultaneous states. Here, then, we find diagram transformations on three different, nested levels.
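To make the type-shift concrete, here is a minimal numerical sketch (our illustration, assuming the standard cusp normal form V(x) = x^4 + ax^2 + bx, a rescaled cousin of the equation above): counting the minima of V over a grid of control parameters (a, b) reproduces the cusp as the boundary between the one-minimum and two-minima regions.

```python
import numpy as np

# Minimal sketch (assumed normal form, not the article's exact equation):
# V(x) = x^4 + a*x^2 + b*x.  For each control point (a, b), count the minima.
def count_minima(a, b):
    # Critical points solve V'(x) = 4x^3 + 2a*x + b = 0.
    roots = np.roots([4.0, 0.0, 2.0 * a, b])
    real = roots[np.abs(roots.imag) < 1e-9].real
    # A critical point is a minimum where V''(x) = 12x^2 + 2a > 0.
    return int(np.sum(12.0 * real**2 + 2.0 * a > 1e-9))

for a in np.linspace(-2.0, 1.0, 7):
    row = [count_minima(a, b) for b in np.linspace(-1.5, 1.5, 7)]
    print(f"a = {a:+.2f}   minima along b: {row}")
# Inside the cusp the rows show 2 minima, outside 1: a horizontal path in b
# crossing the cusp from inside out is the 'catastrophic' type-shift above.
```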

The concept of transformation plays several roles in this formalism. The most spectacular one refers, of course, to the change in external control variables, determining a trajectory through phase space where the controlled function changes type. This transformation thus probes the possibility of a change between subtypes of the function in question, that is, it plays the role of an eidetic variation mapping how the function is ‘unfolded’ (the basic theorem of catastrophe theory refers to such unfoldings of simple functions). Another transformation finds stable classes of such local trajectory pieces, including such shifts – making possible the recognition of these types of shifts in different empirical phenomena. On the most empirical level, finally, one run of such a trajectory piece provides, in itself, a transformation of one state into another, whereby the two states are rationally interconnected. Generally, it is possible to make a given transformation the object of a higher-order transformation which, by abstraction, may investigate aspects of the lower one’s type and conditions. Thus, the central unfolding of a function germ in catastrophe theory constitutes a transformation having the character of an eidetic variation, making clear which possibilities lie in the function germ in question. As an abstract formalism, the higher of these transformations may determine the lower one as invariant across a series of empirical cases.

Complexity theory is a broader and more inclusive term covering the general study of the macro-behavior of composite systems, also using phase space representation. The theoretical biologist Stuart Kauffman argues that in a phase space of all possible genotypes, biological evolution must unfold in a rather small and specifically qualified sub-space characterized by many, closely located and stable states (corresponding to the possibility of a species ‘jumping’ to another and better genotype in the face of environmental change) – as opposed to phase space areas with few, very stable states (which will only be optimal in certain, very stable environments and thus fragile when exposed to change), and also opposed, on the other hand, to sub-spaces with a high plurality of only metastable states (here, the species will tend to merge into neighboring species and hence never stabilize). On the basis of this argument, only a small subset of the set of virtual genotypes possesses ‘evolvability’, this special combination of plasticity and stability. The overall argument thus goes that order in biology is not a pure product of evolution; the possibility of order must be present in certain types of organized matter before selection begins – conversely, selection requires already organized material on which to work.

The identification of a species with a co-localized group of stable states in genome space thus provides a (local) invariance for the transformation taking a trajectory through that space, and larger groups of neighboring stabilities – lineages – again provide invariants defined by various more or less general transformations. Species, in this view, are in a certain limited sense ‘natural kinds’ and thus naturally signifying entities. Kauffman’s speculations about genotypical phase space have a crucial bearing on a transformation concept central to biology, namely mutation. On this basis, far from all virtual mutations are really possible – even apart from their degree of environmental relevance. A mutation into a stable but remotely placed species in phase space will be impossible (evolution cannot cross the distance in phase space), just as a mutation in an area with many unstable proto-species will not allow for any stabilization of species at all and will thus fall prey to arbitrarily small environmental variations. Kauffman takes a spontaneous and non-formalized transformation concept (mutation) and attempts a formalization by investigating its condition of possibility as movement between stable genomes in genotype phase space. A series of constraints turn out to determine type formation on a higher level (the three different types of local geography in phase space). If the trajectory of mutations must obey the possibility of walking between stable species, then the space of possible trajectories is highly limited; see the sketch after this paragraph.

Self-organized criticality as developed by Per Bak (How Nature Works: The Science of Self-Organized Criticality) belongs to the same type of theories. Criticality is here defined as that state of a complicated system in which sudden developments of all sizes spontaneously occur.
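To see the ‘tunable ruggedness’ behind Kauffman’s argument at work, here is a toy NK-style landscape (a standard textbook sketch of ours, not Kauffman’s own code): N binary genes, each contributing a memoised random fitness that depends on K other genes; as K grows, the number of local optima – candidate stable states in genotype space – multiplies.

```python
import itertools, random

# Toy NK-style fitness landscape (a standard illustration, not Kauffman's
# own code): N binary genes; gene i's contribution is a memoised random
# number depending on gene i and K randomly chosen other genes.
def nk_landscape(N, K, rng):
    neigh = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    table = [{} for _ in range(N)]
    def fitness(g):
        total = 0.0
        for i in range(N):
            key = (g[i],) + tuple(g[j] for j in neigh[i])
            if key not in table[i]:
                table[i][key] = rng.random()
            total += table[i][key]
        return total / N
    return fitness

def count_local_optima(N, K, seed=0):
    rng = random.Random(seed)
    f = nk_landscape(N, K, rng)
    count = 0
    for g in itertools.product((0, 1), repeat=N):
        fg = f(g)
        # Local optimum: no single-gene flip improves fitness.
        if all(fg >= f(g[:i] + (1 - g[i],) + g[i + 1:]) for i in range(N)):
            count += 1
    return count

for K in (0, 2, 4, 7):
    print(f"K = {K}: {count_local_optima(8, K)} local optima among 256 genotypes")
```

At K = 0 the landscape is smooth with essentially one peak; raising K multiplies the locally stable genotypes, the kind of geography Kauffman’s argument trades on.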


The Locus of Renormalization. Note Quote.


Since symmetries and the related conservation properties play a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken or new ones are obtained after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is the passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to the sudden common orientation of spins, up to the completely ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is, the iteration of a statistically invariant motif. Similarly for the formation of a snowflake… In all these circumstances, a “new physical object of observation” is formed. Most of the current analyses deal with transitions at equilibrium; the less studied and more challenging case of far-from-equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions, that is, mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. a relevant infinity or divergence) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interactions of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.
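A minimal concrete instance of this strategy (the classic 1d Ising decimation, our illustration rather than an example from the quoted text): summing out every second spin replaces the coupling K by K′ = ½ ln cosh(2K), so the interactions are handled one scale interval at a time, and the analysis consists in following the flow of K under iteration.

```python
import math

# The classic 1d Ising decimation (illustration; not the text's own example):
# tracing over every second spin at coupling K = J/kT yields the same model
# on the surviving spins with renormalized coupling K' = 0.5*ln(cosh(2K)).
def decimate(K):
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 1.5
for step in range(8):
    print(f"step {step}: K = {K:.6f}")
    K = decimate(K)
# K flows to the trivial fixed point K = 0: the 1d model has no finite-
# temperature critical point, and the flow states this scale by scale.
```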

In this situation, the intelligibility still has an “upward” flavor, since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories the parts are not really separable from the whole (this would mean separating an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot simply be summed over (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained through scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking at molecules as the only pertinent observable of life is worse than reductionist; it is against the history of physics, with its audacious unifications and its invention of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system, a cell’s genetic regulatory network, the brain and brain slices … are “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space of the biological dynamics, in contrast to what has been done in the profound mathematical frames of physics. This does not forbid a scientific analysis of life; it may just have to be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as there is in physics. There, since the work of Noether and Weyl in the last century, laws and causes are best understood and correlated as symmetries in the intended equations, where they express the underlying invariants and invariant-preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and in cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expression, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections… And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions in an organism or in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure; that is, it changes the internal symmetries: the fish with the new bladder will swim differently, its heart-vascular system will relevantly change…

Organisms transform the ecosystem while transforming themselves, and they can withstand this because they have an internally preserved universe. Its stability is maintained also by slightly, yet constantly, changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to the biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human: some 150 different microbial species in the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not yet understand, the “community” in the intestines co-creates its worlds, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained even as it may change through the immigration of new microbial species into the community and the extinction of old ones. With such community membership turnover, or community assembly, the phase space of the system undergoes continual and open-ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing, as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new, unprestatable functions, where functions and their names make sense only in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One finds out a feature of a niche because one observes that, if it goes away, the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing these spaces and their internal relations over time. Mathematically, no symmetries (groups) nor (partial) orders (semigroups) organize the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical nor logical problem in contemporary logic (this is so also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own borders, presents technical difficulties for a careful equational description of its dynamics, but no mathematical nor logical impossibility: one has to optimize a large, highly non-linear action/reaction system, yielding a dynamically constructed geodesic, the river path, in perfectly known phase spaces (momentum and space, or energy and time, say, as the pertinent observables and variables).

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so that the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it sits at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not a matter of prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but of ever-changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.

Symplectic Manifolds


The canonical example of an n-symplectic manifold is the frame bundle, so a natural question is whether the formalism can be generalized to other principal bundles, and how it is to be distinguished from the quantization arising from ordinary symplectic geometry. The prototype manifold, the bundle of linear frames, is a good place to motivate the formalism.

Let us start with an n-dimensional manifold M, and let π : LM → M be the bundle of linear frames over the base manifold M, i.e. LM is the set of pairs (m, e_k), where m ∈ M and {e_k}, k = 1, …, n, is a linear frame at m. This gives LM dimension n(n + 1), with GL(n, R) as the structure group acting freely on the right. We define local coordinates on LM in terms of those on the manifold M – for a chart on M with coordinates {x^i}, let

q^i(m, e_k) = x^i ∘ π(m, e_k) = x^i(m)

π^i_j(m, e_k) = e^i(∂/∂x^j)

where {e^i} denotes the coframe dual to {e_i}. These coordinates are analogous to those on the cotangent bundle, except that, instead of a single momentum coordinate, we now have a momentum frame. We want to place some kind of structure on LM – the prototype of n-symplectic geometry – similar to the symplectic geometry of the cotangent bundle T∗M. The structure equation of symplectic geometry,

df = X ⌟ dθ

gives Hamilton’s equations for the phase space of a particle, where dθ, the exterior derivative of the canonical 1-form θ, is the canonical symplectic 2-form. There is a naturally defined R^n-valued 1-form on LM, the soldering form, given by

θ(X) ≡ u⁻¹[π_*(X)], ∀ X ∈ T_u LM

where the point u = (m, e_k) ∈ LM gives the isomorphism u : R^n → T_{π(u)}M by ξ^i r_i ↦ ξ^i e_i, where {r_i} is the standard basis of R^n. The R^n-valued 2-form dθ can be shown to be non-degenerate, that is,

X ⌟ dθ = 0 ⇔ X = 0

where we mean that each component of X ⌟ dθ is identically zero. Finally, since there is also a structure group on LM, there are also group transformation properties. Let ρ be the standard representation of GL(n, R) on R^n. Then it can be shown that the pullback of dθ under right translation by g ∈ GL(n, R) is R_g^* dθ = ρ(g⁻¹) · dθ.
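As a sanity check on the structure equation in the familiar scalar case (a sketch of ours, with a harmonic-oscillator Hamiltonian as an assumed example), one can solve X ⌟ ω = dH on T∗R symbolically and recover Hamilton’s equations:

```python
import sympy as sp

# Scalar (n = 1) check of the structure equation: on T*R with coordinates
# (q, p) and the 2-form omega = dq ^ dp (a sign convention chosen so the
# standard equations come out), solve X _| omega = dH for X.
# The Hamiltonian below is an assumed example (harmonic oscillator).
q, p, a, b = sp.symbols('q p a b')
H = (p**2 + q**2) / 2

omega = sp.Matrix([[0, 1], [-1, 0]])   # omega_{qp} = 1 = -omega_{pq}
X = sp.Matrix([a, b])                  # unknown field X = a d/dq + b d/dp
dH = sp.Matrix([sp.diff(H, q), sp.diff(H, p)])

# Contraction: (X _| omega)_j = X^i omega_{ij}, i.e. omega^T * X.
sol = sp.solve(list(omega.T * X - dH), [a, b], dict=True)[0]
print("dq/dt =", sol[a])   # -> p
print("dp/dt =", sol[b])   # -> -q   (Hamilton's equations)
```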

Thus, we have an R^n-valued generalization of symplectic geometry, which motivates the following definition.

Let P be a principal fiber bundle with structure group G over an m-dimensional manifold M. Let ρ : G → GL(n, R) be a linear representation of G. An n-symplectic structure on P is an R^n-valued 2-form ω on P that is (i) closed and non-degenerate, in the sense that

X ⌟ ω = 0 ⇔ X = 0

for a vector field X on P, and (ii) equivariant, such that under the right action of G, R_g^* ω = ρ(g⁻¹) · ω. The pair (P, ω) is called an n-symplectic manifold.


Here, we have modeled n-symplectic geometry on the frame bundle by defining the general n-symplectic manifold as a principal bundle. There is no reason, however, to limit ourselves to this, since we can let P be any manifold with a group action defined on it. One example would be the action of the conformal group on R^4. Since this group is locally isomorphic to O(2, 4), which is not a subgroup of GL(4, R), an O(2, 4) bundle over R^4 cannot be thought of as simply a reduction of the frame bundle.

Vector Fields Tangent to the Surfaces of Foliation. Note Quote.


Although we are interested in gauge field theories, we will mainly use the language of mechanics, that is, of a finite number of degrees of freedom, which is sufficient for our purposes. A quick switch to the field-theory language can be achieved by using DeWitt’s condensed notation. Consider, as our starting point, a time-independent first-order Lagrangian L(q, q̇) defined on configuration-velocity space TQ, that is, the tangent bundle of some configuration manifold Q that we assume to be of dimension n. Gauge theories rely on singular – as opposed to regular – Lagrangians, that is, Lagrangians whose Hessian matrix with respect to the velocities (where q stands, in a free index notation, for local coordinates on Q),

W_ij ≡ ∂²L/∂q̇^i ∂q̇^j —– (1)

is not invertible. Two main consequences are drawn from this non-invertibility. First, notice that the Euler-Lagrange equations of motion [L]_i = 0, with

[L]_i := α_i − W_ij q̈^j

and

α_i := ∂L/∂q^i − (∂²L/∂q̇^i ∂q^j) q̇^j

cannot be written in normal form, that is, with the accelerations isolated on one side, q̈ = f(q, q̇). This makes the usual theorems on the existence and uniqueness of solutions of ordinary differential equations inapplicable. Consequently, there may be points in the tangent bundle through which no solution passes, and others through which more than one solution passes.

The second consequence of the Hessian matrix being singular concerns the construction of the canonical formalism. The Legendre map from the tangent bundle TQ to the cotangent bundle – or phase space – T∗Q (we use the notation p̂(q, q̇) := ∂L/∂q̇),

FL : TQ → T∗Q —– (2)

(q, q̇) ↦ (q, p = p̂) —– (3)

is no longer invertible, because ∂p̂/∂q̇ = ∂²L/∂q̇∂q̇ is just the Hessian matrix. There then appears an issue concerning the projectability of structures from the tangent bundle to phase space: there will be functions defined on TQ that cannot be translated (projected) to functions on phase space. This feature of the formalism propagates in a corresponding way to the tensor structures: forms, vector fields, etc.

In order to better identify the problem and to obtain the conditions of projectability, we must be more specific. We will make a single assumption, which is that the rank of the Hessian matrix is constant everywhere. If this condition is not satisfied throughout the whole tangent bundle, we will restrict our considerations to a region of it, with the same dimensionality, where the condition holds. So we are assuming that the rank of the Legendre map FL is constant throughout TQ and equal to, say, 2n − k. The image of FL will be locally defined by the vanishing of k independent functions, φ_μ(q, p), μ = 1, 2, …, k. These functions are the primary constraints, and their pullback FL^*φ_μ to the tangent bundle is identically zero:

(FL^*φ_μ)(q, q̇) := φ_μ(q, p̂) = 0, ∀ (q, q̇) —– (4)

The primary constraints form a generating set of the ideal of functions that vanish on the image of the Legendre map. With their help it is easy to obtain a basis of null vectors for the Hessian matrix. Indeed, applying ∂/∂q̇^i to (4) we get

W_ij (∂φ_μ/∂p_j)|_{p=p̂} = 0, ∀ (q, q̇) —– (5)
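A worked toy case may help here (our example, not from the source): the Lagrangian L = ½(q̇^1 − q̇^2)^2 has a rank-1 Hessian and the single primary constraint φ = p_1 + p_2, and identity (5) can be checked directly:

```python
import sympy as sp

# Toy singular Lagrangian L = (v1 - v2)**2 / 2, with v = qdot (our example,
# not from the source).  n = 2, Hessian rank 1, one primary constraint.
v1, v2, p1, p2 = sp.symbols('v1 v2 p1 p2')
L = (v1 - v2)**2 / 2

W = sp.hessian(L, (v1, v2))                # Hessian w.r.t. the velocities
print("rank W =", W.rank())                # 1 < n = 2 : singular

phat = [sp.diff(L, v1), sp.diff(L, v2)]    # the Legendre map: p̂ = ∂L/∂v
phi = p1 + p2                              # primary constraint
print("phi on image:", sp.simplify(phi.subs({p1: phat[0], p2: phat[1]})))  # 0

dphi_dp = sp.Matrix([sp.diff(phi, p1), sp.diff(phi, p2)])
print("W * dphi/dp =", (W * dphi_dp).T)    # zero vector: identity (5)
```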

With this result in hand, let us consider some geometrical aspects of the Legendre map. We already know that its image in T∗Q is given by the primary constraints’ surface. A foliation in TQ is also defined, with each element given as the inverse image of a point in the primary constraints’ surface in T∗Q. One can easily prove that the vector fields tangent to the surfaces of the foliation are generated by

Γ_μ = (∂φ_μ/∂p_j)|_{p=p̂} ∂/∂q̇^j —– (6)

The proof goes as follows. Consider two neighboring points in TQ belonging to the same sheet, (q, q̇) and (q, q̇ + δq̇) (the configuration coordinates q must be the same because they are preserved by the Legendre map). Then, using the definition of the Legendre map, we must have p̂(q, q̇) = p̂(q, q̇ + δq̇), which implies, expanding to first order,

(∂p̂/∂q̇) δq̇ = 0

which identifies δq̇ as a null vector of the Hessian matrix (here expressed as ∂p̂/∂q̇). Since we already know a basis for such null vectors, (∂φ_μ/∂p_j)|_{p=p̂}, μ = 1, 2, …, k, it follows that the vector fields Γ_μ form a basis for the vector fields tangent to the foliation.

The knowledge of these vector fields is instrumental for addressing the issue of the projectability of structures. Consider a real-valued function f_L : TQ → R. It will – locally – define a function f_H : T∗Q → R iff it is constant on the sheets of the foliation, that is, when

Γ_μ f_L = 0, μ = 1, 2, …, k —– (7)

Equation (7) is the projectability condition we were looking for. We express it in the following way:

Γ_μ f_L = 0, μ = 1, 2, …, k ⇔ there exists f_H such that FL^*f_H = f_L
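Continuing the same toy Lagrangian L = ½(q̇^1 − q̇^2)^2 from above (again our illustration), the single null vector gives Γ = ∂/∂q̇^1 + ∂/∂q̇^2, and the projectability condition (7) sorts candidate functions on TQ directly:

```python
import sympy as sp

# Continuing the toy Lagrangian L = (v1 - v2)**2 / 2: the null vector (1, 1)
# gives Gamma = d/dv1 + d/dv2 acting on functions of the velocities.
v1, v2 = sp.symbols('v1 v2')

def Gamma(f):
    return sp.diff(f, v1) + sp.diff(f, v2)

print("Gamma(v1 - v2) =", Gamma(v1 - v2))  # 0: projectable (it equals p̂_1)
print("Gamma(v1 + v2) =", Gamma(v1 + v2))  # 2: not constant on the sheets,
                                           # hence defines no function on T*Q
```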