Microcausality


If e0 ∈ R1+1 is a future-directed timelike unit vector, and if e1 is the unique spacelike unit vector with e0e1 = 0 that “points to the right,” then coordinates x0 and x1 on R1+1 are defined by x0(q) := qe0 and x1(q) := qe1. The partial differential operator

□ := ∂²/∂x0² − ∂²/∂x1²

does not depend on the choice of e0.

The Fourier transform of the Klein-Gordon equation

(□ + m2)u = 0 —– (1)

where m > 0 is a given mass, is

(−p2 + m2)û(p) = 0 —– (2)

As a consequence, the support of û has to be a subset of the hyperbola Hm ⊂ R1+1 specified by the condition p2 = m2. One connected component of Hm consists of positive-energy vectors only; it is called the upper mass shell Hm+. The elements of Hm+ are the 4-momenta of classical relativistic point particles.

Denote by L1 the restricted Lorentz group, i.e., the connected component of the Lorentz group containing its unit element. In 1 + 1 dimensions, L1 coincides with the one-parameter Abelian group B(χ), χ ∈ R, of boosts. Hm+ is an orbit of L1 without fixed points. So if one chooses any point p′ ∈ Hm+, then there is, for each p ∈ Hm+, a unique χ(p) ∈ R with p = B(χ(p))p′. By construction, χ(B(ξ)p) = χ(p) + ξ, so the measure dχ on Hm+ is invariant under boosts and does not depend on the choice of p′.
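The statement that boosts act as translations in rapidity can be checked directly. A small numerical sketch (the mass and the rapidity values are arbitrary choices) parametrizes Hm+ by p(χ) = m(cosh χ, sinh χ):

```python
import math

m = 1.0  # illustrative mass

def p(chi):
    """Rapidity parametrization of the upper mass shell Hm+."""
    return (m * math.cosh(chi), m * math.sinh(chi))

def boost(xi, q):
    """Restricted Lorentz boost B(xi) acting on a 2-vector (q0, q1)."""
    q0, q1 = q
    return (math.cosh(xi) * q0 + math.sinh(xi) * q1,
            math.sinh(xi) * q0 + math.cosh(xi) * q1)

bp = boost(0.7, p(1.2))
# B(xi) p(chi) = p(chi + xi): boosts act as translations in rapidity,
# which is why the measure dchi on Hm+ is boost-invariant
```

Since B(ξ) only shifts χ, any χ-translation-invariant measure on Hm+ is automatically boost-invariant.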

For each p ∈ Hm+, the plane wave q ↦ e±ipq on R1+1 is a classical solution of the Klein-Gordon equation. The Klein-Gordon equation is linear, so if a+ and a− are, say, integrable functions on Hm+, then

F(q) := ∫Hm+ (a+(p)e−ipq + a−(p)eipq) dχ(p) —– (3)

is a solution of the Klein-Gordon equation as well. If the functions a± are not integrable, the field F may still be well defined as a distribution. As an example, put a± ≡ (2π)−1, then

F(q) = (2π)−1 ∫Hm+ (e−ipq + eipq) dχ(p) = π−1 ∫Hm+ cos(pq) dχ(p) =: Φ(q) —– (4)

and for a± ≡ ∓(2πi)−1, F equals

F(q) = (2πi)−1 ∫Hm+ (eipq − e−ipq) dχ(p) = π−1 ∫Hm+ sin(pq) dχ(p) =: ∆(q) —– (5)

Quantum fields are obtained by “plugging” classical field equations and their solutions into the well-known second quantization procedure. This procedure replaces the complex (or, more generally speaking, finite-dimensional vector) field values by linear operators in an infinite-dimensional Hilbert space, namely, a Fock space. The Hilbert space of the hermitian scalar field is constructed from wave functions that are considered as the wave functions of one or several particles of mass m. The single-particle wave functions are the elements of the Hilbert space H1 := L2(Hm+, dχ). Put the vacuum (zero-particle) space H0 equal to C, define the vacuum vector Ω := 1 ∈ H0, and define the N-particle space HN as the Hilbert space of symmetric wave functions in L2((Hm+)N, dNχ), i.e., all wave functions ψ with

ψ(pπ(1) ···pπ(N)) = ψ(p1 ···pN)

∀ permutations π ∈ SN. The bosonic Fock space H is defined by

H := ⊕N∈N HN.

The subspace

D := ∪M∈N ⊕0≤N≤M HN is called the finite-particle space.

The definition of the N-particle wave functions as symmetric functions endows the field with Bose–Einstein statistics. To each wave function φ ∈ H1, assign a creation operator a+(φ) by

a+(φ)ψ := CNφ ⊗s ψ, ψ ∈ D,

where ⊗s denotes the symmetrized tensor product and where CN is a constant.

Explicitly, in terms of wave functions,

(a+(φ)ψ)(p1 ···pN) = (CN/N) ∑ν=1,…,N φ(pν)ψ(p1 ···p̂ν ···pN) —– (6)

where the hat symbol indicates omission of the argument. This defines a+(φ) as a linear operator on the finite-particle space D.

The adjoint operator a(φ) := a+(φ)* is called an annihilation operator; it assigns to each ψ ∈ HN, N ≥ 1, the wave function a(φ)ψ ∈ HN−1 defined by

(a(φ)ψ)(p1 ···pN) := CN ∫Hm+ φ(p)ψ(p1 ···pN−1, p) dχ(p)

together with a(φ)Ω := 0; this suffices to specify a(φ) on D. Annihilation operators can also be defined for sharp momenta. Namely, one can assign to each p ∈ Hm+ an annihilation operator a(p) taking each ψ ∈ HN, N ≥ 1, to the wave function a(p)ψ ∈ HN−1 given by

(a(p)ψ)(p1 ···pN−1) := Cψ(p, p1 ···pN−1), ψ ∈ HN,

and assigning 0 ∈ H to Ω. Like a(φ), a(p) is well defined on the finite-particle space D as an operator, but its hermitian adjoint is ill-defined as an operator, since the symmetric tensor product of a wave function with a delta function is not a wave function.

Given any single-particle wave functions ψ, φ ∈ H1, the commutators [a(ψ), a(φ)] and [a+(ψ), a+(φ)] vanish by construction. It is customary to choose the constants CN in such a fashion that creation and annihilation operators exhibit the commutation relation

[a(φ), a+(ψ)] = ⟨φ, ψ⟩ —– (7)

which requires CN = √N. With this choice, all creation and annihilation operators are unbounded, i.e., they are not continuous.
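The commutation relation (7) can be verified in a small numerical model. The sketch below is not the wave-function construction above but the equivalent occupation-number representation, with the mass shell discretized to three illustrative momenta; states are dictionaries mapping occupation tuples to amplitudes, and all numerical values are arbitrary:

```python
import math

D = 3  # illustrative number of discretized mass-shell momenta

def ann(k, state):
    """Elementary annihilation operator a_k with the usual sqrt(n) factor."""
    out = {}
    for occ, amp in state.items():
        if occ[k] > 0:
            new = occ[:k] + (occ[k] - 1,) + occ[k+1:]
            out[new] = out.get(new, 0) + math.sqrt(occ[k]) * amp
    return out

def cre(k, state):
    """Elementary creation operator a_k+ with the sqrt(n+1) factor."""
    out = {}
    for occ, amp in state.items():
        new = occ[:k] + (occ[k] + 1,) + occ[k+1:]
        out[new] = out.get(new, 0) + math.sqrt(occ[k] + 1) * amp
    return out

def a(phi, state):       # smeared annihilator: antilinear in phi
    out = {}
    for k in range(D):
        for occ, amp in ann(k, state).items():
            out[occ] = out.get(occ, 0) + phi[k].conjugate() * amp
    return out

def a_dag(psi, state):   # smeared creator: linear in psi
    out = {}
    for k in range(D):
        for occ, amp in cre(k, state).items():
            out[occ] = out.get(occ, 0) + psi[k] * amp
    return out

def sub(x, y):
    return {kk: x.get(kk, 0) - y.get(kk, 0) for kk in set(x) | set(y)}

phi = [1 + 1j, 0.5, -2j]      # arbitrary single-particle wave functions
psi = [0.3, -1j, 1.0]
inner = sum(p.conjugate() * q for p, q in zip(phi, psi))

xi = {(1, 0, 2): 1.0}         # an arbitrary 3-particle basis state
comm = sub(a(phi, a_dag(psi, xi)), a_dag(psi, a(phi, xi)))
# [a(phi), a+(psi)] xi should equal <phi, psi> xi
```

With the √n factors in place, the commutator acts on the test vector as multiplication by ⟨φ, ψ⟩, as (7) demands; this is exact on any finite-particle state, not just approximately true.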

When defining the hermitian scalar field as an operator valued distribution, it must be taken into account that an annihilation operator a(φ) depends on its argument φ in an antilinear fashion. The dependence is, however, R-linear, and one can define the scalar field as a C-linear distribution in two steps.

For each real-valued test function φ on R1+1, define

Φ(φ) := a(φ̂|Hm+) + a+(φ̂|Hm+)

then one can define for an arbitrary complex-valued φ

Φ(φ) := Φ(Re(φ)) + iΦ(Im(φ))

Referring to (4), Φ is called the hermitian scalar field of mass m.

One then computes the commutator

[Φ(q), Φ(q′)] = i∆(q − q′) —– (8)

Referring to (5), this is to be read as an equation of distributions. The distribution ∆ vanishes outside the light cone, i.e., ∆(q) = 0 if q2 < 0: if q is spacelike, the integrand in (5) is odd with respect to reflection of Hm+ about a suitable p′ ∈ Hm+. Note that pq > 0 for all p ∈ Hm+ if q ∈ V+. The consequence is called microcausality: field operators located in spacelike separated regions commute (for the hermitian scalar field).
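The vanishing of ∆ at spacelike q can be seen numerically in the rapidity parametrization p(χ) = m(cosh χ, sinh χ): for q = (0, q1) the product pq = −m q1 sinh χ is odd in χ, so the integrand in (5) cancels. A sketch (mass, truncation length and grid size are illustrative choices):

```python
import math

m = 1.0  # illustrative mass

def delta(q0, q1, L=6.0, n=20001):
    """Truncated version of (5): (1/pi) * integral of sin(p.q) dchi over
    chi in [-L, L], with p(chi) = m(cosh chi, sinh chi) and
    p.q = p0*q0 - p1*q1 in 1+1-dimensional Minkowski space."""
    h = 2 * L / (n - 1)
    s = 0.0
    for i in range(n):
        chi = -L + i * h
        pq = m * (math.cosh(chi) * q0 - math.sinh(chi) * q1)
        w = 0.5 if i in (0, n - 1) else 1.0   # trapezoidal weights
        s += w * math.sin(pq)
    return s * h / math.pi

# q = (0, 2) is spacelike: p.q = -2m sinh(chi) is odd in chi,
# so the symmetric truncated integral cancels
val = delta(0.0, 2.0)
```

For a general spacelike q the cancellation works the same way after boosting q to the form (0, q1); for timelike q the integral is only conditionally convergent and a naive truncation is unreliable, so no timelike value is claimed here.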


Momentum Space Topology Generates Massive Fermions. Thought of the Day 142.0


Topological quantum phase transitions: The vacua at b0 ≠ 0 and b > M have Fermi surfaces. At b² > b0² + M², these Fermi surfaces have nonzero global topological charges N3 = +1 and N3 = −1. At the quantum phase transition occurring on the line b0 = 0, b > M (thick horizontal line) the Fermi surfaces shrink to the Fermi points with nonzero N3. At M² < b² < b0² + M² the global topology of the Fermi surfaces is trivial, N3 = 0. At the quantum phase transition occurring on the line b = M (thick vertical line), the Fermi surfaces shrink to points; and since their global topology is trivial, the zeroes disappear at b < M, where the vacuum is fully gapped. The quantum phase transition between the Fermi surfaces with and without topological charge N3 occurs at b² = b0² + M² (dashed line). At this transition, the Fermi surfaces touch each other, and their topological charges annihilate.
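The charge N3 is defined by a Green's-function integral around the node and is integer-valued. As a lower-dimensional caricature of how such integer charges arise, add, and annihilate, one can compute the winding number of a planar vector field around a Fermi point; the fields below are toy examples, not Standard Model Green's functions:

```python
import math

def winding(field, n=2000):
    """Accumulate the angle of a planar field along a circle around the node."""
    total, prev = 0.0, None
    for i in range(n + 1):
        t = 2 * math.pi * i / n
        fx, fy = field(math.cos(t), math.sin(t))
        ang = math.atan2(fy, fx)
        if prev is not None:
            d = ang - prev
            # unwrap jumps across the atan2 branch cut
            while d > math.pi:
                d -= 2 * math.pi
            while d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ang
    return round(total / (2 * math.pi))

n_plus = winding(lambda px, py: (px, py))                     # charge +1
n_minus = winding(lambda px, py: (px, -py))                   # charge -1
n_double = winding(lambda px, py: (px*px - py*py, 2*px*py))   # charge +2
```

The additivity of the integer charge is what the phase diagram exploits: a +1 node and a −1 node can merge and annihilate, just as the two Fermi surfaces do on the dashed line.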

What we have assumed here is that the Fermi point in the Standard Model above the electroweak energy scale is marginal, i.e. its total topological charge is N3 = 0. Since topology does not protect such a point, everything depends on symmetry, which is more subtle. In principle, one may expect that the vacuum is always fully gapped. This is supported by Monte Carlo simulations which suggest that in the Standard Model there is no second-order phase transition at finite temperature; instead one has either a first-order electroweak transition or a crossover, depending on the ratio of the masses of the Higgs and gauge bosons. This would actually mean that the fermions are always massive.

Such a scenario is consistent with momentum-space topology only if the total topological charge N3 is zero. However, from the point of view of momentum-space topology there is another scheme for describing the Standard Model. Let us assume that the Standard Model descends from a GUT with the SO(10) group. Here, the 16 Standard Model fermions form, at high energy, the 16-plet of the SO(10) group. All the particles of this multiplet are left-handed fermions. These are: four left-handed SU(2) doublets (the neutrino-electron doublet and 3 doublets of quarks) + eight left SU(2) singlets of anti-particles (antineutrino, positron and 6 anti-quarks). The total topological charge of the Fermi point at p = 0 is N3 = −16, and thus such a vacuum is topologically stable and protected against fermion mass. This topological protection works even if the SU(2) × U(1) symmetry is violated perturbatively, say, due to the mixing of different species of the 16-plet. Mixing of the left leptonic doublet with the left singlets (antineutrino and positron) violates SU(2) × U(1) symmetry, but this does not lead to annihilation of the Fermi points and mass formation, since the topological charge N3 is conserved.

What this means in a nutshell is that if the total topological charge of the Fermi surfaces is non-zero, the gap cannot appear perturbatively. It can only arise through a crucial reconstruction of the fermionic spectrum with an effective doubling of fermions. In the same manner, in the SO(10) GUT model mass generation can only occur non-perturbatively. The mixing of the left and right fermions requires the introduction of the right fermions, and thus the effective doubling of the number of fermions. The corresponding Gor’kov Green’s function in this case will be a (16 × 2) × (16 × 2) matrix. The nullification of the topological charge N3 = −16 occurs in exactly the same manner as in superconductors. In the extended (Gor’kov) Green’s function formalism appropriate below the transition, the topological charge of the original Fermi point is annihilated by the opposite charge N3 = +16 of the Fermi point of “holes” (right-handed particles).

This demonstrates that the mechanism of generation of mass of fermions essentially depends on the momentum space topology. If the Standard Model originates from the SO(10) group, the vacuum belongs to the universality class with the topologically non-trivial chiral Fermi point (i.e. with N3 ≠ 0), and the smooth crossover to the fully-gapped vacuum is impossible. On the other hand, if the Standard Model originates from the left-right symmetric Pati–Salam group such as SU(2)L × SU(2)R × SU(4), and its vacuum has the topologically trivial (marginal) Fermi point with N3 = 0, the smooth crossover to the fully-gapped vacuum is possible.

Black Hole Analogue: Extreme Blue Shift Disturbance. Thought of the Day 141.0

One major contribution of the theoretical study of black hole analogues has been to help clarify the derivation of the Hawking effect, which leads to a study of Hawking radiation in a more general context, one that involves, among other features, two horizons. There is an apparent contradiction in Hawking’s semiclassical derivation of black hole evaporation, in that the radiated fields undergo arbitrarily large blue-shifting in the calculation, thus acquiring arbitrarily large masses, which contravenes the underlying assumption that the gravitational effects of the quantum fields may be ignored. This is known as the trans-Planckian problem. A similar issue arises in condensed matter analogues such as the sonic black hole.


Sonic horizons in a moving fluid, in which the speed of sound is 1. The velocity profile of the fluid, v(z), attains the value −1 at two values of z; these are horizons for sound waves that are right-moving with respect to the fluid. At the right-hand horizon right-moving waves are trapped, with waves just to the left of the horizon being swept into the supersonic flow region v < −1; no sound can emerge from this region through the horizon, so it is reminiscent of a black hole. At the left-hand horizon right-moving waves become frozen and cannot enter the supersonic flow region; this is reminiscent of a white hole.

Consider sonic horizons in a one-dimensional fluid flow with the velocity profile depicted in the figure above: the two horizons are formed for sound waves that propagate to the right with respect to the fluid. The horizon on the right of the supersonic-flow region v < −1 behaves like a black-hole horizon for right-moving waves, while the horizon on the left of the supersonic-flow region behaves like a white-hole horizon for these waves. In such a system, the equation for a small perturbation φ of the velocity potential is

(∂t + ∂z v)(∂t + v ∂z)φ − ∂z²φ = 0 —– (1)

In terms of a new coordinate τ defined by

dτ := dt + v/(1 – v2) dz

(1) is the equation □φ = 0 of a scalar field in the black-hole-type metric

ds2 = (1 – v2)dτ2 – dz2/(1 – v2)
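This coordinate change can be checked numerically: substituting dτ = dt + v dz/(1 − v²) into the metric above reproduces the acoustic line element (1 − v²)dt² + 2v dt dz − dz² associated with (1) in the original (t, z) coordinates. A minimal check (the numerical values are arbitrary; any |v| ≠ 1 works):

```python
# Arbitrary test values for the quadratic-form identity
v, dt, dz = 0.4, 0.3, -0.7

dtau = dt + v * dz / (1 - v**2)                       # the coordinate change
lhs = (1 - v**2) * dtau**2 - dz**2 / (1 - v**2)       # black-hole-type metric
rhs = (1 - v**2) * dt**2 + 2 * v * dt * dz - dz**2    # acoustic form in (t, z)
# lhs and rhs agree identically in dt and dz
```

Expanding the square shows the identity holds term by term: the cross term 2v dt dz survives while the dz² pieces combine to −dz².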

Each horizon will produce a thermal spectrum of phonons with a temperature determined by the quantity that corresponds to the surface gravity at the horizon, namely the absolute value of the slope of the velocity profile:

kBT = ħα/(2π), α := |dv/dz| evaluated at v = −1 —– (2)
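Equation (2) is easy to evaluate. For an illustrative velocity gradient of α = 10⁶ s⁻¹ at the horizon (a made-up number, chosen only to set a scale), the predicted phonon temperature is about a microkelvin:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
k_B = 1.380649e-23       # Boltzmann constant, J/K (exact SI value)

alpha = 1.0e6            # illustrative velocity gradient at the horizon, 1/s
T = hbar * alpha / (2 * math.pi * k_B)   # equation (2)
# T comes out at roughly 1.2 microkelvin for this gradient
```

The linear dependence on α shows why laboratory analogues aim for the steepest achievable velocity profiles: the signal temperature scales directly with the gradient.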

Untitled

Hawking phonons in the fluid flow: Real phonons have positive frequency in the fluid-element frame, and for right-moving phonons this frequency ω − vk equals ω/(1 + v) = k. Thus in the subsonic-flow regions ω (conserved for each ray) is positive, whereas in the supersonic-flow region it is negative; k is positive for all real phonons. The frequency in the fluid-element frame diverges at the horizons – the trans-Planckian problem.

The trajectories of the created phonons are formally deduced from the dispersion relation of the sound equation (1). Geometrical acoustics applied to (1) gives the dispersion relation

ω − vk = ±k —– (3)

and Hamilton's equations

dz/dt = ∂ω/∂k = v ± 1 —– (4)

dk/dt = -∂ω/∂z = − v′k —– (5)

The left-hand side of (3) is the frequency in the frame co-moving with a fluid element, whereas ω is the frequency in the laboratory frame; the latter is constant for a time-independent fluid flow (“time-independent Hamiltonian” dω/dt = ∂ω/∂t = 0). Since the Hawking radiation is right-moving with respect to the fluid, we clearly must choose the positive sign in (3) and hence in (4) also. By approximating v(z) as a linear function near the horizons we obtain from (4) and (5) the ray trajectories. The disturbing feature of the rays is the behavior of the wave vector k: at the horizons the radiation is exponentially blue-shifted, leading to a diverging frequency in the fluid-element frame. These runaway frequencies are unphysical since (1) asserts that sound in a fluid element obeys the ordinary wave equation at all wavelengths, in contradiction with the atomic nature of fluids. Moreover the conclusion that this Hawking radiation is actually present in the fluid also assumes that (1) holds at all wavelengths, as exponential blue-shifting of wave packets at the horizon is a feature of the derivation. Similarly, in the black-hole case the equation does not hold at arbitrarily high frequencies because it ignores the gravity of the fields. For the black hole, a complete resolution of this difficulty will require inputs from the gravitational physics of quantum fields, i.e. quantum gravity, but for the dumb hole the physics is available for a more realistic treatment.
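The exponential blue-shift can be reproduced by integrating (4) and (5) with a linearized profile v(z) = −1 + αz near the right-hand horizon; α, the step size and the initial data below are illustrative choices, not physical parameters:

```python
import math

alpha = 1.0                          # illustrative slope of v at the horizon

def v(z):
    return -1.0 + alpha * z          # linearized profile near v = -1

def vp(z):
    return alpha

def deriv(z, k):
    # equations (4) and (5) with the + sign: dz/dt = v + 1, dk/dt = -v'k
    return v(z) + 1.0, -vp(z) * k

def rk4(z, k, dt):
    """One classical Runge-Kutta step for the coupled ray equations."""
    a1 = deriv(z, k)
    a2 = deriv(z + dt/2*a1[0], k + dt/2*a1[1])
    a3 = deriv(z + dt/2*a2[0], k + dt/2*a2[1])
    a4 = deriv(z + dt*a3[0], k + dt*a3[1])
    z += dt/6 * (a1[0] + 2*a2[0] + 2*a3[0] + a4[0])
    k += dt/6 * (a1[1] + 2*a2[1] + 2*a3[1] + a4[1])
    return z, k

z, k = 0.1, 1.0
for _ in range(1000):                # trace the ray backwards in time
    z, k = rk4(z, k, -0.001)
# analytically z(t) = z0*exp(alpha*t) and k(t) = k0*exp(-alpha*t), so after
# tracing back by unit time the wave vector has grown by a factor exp(alpha)
```

Tracing backwards in time, z collapses exponentially onto the horizon while k grows without bound: this is the runaway blue-shift that, continued indefinitely, leaves the linear wave equation's domain of validity.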

 

Adjacency of the Possible: Teleology of Autocatalysis. Thought of the Day 140.0


Given a network of catalyzed chemical reactions, a (sub)set R of such reactions is called:

  1. Reflexively autocatalytic (RA) if every reaction in R is catalyzed by at least one molecule involved in any of the reactions in R;
  2. F-generated (F) if every reactant in R can be constructed from a small “food set” F by successive applications of reactions from R;
  3. Reflexively autocatalytic and F-generated (RAF) if it is both RA and F.

The food set F contains molecules that are assumed to be freely available in the environment. Thus, an RAF set formally captures the notion of “catalytic closure”, i.e., a self-sustaining set supported by a steady supply of (simple) molecules from some food set….
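The maximal RAF subset of a network can be found by a reduction in the style of Hordijk and Steel: repeatedly discard reactions whose reactants are not generated from F, or that lack a catalyst among the generated molecules, until a fixed point is reached. A minimal sketch, in which the toy network and all molecule names are invented for illustration:

```python
def closure(food, reactions):
    """Molecules derivable from the food set by the given reactions."""
    mols = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= mols and not set(products) <= mols:
                mols |= set(products)
                changed = True
    return mols

def max_raf(food, reactions):
    """Keep only reactions that are F-generated and catalyzed from within."""
    R = list(reactions)
    while True:
        mols = closure(food, R)
        keep = [r for r in R
                if set(r[0]) <= mols and any(c in mols for c in r[2])]
        if len(keep) == len(R):
            return keep
        R = keep

# toy network: each reaction is (reactants, products, catalysts)
F = {"a", "b"}
r1 = (("a", "b"), ("c",), ("c",))   # produces its own catalyst
r2 = (("c", "a"), ("d",), ("b",))   # catalyzed by a food molecule
r3 = (("e",), ("f",), ("a",))       # reactant e is never available
raf = max_raf(F, [r1, r2, r3])      # r3 drops out; {r1, r2} is a RAF
```

Note that r1 illustrates the self-referential character of RA: its catalyst c only exists because r1 itself runs, yet the set {r1, r2} still satisfies both the RA and the F conditions.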

Stuart Kauffman begins with the Darwinian idea of the origin of life in a biological ‘primordial soup’ of organic chemicals and investigates the possibility of one chemical substance catalyzing the reaction of two others, forming new reagents in the soup. Such catalyses may, of course, form chains, so that one reagent catalyzes the formation of another catalyzing another, etc., and self-sustaining loops of reaction chains are an evident possibility in the appropriate chemical environment. A statistical analysis reveals that such catalytic reactions may form interdependent networks when the rate of catalyzed reactions per molecule approaches one, creating a self-organizing chemical cycle which he calls an ‘autocatalytic set’. When the rate of catalyses per reagent is low, only small local reaction chains form, but as the rate approaches one, the reaction chains in the soup suddenly ‘freeze’ so that what was a group of chains or islands in the soup now connects into one large interdependent network, constituting an ‘autocatalytic set’. Such an interdependent reaction network constitutes the core of the body definition unfolding in Kauffman, and its cyclic character forms the basic precondition for self-sustainment. An ‘autonomous agent’ is an autocatalytic set able to reproduce and to undertake at least one thermodynamic work cycle.

This definition implies two things: reproduction possibility, and the appearance of completely new, interdependent goals in work cycles. The latter idea requires the ability of the autocatalytic set to save energy in order to spend it in its own self-organization, in its search for reagents necessary to uphold the network. These goals evidently introduce a – restricted, to be sure – teleology defined simply by the survival of the autocatalytic set itself: actions supporting this have a local teleological character. Thus, the autocatalytic set may, as it evolves, enlarge its cyclic network by recruiting new subcycles supporting and enhancing it in a developing structure of subcycles and sub-sub-cycles. 

Kauffman proposes that the concept of ‘autonomous agent’ implies a whole new cluster of interdependent concepts. Thus, the autonomy of the agent is defined by ‘catalytic closure’ (any reaction in the network demanding catalysis will get it) which is a genuine Gestalt property in the molecular system as a whole – and thus not in any way derivable from the chemistry of single chemical reactions alone.

Kauffman’s definitions on the basis of speculative chemistry thus entail not only the Kantian cyclic structure, but also the primitive perception and action phases of Uexküll’s functional circle. Thus, Kauffman’s definition of the organism in terms of an ‘autonomous agent’ basically builds on an Uexküllian intuition, namely the idea that the most basic property in a body is metabolism: the constrained, organizing processing of high-energy chemical material and the correlated perception and action performed to localize and utilize it – all of this constituting a metabolic cycle coordinating the organism’s in- and outside, defining teleological action. Perception and action phases are so to speak the extension of the cyclical structure of the closed catalytical set to encompass parts of its surroundings, so that the circle of metabolism may only be completed by means of successful perception and action parts.

The evolution of autonomous agents is taken as the empirical basis for the hypothesis of a general thermodynamic regularity based on non-ergodicity: the Big Bang universe (and, consequently, the biosphere) is not at equilibrium and will not reach equilibrium during the life-time of the universe. This gives rise to Kauffman’s idea of the ‘adjacent possible’. At a given point in evolution, one can define the set of chemical substances which do not exist in the universe but which are at a distance of only one chemical reaction from a substance already existing in the universe. Biological evolution has, evidently, led to an enormous growth of types of organic macromolecules, and new such substances come into being every day. Maybe there is a sort of chemical potential leading from the actually realized substances into the adjacent possible which is in some sense driving the evolution? In any case, Kauffman advances the hypothesis that the biosphere as such is supercritical, in the sense that there is, in general, more than one reaction catalyzed by each reagent. Cells, in order not to be destroyed by this chemical storm, must be internally subcritical (even if close to the critical boundary). But if the biosphere as such is, in fact, supercritical, then this distinction seemingly a priori necessitates the existence of a boundary of the agent, protecting it against the environment.

Bacteria’s Perception-Action Circle: Materiality of the Ontological. Thought of the Day 136.0


The unicellular organism has thin filaments protruding from its cell membrane, and in the absence of any stimuli, it simply wanders randomly around by changing between two characteristic movement patterns. One is performed by rotating the flagella counterclockwise. In that case, they form a bundle which pushes the cell forward along a curved path, a ‘run’ of random duration, with these runs interchanging with ‘tumbles’ in which the flagella shift to clockwise rotation, making them work independently and hence moving the cell erratically around with small net displacement. The biased random walk now consists in the fact that in the presence of a chemical attractant, the runs happening to carry the cell closer to the attractant are extended, while runs in other directions are not. The sensation of the chemical attractant is performed temporally rather than spatially, because the cell moves too rapidly for concentration comparisons between its two ends to be possible. A chemical repellent in the environment gives rise to an analogous behavioral structure – now the biased random walk takes the cell away from the repellent. The bias saturates very quickly – which is what prevents the cell from continuing in a ‘false’ direction, because a higher concentration of attractant will now be needed to repeat the bias. The reception system has three parts, one detecting repellents such as leucine, another detecting sugars, the third oxygen and oxygen-like substances.
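The biased random walk can be caricatured in a few lines: runs whose direction happens to go up the attractant gradient are extended, tumbles reorient the cell at random, and the cell drifts up-gradient on average. All numbers below are arbitrary modeling choices, not measured E. coli parameters:

```python
import random

random.seed(0)

def chemotax(steps, biased):
    """1-D caricature: attractant concentration increases with x; runs that
    carry the cell up-gradient are extended when `biased` is True."""
    x, d = 0.0, 1
    for _ in range(steps):
        up_gradient = (d > 0)
        run_len = 5 if (biased and up_gradient) else 2   # extended runs
        x += d * run_len
        d = random.choice((1, -1))   # tumble: pick a new random direction
    return x

drift = sum(chemotax(200, True) for _ in range(200)) / 200
nodrift = sum(chemotax(200, False) for _ in range(200)) / 200
# the biased walk drifts toward higher concentration; the unbiased one does not
```

The point of the sketch is that no directional steering is needed at any moment: a purely temporal modulation of run length, exactly as described above, already produces net transport toward the attractant.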


The cell’s behavior forms a primitive, if full-fledged, example of von Uexküll’s functional circle connecting specific perception signs and action signs. Functional circle behavior is thus not the privilege of animals equipped with central nervous systems (CNS). Both types of signs involve categorization. First, the sensory receptors of the bacterium are evidently organized according to a categorization of certain biologically significant chemicals, while most chemicals that remain insignificant for the cell’s metabolism and survival are ignored. The self-preservation of metabolism and cell structure is hence the ultimate regulator, which is supported by the perception-action cycles described. The categorization inherent in the very structure of the sensors is mirrored in the categorization of act types. Three act types are outlined: a null-action, composed of random running and tumbling, and two mirroring biased variants triggered by attractants and repellents, respectively. Moreover, a negative feedback loop governed by quick satiation ensures that the window of concentration shifts to which the cell is able to react appropriately is large – it so to speak calibrates the sensory system so that it does not remain blinded by one perception and does not keep moving the cell forward in one selected direction. This adaptation of the system ensures that it works across a large range of different attractant/repellent concentrations. The simple signals at stake in the cell’s functional circle display an important property: at simple biological levels, the distinction between signs and perception vanishes – that distinction is supposedly only relevant for higher, CNS-based animals. Here, the signals are based on categorical perception – a perception which immediately categorizes the entity perceived and thus remains blind to internal differences within the category.


The mechanism by which the cell identifies sugar is partly identical to what goes on in human taste buds. Sensation of sugar gradients must, of course, differ from the consumption of it – while the latter destroys the sugar molecule, the former merely reads an ‘active site’ on the outside of the macromolecule. E. coli – exactly like us – may be fooled by artificial sweeteners bearing the same ‘active site’ on their outer perimeter, even though they are completely different chemicals (this is, of course, the secret behind such sweeteners: they are not sugars and hence do not enter the digestion process carrying the energy of carbohydrates). This implies that E. coli may be fooled. Bacteria may not lie, but a simpler process than lying (which presupposes two agents and the ability of being fooled) is, in fact, being fooled (presupposing, in turn, only one agent and an ambiguous environment). E. coli has the ability to categorize a series of sugars – but, by the same token, the ability to categorize a series of irrelevant substances along with them. On the one hand, the ability to recognize and categorize an object by a surface property only (due to the weak van der Waals bonds and hydrogen bonds to the ‘active site’, in contrast to the strong covalent bonds holding the molecule together) facilitates perception economy and quick action adaptability. On the other hand, the economy involved in judging objects from their surface only has an unavoidable flip side: it involves the possibility of mistake, of being fooled by allowing impostors into your categorization. So already in the perception-action circle of a bacterium we find the self-regulatory stability of a metabolism involving categorized signal and action engagement with the surroundings – a structure that extends, through intercellular communication in multicellular organisms, to the complicated perception and communication of higher animals.

The Biological Kant. Note Quote.


The biological treatise takes as its object the realm of physics left out of Kant’s critical demarcations of scientific, that is, mathematical and mechanistic, physics. Here, the main idea was that scientifically understandable Nature was defined by lawfulness. In his Metaphysical Foundations of Natural Science, this idea was taken further in the following claim:

I claim, however, that there is only as much proper science to be found in any special doctrine on nature as there is mathematics therein, and further ‘a pure doctrine on nature about certain things in nature (doctrine on bodies and doctrine on minds) is only possible by means of mathematics’.

The basic idea is thus to identify Nature’s lawfulness with its ability to be studied by means of mathematical schemata uniting understanding and intuition. The central schema, to Kant, was numbers, so apt for the understanding of mechanically caused movement. But already here, Kant is very well aware that a whole series of aspects of spontaneously experienced Nature is left out of sight by the concentration on matter in movement, and he calls for these further realms of Nature to be studied by a continuation of the Copernican turn, by the mind’s further study of the utmost limits of itself. Why do we spontaneously see natural purposes in Nature? Purposiveness is wholly different from necessity, which is crucial to Kant’s definition of Nature. There is no reason in the general concept of Nature (as lawful) to assume that nature’s objects may serve each other as purposes. Nevertheless, we do not stop assuming just that. But what we do when we ascribe purposes to Nature is to use the faculties of mind in another way than in science, much closer to the way we use them in the appreciation of beauty and art, the object of the first part of the book immediately before the treatment of teleological judgment. This judgment is characterized by a central distinction, already widely argued in that first part: the difference between determinative and reflective judgments. While determinative judgment, used scientifically, decides whether a specific case follows a certain rule, explaining by means of a derivation from a principle and thus constituting the objectivity of the object in question, reflective judgment lacks all these features. It does not proceed by means of explanation, but by mere analogy; it is not constitutive, but merely regulative; it does not prove anything but merely judges, and it has no principle of reason to rest its head upon but the very act of judging itself.
These ideas are elaborated throughout the critique of teleological judgment.


In the section Analytik der teleologischen Urteilskraft, Kant gradually approaches the question: first the merely formal purposiveness is treated. We may ascribe purposes to geometry in so far as it is useful to us, just as rivers carrying fertile soils with them for trees to grow in may be ascribed purposes; these are, however, merely contingent purposes, dependent on an external telos. The crucial point is the existence of objects which are only possible as such in so far as they are defined by purposes:

That its form is not possible according to mere natural laws, that is, such things as may not be known by us through understanding applied to objects of the senses; on the contrary, that even the empirical knowledge about them, regarding their cause and effect, presupposes concepts of reason.

The idea here is that in order to conceive of objects which may not be explained with reference to understanding and its (in this case, mechanical) concepts only, these must be grasped by the non-empirical ideas of reason itself. If causes are perceived as being interlinked in chains, then such contingencies are to be thought of only as small causal circles on the chain, that is, as things being their own cause. Hence Kant’s definition of the Idea of a natural purpose:

an object exists as natural purpose, when it is cause and effect of itself.

This can be thought of as an Idea without contradiction, Kant maintains, but not conceived. This circularity (the small causal circles) is a very important feature in Kant’s tentative schematization of purposiveness. Another way of coining this Idea is: things as natural purposes are organized beings. This entails that naturally purposeful objects must possess a certain spatio-temporal construction: the parts of such a thing must be possible only through their relation to the whole – and, conversely, the parts must actively connect themselves to this whole. Thus, the corresponding idea can be summed up as the Idea of the Whole which is necessary to pass judgment on any empirical organism, and it is very interesting to note that Kant sums up the determination of any part of a Whole by all other parts in the phrase that a natural purpose is possible only as an organized and self-organizing being. This is probably the very birth certificate of the metaphysics of self-organization. It is important to keep in mind that Kant does not feel any vitalist temptation to suppose any organizing power or any autonomy on the part of the whole, which may come into being only by this process of self-organization between its parts. When Kant talks about the forming power in the formation of the Whole, it is thus nothing outside of this self-organization of its parts.

This leads to Kant’s final definition: an organized being is that in which all that is alternating is ends and means. This idea is extremely important as a formalization of the idea of teleology: natural purposes do not imply that there exist given, stable ends for nature to pursue; on the contrary, such ends are locally defined by causal cycles, in which every part interchangeably assumes the role of ends and means. Thus, there is no absolute end in this construal of nature’s teleology; it analyzes teleology formally at the same time as it relativizes it with respect to substance. Kant takes care to note that this maxim need not be restricted to the beings – animals – which we spontaneously tend to judge as purposeful. The idea of natural purposes thus entails that there might exist a plan in nature rendering purposeful for us even processes which we have every reason to find repugnant. In this vision, teleology might embrace causality – and even aesthetics:

Also natural beauty, that is, its harmony with the free play of our epistemological faculties in the experience and judgment of its appearance, can be seen as a kind of objective purposiveness of nature in its totality as system, in which man is a member.

An important consequence of Kant's doctrine is that teleology is, so to speak, secularized in two ways: (1) it is formal, and (2) it is local. It is formal because self-organization does not ascribe any special, substantial goal for organisms to pursue – other than the maintenance of self-organization itself. Teleology is thus merely a formal property of certain types of systems. This is also why teleology is local – it is to be found in certain systems where the causal chain forms loops, as Kant metaphorically describes the cycles involved in self-organization – and there is no overarching goal governing organisms from the outside. Teleology is a local, bottom-up process only.

Kant does not in any way doubt the existence of organized beings; what is at stake is the possibility of dealing with them scientifically in terms of mechanics. Even if they exist as given things in experience, natural purposes cannot receive any concept. This implies that biology is evident insofar as the existence of organisms cannot be doubted, yet biology will never rise to the heights of science: its attempts at doing so are delimited beforehand, all scientific explanations of organisms being bound to be mechanical. Followed through, this line of argument corresponds very well to present-day reductionism in biology, which tries to trace all problems of phenotypical characters, organization, morphogenesis, behavior, ecology, etc. back to the biochemistry of genetics. But the other side of the argument is that no matter how successful this reduction may prove, it will never be able to reduce or replace the teleological point of view necessary in order to understand the organism as such in the first place.

Evidently, there is something deeply unsatisfactory in this conclusion, which is why most biologists have hesitated to adopt it and cling either to full-blown reductionism or to some brand of vitalism, subjecting themselves to the dangers of 'transcendental illusion' and allowing for some Goethe-like intuitive idea without any schematization. Kant tries to soften the question by philosophical means, establishing a crossing over from metaphysics to physics – from the metaphysical constraints on mechanical physics to physics in its empirical totality, including the organized beings of biology. Pure mechanics leaves physics as a whole unorganized, and this organization is sought to be established by means of 'mediating concepts'. Among them is the formative power, which is not conceived in a vitalist, substantialist manner, but rather as a notion referring to the means by which matter manages to self-organize. It thus comprehends not only biological organization, but macrophysical solid-matter physics as well. Here, he adds an important argument to the Critique of Judgment:

Because man is conscious of himself as a self-moving machine, without being able to further understand such a possibility, he can, and is entitled to, introduce a priori organic-moving forces of bodies into the classification of bodies in general and thus to distinguish mere mechanical bodies from self-propelled organic bodies.

Revisiting Catastrophes. Thought of the Day 134.0

The most explicit influence from mathematics in semiotics is probably René Thom's controversial theory of catastrophes, with philosophical and semiotic support from Jean Petitot. Catastrophe theory is but one of several formalisms in the broad field of qualitative dynamics (comprising also chaos theory, complexity theory, self-organized criticality, etc.). In all these cases, the theories in question are in a certain sense phenomenological, because the focus is on different types of qualitative behavior of dynamic systems grasped on a purely formal level, bracketing their causal determination on the deeper level. A widespread tool in these disciplines is phase space – a space defined by the variables governing the development of the system, so that this development may be mapped as a trajectory through phase space, each point on the trajectory mapping one global state of the system. This space may be inhabited by different types of attractors (attracting trajectories), repellors (repelling them), attractor basins around attractors, and borders between such basins characterized by different types of topological saddles, which may have a complicated topology.
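These notions can be made concrete in a few lines. The following sketch is our own illustration, not part of Thom's or Petitot's apparatus (the helper `phase_trajectory` and its parameters are hypothetical names): it integrates a damped pendulum with the Euler method and traces its path through phase space, where the trajectory spirals into a point attractor at the origin.

```python
import math

def phase_trajectory(theta0, omega0, damping=0.5, dt=0.01, steps=5000):
    """Trace a phase-space trajectory of a damped pendulum,
    theta'' = -sin(theta) - damping * theta', by Euler integration.
    Each point (theta, omega) is one global state of the system."""
    theta, omega = theta0, omega0
    path = [(theta, omega)]
    for _ in range(steps):
        theta, omega = (theta + dt * omega,
                        omega + dt * (-math.sin(theta) - damping * omega))
        path.append((theta, omega))
    return path

# Released from rest at 2.5 rad, the pendulum spirals toward (0, 0):
path = phase_trajectory(2.5, 0.0)
print(path[-1])
```

The final state lies arbitrarily close to the fixed point (0, 0), the attractor of this system; an undamped pendulum, by contrast, would circulate forever on a closed orbit.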

Catastrophe theory has its basis in differential topology, that is, the branch of topology concerned with the differential properties of functions that remain invariant under transformation. It is, more specifically, the so-called Whitney topology, whose invariants are points where the nth derivative of a function takes the value 0 – graphically corresponding to minima, maxima, turning tangents, and, in higher dimensions, various complicated saddles. Catastrophe theory takes its point of departure in singularity theory, whose object is the shift between types of such functions. It thus erects a distinction between an inner space – where the function varies – and an outer space of control variables charting the variation of that function, including where it changes type – where, e.g., it goes from having one minimum to having two minima, via a singular case with a turning tangent. The continuous variation of the control parameters thus corresponds to a continuous variation within one subtype of the function, until a singular point is reached where the function discontinuously, 'catastrophically', changes subtype. The philosophy-of-science interpretation of this formalism conceives the stable subtype of the function as representing the stable state of a system, and the passage of the critical point as the sudden shift to a new stable state. The configuration of control parameters thus provides a sort of map of the shift between continuous development and discontinuous 'jump'. Thom's semiotic interpretation of this formalism entails that typical catastrophic trajectories of this kind may be interpreted as stable process types, phenomenologically salient for perception and giving rise to basic verbal categories.

[Figure: the cusp catastrophe – diagrams (a), (b), and (c) described below]

One of the simpler catastrophes is the so-called cusp (a). It constitutes a meta-diagram, namely a diagram of the possible type-shifts of a simpler diagram (b), that of the equation ax⁴ + bx² + cx = 0. The upper part of (a) shows the so-called fold, charting the manifold of solutions to the equation in the three dimensions a, b and c. By the projection of the fold onto the a,b-plane, the pointed figure of the cusp (lower a) is obtained. The cusp now charts the type-shift of the function: inside the cusp, the function has two minima; outside it, only one minimum. Different paths through the cusp thus correspond to different variations of the equation obtained by varying the external variables a and b. One such typical path is indicated by the left-right arrow on all four diagrams; it crosses the cusp from inside out, giving rise to a diagram of a further level (c) – depending on the interpretation of the minima as simultaneous states. Here, then, we find diagram transformations on three different, nested levels.
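The type-shift can be checked numerically. The sketch below assumes the standard normal form of the cusp potential, V(x) = x⁴ + ax² + bx – a common normalization of the quartic above, with the leading coefficient fixed to 1 and (a, b) as the two control variables; the function names are our own. The pointed bifurcation set of the cusp is then the curve 8a³ + 27b² = 0, and crossing it changes the number of minima from two to one.

```python
import numpy as np

def num_minima(a, b):
    """Count real minima of the potential V(x) = x^4 + a*x^2 + b*x.
    Critical points solve V'(x) = 4x^3 + 2a*x + b = 0; a critical
    point is a minimum when V''(x) = 12x^2 + 2a > 0."""
    roots = np.roots([4.0, 0.0, 2.0 * a, b])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return int(np.sum(12.0 * real**2 + 2.0 * a > 0))

def inside_cusp(a, b):
    """The bifurcation set (the cusp curve) is 8a^3 + 27b^2 = 0,
    obtained by eliminating x from V'(x) = 0 and V''(x) = 0.
    Points with 8a^3 + 27b^2 < 0 lie inside the pointed region."""
    return 8.0 * a**3 + 27.0 * b**2 < 0

# A left-right path at fixed a = -1 crossing the cusp from inside out:
for b in (-1.0, 0.0, 1.0):
    print(b, inside_cusp(-1.0, b), num_minima(-1.0, b))
```

At b = 0 the point (-1, 0) lies inside the cusp and the potential has two minima; at b = ±1 it lies outside and a single minimum remains – the discontinuous 'jump' occurs exactly where the path pierces the cusp curve.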

The concept of transformation plays several roles in this formalism. The most spectacular refers, of course, to the change in external control variables, determining a trajectory through phase space along which the controlled function changes type. This transformation thus probes the possibility of a change in the subtype of the function in question; that is, it plays the role of an eidetic variation mapping how the function is 'unfolded' (the basic theorem of catastrophe theory refers to such unfoldings of simple functions). Another transformation finds stable classes of such local trajectory pieces, including such shifts – making possible the recognition of such types of shift in different empirical phenomena. On the most empirical level, finally, one run of such a trajectory piece provides, in itself, a transformation of one state into another, whereby the two states are rationally interconnected. Generally, it is possible to make a given transformation the object of a higher-order transformation, which by abstraction may investigate aspects of the lower one's type and conditions. Thus, the central unfolding of a function germ in catastrophe theory constitutes a transformation having the character of an eidetic variation, making clear which possibilities lie in the function germ in question. As an abstract formalism, the higher of these transformations may determine the lower one as invariant across a series of empirical cases.

Complexity theory is a broader and more inclusive term covering the general study of the macro-behavior of composite systems, also using phase-space representation. The theoretical biologist Stuart Kauffman argues that in a phase space of all possible genotypes, biological evolution must unfold in a rather small and specifically qualified sub-space characterized by many closely located and stable states (corresponding to the possibility of a species 'jumping' to another, better genotype in the face of environmental change) – as opposed to phase-space areas with few, very stable states (which will be optimal only in certain very stable environments and thus fragile when exposed to change), and also opposed, on the other hand, to sub-spaces with a high plurality of merely metastable states (here, the species will tend to merge into neighboring species and hence never stabilize). On the basis of this argument, only a small subset of the set of virtual genotypes possesses 'evolvability', this special combination of plasticity and stability. The overall argument thus goes that order in biology is not a pure product of evolution; the possibility of order must be present in certain types of organized matter before selection begins – conversely, selection requires already organized material on which to work. The identification of a species with a co-localized group of stable states in genome space thus provides a (local) invariance for the transformation taking a trajectory through that space, and larger groups of neighboring stabilities – lineages – again provide invariants defined by various more or less general transformations. Species, in this view, are in a certain limited sense 'natural kinds' and thus naturally signifying entities. Kauffman's speculations over genotypical phase space have a crucial bearing on a transformation concept central to biology, namely mutation.
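Kauffman's standard formal vehicle for such genotype spaces is the NK model. The minimal sketch below (parameter choices and the function name are our own, not Kauffman's code) counts the local optima of a random NK fitness landscape under single-bit mutations, illustrating how increasing the epistatic coupling K makes the landscape rugged, with many locally stable genotypes rather than one smooth peak.

```python
import itertools, random

def nk_local_optima(N=10, K=2, seed=0):
    """Kauffman's NK model: each of N binary loci contributes a random
    fitness value that depends on its own state and the states of K
    neighbours (here: the next K loci, cyclically). Returns how many
    of the 2^N genotypes are local optima under single-bit flips."""
    rng = random.Random(seed)
    table = [{} for _ in range(N)]  # lazily filled random fitness tables

    def fitness(g):
        total = 0.0
        for i in range(N):
            pattern = tuple(g[(i + j) % N] for j in range(K + 1))
            if pattern not in table[i]:
                table[i][pattern] = rng.random()
            total += table[i][pattern]
        return total / N

    optima = 0
    for g in itertools.product((0, 1), repeat=N):
        f = fitness(g)
        if all(fitness(g[:i] + (1 - g[i],) + g[i + 1:]) <= f
               for i in range(N)):
            optima += 1
    return optima

# K = 0: independent loci, a single smooth peak.
# K = 4: epistatic coupling, many locally stable genotypes.
print(nk_local_optima(K=0), nk_local_optima(K=4))
```

With K = 0 every locus can be optimized independently, so exactly one genotype is stable; raising K multiplies the stable states, producing the kind of densely populated, co-localized stability that Kauffman's evolvability argument requires.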
On this basis, far from all virtual mutations are really possible – even apart from their degree of environmental relevance. A mutation into a stable but remotely placed species in phase space will be impossible (evolution cannot cross the distance in phase space), just as a mutation into an area with many unstable proto-species will not allow for any stabilization of species at all and will thus fall prey to arbitrarily small environmental variations. Kauffman takes a spontaneous and non-formalized transformation concept (mutation) and attempts a formalization by investigating its conditions of possibility as movement between stable genomes in genotype phase space. A series of constraints turn out to determine type formation on a higher level (the three different types of local geography in phase space). If the trajectory of mutations must obey the possibility of walking between stable species, then the space of possible trajectories is highly limited. Self-organized criticality as developed by Per Bak (How Nature Works: The Science of Self-Organized Criticality) belongs to the same type of theory. Criticality is here defined as that state of a complicated system in which sudden developments of all sizes spontaneously occur.
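Bak's canonical example is the sandpile model. The sketch below (grid size, grain count, and the function name are our own choices) drops grains one by one onto a finite grid; any site holding four grains topples, passing one grain to each neighbour, and the number of topplings triggered by each dropped grain is recorded as an avalanche size. Once the pile has self-organized to its critical state, these sizes range from zero up to system-spanning events.

```python
import random

def sandpile_avalanches(n=20, grains=2000, seed=1):
    """Bak-Tang-Wiesenfeld sandpile: drop grains at random sites of an
    n x n grid; a site with 4 or more grains topples, sending one grain
    to each of its four neighbours (grains fall off the edges). Returns
    the avalanche size (number of topplings) per dropped grain."""
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(n), rng.randrange(n)
        grid[i][j] += 1
        topples = 0
        unstable = [(i, j)] if grid[i][j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < 4:
                continue  # already relaxed by an earlier toppling
            grid[x][y] -= 4
            topples += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = x + dx, y + dy
                if 0 <= u < n and 0 <= v < n:
                    grid[u][v] += 1
                    if grid[u][v] >= 4:
                        unstable.append((u, v))
        sizes.append(topples)
    return sizes

sizes = sandpile_avalanches()
print(max(sizes), sum(s > 0 for s in sizes))
```

No parameter is tuned to a critical value: the pile drives itself to the critical state, which is the point of the term 'self-organized' criticality.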