Revisiting Catastrophes. Thought of the Day 134.0

The most explicit influence from mathematics in semiotics is probably René Thom’s controversial theory of catastrophes, with philosophical and semiotic support from Jean Petitot. Catastrophe theory is but one of several formalisms in the broad field of qualitative dynamics (comprising also chaos theory, complexity theory, self-organized criticality, etc.). In all these cases, the theories in question are in a certain sense phenomenological, because the focus is on different types of qualitative behavior of dynamic systems, grasped on a purely formal level that brackets their causal determination at the deeper level. A widespread tool in these disciplines is phase space – a space defined by the variables governing the development of the system, so that this development may be mapped as a trajectory through phase space, each point on the trajectory mapping one global state of the system. This space may be inhabited by different types of attractors (attracting trajectories), repellors (repelling them), attractor basins around attractors, and borders between such basins characterized by different types of topological saddles which may have a complicated topology.
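
As a minimal concrete illustration of these notions (a generic toy system, not one discussed in the text), the sketch below integrates a damped pendulum and follows its trajectory through a two-dimensional phase space of angle and angular velocity; the trajectory spirals into a point attractor, whose basin is the set of initial states that end up there.

```python
import numpy as np

def step(state, dt=0.01, damping=0.25):
    # One Euler step for a damped pendulum: theta'' = -sin(theta) - damping*theta'.
    theta, omega = state
    return np.array([theta + dt * omega,
                     omega + dt * (-np.sin(theta) - damping * omega)])

state = np.array([2.5, 0.0])      # one global state of the system: (angle, angular velocity)
trajectory = [state]
for _ in range(5000):
    state = step(state)
    trajectory.append(state)

# The trajectory through phase space spirals towards the point attractor (0, 0).
print("final state:", np.round(trajectory[-1], 4))
```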

Catastrophe theory has its basis in differential topology, that is, the branch of topology keeping various differential properties of a function invariant under transformation. It is, more specifically, the so-called Whitney topology whose invariants are points where the nth derivative of a function takes the value 0, graphically corresponding to minima, maxima, turning tangents, and, in higher dimensions, different complicated saddles. Catastrophe theory takes its point of departure in singularity theory whose object is the shift between types of such functions. It thus erects a distinction between an inner space – where the function varies – and an outer space of control variables charting the variation of that function including where it changes type – where, e.g., it goes from having one minimum to having two minima, via a singular case with a turning tangent. The continuous variation of control parameters thus corresponds to a continuous variation within one subtype of the function, until it reaches a singular point where it discontinuously, ‘catastrophically’, changes subtype. The philosophy-of-science interpretation of this formalism now conceives the stable subtype of function as representing the stable state of a system, and the passage of the critical point as the sudden shift to a new stable state. The configuration of control parameters thus provides a sort of map of the shift between continuous development and discontinuous ‘jump’. Thom’s semiotic interpretation of this formalism entails that typical catastrophic trajectories of this kind may be interpreted as stable process types phenomenologically salient for perception and giving rise to basic verbal categories.

[Figure: the cusp catastrophe – (a) the fold surface and its cusp projection, (b) the diagram of the equation ax⁴ + bx² + cx = 0, (c) the diagram derived from a path through the cusp]

One of the simpler catastrophes is the so-called cusp (a). It constitutes a meta-diagram, namely a diagram of the possible type-shifts of a simpler diagram (b), that of the equation ax⁴ + bx² + cx = 0. The upper part of (a) shows the so-called fold, charting the manifold of solutions to the equation in the three dimensions a, b and c. By the projection of the fold onto the a,b-plane, the pointed figure of the cusp (lower a) is obtained. The cusp now charts the type-shift of the function: inside the cusp, the function has two minima, outside it only one minimum. Different paths through the cusp thus correspond to different variations of the equation by the variation of the external variables a and b. One such typical path is the path indicated by the left-right arrow on all four diagrams, which crosses the cusp from inside out, giving rise to a diagram on a further level (c) – depending on the interpretation of the minima as simultaneous states. Here, thus, we find diagram transformations on three different, nested levels.
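
A hedged sketch of the type-shift just described, using the standard normal form of the cusp potential V(x) = x⁴ + ax² + bx (a rescaled version of the equation above, with the quartic coefficient normalized to one, which is an assumption of this sketch rather than the text's own parametrization): counting the minima of V over the (a, b) control plane recovers the cusp curve 8a³ + 27b² = 0, with two minima inside the pointed region and one outside.

```python
import numpy as np

def n_minima(a, b):
    # Minima of V(x) = x**4 + a*x**2 + b*x: real roots of V'(x) = 4x**3 + 2a*x + b
    # at which V''(x) = 12x**2 + 2a is positive.
    roots = np.roots([4.0, 0.0, 2.0 * a, b])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return int(np.sum(12.0 * real**2 + 2.0 * a > 0))

for a, b in [(-2.0, 0.0), (-2.0, 1.0), (1.0, 0.5), (-2.0, 3.0)]:
    inside = 8 * a**3 + 27 * b**2 < 0          # inside the pointed cusp region
    print(f"a={a:+.1f}, b={b:+.1f}: {n_minima(a, b)} minimum/minima, "
          f"{'inside' if inside else 'outside'} the cusp")
```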

The concept of transformation plays several roles in this formalism. The most spectacular one refers, of course, to the change in external control variables, determining a trajectory through phase space where the function controlled changes type. This transformation thus searches the possibility for a change of the subtypes of the function in question, that is, it plays the role of eidetic variation mapping how the function is ‘unfolded’ (the basic theorem of catastrophe theory refers to such unfolding of simple functions). Another transformation finds stable classes of such local trajectory pieces including such shifts – making possible the recognition of such types of shifts in different empirical phenomena. On the most empirical level, finally, one running of such a trajectory piece provides, in itself, a transformation of one state into another, whereby the two states are rationally interconnected. Generally, it is possible to make a given transformation the object of a higher order transformation which by abstraction may investigate aspects of the lower one’s type and conditions. Thus, the central unfolding of a function germ in Catastrophe Theory constitutes a transformation having the character of an eidetic variation making clear which possibilities lie in the function germ in question. As an abstract formalism, the higher of these transformations may determine the lower one as invariant in a series of empirical cases.

Complexity theory is a broader and more inclusive term covering the general study of the macro-behavior of composite systems, also using phase space representation. The theoretical biologist Stuart Kauffman argues that in a phase space of all possible genotypes, biological evolution must unfold in a rather small and specifically qualified sub-space characterized by many, closely located and stable states (corresponding to the possibility for a species to ‘jump’ to another and better genotype in the face of environmental change) – as opposed to phase space areas with few, very stable states (which will only be optimal in certain, very stable environments and thus fragile when exposed to change), and also opposed, on the other hand, to sub-spaces with a high plurality of only metastable states (here, the species will tend to merge into neighboring species and hence never stabilize). On the basis of this argument, only a small subset of the set of virtual genotypes possesses ‘evolvability’, this special combination of plasticity and stability. The overall argument thus goes that order in biology is not a pure product of evolution; the possibility of order must be present in certain types of organized matter before selection begins – conversely, selection requires already organized material on which to work. The identification of a species with a co-localized group of stable states in genome space thus provides a (local) invariance for the transformation taking a trajectory through space, and larger groups of neighboring stabilities – lineages – again provide invariants defined by various more or less general transformations. Species, in this view, are in a certain limited sense ‘natural kinds’ and thus naturally signifying entities. Kauffman’s speculations over genotypical phase space have a crucial bearing on a transformation concept central to biology, namely mutation. On this basis, far from all virtual mutations are really possible – even apart from their degree of environmental relevance. A mutation into a stable but remotely placed species in phase space will be impossible (evolution cannot cross the distance in phase space), just like a mutation in an area with many, unstable proto-species will not allow for any stabilization of species at all and will thus fall prey to arbitrarily small environmental variations. Kauffman takes a spontaneous and non-formalized transformation concept (mutation) and attempts a formalization by investigating its condition of possibility as movement between stable genomes in genotype phase space. A series of constraints turns out to determine type formation on a higher level (the three different types of local geography in phase space). If the trajectory of mutations must obey the possibility of walking between stable species, then the space of possibility of trajectories is highly limited.

Self-organized criticality as developed by Per Bak (How Nature Works: The Science of Self-Organized Criticality) belongs to the same type of theories. Criticality is here defined as that state of a complicated system where sudden developments of all sizes spontaneously occur.
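
A toy sketch in the spirit of Kauffman's argument (an NK-style rugged landscape with assumed, illustrative parameters, not his actual model or data): as the number K of epistatic couplings per gene grows, the genotype landscape shifts from a single smooth optimum towards a rugged multitude of local optima – the kind of qualitative difference between sub-spaces of genotype space that the evolvability argument turns on.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def fitness(genome, table, N, K):
    # NK-style fitness: each locus contributes a random value that depends on
    # its own state and the states of its K right-hand neighbours (cyclic).
    total = 0.0
    for i in range(N):
        neigh = [genome[(i + j) % N] for j in range(K + 1)]
        total += table[i, int("".join(map(str, neigh)), 2)]
    return total / N

def count_local_optima(N, K):
    table = rng.random((N, 2 ** (K + 1)))
    optima = 0
    for genome in itertools.product((0, 1), repeat=N):
        f = fitness(genome, table, N, K)
        # A genotype is a local optimum if no single-locus mutation improves it.
        improved = any(
            fitness(genome[:i] + (1 - genome[i],) + genome[i + 1:], table, N, K) > f
            for i in range(N)
        )
        optima += not improved
    return optima

N = 10
for K in (0, 2, 5, 9):
    print(f"K = {K}: {count_local_optima(N, K)} local optima among {2**N} genotypes")
```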

Fermi Surface Singularities

In ideal Fermi gases, the Fermi surface at p = pF = √(2μm) is the boundary in p-space between the occupied states (np = 1) at p²/2m < μ and empty states (np = 0) at p²/2m > μ. At this boundary (the surface in 3D momentum space) the energy is zero. What happens when the interaction between particles is introduced? Due to interaction the distribution function np of particles in the ground state is no longer exactly 1 or 0. However, it appears that the Fermi surface survives as the singularity in np. Such stability of the Fermi surface comes from a topological property of the one-particle Green’s function at imaginary frequency:

G⁻¹ = iω − p²/2m + μ —– (1)

Let us for simplicity skip one spatial dimension pz so that the Fermi surface becomes a line in 2D momentum space (px, py); this does not change the co-dimension of zeroes, which remains 1 = 3−2 = 2−1. The Green’s function has singularities lying on a closed line ω = 0, px² + py² = pF² in the 3D momentum-frequency space (ω, px, py). This is the line of a quantized vortex in momentum space, since the phase Φ of the Green’s function G = |G|e^(iΦ) changes by 2πN1 around any path embracing any element of this vortex line. In the considered case the phase winding number is N1 = 1. If we add the third momentum dimension pz, the vortex line becomes a surface in the 4D momentum-frequency space (ω, px, py, pz) – the Fermi surface – but again the phase changes by 2π along any closed loop embracing an element of this 2D surface in the 4D momentum-frequency space.
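
A small numerical check of this winding (a sketch with assumed units m = μ = 1, not taken from the source): accumulating the phase Φ of G around a small loop in the (ω, px) plane gives winding ±1 when the loop links the vortex line at (ω = 0, p = pF), the sign being fixed by the loop orientation, and 0 when it does not.

```python
import numpy as np

m, mu = 1.0, 1.0
pF = np.sqrt(2.0 * mu * m)

def G(omega, px, py=0.0):
    # Green's function (1) with pz omitted.
    return 1.0 / (1j * omega - (px**2 + py**2) / (2.0 * m) + mu)

def winding(center_p, radius=0.1, n=4000):
    # Total change of the phase of G around a circle in the (omega, px) plane,
    # divided by 2*pi.
    t = np.linspace(0.0, 2.0 * np.pi, n)
    vals = G(radius * np.sin(t), center_p + radius * np.cos(t))
    return round(np.diff(np.unwrap(np.angle(vals))).sum() / (2.0 * np.pi), 3)

print("loop linking the vortex at p = pF :", winding(pF))        # +/- 1
print("loop away from the vortex line    :", winding(0.5 * pF))  # 0
```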


The Fermi surface is a topological object in momentum space – a vortex loop. When the chemical potential μ decreases, the loop shrinks and disappears at μ < 0. The point μ = T = 0 marks the Lifshitz transition between the gapless ground state at μ > 0 and the fully gapped vacuum at μ < 0.

The winding number cannot change by continuous deformation of the Green’s function: the momentum-space vortex is robust toward any perturbation. Thus the singularity of the Green’s function on the Fermi surface is preserved, even when interaction between fermions is introduced. The invariant is the same for any space dimension, since the co-dimension remains 1.

The Green’s function is generally a matrix with spin indices. In addition, it may have band indices (in the case of electrons in the periodic potential of crystals). In such a case the phase of the Green’s function becomes meaningless; however, the topological property of the Green’s function remains robust. The general analysis demonstrates that topologically stable Fermi surfaces are described by the group Z of integers. The winding number N1 is expressed analytically in terms of the Green’s function:

N1 = tr ∮C (dl/2πi) G(μ,p) ∂l G⁻¹(μ,p) —– (2)

Here the integral is taken over an arbitrary contour C around the momentum-space vortex, and tr is the trace over the spin, band and/or other indices.
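
A numerical evaluation of (2) for the scalar case of (1), where the trace is trivial (a sketch with assumed units m = 1; the contour orientation is chosen so that a linking contour gives +1). Sweeping μ from positive to negative also shows the invariant dropping to zero across the Lifshitz transition discussed in the surrounding text, while a small smooth perturbation of G⁻¹ leaves it unchanged, in line with the robustness stated above.

```python
import numpy as np

m = 1.0

def Ginv(omega, px, mu, perturb=0.0):
    # Inverse propagator (1), plus an optional small smooth perturbation.
    return 1j * omega - px**2 / (2.0 * m) + mu + perturb * px

def N1(mu, center_p, radius=0.3, n=20000, perturb=0.0):
    # Discretised version of (2): N1 = (1/2*pi*i) * contour integral of G dG^-1
    # along a circle in the (omega, px) plane centred on (0, center_p).
    t = np.linspace(0.0, 2.0 * np.pi, n)
    ginv = Ginv(-radius * np.sin(t), center_p + radius * np.cos(t), mu, perturb)
    val = np.sum((1.0 / ginv[:-1]) * np.diff(ginv)) / (2j * np.pi)
    return round(val.real, 3)

pF = np.sqrt(2.0 * m * 1.0)                                  # Fermi momentum for mu = +1
print("mu = +1            :", N1(+1.0, pF))                  # contour links the vortex: 1
print("mu = +1, perturbed :", N1(+1.0, pF, perturb=0.05))    # still 1: topological robustness
print("mu = -1            :", N1(-1.0, pF))                  # fully gapped vacuum: 0
```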

The Fermi surface cannot be destroyed by small perturbations, since it is protected by topology and thus is robust to perturbations. But the Fermi surface can be removed by large perturbations, in processes which reproduce those occurring for the real-space counterpart of the Fermi surface – the loop of quantized vortex in superfluids and superconductors. The vortex ring can continuously shrink to a point and then disappear, or continuously expand and leave the momentum space. The first scenario occurs when one continuously changes the chemical potential from a positive to a negative value: at μ < 0 there is no vortex loop in momentum space and the ground state (vacuum) is fully gapped. The point μ = 0 marks the quantum phase transition – the Lifshitz transition – at which the topology of the energy spectrum changes. At this transition the symmetry of the ground state does not change. The second scenario of the quantum phase transition to the fully gapped state occurs when the inverse mass 1/m in (1) crosses zero.

Similar Lifshitz transitions from the fully gapped state to the state with a Fermi surface may occur in superfluids and superconductors. This happens, for example, when the superfluid velocity crosses the Landau critical velocity. The symmetry of the order parameter does not change across such a quantum phase transition. In the non-superconducting states, the transition from the gapless to the gapped state is the metal–insulator transition.


The Lifshitz transitions involving the vortex lines in p-space may occur between gapless states. They are accompanied by a change of the topology of the Fermi surface itself. The simplest example of such a phase transition, discussed in terms of the vortex lines, is provided by the reconnection of the vortex lines.


Ideological Morphology. Thought of the Day 105.1


When applied to generic fascism, the combined concepts of ideal type and ideological morphology have profound implications for both the traditional liberal and Marxist definitions of fascism. For one thing it means that fascism is no longer defined in terms of style – e.g. spectacular politics, uniformed paramilitary forces, the pervasive use of symbols like the fasces and the swastika – or organizational structure, but in terms of ideology. Moreover, the ideology is not seen as essentially nihilistic or negative (anti-liberalism, anti-Marxism, resistance to transcendence etc.), or as the mystification and aestheticization of capitalist power. Instead, it is constructed in the positive, but not apologetic or revisionist, terms of the fascists’ own diagnosis of society’s structural crisis and the remedies they propose to solve it, paying particular attention to the need to separate out the ineliminable, definitional conceptions from time- or place-specific adjacent or peripheral ones. However, for decades the state of fascist studies would have made Michael Freeden’s analysis well-nigh impossible to apply to generic fascism, because precisely what was lacking was any conventional wisdom embedded in common-sense usage of the term about what constituted the ineliminable cluster of concepts at its non-essentialist core. Despite a handful of attempts to establish its definitional constituents that combined deep comparative historiographical knowledge of the subject with a high degree of conceptual sophistication, there was a conspicuous lack of scholarly consensus over what constituted the fascist minimum. Whether there even was such an entity as generic fascism was one question to think through. Whether Nazism’s eugenic racism and the euthanasia campaign it led to, combined with a policy of physically eliminating racial enemies through systematic persecution and mass murder, was simply unique, too exceptional to be located within the generic category, was another. Both these positions suggest a naivety about the epistemological and ontological status of generic concepts most regrettable among professional intellectuals, since every generic entity is a utopian heuristic construct, not a real thing, and every historical singularity is by definition unique no matter how many generic terms can be applied to it. Other common positions that implied considerable naivety were the ones that dismissed fascism’s ideology as too irrational or nihilistic to be part of the fascist minimum, or generalized about its generic traits by blending fascism and Nazism.

Pareto Optimality

There are some solutions. (“If you don’t give a solution, you are part of the problem.”) Most important: human wealth should be set as the only goal in society and economy. Liberalism is ruinous for humans, while it may be optimal for fitter entities. Nobody is out there to take away the money of others without working for it – in a spirit of ‘revenge’ or ‘envy’ (basically justifying laziness), taking away the hard-earned money of others. No way. Nobody wants that. Thinking that yours can be the only way a rational person can think; that anybody not ‘winning’ the game is a ‘loser’. Some of us, actually, do not even want to enter the game.

Yet – the big dilemma – that money-grabbing mentality is essential for the economy. Without it we would be equally doomed. But, as we will see now, you will lose every last penny either way, even without divine intervention.

Having said that, the solution is to take away the money. Seeing that the system is not stable and accumulates capital in a big pile, disconnected from humans, mathematically there are two solutions:

1) Put all the capital in the hands of people. If a profit M′ − M is made, this profit falls into the hands of the people that caused it. This seems fair, and mathematically stable. However, how is the wealth then to be distributed? That would be the task of politicians, and history has shown that they are a worse pest than capital. Politicians, actually, always wind up representing the capital. No country in the world has ever managed to avoid it.

2) Let the system be as it is, which is great for giving people incentives to work and develop things, but at the end of the year, redistribute the wealth to follow an ideal curve that optimizes both wealth and increments of wealth.

The latter is an interesting idea, also since it does not need a rigorous restructuring of society, something that would only be possible after a total collapse of civilization. While such a collapse is unavoidable in the system we have, it would be better to act pro-actively and do something before it happens. Moreover, since money is air – or worse, vacuum – there is actually nothing that is ‘taken away’. Money is just a right to consume and can thus be redistributed at will if there is a just cause to do so. In normal cases this euphemistic word ‘redistribution’ amounts to theft and undermines incentives for work and production and thus causes poverty. Yet, if it can be shown to actually increase incentives to work, and thus increase overall wealth, it would need no further justification.

We set out to calculate this idea. However, it turned out to give quite remarkable results. Basically, the optimal distribution is slavery. Let us present the results here. Let’s look at the distribution of wealth. The figure below shows a curve of wealth per person, with the richest conventionally placed at the right and the poor on the left, to result in what is in mathematics called a monotonically increasing function. This virtual country has 10 million inhabitants and a certain wealth that ranges from nearly nothing to millions, but it can easily be mapped to any country.


Figure 1: Absolute wealth distribution function

As the overall wealth increases, it condenses over time at the right side of the curve. Left unchecked, the curve would become ever more skewed, ending eventually in a straight horizontal line at zero up to the uttermost right point, where it shoots up to an astronomical value. The integral of the curve (total wealth/capital M) always increases, but it eventually goes to one person. Here it is intrinsically assumed that wealth is still connected to people and has not, as in fact it does, become independent of people, become ‘capital’ autonomously by itself. If independent of people, this wealth can anyway be confiscated and redistributed without any form of remorse whatsoever. Ergo, only the system where all the wealth is owned by people needs to be studied.

A more interesting figure is the fractional distribution of wealth, with the normalized wealth w(x) plotted as a function of normalized population x (which thus runs from 0 to 1), once again with the richest plotted on the right. See the figure below.


Figure 2: Relative wealth distribution functions: ‘ideal communist’ (dotted line, constant distribution), ‘ideal capitalist’ (one person owns all, dashed line) and ‘ideal’ function (work-incentive optimized, solid line).

Every person x in this figure feels an incentive to work harder, because he/she wants to overtake his/her right-side neighbor and move to the right on the curve. We can define an incentive i(x) for work for person x as the derivative of the curve, divided by the curve itself (a person will work harder in proportion to the relative increase in wealth):

i(x) = (dw(x)/dx) / w(x) —– (1)

A ‘communistic’ (in the negative connotation) distribution is one in which everybody earns equally; that means that w(x) is constant, with the constant being one:

‘ideal’ communist: w(x) = 1,

and nobody has an incentive to work: i(x) = 0 ∀ x. However, in a utopian capitalist world, as shown, the distribution is ‘all on a big pile’. This is what mathematicians call a delta-function:

‘ideal’ capitalist: w(x) = δ(x − 1),

and once again, the incentive is zero for all people, i(x) = 0. If you work, or don’t work, you get nothing. Except one person who, working or not, gets everything.

Thus, there is somewhere an ‘ideal curve’ w(x) that optimizes the sum of incentives I, defined as the integral of i(x) over x:

I = ∫₀¹ i(x) dx = ∫₀¹ (dw(x)/dx)/w(x) dx = ∫₀¹ dw(x)/w(x) = ln[w(x)]|₀¹ —– (2)

Which function w is that? The boundary conditions are:

1. The total wealth is normalized: The integral of w(x) over x from 0 to 1 is unity.

∫₀¹ w(x) dx = 1 —– (3)

2. Everybody has at least a minimal income, defined as the survival minimum. (A concept that many societies actually implement.) We can call this w0, defined as a percentage of the total wealth, to make the calculation easy (every year this parameter can be reevaluated, for instance when the total wealth has increased, but not the minimum wealth needed to survive). Thus, w(0) = w0.

The curve also has an intrinsic parameter wmax. This represents the scale of the figure, and is the result of the other boundary conditions and therefore not really a parameter as such. The function basically has two parameters, minimal subsistence level w0 and skewness b.

As an example, we can try an exponentially rising function with offset that is forced to pass through the points (0, w0) and (1, wmax):

w(x) = w0 + (wmax − w0)(e^(bx) − 1)/(e^b − 1) —– (4)
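
A quick numerical check of (2)–(4) for this trial curve (w0 and b are assumed, illustrative values; wmax is fixed by the normalization (3)):

```python
import numpy as np

w0, b = 1.0 / 300, 4.0     # assumed survival minimum and skewness

# Impose normalisation (3) on the trial curve (4) to fix wmax:
# integral of (e^(bx) - 1)/(e^b - 1) over [0, 1] equals ((e^b - 1)/b - 1)/(e^b - 1).
c = ((np.exp(b) - 1.0) / b - 1.0) / (np.exp(b) - 1.0)
wmax = w0 + (1.0 - w0) / c

x = np.linspace(0.0, 1.0, 100001)
w = w0 + (wmax - w0) * (np.exp(b * x) - 1.0) / (np.exp(b) - 1.0)

print("normalisation (3):", round(np.trapz(w, x), 6))            # ~ 1
incentive = np.gradient(w, x) / w                                 # definition (1)
print("I by integration :", round(np.trapz(incentive, x), 4))     # ~ ln(wmax/w0), cf. (2)
print("ln(wmax/w0)      :", round(np.log(wmax / w0), 4))
```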

An example of such a function is given in the above Figure. To analytically determine which function is ideal is very complicated, but it can easily be simulated in a genetic algorithm way. In this, we start with a given distribution and make random mutations to it. If the total incentive for work goes up, we keep that new distribution. If not, we go back to the previous distribution.
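
A minimal sketch of the mutate-and-keep procedure just described, under stated assumptions: 30 persons and w0 = 1/300 as quoted below, a discrete incentive defined as the relative gain of overtaking the right-hand neighbour, and a repair step that re-imposes the minimum income, the ordering and the normalization after each mutation. The actual parameters and update rules behind figure 3 may well differ.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 30             # population size
w0 = 1.0 / 300     # survival minimum: 10% of the average wealth 1/N

def total_incentive(w):
    # Discrete analogue of (1): relative gain from overtaking the right-hand
    # neighbour, summed over everyone except the richest person.
    return np.sum((w[1:] - w[:-1]) / w[:-1])

def repair(w):
    # Re-impose the constraints: minimum income, monotone ordering (poorest
    # left, richest right) and total wealth normalised to 1.
    w = np.sort(np.maximum(w, w0))
    surplus = w - w0
    return w0 + surplus * (1.0 - N * w0) / surplus.sum()

w = repair(np.full(N, 1.0 / N))      # start from the 'communist' distribution
best = total_incentive(w)

for _ in range(200000):
    trial = w.copy()
    trial[rng.integers(N)] *= 1.0 + 0.1 * rng.standard_normal()  # random mutation
    trial = repair(trial)
    if total_incentive(trial) > best:                            # keep only improvements
        w, best = trial, total_incentive(trial)

print("final distribution:", np.round(w, 4))
print("total incentive   :", round(best, 2))
```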

The results are shown in figure 3 below for a 30-person population, with w0 = 10% of average (w0 = 1/300 = 0.33%).


Figure 3: Genetic algorithm results for the distribution of wealth (w) and incentive to work (i) in a liberal system where everybody only has money (wealth) as incentive. 

Depending on the starting distribution, the system winds up in different optima. If we start with the communistic distribution of figure 2, we wind up with a situation in which the distribution stays homogeneous, ‘everybody equal’, with the exception of two people: a ‘slave’ who earns the minimum wage and does nearly all the work, and a ‘party official’ who does not do much but gets a large part of the wealth. Everybody else is equally poor (total incentive/production equal to 21), w = 1/30 = 10w0, with most people doing nothing, nor being encouraged to do anything. The other situation we find when we start with a random or linearly increasing distribution. The final situation is shown as situation 2 of figure 3. It is equal to everybody getting the minimum wealth, w0, except the ‘banker’ who gets 90% (270 times more than the minimum), while nobody is doing anything – except, curiously, the penultimate person, whom we can call the ‘wheedler’, for cajoling the banker into giving him money. The total wealth is higher (156), but the average person gets less, w0.

Note that this isn’t necessarily an evolution of the distribution of wealth over time. Instead, it is a final, stable, distribution calculated with an evolutionary (‘genetic’) algorithm. Moreover, this analysis can be made within a country, analyzing the distribution of wealth between people of the same country, as well as between countries.

We thus find that a liberal system, moreover one in which people are motivated by the relative wealth increase they might attain, winds up with most of the wealth accumulated by one person who does not necessarily do any work. This is consistent with the tendency of liberal capitalist societies to have capital and wealth indeed accumulate in a single point, and consistent with Marx’s theories that predict it as well. A singularity of the distribution of wealth is what you get in a liberal capitalist society where personal wealth is the only driving force of people. Which is ironic, in a way, because by going only for personal wealth, nobody gets any of it, except the big leader. It is a form of the Prisoner’s Dilemma.

The Locus of Renormalization. Note Quote.


Since symmetries and the related conservation properties have a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken or new ones are obtained after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is the passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to a sudden common orientation of spins, up to the complete ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is, the iteration of a statistically invariant motif. Similarly for the formation of a snowflake… In all these circumstances, a “new physical object of observation” is formed. Most of the current analyses deal with transitions at equilibrium; the less studied and more challenging case of far from equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions, that is, mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. relevant infinity or divergences) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interactions of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.
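
As a toy illustration of the renormalization idea (a textbook real-space decimation for the one-dimensional Ising chain, not the field-theoretic methods discussed here): summing out every second spin – one scale interval of interactions at a time – yields a new effective coupling of the same form, and iterating the step traces the flow of that coupling from scale to scale.

```python
import math

def decimate(K):
    # One real-space RG step for the 1D Ising chain: summing over every second
    # spin renormalises the nearest-neighbour coupling K -> (1/2) ln cosh(2K).
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 1.5                      # some initial dimensionless coupling J/kT
for step in range(8):
    print(f"scale step {step}: K = {K:.6f}")
    K = decimate(K)
# The coupling flows to the trivial fixed point K = 0: at coarser and coarser
# scales the effective interaction weakens, the RG way of saying that the 1D
# chain has no finite-temperature critical point.
```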

In this situation, the intelligibility still has an “upward” flavor since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories, the parts are not really separable from the whole (this would mean to separate an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot be summed on (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained by the scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking only at molecules as the only pertinent observable of life is worse than reductionist, it is against the history of physics and its audacious unifications and invention of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system, a cell, genetic regulatory networks, the brain and brain slices … are “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done for the profound mathematical frame for physics. This does not forbid a scientific analysis of life. This may just be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as in physics. The point is that these are better understood and correlated, since the work of Noether and Weyl in the last century, as symmetries in the intended equations, where they express the underlying invariants and invariant preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expression, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections… And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable, also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions in an organism, in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure, that is, it changes the internal symmetries: the fish with the new bladder will swim differently, its heart-vascular system will relevantly change…

Organisms transform the ecosystem while transforming themselves and they can stand/do it because they have an internal preserved universe. Its stability is maintained also by slightly, yet constantly, changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into the permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to the biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human. It has some 150 different microbial species in the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not understand yet, the “community” in the intestines co-creates their worlds together, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained even if it may change by immigration of new microbial species into the community and extinction of old species in the community. With such community membership turnover, or community assembly, the phase space of the system is undergoing continual and open ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new unprestatable functions, where functions and their names make sense only in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One finds out a feature in a niche because one observes that if it goes away the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing over time these spaces and their internal relations. Mathematically, no symmetry (groups) nor (partial) order (semigroups) organize the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical nor logical problems in contemporary logic (this is so also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own border, presents technical difficulties for a careful equational description of its dynamics, but no mathematical nor logical impossibility: one has to optimize a highly non-linear and large action/reaction system, yielding a dynamically constructed geodetic, the river path, in perfectly known phase spaces (momentum and space or energy and time, say, as pertinent observables and variables).

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not just within prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but in the ever changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.

Bernard Cache’s Earth Moves: The Furnishing of Territories (Writing Architecture)


Take the concept of singularity. In mathematics, what is said to be singular is not a given point, but rather a set of points on a given curve. A point is not singular; it becomes singularized on a continuum. And several types of singularity exist, starting with fractures in curves and other bumps in the road. We will discount them at the outset, for singularities that are marked by discontinuity signal events that are exterior to the curvature and are themselves easily identifiable. In the same way, we will eliminate singularities such as backup points [points de rebroussement]. For though they are indeed discontinuous, they refer to a vector that is tangential to the curve and thus trace a symmetrical axis that is constitutive of the backup point. Whether it be a reflection of the tangential plane or a rebound with respect to the orthogonal plane, the backup point is thus not a basic singularity. It is rather the result of an operation effectuated on any part of the curve. Here again, the singular would be the sign of too noisy, too memorable an event, while what we want to do is to deal with what is most smooth: ordinary continua, sleek and polished.

On one hand there are the extrema, the maximum and minimum on a given curve. And on the other there are those singular points that, in relation to the extrema, figure as in-betweens. These are known as points of inflection. They are different from the extrema in that they are defined only in relation to themselves, whereas the definition of the extrema presupposes the prior choice of an axis or an orientation, that is to say of a vector.

Indeed, a maximum or a minimum is a point where the tangent to the curve is directed perpendicularly to the axis of the ordinates [y-axis]. Any new orientation of the coordinate axes repositions the maxima and the minima; they are thus extrinsic singularities. The point of inflection, however, designates a pure event of curvature where the tangent crosses the curve; yet this event does not depend in any way on the orientation of the axes, which is why it can be said that inflection is an intrinsic singularity. On either side of the inflection, we know that there will be a highest point and a lowest point, but we cannot designate them as long as the curve has not been related to the orientation of a vector. Points of inflection are singularities in and of themselves, while they confer an indeterminacy to the rest of the curve. Preceding the vector, inflection makes of each of the points a possible extremum in relation to its inverse: virtual maxima and minima. In this way, inflection represents a totality of possibilities, as well as an openness, a receptiveness, or an anticipation…
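
A small symbolic check of this distinction, using sympy and an assumed sample curve y = x³ − 3x (not from Cache’s text): the extrema move when the coordinate axes are rotated, while the zero of the curvature – the point of inflection – stays put.

```python
import sympy as sp

t, theta = sp.symbols('t theta', real=True)
f = t**3 - 3*t                                    # sample curve y = f(x)

print(sp.solve(sp.diff(f, t), t))                 # extrema: t = -1, 1
print(sp.solve(sp.diff(f, t, 2), t))              # inflection: t = 0

# Rotate the coordinate axes by an angle theta.
X = t*sp.cos(theta) - f*sp.sin(theta)
Y = t*sp.sin(theta) + f*sp.cos(theta)

# Extrema w.r.t. the rotated Y-axis: dY/dt = 0 -> their location depends on theta.
print(sp.solve(sp.Eq(sp.diff(Y, t), 0), t))

# Numerator of the signed curvature of the rotated curve: X'Y'' - Y'X''.
num = sp.diff(X, t)*sp.diff(Y, t, 2) - sp.diff(Y, t)*sp.diff(X, t, 2)
print(sp.trigsimp(sp.expand(num)))                # 6*t: the inflection stays at t = 0
```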

Bernard Cache, Earth Moves: The Furnishing of Territories

Topological Drifts in Deleuze. Note Quote.

Brion Gysin: How do you get in… get into these paintings?

William Burroughs: Usually I get in by a port of entry, as I call it. It is often a face through whose eyes the picture opens into a landscape and I go literally right through that eye into that landscape. Sometimes it is rather like an archway… a number of little details or a special spot of colours makes the port of entry and then the entire picture will suddenly become a three-dimensional frieze in plaster or jade or some other precious material.

The word fornix means “an archway” or “vault” (in Rome, prostitutes could be solicited there). More directly, fornicatio means “done in the archway”; thus a euphemism for prostitution.

Diagrammatic praxis proposes a contractual (push, pull) approach in which the movement between abstract machine, biogram (embodied, inflected diagram), formal diagram (drawing of, drawing off) and artaffect (realized thing) is topologically immanent. It imagines the practice of writing, of this writing, interleaved with the mapping processes with which it folds and unfolds – forming, deforming and reforming both processes. The relations of non-relations that power the diagram, the thought intensities that resonate between fragments, between content and expression, the seeable and the sayable, the discursive and the non-discursive, mark entry points; portals of entry through which all points of the diagram pass – push, pull, fold, unfold – without the designation of arrival and departure, without the input/output connotations of a black boxed confection. Ports, as focal points of passage, attract lines of resistance or lines of flight through which the diagram may become both an effectuating concrete assemblage (thing) and remain outside the stratified zone of the audiovisual. It’s as if the port itself is a bifurcating point, a figural inflected archway. The port, as a bifurcation point of resistance (contra black box), modulates and changes the unstable, turbulent interplay between pure Matter and pure Function of the abstract machine. These ports are marked out, localized, situated, by the continuous movement of power-relations:

These power-relations … simultaneously local, unstable and diffuse, do not emanate from a central point or unique locus of sovereignty, but at each moment move from one point to another in a field of forces, marking inflections, resistances, twists and turns when one changes direction or retraces one’s steps… (Gilles Deleuze, Sean Hand-Foucault)

An inflection point, marked out by the diagram, is not a symmetrical form but the difference between concavity and convexity, a pure temporality, a “true atom of form, the true object of geography.” (Bernard Cache)

Untitled

Figure: Left: A bifurcating event presented figurally as an archway, a port of entry through order and chaos. Right: Event/entry with inflexion points, points of suspension, of pure temporality, that gives a form “of an absolute exteriority that is not even the exteriority of any given interiority, but which arise from that most interior place that can be perceived or even conceived […] that of which the perceiving itself is radically temporal or transitory”. The passing through of passage.

Cache’s absolute exteriority is equivalent to Deleuze’s description of the Outside “more distant than any exterior […] ‘twisted’, folded and doubled by an Inside that is deeper than any interior, and alone creates the possibility of the derived relation between the interior and the exterior”. This folded and doubled interior is diagrammed by Deleuze in the folds chapter of Foucault.

Thinking does not depend on a beautiful interiority that reunites the visible and articulable elements, but is carried under the intrusion of an outside that eats into the interval and forces or dismembers the internal […] when there are only environments and whatever lies between them, when words and things are opened up by the environment without ever coinciding, there is a liberation of forces which come from the outside and exist only in a mixed up state of agitation, modification and mutation. In truth they are dice throws, for thinking involves throwing the dice. If the outside, farther away than any external world, is also closer than any internal world, is this not a sign that thought affects itself, by revealing the outside to be its own unthought element?

“It cannot discover the unthought […] without immediately bringing the unthought nearer to itself – or even, perhaps, without pushing it farther away, and in any case without causing man’s own being to undergo a change by the very fact, since it is deployed in the distance between them” (Gilles Deleuze, Sean Hand-Foucault)

Untitled

Figure: Left: a simulation of Deleuze’s central marking in his diagram of the Foucaultian diagram. This is the line of the Outside as Fold. Right: To best express the relations of diagrammatic praxis between content and expression (theory and practice) the Fold figure needs to be drawn as a double Fold (“twice twice” as Massumi might say) – a folded möbius strip. Here the superinflections between inside/outside and content/expression provide transversal vectors.

A topology or topological becoming-shapeshift retains its connectivity, its interconnectedness, to preserve its autonomy as a singularity. All the points of all its matter reshape as difference in itself. A topology does not resemble itself. The möbius strip and the infamous torus-to-coffee-cup are examples of 2D and 3D topologies. Technically a topological surface is totalized; it cannot comprise fragments cut or glued to produce a whole. Its change is continuous. It is not cut-copy-pasted. But the cut and its interval are requisite to an emergent new.
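
As a small numerical aside on the möbius strip mentioned above (an illustrative sketch, not part of the argument): transporting a surface normal once around the centre circle of the standard parametrisation brings it back reversed – the strip is one-sided, a global property that no local patch reveals.

```python
import numpy as np

def mobius(u, v, R=1.0):
    # Standard parametrisation of a Möbius strip of centre radius R.
    x = (R + v * np.cos(u / 2.0)) * np.cos(u)
    y = (R + v * np.cos(u / 2.0)) * np.sin(u)
    z = v * np.sin(u / 2.0)
    return np.array([x, y, z])

def unit_normal(u, v=0.0, h=1e-6):
    # Surface normal from the two numerical tangent vectors.
    du = (mobius(u + h, v) - mobius(u - h, v)) / (2 * h)
    dv = (mobius(u, v + h) - mobius(u, v - h)) / (2 * h)
    n = np.cross(du, dv)
    return n / np.linalg.norm(n)

# Transport the normal once around the centre circle: it comes back flipped.
print(np.round(unit_normal(0.0), 3), np.round(unit_normal(2 * np.pi), 3))
```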

For Deleuze, the essence of meaning, the essence of essence, is best expressed in two infinitives: “to cut” and “to die” […] Definite tenses keeping company in time. In the slash between their future and their past: “to cut” as always timeless and alone (Massumi).

Add the individuating “to shift” to the infinitives that reside in the timeless zone of indetermination of future-past. Given the paradigm of the topological-becoming, how might we address writing in the age of copy-paste and hypertext? The seamless and the stitched? As potential is it diagram? A linguistic multiplicity whose virtual immanence is the metalanguage potentiality between the phonemes that gives rise to all language?


An overview diagram of diagrammatic praxis based on Deleuze’s diagram of the Foucaultian model shown below. The main modification is to the representation of the Fold. In the top figure, the Fold or zone of subjectification becomes a double-folded möbius strip.

Four folds of subjectification:

1. material part of ourselves which is to be surrounded and folded

2. the fold of the relation between forces, always according to a particular rule by which the relation between forces is bent back in order to become a relation to oneself (rule: natural, divine, rational, aesthetic, etc.)

3. fold of knowledge constitutes the relation of truth to our being and our being to truth which will serve as the formal condition for any kind of knowledge

4. the fold of the outside itself is the ultimate fold: an ‘interiority of expectation’ from which the subject, in different ways, hopes for immortality, eternity, salvation, freedom or death or detachment.

Diagrammatic Political Via The Exaptive Processes


The principle of individuation is the operation that in the matter of taking form, by means of topological conditions […] carries out an energy exchange between the matter and the form until the unity leads to a state – the energy conditions express the whole system. Internal resonance is a state of the equilibrium. One could say that the principle of individuation is the common allagmatic system which requires this realization of the energy conditions the topological conditions […] it can produce the effects in all the points of the system in an enclosure […]

This operation rests on the singularity or starting from a singularity of average magnitude, topologically definite.

If we throw in a pinch of Gilbert Simondon’s concept of transduction there’s a basic recipe, or toolkit, for exploring the relational intensities between the three informal (theoretical) dimensions of knowledge, power and subjectification pursued by Foucault with respect to formal practice. Supplanting Foucault’s process of subjectification with Simondon’s more eloquent process of individuation marks an entry for imagining the continuous, always partial, phase-shifting resolutions of the individual. This is not identity as fixed and positionable, it’s a preindividual dynamic that affects an always becoming-individual. It’s the pre-formative as performative. Transduction is a process of individuation. It leads to individuated beings, such as things, gadgets, organisms, machines, self and society, which could be the object of knowledge. It is an ontogenetic operation which provisionally resolves incompatibilities between different orders or different zones of a domain.

What is at stake in the bigger picture, in a diagrammatic politics, is double-sided. Just as there is matter in expression and expression in matter, there is event-value in an exchange-value paradigm, which in fact amplifies the force of its power relations. The economic engine of our time feeds on event potential becoming-commodity. It grows and flourishes on the mass production of affective intensities. Reciprocally, there are degrees of exchange-value in eventness. It’s the recursive loopiness of our current Creative Industries diagram, in which the social networking praxis of Web 2.0 is emblematic and has much to learn.

Black Holes. Thought of the Day 23.0


The formation of black holes can be understood, at least partially, within the context of general relativity. According to general relativity the gravitational collapse leads to a spacetime singularity. But this spacetime singularity cannot be adequately described within general relativity, because the equivalence principle of general relativity is not valid for spacetime singularities; therefore, general relativity does not give a complete description of black holes. The same problem exists with regard to the postulated initial singularity of the expanding cosmos. In these cases, quantum mechanics and quantum field theory also reach their limit; they are not applicable to highly curved spacetimes. For a certain curvature parameter (the famous Planck scale), gravity has the same strength as the other interactions; then it is not possible to ignore gravity in the context of a quantum field theoretical description. So, there exists no theory which would be able to describe gravitational collapses or which could explain why (although they are predicted by general relativity) they don’t happen, or why there is no spacetime singularity. And the real problems start if one brings general relativity and quantum field theory together to describe black holes. Then it comes to rather strange forms of contradictions, and the mutual conceptual incompatibility of general relativity and quantum field theory becomes very clear:

Black holes are, according to general relativity, surrounded by an event horizon. Material objects and radiation can enter the black hole, but nothing inside its event horizon can leave this region, because the gravitational pull is strong enough to hold back even radiation; the escape velocity is greater than the speed of light. Not even photons can leave a black hole. Black holes have a mass; in the case of the Schwarzschild metric, they have exclusively a mass. In the case of the Reissner-Nordström metric, they have a mass and an electric charge; in the case of the Kerr metric, they have a mass and an angular momentum; and in the case of the Kerr-Newman metric, they have mass, electric charge and angular momentum. These are, according to the no-hair theorem, all the characteristics a black hole has at its disposal. Let’s restrict the argument in the following to the Reissner-Nordström metric, in which a black hole has only mass and electric charge. In the classical picture, the electric charge of a black hole becomes noticeable in the form of a force exerted on an electrically charged probe outside its event horizon. In the quantum field theoretical picture, interactions are the result of the exchange of virtual interaction bosons, in the case of an electric charge: virtual photons. But how can photons be exchanged between an electrically charged black hole and an electrically charged probe outside its event horizon, if no photon can leave a black hole – which can be considered a definition of a black hole? One could think that virtual photons, mediating the electrical interaction, are possibly able (in contrast to real photons, representing radiation) to leave the black hole. But why? There is no good reason and no good answer for that within our present theoretical framework. The same problem exists for the gravitational interaction, for the gravitational pull of the black hole exerted on massive objects outside its event horizon, if the gravitational force is understood as an exchange of gravitons between massive objects, as the quantum field theoretical picture in its extrapolation to gravity suggests. How could (virtual) gravitons leave a black hole at all?

There are three possible scenarios resulting from the incompatibility of our assumptions about the characteristics of a black hole, based on general relativity, and on the picture quantum field theory draws with regard to interactions:

(i) Black holes don’t exist in nature. They are a theoretical artifact, demonstrating the asymptotic inadequacy of Einstein’s general theory of relativity. Only a quantum theory of gravity will explain where the general relativistic predictions fail, and why.

(ii) Black holes exist, as predicted by general relativity, and they have a mass and, in some cases, an electric charge, both leading to physical effects outside the event horizon. Then, we would have to explain how these effects are realized physically. The quantum field theoretical picture of interactions is either fundamentally wrong, or we would have to explain why virtual photons behave, with regard to black holes, completely differently from real radiation photons. Or the features of a black hole – mass, electric charge and angular momentum – would be features imprinted during its formation onto the spacetime surrounding the black hole or onto its event horizon. Then, interactions between a black hole and its environment would rather be interactions between the environment and the event horizon, or even interactions within the environmental spacetime.

(iii) Black holes exist as the product of gravitational collapses, but they do not exert any effects on their environment. This is the craziest of all scenarios. For this scenario, general relativity would have to be fundamentally wrong. In contrast to the picture given by general relativity, black holes would have no physically effective features at all: no mass, no electric charge, no angular momentum, nothing. And after the formation of a black hole, there would be no spacetime curvature, because there remains no mass. (Or, the spacetime curvature has to result from other effects.) The mass and the electric charge of objects falling (casually) into a black hole would be irretrievably lost. They would simply disappear from the universe when they pass the event horizon. Black holes would not exert any forces on massive or electrically charged objects in their environment. They would not pull any massive objects into their event horizon and thereby increase their mass. Moreover, their event horizon would mark a region causally disconnected from our universe: a region outside of our universe. Everything falling casually into the black hole, or thrown intentionally into this region, would disappear from the universe.

Techno-Commercial Singularity: Decelerator / Diagram.

H/T Antinomia Imediata

If the Cathedral is actually efficient, the more it happens, the less it happens. Decelerator.

  1. taxation: this deviates resources from capital and buries them into the consumption of the tax-receivers (namely the Cathedral bureaucracy). trash and shit.
  2. regulation: there are various ways this could work, insofar as regulation is very inventive. but the main pattern has to do with deviating capital from the most rentable (i.e., (self-re)productive) investments, into those that are most likely to become un-recyclable trash, at least in the long run.
  3. politicization: this deviates brain-power from technological producing theories into, well, bullshit research departments, especially through politicization of academic funding of hard sciences.
  4. protectionism: since this protects technical developments from properly feeding back into the commercial cycle, it breaks the link between technical advantage and capital accumulation, leading lots of resources into stupid gadgetry.

all these being forms of fucking up the incentive structures that allow the accelerative cycle to be. in diagram form: