Adjacency of the Possible: Teleology of Autocatalysis. Thought of the Day 140.0


Given a network of catalyzed chemical reactions, a (sub)set R of such reactions is called:

  1. Reflexively autocatalytic (RA) if every reaction in R is catalyzed by at least one molecule involved in any of the reactions in R;
  2. F-generated (F) if every reactant in R can be constructed from a small “food set” F by successive applications of reactions from R;
  3. Reflexively autocatalytic and F-generated (RAF) if it is both RA and F.

The food set F contains molecules that are assumed to be freely available in the environment. Thus, an RAF set formally captures the notion of “catalytic closure”, i.e., a self-sustaining set supported by a steady supply of (simple) molecules from some food set….
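As a concrete reading of the three conditions above, here is a minimal sketch (not from the source) of the standard iterative RAF-detection idea in the spirit of Hordijk and Steel: compute the closure of the food set, discard reactions that are not F-generated or not catalyzed by an available molecule, and repeat until nothing changes. All names and the toy network are illustrative assumptions.

```python
from typing import Dict, FrozenSet, Set, Tuple

# A reaction is (reactants, products, catalysts); molecules are plain strings.
Reaction = Tuple[FrozenSet[str], FrozenSet[str], FrozenSet[str]]

def closure(food: Set[str], reactions: Dict[str, Reaction]) -> Set[str]:
    """Molecules constructible from the food set by repeatedly firing any reaction
    whose reactants are already available (catalysis is ignored at this step)."""
    available = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions.values():
            if reactants <= available and not products <= available:
                available |= products
                changed = True
    return available

def max_raf(food: Set[str], reactions: Dict[str, Reaction]) -> Dict[str, Reaction]:
    """Iteratively discard reactions that are not F-generated or not catalyzed by an
    available molecule; the non-empty fixed point (if any) is the maximal RAF."""
    current = dict(reactions)
    while True:
        available = closure(food, current)
        kept = {name: (re, pr, ca) for name, (re, pr, ca) in current.items()
                if re <= available and ca & available}   # F-generated and RA
        if kept == current:
            return current
        current = kept

# Toy network: r1 builds c from the food set and is catalyzed by its own product;
# r2 needs x, which is never producible, so it drops out.
food = {"a", "b"}
reactions = {
    "r1": (frozenset({"a", "b"}), frozenset({"c"}), frozenset({"c"})),
    "r2": (frozenset({"c", "x"}), frozenset({"y"}), frozenset({"a"})),
}
print(sorted(max_raf(food, reactions)))   # ['r1'] -- a one-reaction RAF set
```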

Stuart Kauffman begins with the Darwinian idea of the origin of life in a biological ‘primordial soup’ of organic chemicals and investigates the possibility of one chemical substance catalyzing the reaction of two others, forming new reagents in the soup. Such catalyses may, of course, form chains, so that one reagent catalyzes the formation of another catalyzing another, etc., and self-sustaining loops of reaction chains are an evident possibility in the appropriate chemical environment. A statistical analysis reveals that such catalytic reactions may form interdependent networks when the rate of catalyzed reactions per molecule approaches one, creating a self-organizing chemical cycle which he calls an ‘autocatalytic set’. When the rate of catalyses per reagent is low, only small local reaction chains form, but as the rate approaches one, the reaction chains in the soup suddenly ‘freeze’, so that what was a group of chains or islands in the soup now connects into one large interdependent network constituting an ‘autocatalytic set’. Such an interdependent reaction network constitutes the core of the body definition unfolding in Kauffman, and its cyclic character forms the basic precondition for self-sustainment. An ‘autonomous agent’ is an autocatalytic set able to reproduce and to undertake at least one thermodynamic work cycle.
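Kauffman’s ‘freezing’ of isolated chains into one interdependent network is structurally the giant-component transition of random graphs, where a spanning cluster appears once the mean number of links per node passes one. The toy sketch below is my own illustration of that threshold, not Kauffman’s chemical model; sizes and degrees are arbitrary.

```python
import random

def largest_cluster_fraction(n: int, mean_degree: float, seed: int = 0) -> float:
    """Fraction of nodes in the largest connected component of a random graph
    whose edges are drawn independently with probability mean_degree / (n - 1)."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(i: int) -> int:            # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    p = mean_degree / (n - 1)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)

    sizes: dict = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

for k in (0.5, 0.9, 1.1, 1.5, 2.0):
    print(k, round(largest_cluster_fraction(1000, k), 3))
# Below a mean degree of ~1 the largest cluster stays a sliver of the system;
# just above it, a single component spans a finite fraction -- the sudden 'freeze'.
```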

This definition implies two things: the possibility of reproduction, and the appearance of completely new, interdependent goals in work cycles. The latter idea requires the ability of the autocatalytic set to save energy in order to spend it on its own self-organization, in its search for the reagents necessary to uphold the network. These goals evidently introduce a – restricted, to be sure – teleology defined simply by the survival of the autocatalytic set itself: actions supporting this have a local teleological character. Thus, the autocatalytic set may, as it evolves, enlarge its cyclic network by recruiting new subcycles supporting and enhancing it in a developing structure of subcycles and sub-sub-cycles.

Kauffman proposes that the concept of ‘autonomous agent’ implies a whole new cluster of interdependent concepts. Thus, the autonomy of the agent is defined by ‘catalytic closure’ (any reaction in the network demanding catalysis will get it) which is a genuine Gestalt property in the molecular system as a whole – and thus not in any way derivable from the chemistry of single chemical reactions alone.

Kauffman’s definitions, built on speculative chemistry, thus entail not only the Kantian cyclic structure but also the primitive perception and action phases of Uexküll’s functional circle. Thus, Kauffman’s definition of the organism in terms of an ‘autonomous agent’ basically builds on an Uexküllian intuition, namely the idea that the most basic property of a body is metabolism: the constrained, organizing processing of high-energy chemical material and the correlated perception and action performed to localize and utilize it – all of this constituting a metabolic cycle coordinating the organism’s inside and outside and defining teleological action. Perception and action phases are, so to speak, the extension of the cyclical structure of the closed catalytic set to encompass parts of its surroundings, so that the circle of metabolism may only be completed by means of successful perception and action phases.

The evolution of autonomous agents is taken as the empirical basis for the hypothesis of a general thermodynamic regularity based on non-ergodicity: the Big Bang universe (and, consequently, the biosphere) is not at equilibrium and will not reach equilibrium during the lifetime of the universe. This gives rise to Kauffman’s idea of the ‘adjacent possible’. At a given point in evolution, one can define the set of chemical substances which do not yet exist in the universe but which are at a distance of only one chemical reaction from a substance that already exists. Biological evolution has evidently led to an enormous growth in the types of organic macromolecules, and new such substances come into being every day. Maybe there is a sort of chemical potential leading from the actually realized substances into the adjacent possible which in some sense drives evolution? In any case, Kauffman advances the hypothesis that the biosphere as such is supercritical in the sense that there is, in general, more than one reaction catalyzed by each reagent. Cells, in order not to be destroyed by this chemical storm, must be internally subcritical (even if close to the critical boundary). But if the biosphere as such is in fact supercritical, then this distinction seemingly a priori necessitates the existence of a boundary of the agent, protecting it against the environment.


Grand Unification Theory/(Anti-GUT): Emerging Symmetry, Topology in a Momentum Space. Thought of the Day 129.0

Figure: Quantum phase transition between two ground states with the same symmetry but of different universality classes – gapless at q < qc and fully gapped at q > qc – shown as an isolated point (a) and as the termination point of a first-order transition line (b).

There are two schemes for the classification of states in condensed matter physics and relativistic quantum fields: classification by symmetry (GUT scheme) and by momentum space topology (anti-GUT scheme).

For the first classification method, a given state of the system is characterized by a symmetry group H which is a subgroup of the symmetry group G of the relevant physical laws. The thermodynamic phase transition between equilibrium states is usually marked by a change of the symmetry group H. This classification reflects the phenomenon of spontaneously broken symmetry. In relativistic quantum fields, the chain of successive phase transitions in which the large symmetry group existing at high energy is reduced at low energy lies at the basis of the Grand Unification models (GUT). In condensed matter, spontaneous symmetry breaking is a typical phenomenon, and the thermodynamic states are likewise classified in terms of the subgroup H of the relevant group G. The groups G and H also determine the topological defects, which are classified by the nontrivial elements of the homotopy groups πn(G/H).
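As a standard worked instance of this homotopy classification (my illustration, not taken from the text): for a superfluid whose global phase symmetry G = U(1) is completely broken, H is trivial, the order-parameter space is G/H ≅ U(1) ≅ S¹, and

```latex
\pi_1(G/H) \;=\; \pi_1\big(U(1)\big) \;\cong\; \mathbb{Z},
\qquad
\pi_0(G/H) \;=\; \pi_2(G/H) \;=\; 0 ,
```

so the only topologically stable defects are line defects (quantized vortices) labelled by an integer winding number, while stable domain walls and point defects are absent.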

The second classification method reflects the opposite tendency – anti-Grand-Unification (anti-GUT) – in which, instead of symmetry breaking, symmetry gradually emerges at low energy. This method deals with the ground states of the system at zero temperature (T = 0), i.e., it is a classification of quantum vacua. The universality classes of quantum vacua are determined by momentum-space topology, which is also responsible for the type of effective theory, and for the emergent physical laws and symmetries, at low energy. Contrary to the GUT scheme, where the symmetry of the vacuum state is primary and gives rise to the topology, in the anti-GUT scheme the topology in momentum space is primary while the vacuum symmetry is an emergent phenomenon in the low-energy corner.

At the moment, we live in an ultra-cold Universe. All the characteristic temperatures in our Universe are extremely small compared to the Planck energy scale EP. That is why all the massive fermions, whose natural mass must be of order EP, are frozen out due to the extremely small factor exp(−EP/T). There would be no matter in our Universe unless there were massless fermions, whose masslessness is protected with extremely high accuracy. It is the topology in momentum space which provides such protection.

For systems living in 3D space, there are four basic universality classes of fermionic vacua provided by topology in momentum space:

(i)  Vacua with fully-gapped fermionic excitations, such as semiconductors and conventional superconductors.

(ii)  Vacua with fermionic excitations characterized by Fermi points – points in 3D momentum space at which the energy of the fermionic quasiparticle vanishes. An example is provided by the quantum vacuum of the Standard Model above the electroweak transition, where all elementary particles are Weyl fermions with Fermi points in the spectrum. This universality class manifests the phenomenon of emergent relativistic quantum fields at low energy: close to the Fermi points the fermionic quasiparticles behave as massless Weyl fermions, while the collective modes of the vacuum interact with these fermions as gauge and gravitational fields.

(iii)  Vacua with fermionic excitations characterized by lines in 3D momentum space or points in 2D momentum space. We call them Fermi lines, though in general it is better to characterize the zeroes by their co-dimension, i.e., the dimension of p-space minus the dimension of the manifold of zeros. Lines in 3D momentum space and points in 2D momentum space both have co-dimension 2, since 3 − 1 = 2 − 0 = 2. The Fermi lines are topologically stable only if some special symmetry is obeyed.

(iv) Vacua with fermionic excitations characterized by Fermi surfaces. This universality class also manifests the phenomenon of emergent physics, though non-relativistic: at low temperature all metals behave in a similar way, and this behavior is described by the Landau theory of the Fermi liquid – the effective theory based on the existence of a Fermi surface. The Fermi surface has co-dimension 1: in a 3D system it is a surface (co-dimension = 3 − 2 = 1), in a 2D system it is a line (co-dimension = 2 − 1 = 1), and in a 1D system it is a point (co-dimension = 1 − 0 = 1; in a one-dimensional system the Landau Fermi-liquid theory does not work, but the Fermi surface survives).

A fifth possibility is the Fermi band class (v), where the energy vanishes in a finite region of 3D momentum space, so that the zeroes have co-dimension 0; such a topologically stable flat band may exist in the spectrum of fermion zero modes, i.e., for fermions localized in the core of topological objects.

The phase transitions which follow from this classification scheme are quantum phase transitions occurring at T = 0. It may happen that by changing some parameter q of the system we transfer the vacuum state from one universality class to another, or to a vacuum of the same universality class but with a different topological quantum number, without changing its symmetry group H. The point qc where this zero-temperature transition occurs marks the quantum phase transition. For T ≠ 0, the second-order phase transition is absent, as the two states belong to the same symmetry class H, but a first-order phase transition is not excluded. Hence, there is an isolated singular point (qc, 0) in the (q, T) plane, or the end point of a first-order transition line.

The quantum phase transitions which occur in classes (iv) and (i), or between these classes, are well known. In class (iv) the corresponding quantum phase transition is known as the Lifshitz transition, at which the Fermi surface changes its topology or emerges from the fully gapped state of class (i). The transition between fully gapped states characterized by different topological charges occurs in 2D systems exhibiting the quantum Hall and spin-Hall effects: this is the plateau-plateau transition between states with different values of the Hall (or spin-Hall) conductance. The less known transitions involve nodes of co-dimension 3 and nodes of co-dimension 2.
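The whole classification above hinges on one piece of arithmetic: co-dimension = (dimension of momentum space) − (dimension of the manifold of zeros). A small helper tabulating the cases quoted in the text (the function and labels are mine, purely for illustration):

```python
def codimension(p_space_dim: int, zero_manifold_dim: int) -> int:
    """Co-dimension of a manifold of zeros: dimension of momentum space
    minus the dimension of the manifold on which the energy vanishes."""
    return p_space_dim - zero_manifold_dim

cases = [
    ("Fermi surface in 3D (class iv)", 3, 2),            # expect co-dimension 1
    ("Fermi 'surface' (line) in 2D (class iv)", 2, 1),
    ("Fermi 'surface' (point) in 1D (class iv)", 1, 0),
    ("Fermi line in 3D (class iii)", 3, 1),              # expect co-dimension 2
    ("Fermi point in 2D (class iii)", 2, 0),
    ("Fermi point in 3D (class ii)", 3, 0),              # expect co-dimension 3
    ("Flat band filling a 3D region (class v)", 3, 3),   # expect co-dimension 0
]
for label, dp, dz in cases:
    print(f"{label}: co-dimension {codimension(dp, dz)}")
```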

Utopia as Emergence Initiating a Truth. Thought of the Day 104.0


It is true that, in our contemporary world, traditional utopian models have withered, but today a new utopia of canonical majority has taken over the space of any action transformative of current social relations. Instead of radicalness, conformity has become the main expression of solidarity for the subject abandoned to her consecrated individuality. Where past utopias inscribed a collective vision to be fulfilled for future generations, the present utopia confiscates the future of the individual, unless she registers in a collective, popularized expression of the norm that reaps culture, politics, morality, and the like. The ideological outcome of the canonical utopia is the belief that the majority constitutes a safety net for individuality. If the future of the individual is bleak, at least there is some hope in saving his/her present.

This condition reiterates Ernst Bloch’s distinction between anticipatory and compensatory utopia, with the latter gaining ground today (Ruth Levitas). By discarding the myth of a better future for all, the subject succumbs to the immobilizing myth of a safe present for herself (the ultimate transmutation of individuality into individualism). The world supposedly surmounts Difference simply by taking away its painful radicalness and replacing it with a non-violent, pluralistic, and multi-cultural present – a stance Žižek has harshly criticized for its anti-rational status. In line with Badiou and Jameson, Žižek discerns behind the multitude of identities and lifestyles in our world the dominance of the One and the eradication of Difference (the void of antagonism). It would have been ideal if pluralism were not translated into populism and non-violence into a sanctimonious respect for Otherness.

Badiou also points to the nihilism that permeates modern ethicology that puts forward the “recognition of the other”, the respect of “differences”, and “multi-culturalism”. Such ethics is supposed to protect the subject from discriminatory behaviours on the basis of sex, race, culture, religion, and so on, as one must display “tolerance” towards others who maintain different thinking and behaviour patterns. For Badiou, this ethical discourse is far from effective and truthful, as is revealed by the competing axes it forges (e.g., opposition between “tolerance” and “fanaticism”, “recognition of the other” and “identitarian fixity”).

Badiou denounces the decomposed religiosity of current ethical discourse, in the face of the pharisaic advocates of the right to difference who are “clearly horrified by any vigorously sustained difference”. The pharisaism of this respect for difference lies in the fact that it suggests the acceptance of the other in so far as s/he is a “good other”; in other words, in so far as s/he is the same as everyone else. Such an ethical attitude ironically affirms the hegemonic identity of those who opt for integration of the different other, which is to say, the other is requested to suppress his/her difference so that s/he partakes in the “Western identity”.

Rather than equating being with the One, the law of being is the multiple “without one”, that is, every multiple being is a multiple of multiples, stretching alterity into infinity; alterity is simply “what there is” and our experience is “the infinite deployment of infinite differences”. Only the void can discontinue this multiplicity of being, through the event that “breaks” with the existing order and calls for a “new way of being”. Thus, a radical utopian gesture needs to emerge from the perspective of the event, initiating a truth process.

Fundamental Theorem of Asset Pricing: Tautological Meeting of Mathematical Martingale and Financial Arbitrage by the Measure of Probability.


The Fundamental Theorem of Asset Pricing (FTAP hereafter) has two broad tenets, viz.

1. A market admits no arbitrage if and only if the market has a martingale measure.

2. Every contingent claim can be hedged if and only if the martingale measure is unique.

The FTAP is a theorem of mathematics, and the use of the term ‘measure’ in its statement places the FTAP within the theory of probability formulated by Andrei Kolmogorov (Foundations of the Theory of Probability) in 1933. Kolmogorov’s work took place in a context captured by Bertrand Russell, who observed that

It is important to realise the fundamental position of probability in science. . . . As to what is meant by probability, opinions differ.

In the 1920s the idea of randomness, as distinct from a lack of information, was becoming substantive in the physical sciences because of the emergence of the Copenhagen Interpretation of quantum mechanics. In the social sciences, Frank Knight argued that uncertainty was the only source of profit, and the concept was pervading John Maynard Keynes’ economics (Robert Skidelsky, Keynes: The Return of the Master).

Two mathematical theories of probability had become ascendant by the late 1920s. Richard von Mises (brother of the Austrian economist Ludwig) attempted to lay down the axioms of classical probability within a framework of Empiricism, the ‘frequentist’ or ‘objective’ approach. To counterbalance von Mises, the Italian actuary Bruno de Finetti presented a more Pragmatic approach, characterised by his claim that “Probability does not exist” because it was only an expression of the observer’s view of the world. This ‘subjectivist’ approach was closely related to the less well-known position taken by the Pragmatist Frank Ramsey, who developed an argument against Keynes’ Realist interpretation of probability presented in the Treatise on Probability.

Kolmogorov addressed the trichotomy of mathematical probability by generalising so that Realist, Empiricist and Pragmatist probabilities were all examples of ‘measures’ satisfying certain axioms. In doing this, a random variable became a function while an expectation was an integral: probability became a branch of Analysis, not Statistics. Von Mises criticised Kolmogorov’s generalised framework as unnecessarily complex. About a decade and a half ago, the physicist Edwin Jaynes (Probability Theory: The Logic of Science) championed Leonard Savage’s subjectivist Bayesianism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science”.

The objections to measure theoretic probability for empirical scientists can be accounted for as a lack of physicality. Frequentist probability is based on the act of counting; subjectivist probability is based on a flow of information, which, following Claude Shannon, is now an observable entity in Empirical science. Measure theoretic probability is based on abstract mathematical objects unrelated to sensible phenomena. However, the generality of Kolmogorov’s approach made it flexible enough to handle problems that emerged in physics and engineering during the Second World War and his approach became widely accepted after 1950 because it was practically more useful.

In the context of the first statement of the FTAP, a ‘martingale measure’ is a probability measure, usually labelled Q, such that the (real, rather than nominal) price of an asset today, X0, is the expectation, using the martingale measure, of its (real) price in the future, XT. Formally,

X0 = EQ[XT]

The abstract probability distribution Q is defined so that this equality holds; it is not based on any empirical information about historical prices or on subjective judgement of future prices. The only condition placed on the relationship that the martingale measure has with the ‘natural’, or ‘physical’, probability measure, usually assigned the label P, is that they agree on what is possible.

The term ‘martingale’ in this context derives from doubling strategies in gambling, and it was introduced into mathematics by Jean Ville in a development of von Mises’ work. The idea that asset prices have the martingale property was first proposed by Benoit Mandelbrot in response to an early formulation of Eugene Fama’s Efficient Market Hypothesis (EMH), the two concepts being combined by Fama. For Mandelbrot and Fama the key consequence of prices being martingales was that the current price was independent of the future price, and technical analysis would not prove profitable in the long run. In developing the EMH there was no discussion of the nature of the probability measure under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure. While the FTAP employs modern terminology in the context of value-neutrality, the idea of equating a current price with a future, uncertain price has ethical ramifications.

The other technical term in the first statement of the FTAP, arbitrage, has long been used in financial mathematics. Fibonacci’s Liber Abaci (Laurence Sigler, Fibonacci’s Liber Abaci) discusses ‘Barter of Merchandise and Similar Things’: 20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. In this case there are three commodities – arms of cloth, rolls of cotton and Pisan pounds – and Fibonacci solves the problem by having Pisan pounds ‘arbitrate’, or ‘mediate’ as Aristotle might say, between the other two commodities.
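A quick arithmetic check of Fibonacci’s barter problem as quoted above, with Pisan pounds doing the ‘arbitrating’ (the prices come from the text; the code itself is just an illustration):

```python
from fractions import Fraction

# Quoted prices: 20 arms of cloth = 3 Pisan pounds, 42 rolls of cotton = 5 Pisan pounds.
pounds_per_arm = Fraction(3, 20)
rolls_per_pound = Fraction(42, 5)

arms = 50
pounds = arms * pounds_per_arm      # 50 arms of cloth -> 7 1/2 Pisan pounds
rolls = pounds * rolls_per_pound    # 7 1/2 Pisan pounds -> 63 rolls of cotton
print(pounds, rolls)                # 15/2 63
```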

Within neo-classical economics, the Law of One Price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gérard Debreu and Lionel McKenzie in the context of general equilibrium, in particular through the introduction of the Arrow Security, which, employing the Law of One Price, could be used to price any asset. It was on this principle that Black and Scholes believed the value of warrants could be deduced by employing a hedging portfolio; in introducing their work with the statement that “it should not be possible to make sure profits”, they were invoking the arbitrage argument, which had an eight-hundred-year history. In the context of the FTAP, ‘an arbitrage’ has developed into the ability to formulate a trading strategy such that the probability, under a natural or martingale measure, of a loss is zero, but the probability of a positive profit is not.

To understand the connection between the financial concept of arbitrage and the mathematical idea of a martingale measure, consider the most basic case of a single asset whose current price, X0, can take on one of two values, XTD < XTU, at time T > 0 in the future. In this case an arbitrage would exist if X0 ≤ XTD < XTU: buying the asset now, at a price that is less than or equal to either future pay-off, would lead to a possible profit at the end of the period with the guarantee of no loss. Similarly, if XTD < XTU ≤ X0, short selling the asset now and buying it back at time T would also yield an arbitrage. So, for there to be no arbitrage opportunities we require that

XTD < X0 < XTU

This implies that there is a number, 0 < q < 1, such that

X0 = XTD + q(XTU − XTD)

= qXTU + (1−q)XTD

The price now, X0, lies between the future prices, XTU and XTD, in the ratio q : (1 − q) and represents some sort of ‘average’. The first statement of the FTAP can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (real) future price”.

If X0 < XTD ≤ XTU we have that q < 0, whereas if XTD ≤ XTU < X0 then q > 1; in both cases q does not represent a probability measure, which, by Kolmogorov’s axioms, must lie between 0 and 1. In either of these cases an arbitrage exists and a trader can make a riskless profit: the market involves ‘turpe lucrum’. This account gives an insight as to why James Bernoulli, in his moral approach to probability, considered situations where probabilities did not sum to 1: he was considering problems that were pathological not because they failed the rules of arithmetic but because they were unfair. It follows that if there are no arbitrage opportunities then the quantity q can be seen as representing the ‘probability’ that the XTU price will materialise in the future. Formally

X0 = qXTU + (1−q)XTD ≡ EQ[XT]
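A minimal numerical sketch of the one-period argument just given: the implied q is a genuine probability exactly when X0 lies strictly between the two possible future prices, and outside that band an arbitrage appears. The prices below are invented for illustration.

```python
def implied_q(x0: float, x_down: float, x_up: float) -> float:
    """Solve X0 = q*XU + (1 - q)*XD for q; q is a probability iff XD < X0 < XU."""
    return (x0 - x_down) / (x_up - x_down)

def admits_arbitrage(x0: float, x_down: float, x_up: float) -> bool:
    q = implied_q(x0, x_down, x_up)
    return not (0.0 < q < 1.0)

x_down, x_up = 90.0, 120.0
for x0 in (85.0, 100.0, 125.0):
    q = implied_q(x0, x_down, x_up)
    repriced = q * x_up + (1 - q) * x_down     # = E_Q[X_T], recovers x0 by construction
    print(x0, round(q, 3), repriced, admits_arbitrage(x0, x_down, x_up))
# 85.0  -> q < 0 : buy now for a sure gain    (arbitrage)
# 100.0 -> q = 1/3, E_Q[X_T] = 100            (no arbitrage)
# 125.0 -> q > 1 : short now for a sure gain  (arbitrage)
```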

The connection between the financial concept of arbitrage and the mathematical object of a martingale is essentially a tautology: both statements mean that the price today of an asset must lie between its future minimum and maximum possible value. This first statement of the FTAP was anticipated by Frank Ramsey when he defined ‘probability’ in the Pragmatic sense of ‘a degree of belief’ and argued that ‘degrees of belief’ are measured through betting odds. On this basis he formulated some axioms of probability, including that a probability must lie between 0 and 1. He then goes on to say that

These are the laws of probability, …If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event.

This is a Pragmatic argument that identifies the absence of a martingale measure with the existence of arbitrage, and today this forms the basis of the standard argument as to why arbitrages do not exist: if they did, other market participants would bankrupt the agent who was mis-pricing the asset. This has become known in philosophy as the ‘Dutch Book’ argument, and as a consequence of the fact/value dichotomy it is often presented as a ‘matter of fact’. However, setting the fact/value dichotomy aside, the Dutch Book argument is a variant of the ‘Golden Rule’ – “Do to others as you would have them do to you” – and it is infused with the moral concepts of fairness and reciprocity (Jeffrey Wattles, The Golden Rule).

The FTAP thus embodies the ethical concept of Justice, capturing the social norms of reciprocity and fairness. This is significant in the context of Granovetter’s discussion of embeddedness in economics. It is conventional to assume that mainstream economic theory is ‘undersocialised’: agents are rational calculators seeking to maximise an objective function. The argument presented here is that a central theorem in contemporary economics, the FTAP, is deeply embedded in social norms, despite being presented as an undersocialised mathematical object. This embeddedness is a consequence of the origins of mathematical probability lying in the ethical analysis of commercial contracts: the feudal shackles are still binding this most modern of economic theories.

Ramsey goes on to make an important point:

Having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you.

Ramsey is arguing that an agent needs to employ the same measure in pricing all assets in a market, and this is the key result in contemporary derivative pricing. Having identified the martingale measure on the basis of a ‘primal’ asset, it is then applied across the market, in particular to derivatives on the primal asset, on the basis of the well-known result that if two assets offer different ‘market prices of risk’, an arbitrage exists. This explains why the market price of risk appears in the Radon-Nikodym derivative and the Capital Market Line: it enforces Ramsey’s consistency in pricing.

The second statement of the FTAP is concerned with incomplete markets, which appear in relation to Arrow-Debreu prices. In mathematics, in the special case that there are as many, or more, assets in a market as there are possible future, uncertain, states, a unique pricing vector can be deduced for the market by Cramer’s Rule. If the elements of the pricing vector satisfy the axioms of probability, specifically that each element is positive and they all sum to one, then the market precludes arbitrage opportunities. This is the case covered by the first statement of the FTAP. In the more realistic situation that there are more possible future states than assets, the market can still be arbitrage free, but the pricing vector, the martingale measure, might not be unique. The agent can still be consistent in selecting which particular martingale measure they choose to use, but another agent might choose a different measure, such that the two do not agree on a price. In the context of the Law of One Price, this means that we cannot hedge, replicate or cover a position in the market such that the portfolio is riskless.

The significance of the second statement of the FTAP is that it tells us that in the sensible world of imperfect knowledge and transaction costs, a model within the framework of the FTAP cannot give a precise price. When faced with incompleteness in markets, agents need alternative ways to price assets, and behavioural techniques have come to dominate financial theory. This feature was already recognised in The Port Royal Logic when it noted the role of transaction costs in lotteries.
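A small linear-algebra sketch of the complete/incomplete distinction behind the second statement: with as many independent assets as future states the state-price (martingale) vector is pinned down uniquely, while with fewer assets than states no-arbitrage leaves a whole family of measures that price the traded assets identically but disagree on a derivative. All payoffs and prices are invented for illustration.

```python
import numpy as np

# Complete market: 2 future states, 2 assets (rows), zero interest for simplicity.
payoffs_2 = np.array([[1.0, 1.0],        # riskless bond pays 1 in both states
                      [120.0, 90.0]])    # stock pays 120 (up) or 90 (down)
prices_2 = np.array([1.0, 100.0])
print(np.linalg.solve(payoffs_2, prices_2))   # unique state prices [1/3, 2/3]

# Incomplete market: 3 states, same 2 assets.
payoffs_3 = np.array([[1.0, 1.0, 1.0],
                      [120.0, 100.0, 90.0]])

def state_prices(t: float) -> np.ndarray:
    """One-parameter family (0 < t < 1) of strictly positive state-price vectors
    that all reprice the bond at 1 and the stock at 100."""
    return np.array([(1 - t) / 3, t, 2 * (1 - t) / 3])

call = np.array([20.0, 0.0, 0.0])   # derivative paying max(X_T - 100, 0)
for t in (0.3, 0.6):
    psi = state_prices(t)
    print(payoffs_3 @ psi, call @ psi)   # traded assets repriced identically; the call is not
```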

Meillassoux’s Principle of Unreason Towards an Intuition of the Absolute In-itself. Note Quote.


The principle of reason, such as it appears in philosophy, is a principle of contingent reason: it concerns not only how philosophical reason relates to difference instead of identity, but also why the Principle of Sufficient Reason can no longer be understood in terms of absolute necessity. In other words, Deleuze disconnects the Principle of Sufficient Reason from the ontotheological tradition no less than from its Heideggerian deconstruction. What remains then of Meillassoux’s criticism in After Finitude: An Essay on the Necessity of Contingency that Deleuze no less than Hegel hypostatizes or absolutizes the correlation between thinking and being and thus brings back a vitalist version of speculative idealism through the back door?

At stake in Meillassoux’s criticism of the Principle of Sufficient Reason is a double problem: the conditions of possibility of thinking and knowing an absolute, and subsequently the conditions of possibility of rational ideology critique. The first problem is primarily epistemological: how can philosophy justify scientific knowledge claims about a reality that is anterior to our relation to it and that is hence not given in the transcendental object of possible experience (the arche-fossil)? This is a problem for all post-Kantian epistemologies that hold that we can only ever know the correlate of being and thought. Instead of confronting this weak correlationist position head on, however, Meillassoux seeks a solution in the even stronger correlationist position that denies not only the knowability of the in-itself, but also its very thinkability or imaginability. Simplified: if strong correlationists such as Heidegger or Wittgenstein insist on the historicity or facticity (non-necessity) of the correlation of reason and ground in order to demonstrate the impossibility of thought’s self-absolutization, then the very force of their argument, if it is not to contradict itself, implies more than they are willing to accept: the necessity of the contingency of the transcendental structure of the for-itself. As a consequence, correlationism is incapable of demonstrating its own necessity. This is what Meillassoux calls the principle of factiality or the principle of unreason. It says that it is possible to think two things that exist independently of thought’s relation to them: contingency as such and the principle of non-contradiction. The principle of unreason thus enables the intellectual intuition of something that is absolutely in itself, namely the absolute impossibility of a necessary being. And this in turn implies the real possibility of the completely random and unpredictable transformation of all things from one moment to the next. Logically speaking, the absolute is thus a hyperchaos or something akin to Time, in which nothing is impossible except necessary beings or necessary temporal experiences such as the laws of physics.

There is, moreover, nothing mysterious about this chaos. Contingency, and Meillassoux consistently refers to this as Hume’s discovery, is a purely logical and rational necessity, since without the principle of non-contradiction not even the principle of factiality would be absolute. It is thus a rational necessity that puts the Principle of Sufficient Reason out of action, since it would be irrational to claim that it is a real necessity, as everything that is is devoid of any reason to be as it is. This leads Meillassoux to the surprising conclusion that “[t]he Principle of Sufficient Reason is thus another name for the irrational… The refusal of the Principle of Sufficient Reason is not the refusal of reason, but the discovery of the power of chaos harboured by its fundamental principle (non-contradiction)” (Meillassoux 2007: 61). The principle of factiality thus legitimates or founds the rationalist requirement that reality be perfectly amenable to conceptual comprehension, at the same time that it opens up “[a] world emancipated from the Principle of Sufficient Reason” (Meillassoux) but founded only on that of non-contradiction.

This emancipation brings us to the practical problem Meillassoux tries to solve, namely the possibility of ideology critique. Correlationism is essentially a discourse on the limits of thought for which the deabsolutization of the Principle of Sufficient Reason marks reason’s discovery of its own essential inability to uncover an absolute. Thus if the Galilean-Copernican revolution of modern science meant the paradoxical unveiling of thought’s capacity to think what there is regardless of whether thought exists or not, then Kant’s correlationist version of the Copernican revolution was in fact a Ptolemaic counterrevolution. Since Kant, and even more since Heidegger, philosophy has been averse precisely to the speculative import of modern science as a formal, mathematical knowledge of nature. Its unintended consequence is therefore that questions of ultimate reasons have been dislocated from the domain of metaphysics into that of non-rational, fideist discourse. Philosophy has thus made the contemporary end of metaphysics complicit with the religious belief in the Principle of Sufficient Reason beyond its very thinkability. Whence Meillassoux’s counter-intuitive conclusion that the refusal of the Principle of Sufficient Reason furnishes the minimal condition for every critique of ideology, insofar as ideology cannot be identified with just any variety of deceptive representation, but is rather any form of pseudo-rationality whose aim is to establish that what exists as a matter of fact exists necessarily. In this way a speculative critique pushes skeptical rationalism’s relinquishment of the Principle of Sufficient Reason to the point where it affirms that there is nothing beneath or beyond the manifest gratuitousness of the given – nothing but the limitless and lawless power of its destruction, emergence, or persistence. Such an absolutizing, though no longer absolutist, approach would be the minimal condition for every critique of ideology: to reject dogmatic metaphysics means to reject all real necessity, and a fortiori to reject the Principle of Sufficient Reason, as well as the ontological argument.

On the one hand, Deleuze’s criticism of Heidegger bears many similarities to that of Meillassoux when he redefines the Principle of Sufficient Reason in terms of contingent reason or with Nietzsche and Mallarmé: nothing rather than something such that whatever exists is a fiat in itself. His Principle of Sufficient Reason is the plastic, anarchic and nomadic principle of a superior or transcendental empiricism that teaches us a strange reason, that of the multiple, chaos and difference. On the other hand, however, the fact that Deleuze still speaks of reason should make us wary. For whereas Deleuze seeks to reunite chaotic being with systematic thought, Meillassoux revives the classical opposition between empiricism and rationalism precisely in order to attack the pre-Kantian, absolute validity of the Principle of Sufficient Reason. His argument implies a return to a non-correlationist version of Kantianism insofar as it relies on the gap between being and thought and thus upon a logic of representation that renders Deleuze’s Principle of Sufficient Reason unrecognizable, either through a concept of time, or through materialism.

Excessive Subjective Transversalities. Thought of the Day 33.0

In other words, object and subject, in their mutual difference and reciprocal trajectories, emerge and re-emerge together, from transformation. The everything that has already happened is emergence, registered after its fact in a subject-object relation. Where there was classically and in modernity an external opposition between object and subject, there is now a double distinction internal to the transformation. 1) After-the-fact: subject-object is to emergence as stoppage is to process. 2) In-fact: “objective” and “subjective” are inseparable, as matter of transformation to manner of transformation… (Brian Massumi Deleuze Guattari and Philosophy of Expression)


Massumi makes the case, after Simondon and Deleuze and Guattari, for a dynamic process of subjectivity in which subject and object are other but their relation is transformative to their terms. That relation is emergence. In Felix Guattari’s last book, Chaosmosis, he outlines the production of subjectivity as transversal. He states that subjectivity is

the ensemble of conditions which render possible the emergence of individual and/or collective instances as self-referential existential Territories, adjacent, or in a delimiting relation, to an alterity that is itself subjective.

This is the subject in excess (Simondon; Deleuze), overpowering the transcendental: the subject as constituted by all the forces that simultaneously impinge upon it and are in relation to it. Similarly, Simondon characterises this subjectivity as the transindividual, which refers to

a relation to others, which is not determined by a constituted subject position, but by pre-individuated potentials only experienced as affect (Adrian Mackenzie, Transductions: Bodies and Machines at Speed).

Equating this proposition to technologically enabled relations exerts a strong attraction on the experience of felt presence and interaction in distributed networks. Simondon’s principle of individuation, an ontogenetic process similar to Deleuze’s morphogenetic process, is committed to the guiding principle

of the conservation of being through becoming. This conservation is effected by means of the exchanges made between structure and process… (Simondon).

Or think of this as structure and organisation, which is autopoietic process; the virtual organisation of the affective interval. These leanings best situate ideas circulating through collectives and their multiple individuations. These approaches reflect one of Bergson’s lasting contributions to philosophical practice: his anti-dialectical methodology that debunks duality and the synthesised composite for a differentiated multiplicity that is also a unified (yet heterogeneous) continuity of duration. Multiplicities replace the transcendental concept of essences.

Duality’s Anti-Realism or Poisoning Ontological Realism: The Case of Vanishing Ontology. Note Quote.


If the intuitive quality of the external ontological object is diminished piece by piece during the evolutionary progress of physical theory (which must be acknowledged also in a hidden parameter framework), is there any core of the notion of an ontological object at all that can be trusted to be immune against scientific decomposition?

Quantum mechanics cannot answer this question. Contemporary physics is in a quite different position. The full dissolution of ontology is a characteristic process of particle physics whose unfolding starts with quantum mechanics and gains momentum in gauge field theory until, in string theory, the ontological object has simply vanished.

The concept to be considered is string duality, with the remarkable phenomenon of T-duality according to which a string wrapped around a small compact dimension can just as well be understood as a string that is not wrapped but moves freely along a large compact dimension. The phenomenon is rooted in the quantum principles but clearly transcends what one is used to in the quantum world. It is not a mere case of quantum indeterminacy concerning two states of the system. We rather face two theoretical formulations which are indistinguishable in principle, so that they cannot be interpreted as referring to two different states at all. Nevertheless the two formulations differ in characteristics which lie at the core of any meaningful ontology of an external world. They differ in the shape of space-time and they differ in the form and topological position of the elementary objects. The fact that those characteristics are reduced to technical parameters whose values depend on the choice of the theoretical formulation contradicts ontological scientific realism in the most straightforward way. If a situation can be described by two different sets of elementary objects, depending on the choice of the theoretical framework, how can it make sense to assert that these ontological objects actually exist in an external world?
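To make the indistinguishability concrete, here is the standard textbook spectrum for a closed bosonic string compactified on a circle of radius R, with momentum number n, winding number w, and oscillator levels N, Ñ; it is not quoted in the text above, so take it as an assumed illustration (the superstring analogue differs only in the constant term):

```latex
M^2 \;=\; \Big(\frac{n}{R}\Big)^{2} + \Big(\frac{w R}{\alpha'}\Big)^{2}
        + \frac{2}{\alpha'}\,\big(N + \tilde N - 2\big),
```

which is left exactly invariant by the exchange n ↔ w together with R → α′/R: the ‘wrapped string on a small circle’ and the ‘unwrapped string on a large circle’ carry identical spectra and hence cannot be told apart.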

The question gets even more virulent as T-duality by no means remains the only duality relation that surfaces in string theory. It turns out that the existence of dualities is one of string theory’s most characteristic features. They seem to pop up wherever one looks for them. Probably the most important role played by duality relations today is to connect all the different superstring theories. Before 1995 physicists knew five different types of superstring theory. Then it turned out that these five theories and a sixth, until then unknown, theory named ‘M-theory’ are interconnected by duality relations. Two types of duality are involved. Some theories can be transformed into each other through inversion of a compactification radius, which is the phenomenon we already know under the name of T-duality. Others can be transformed into each other by inversion of the string coupling constant. This duality is called S-duality. Then there is M-theory, where the string coupling constant is transformed into an additional eleventh dimension whose size is proportional to the coupling strength of the dual theory. The described web of dualities connects theories whose elementary objects have different symmetry structure and different dimensionality. M-theory even has a different number of spatial dimensions than its co-theories. Duality nevertheless implies that M-theory and the five possible superstring theories only represent different formulations of one single actual theory. This statement constitutes the basis for string theory’s uniqueness claims and shows the pivotal role played by the duality principle.

An evaluation of the philosophical implications of duality in modern string theory must first acknowledge that the problems of uniquely identifying the ontological basis of a scientific theory are as old as the concept of invisible scientific objects itself. Complex theories tend to allow the insertion of ontology at more than one level of their structure. It is not a priori clear in classical electromagnetism whether the field or the potential should be understood as the fundamental physical object, and one may wonder similarly in quantum field theory whether that theory’s basic object is the particle or the field. Questions of this type clearly pose a serious philosophical problem. Some philosophers, like Quine, have drawn the conclusion to deny any objective basis for the imputation of ontologies. Philosophers with a stronger affinity for realism, however, often stress that there do exist arguments which are able to select a preferable ontological set after all. It might also be suggested that ontological alternatives at different levels of the theoretical structure do not pose a threat to realism but should be interpreted merely as different parameterisations of ontological reality. The problem is created at a philosophical level by imputing an ontology to a physical theory whose structure neither depends on nor predetermines uniquely that imputation. The physicist puts one compact theoretical structure into space-time and the philosopher struggles with the question at which level ontological claims should be inserted.

The implications of string duality have an entirely different quality. String duality really posits different ‘parallel’ empirically indistinguishable versions of structure in spacetime which are based on different sets of elementary objects. This statement is placed at the physical level, independently of any philosophical interpretation. Thus it transfers the problem of the lack of ontological uniqueness from a philosophical to a physical level and makes it much more difficult to cure. If theories with different sets of elementary objects give the same physical world (i.e., show the same pattern of observables), the elementary object cannot be seen as the unique foundation of the physical world any more. There seems to be no way to avoid this conclusion. There exists an additional aspect of duality that underlines its anti-ontological character. Duality does not just spell destruction for the notion of the ontological scientific object but in a sense offers a replacement as well.

Do there remain any loopholes in duality’s anti-realist implications which could be used by the die-hard realist? A natural objection to the asserted crucial philosophical importance of duality can be based on the fact that duality was not invented in the context of string theory. It has been known since the time of P. A. M. Dirac that quantum electrodynamics with magnetic monopoles would be dual to a theory with inverted coupling constant and exchanged electric and magnetic charges. The question arises: if duality is poison to ontological realism, why did it not have its effect already at the level of quantum electrodynamics? The answer gives a nice survey of possible measures to save ontological realism. As it will turn out, they all fail in string theory.

In the case of quantum-electrodynamics the realist has several arguments to counter the duality threat. First, duality looks more like an accidental oddity that appears in an unrealistic scenario than like a characteristic feature of the world. No one has observed magnetic monopoles, which renders the problem hypothetical. And even if there were magnetic monopoles, an embedding of electromagnetism into a fuller description of the natural forces would destroy the dual structure anyway.

In string theory the situation is very different. Duality is no ‘lucky strike’ any more, which just by chance arises in a certain scenario that is not the real one anyway. As we have seen, it rather represents a core feature of the emerging theoretical structure and cannot be ignored. A second option open to the realist at the level of quantum electrodynamics is to shift the ontological posit. Some philosophers of quantum physics argue that the natural elementary object of quantum field theory is the quantum field, which represents something like the potentiality to produce elementary particles. One quantum field covers the full sum over all variations of particle exchange which have to be accounted for in a quantum process. The philosopher who posits the quantum field to be the fundamental real object discovered by quantum field theory understands the single elementary particles as mere mathematical entities introduced to calculate the behaviour of the quantum field. Dual theories from his perspective can be taken as different technical procedures to calculate the behaviour of the univocal ontological object, the electromagnetic quantum field. The phenomenon of duality then does not appear as a threat to the ontological concept per se but merely as an indication in favour of an ontologisation of the field instead of the particle.

The field-theoretical approach of interpreting the quantum field as the ontological object has no counterpart in string theory. String theory only exists as a perturbative theory. There seems to be no way to introduce anything like a quantum field that would cover the full expansion of string exchanges. In the light of duality this lack of a unique ontological object arguably appears rather natural. The reason is related to another point that makes string dualities more dramatic than their field-theoretical predecessor. String theory includes gravitation. Therefore the object (the string geometry) and space-time are not independent. Actually it turns out that the string geometry in a way carries all the information about space-time as well. This dependence of space-time on string geometry makes it difficult even to imagine how it should be possible to put into this very spacetime some kind of overall field whose coverage of all string realisations actually implies coverage of variations of spacetime itself. The duality context makes the paradoxical quality of such an attempt more transparent. If two dual theories with different radii of a compactified dimension are to be covered by the same ontological object, in analogy to the quantum field in field theory, this object obviously cannot live in space and time. If it did, it would have to choose one of the two spacetime versions endorsed by the dual theories, thereby discriminating against the other one. Such a theory, however, should not be expected to be a theory of objects in spacetime and therefore does not raise any hopes of redeeming the external ontological perspective.

A third strategy to save ontological realism is based on the following argument: In quantum electrodynamics the difference between the dual theories boils down to a mere replacement of a weak coupling constant which allows perturbative calculation by a strong one which does not. Therefore the choice is open between a natural formulation and a clumsy untreatable one which maybe should just be discarded as an artificial construction.

Today string theory cannot tell whether its final solution will put its parameters comfortably into the low-coupling-constant-and-large-compact-dimension regime of one of the five superstring theories or of M-theory. This might be the case, but it might as well happen that the solution lies in a region of parameter space where no theory clearly stands out in this sense. However, even if there were one preferred theory, simply discarding the others could not save realism as it does in the case of field theory. First, the argument of natural choice is not really applicable to T-duality. A small compactification radius does not render a theory intractable the way a large coupling constant does. The choice of the dual version with a large radius thus looks more like a convention than anything else. Second, the choice of both compactification radii and string coupling constants in string theory is the consequence of a dynamical process that has to be calculated itself. Calculation thus comes before the selection of a certain point in parameter space and consequently also before a possible selection of the ontological objects. The ontological objects therefore, even if one wanted to hang on to their meaningfulness in the final scenario, would appear as a mere product of prior dynamics and not as a priori actors in the game.

Summing up, the phenomenon of duality is admittedly a bit irritating for the ontological realist in field theory, but he can live with it. In string theory, however, the field-theoretical strategies to save realism all fail. The position assumed by the duality principle in string theory clearly renders obsolete the traditional realist understanding of scientific objects as smaller cousins of visible ones. The theoretical posits of string theory get their meaning only relative to their theoretical framework and must be understood as mathematical concepts without any claim to ‘corporal’ existence in an external world. The world of string theory has cut all ties with classical theories about physical bodies. To stick to ontological realism in this altered context would be inadequate to the elementary changes which characterize the new situation. The demise of ontology in string theory opens new perspectives on positions where the stress is on the discontinuity of ontological claims throughout the history of scientific theories.

Quantum Geometrodynamics and Emergence of Time in Quantum Gravity


It is clear that, like quantum geometrodynamics, the functional integral approach makes fundamental use of a manifold. This means not just that it uses mathematical continua, such as the real numbers (to represent the values of coordinates, or physical quantities); it also postulates a 4-dimensional manifold M as an ‘arena for physical events’. However, its treatment of this manifold is very different from the treatment of spacetime in general relativity, in so far as it has a Euclidean, not a Lorentzian, metric (which, apart from anything else, makes the use of the word ‘event’ distinctly problematic). Also, we may wish to make a summation over different such manifolds, and it is in general necessary to consider complex metrics in the functional integral (so that the ‘distance squared’ between two spacetime points can be a complex number), whereas classical general relativity uses only real metrics.

Thus one might think that the manifold (or manifolds!) does not (do not) deserve the name ‘spacetime’. But what is in a name?! Let us in any case now ask how spacetime as understood in present-day physics could emerge from the above use of Riemannian manifolds M, perhaps taken together with other theoretical structures.

In particular: if we choose to specify the boundary conditions using the no-boundary proposal, this means that we take only those saddle-points of the action as contributors (to the semi-classical approximation of the wave function) that correspond to solutions of the Einstein field equations on a compact manifold M with a single boundary Σ and that induce the given values h and φ0 on Σ.
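For orientation, the object being approximated here is the no-boundary functional integral; the text does not display it, so the following schematic Hartle-Hawking form is an assumption about the notation rather than a quotation:

```latex
\Psi[h, \varphi_0, \Sigma] \;=\; \int_{\mathcal{C}} \mathcal{D}g\,\mathcal{D}\varphi\;
      e^{-I[g,\varphi]/\hbar},
```

with the integral running over (complex) Euclidean four-metrics g and matter fields φ on compact manifolds whose only boundary is Σ, inducing the data h and φ0 there; C is the contour of integration discussed next.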

In this way, the question of whether the wave function defined by the functional integral is well approximated by this semi-classical approximation (and thus whether it predicts classical spacetime) turns out to be a question of choosing a contour of integration C in the space of complex spacetime metrics. For the approximation to be valid, we must be able to distort the contour C into a steepest-descents contour that passes through one or more of these stationary points and elsewhere follows a contour along which |exp(−I)| decreases as rapidly as possible away from these stationary points. The wave function is then given by:

Ψ[h, φ0, Σ] ≈ ∑p exp(−Ip/ħ)

where Ip are the stationary points of the action through which the contour passes, corresponding to classical solutions of the field equations satisfying the given boundary conditions. Although in general the integral defining the wave function will have many saddle-points, typically there is only a small number of saddle-points making the dominant contribution to the path integral.

For generic boundary conditions, no real Euclidean solutions to the classical Einstein field equations exist. Instead we have complex classical solutions, with a complex action. This accords with the account of the emergence of time via the semiclassical limit in quantum geometrodynamics.

On the Emergence of Time in Quantum Gravity

Autopoiesis Revisited


Autopoiesis principally dealt with determining the essence of living beings, and so began by drawing a distinction between organization and structure. This distinction was highlighted with organization subtending the set of all possible relations of the autopoietic processes of an organism, and structure as a synchronic snapshot from the organizational set that was active at any given instant. The distinction was tension-ridden, for the possibility of producing a novel functional structure was inhibited, especially when the system was perturbed vis-à-vis the environment that housed it. Thus within the realm of autopoiesis, diachronic emergence was conceivable only as a natural drift. John Protevi throws light on this perspective with his insistence on synchronic emergence as autonomous, and since autonomy is interest-directed, the question of autopoiesis in the social realm is ruled out. Varela’s rejection of extending autopoiesis to the social realm is best understood as a move conceived more to go beyond autopoiesis than beyond neocybernetics as concerned with the organizational closure of informational systems, lest the risk of slipping into polarization loom large; the aggrandizing threat of fascistic and authoritarian tendencies in Varela was indeed ill-conceived. This polarity, which Varela later in his intellectual trajectory considered as comprising fragments that constituted a collectively constructed whole, was a launch pad for Luhmann to enter the fray and apply autopoiesis to social systems. Autopoiesis forms the central notion of his self-referential systems, the latter being characterized by acknowledging their reference to themselves in every operation. An autopoietic system, while organizationally closed, nevertheless references an environment, background or context. This is an indication that pure auto-referentiality is generally lacking, replaced instead by a broader process of self-referentiality which comprises hetero-referentiality with a reference to an environment. This process is watchful of the distinction between itself and the environment, lest it fail to take off. As Luhmann says, if an autopoietic system did not have an environment, it would be forced to invent one as the horizon of its auto-referentiality.

A system distinguishes itself from the environment by boundaries, where the latter is a zone of high complexity and the former one of reduced complexity. Luhmann’s systems, too, are interest-driven: communication selects from the available information as efficiently as it can. Luhmann likens the operation of autopoiesis to a program making a series of logical distinctions. Here, Luhmann refers to the British mathematician G. Spencer Brown’s logic of distinctions, which Maturana and Varela had identified as a model for the functioning of any cognitive process. The supreme criterion guiding the “self-creation” of any given system is a defining binary code. This binary code is taken by Luhmann to problematize the auto-referential system’s continuous confrontation with the dilemma of disintegration/continuation. Importantly, Luhmann treats systems on an ontological level, that is, systems exist, and he attempts to change this paradigm through the differential relations between the system and the environment.

Philosophically, complexity and self-organizational principles shift trends toward interdisciplinarity. To take the case of holism: emergentism within complexity abhors study through reductionism. Scientifically, this notion of holism failed to stamp its authority due to a lack of any solid scientificity, and the hubristic Newtonian paradigm of reductionism as the panacea for all ills came to stay. A rapprochement was not possible until the Austrian biologist Ludwig von Bertalanffy shocked the prevalent world view with his thesis on the openness of living systems, which interact with surrounding systems for their continual survival. This idea posited a system embedded within an environment, separated by a boundary that lent the system its own identity. With input from the environment and output from the system, one can conceive of a plurality of systems interacting with one another to form a network which, if functionally coherent, is a system in its own right, or a supersystem, with the initial systems as its subsystems. This strips the subsystems of any independence, but they remain determinable within the network via relations and/or mappings. This in general is termed constraint, which abhors independence from relations between the coupled systems (supersystem/subsystem). If the coupling between the systems is tight enough, an organization with its own identity and autonomy results. Cybernetics deals precisely with such a formulation, where the autonomy in question is maintained through goal-directed, seemingly intelligent action, in line with the thoughts of Varela and Luhmann. This is significant because perturbations originating in the environment are actively compensated for by the system in order to maintain its preferred state of affairs, with a greater amount of perturbation implying greater compensatory action on the part of the system. One consequence of such a systemic perspective is that it gets rid of the Cartesian mind-matter split by thinking of it as nothing more than a special kind of relation. Such is the efficacy of autopoiesis in negotiating the dilemma surrounding the metaphysical question concerning the origin of order.

Emergentic Philosophy or Defining Complexity


If the potential of emergence is not pregnant with what emerges from it, then emergence becomes just gobbledygook, a generally unintelligible mix of abstraction and obscurity. What is this differentiation all about? The origin of differentiation is to be located in what has already been actualized. Thus, potential is not only abstract, but relative: abstract, since potential could come to mean a host of things other than what it is meant for, and relative, since it is dependent on the intertwinings within which it could unfold. Potentiality is creative for philosophy, through an expansive notion of unity via assemblages of multiple singularities, helping dislodge anthropocentric worldviews that insist on a rationale of the world as a solid and stable structure. A way out is to think in terms of liquid structures, where the power to self-organize, untouched by any static human control, allows for an existence at the edge of creative and flowing chaos. Such a position is tangible in history as a confluence of infinite variations, and rooted in a revived form of materialism. Emergence is a diachronic construction of functional structures in complex systems that attain a synchronic coherence of systemic behavior while arresting the behavior of the individual components – a point with crucial ramifications for burning questions in the philosophy of science, especially those concerning reductionism. Complexity investigates emergent properties, certain regularities of behavior that somehow transcend the ingredients that make them up. Complexity argues against reductionism, against reducing the whole to the parts. And in doing so, it transforms the scientific understanding of far-from-equilibrium structures, of irreversible time and of non-Euclidean spaces.