Epistemological Constraints to Finitism. Thought of the Day 68.0


Hilbert’s substantial philosophical claims about the finitary standpoint are difficult to flesh out. For instance, Hilbert appeals to the role of Kantian intuition for our apprehension of finitary objects (they are given in the faculty of representation). Supposing one accepts this line of epistemic justification in principle, it is plausible that the simplest examples of finitary objects and propositions, and perhaps even simple cases of finitary operations such as concatenations of numerals, can be given a satisfactory account.

Of crucial importance to both an understanding of finitism and of Hilbert’s proof theory is the question of what operations and what principles of proof should be allowed from the finitist standpoint. That a general answer is necessary is clear from the demands of Hilbert’s proof theory, i.e., it is not to be expected that given a formal system of mathematics (or even a single sequence of formulas) one can “see” that it is consistent (or that it cannot be a genuine derivation of an inconsistency) the way we can see, e.g., that || + ||| = ||| + ||. What is required for a consistency proof is an operation which, given a formal derivation, transforms such a derivation into one of a special form, plus proofs that the operation in fact succeeds in every case and that proofs of the special kind cannot be proofs of an inconsistency.

Hilbert said that intuitive thought “includes recursion and intuitive induction for finite existing totalities.” All of this, in its application to the domain of numbers, can be formalized in a system known as primitive recursive arithmetic (PRA), which allows definitions of functions by primitive recursion and induction on quantifier-free formulas. However, Hilbert never claimed that only primitive recursive operations count as finitary. Although Hilbert and his collaborators used methods which go beyond the primitive recursive and accepted them as finitary, it is still unclear whether they (a) realized that these methods were not primitive recursive and (b) whether they would still have accepted them as finitary if they had. The conceptual issue is which operations should be considered finitary. Since Hilbert was less than completely clear on what the finitary standpoint consists in, there is some leeway in setting up the constraints, epistemological and otherwise, that an analysis of finitist operation and proof must fulfill. Hilbert characterized the objects of finitary number theory as “intuitively given,” as “surveyable in all their parts,” and said that their basic properties must “exist intuitively” for us. This characterization of finitism as primarily a matter of intuition and intuitive knowledge has been taken to imply that what can count as finitary on this understanding is no more than those arithmetical operations that can be defined from addition and multiplication using bounded recursion.
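
To see what definition by primitive recursion amounts to in practice, here is a minimal sketch in Python (an illustration of the standard PRA-style recursion equations, not anything in Hilbert’s own texts); the function names and the use of Python integers as stand-ins for stroke numerals are my own choices.

```python
# Illustrative sketch: addition and multiplication defined by primitive
# recursion from zero and successor, in the style of PRA. Numerals are
# plain non-negative Python ints standing in for strings of strokes.

def succ(n):
    return n + 1

def add(m, n):
    # add(m, 0) = m ; add(m, succ(n)) = succ(add(m, n))
    return m if n == 0 else succ(add(m, n - 1))

def mul(m, n):
    # mul(m, 0) = 0 ; mul(m, succ(n)) = add(mul(m, n), m)
    return 0 if n == 0 else add(mul(m, n - 1), m)

# The finitist can "see" particular equations such as || + ||| = ||| + ||:
assert add(2, 3) == add(3, 2) == 5
assert mul(2, 3) == 6
```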

Rejecting the aspect of representability in intuition as the hallmark of the finitary, one could instead take finitary reasoning to be “a minimal kind of reasoning supposed by all non-trivial mathematical reasoning about numbers” and analyze finitary operations and methods of proof as those that are implicit in the very notion of number as the form of a finite sequence. This analysis of finitism is supported by Hilbert’s contention that finitary reasoning is a precondition for logical and mathematical, indeed, any scientific thinking.

Weyl and Automorphism of Nature. Drunken Risibility.


In classical geometry and physics, physical automorphisms could be based on the material operations used for defining the elementary equivalence concept of congruence (“equality and similitude”). But Weyl started even more generally, with Leibniz’s explanation of the similarity of two objects: two things are similar if they are indiscernible when each is considered by itself. Here, as elsewhere, Weyl endorsed this Leibnizian argument from the point of view of “modern physics”, while adding that for Leibniz this spoke in favour of the unsubstantiality and phenomenality of space and time. On the other hand, for “real substances”, the Leibnizian monads, indiscernibility implied identity. In this way Weyl indicated, prior to any more technical consideration, that similarity in the Leibnizian sense was the same as objective equality. He did not enter deeper into the metaphysical discussion but insisted that the issue “is of philosophical significance far beyond its purely geometric aspect”.

Weyl did not claim that this idea solves the epistemological problem of objectivity once and for all, but at least it offers an adequate mathematical instrument for its formulation. He illustrated the idea in a first step by explaining the automorphisms of Euclidean geometry as the structure-preserving bijective mappings of the point set underlying a structure satisfying the axioms of “Hilbert’s classical book on the Foundations of Geometry”. He concluded that for Euclidean geometry these are the similarities, not the congruences as one might expect at first glance. In the mathematical sense, we then “come to interpret objectivity as the invariance under the group of automorphisms”. But Weyl warned against identifying mathematical objectivity with that of natural science, because once we deal with real space “neither the axioms nor the basic relations are given”. As the latter are extremely difficult to discern, Weyl proposed to turn the tables and to take the group Γ of automorphisms, rather than the ‘basic relations’ and the corresponding relata, as the epistemic starting point.

Hence we come much nearer to the actual state of affairs if we start with the group Γ of automorphisms and refrain from making the artificial logical distinction between basic and derived relations. Once the group is known, we know what it means to say of a relation that it is objective, namely invariant with respect to Γ.

By such a well-chosen constitutive stipulation it becomes clear what objective statements are, although this can be achieved only at the price that “…we start, as Dante starts in his Divina Comedia, in mezzo del camin”. A phrase characteristic of Weyl’s later view follows:

It is the common fate of man and his science that we do not begin at the beginning; we find ourselves somewhere on a road the origin and end of which are shrouded in fog.

Weyl’s juxtaposition of the mathematical and the physical concept of objectivity is worth reflecting upon. The mathematical objectivity considered by him is relatively easy to obtain by combining the axiomatic characterization of a mathematical theory with the epistemic postulate of invariance under a group of automorphisms. Both are constituted in a series of acts characterized by Weyl as symbolic construction, which is free in several regards. For example, the group of automorphisms of Euclidean geometry may be expanded by “the mathematician” in rather wide ways (affine, projective, or even “any group of transformations”). In each case a specific realm of mathematical objectivity is constituted. With the example of the automorphism group Γ of (plane) Euclidean geometry in mind, Weyl explained how, through the use of Cartesian coordinates, the automorphisms of Euclidean geometry can be represented by linear transformations “in terms of reproducible numerical symbols”.
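
As a toy illustration of “objectivity as invariance under the group of automorphisms” (my own sketch, not Weyl’s; the particular relation tested is chosen only for concreteness), one can check that the ratio of distances between points of the plane is preserved by every similarity transformation, whereas distance itself is not:

```python
import math

# A similarity of the plane: p -> s * R(theta) p + t (uniform scaling s > 0,
# rotation by theta, translation t). These are the automorphisms of Euclidean
# geometry in Weyl's sense.
def similarity(s, theta, t):
    def f(p):
        x, y = p
        xr = math.cos(theta) * x - math.sin(theta) * y
        yr = math.sin(theta) * x + math.cos(theta) * y
        return (s * xr + t[0], s * yr + t[1])
    return f

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b, c = (0.0, 0.0), (1.0, 2.0), (3.0, -1.0)
g = similarity(s=2.5, theta=0.7, t=(4.0, -3.0))

# The ratio of distances is invariant under g (an "objective" relation) ...
assert math.isclose(dist(a, b) / dist(a, c),
                    dist(g(a), g(b)) / dist(g(a), g(c)))
# ... while distance itself is not (it is rescaled by s).
assert not math.isclose(dist(a, b), dist(g(a), g(b)))
```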

For natural science the situation is quite different; here the freedom of the constitutive act is severely restricted. Weyl described the constraint on the choice of Γ at the outset in very general terms: the physicist will question Nature to reveal to him her true group of automorphisms. Contrary to what a philosopher might expect, Weyl did not mention the subtle influences that theoretical evaluations of empirical insights exert on the constitutive choice of the group of automorphisms for a physical theory. He did not even restrict the consideration to the range of a physical theory but aimed at Nature as a whole. Still, drawing on his own view of the radical changes in the fundamental conceptions of theoretical physics, Weyl hoped for an insight into the true group of automorphisms of Nature without any further specifications.

Nomological Possibility and Necessity


An event E is nomologically possible in history h at time t if the initial segment of that history up to t admits at least one continuation in Ω that lies in E; and E is nomologically necessary in h at t if every continuation of the history’s initial segment up to t lies in E.

More formally, we say that one history, h’, is accessible from another, h, at time t if the initial segments of h and h’ up to time t coincide, i.e., ht = h’t. We then write h’Rth. The binary relation Rt on possible histories is in fact an equivalence relation (reflexive, symmetric, and transitive). Now, an event E ⊆ Ω is nomologically possible in history h at time t if some history h’ in Ω that is accessible from h at t is contained in E. Similarly, an event E ⊆ Ω is nomologically necessary in history h at time t if every history h’ in Ω that is accessible from h at t is contained in E.

In this way, we can define two modal operators, ♦t and □t, to express possibility and necessity at time t. We define each of them as a mapping from events to events. For any event E ⊆ Ω,

♦t E = {h ∈ Ω : for some h’ ∈ Ω with h’Rth, we have h’ ∈ E},

□t E = {h ∈ Ω : for all h’ ∈ Ω with h’Rth, we have h’ ∈ E}.

So, ♦t E is the set of all histories in which E is possible at time t, and □t E is the set of all histories in which E is necessary at time t. Accordingly, we say that “♦t E” holds in history h if h is an element of ♦t E, and “□t E” holds in h if h is an element of □t E. As one would expect, the two modal operators are duals of each other: for any event E ⊆ Ω, we have □t E = ~♦t ~E and ♦t E = ~□t ~E.

Although we have here defined nomological possibility and necessity, we can analogously define logical possibility and necessity: we must simply replace every occurrence of the set Ω of nomologically possible histories in our definitions with the set H of logically possible histories. Further, by defining the operators ♦t and □t as functions from events to events, we have adopted a semantic definition of these modal notions. However, we could also define them syntactically, by introducing an explicit modal logic. For each point in time t, the logic corresponding to the operators ♦t and □t would then be an instance of a standard S5 modal logic.
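
The semantic definitions are easy to operationalise. The following sketch (my own toy illustration, assuming a small finite set Ω of histories represented as tuples of states) computes ♦t E and □t E directly from the accessibility relation Rt and checks the duality and monotonicity properties stated here:

```python
from itertools import product

# Toy system: histories are tuples of states over times 0..2,
# Omega is the set of nomologically possible histories (here: all of them).
T = range(3)
STATES = (0, 1)
OMEGA = set(product(STATES, repeat=len(T)))

def accessible_at(h, h2, t):
    # h2 R_t h  iff  the initial segments up to (and including) t coincide
    return h[:t + 1] == h2[:t + 1]

def poss(E, t):
    # diamond_t E: histories some accessible continuation of which lies in E
    return {h for h in OMEGA
            if any(h2 in E for h2 in OMEGA if accessible_at(h, h2, t))}

def nec(E, t):
    # box_t E: histories all accessible continuations of which lie in E
    return {h for h in OMEGA
            if all(h2 in E for h2 in OMEGA if accessible_at(h, h2, t))}

# Event E: "the state at time 2 equals 1".
E = {h for h in OMEGA if h[2] == 1}

# Duality: box_t E is the complement of diamond_t applied to the complement of E.
for t in T:
    assert nec(E, t) == OMEGA - poss(OMEGA - E, t)

# Possibility becomes more demanding, necessity less demanding, over time.
assert poss(E, 2) <= poss(E, 1) <= poss(E, 0)
assert nec(E, 0) <= nec(E, 1) <= nec(E, 2)
```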

The analysis shows how nomological possibility and necessity depend on the dynamics of the system. In particular, as time progresses, the notion of possibility becomes more demanding: fewer events remain possible at each time. And the notion of necessity becomes less demanding: more events become necessary at each time, for instance due to having been “settled” in the past. Formally, for any t and t’ in T with t < t’ and any event E ⊆ Ω,

if ♦t’ E then ♦t E,

if □t E then □t’ E.

Furthermore, in a deterministic system, for every event E and any time t, we have ♦t E = □t E. In other words, an event is possible in any history h at time t if and only if it is necessary in h at t. In an indeterministic system, by contrast, necessity and possibility come apart.

Let us say that one history, h’, is accessible from another, h, relative to a set T’ of time points, if the restrictions of h and h’ to T’ coincide, i.e., h’T’ = hT’. We then write h’RT’h. Accessibility at time t is the special case where T’ is the set of points in time up to time t. We can define nomological possibility and necessity relative to T’ as follows. For any event E ⊆ Ω,

♦T’ E = {h ∈ Ω : for some h’ ∈ Ω with h’RT’h, we have h’ ∈ E},

□T’ E = {h ∈ Ω : for all h’ ∈ Ω with h’RT’h, we have h’ ∈ E}.

Although these modal notions are much less familiar than the standard ones (possibility and necessity at time t), they are useful for some purposes. In particular, they allow us to express the fact that the states of a system during a particular period of time, T’ ⊆ T, render some events E possible or necessary.

Finally, our definitions of possibility and necessity relative to some general subset T’ of T also allow us to define completely “atemporal” notions of possibility and necessity. If we take T’ to be the empty set, then the accessibility relation RT’ becomes the universal relation, under which every history is related to every other. An event E is possible in this atemporal sense (i.e., ♦E) iff E is a non-empty subset of Ω, and it is necessary in this atemporal sense (i.e., □E) iff E coincides with all of Ω. These notions might be viewed as possibility and necessity from the perspective of some observer who has no temporal or historical location within the system and looks at it from the outside.
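
The same toy construction extends to the operators relative to an arbitrary set T’ of time points; the sketch below (again my own illustration) shows how taking T’ to be the empty set collapses them into the atemporal notions just described:

```python
from itertools import product

# Same toy Omega as in the earlier sketch: histories over times 0..2.
STATES = (0, 1)
OMEGA = set(product(STATES, repeat=3))

def accessible_rel(h, h2, t_prime):
    # h2 R_{T'} h  iff  h and h2 agree on every time point in T'
    return all(h[t] == h2[t] for t in t_prime)

def poss_rel(E, t_prime):
    return {h for h in OMEGA
            if any(h2 in E for h2 in OMEGA if accessible_rel(h, h2, t_prime))}

def nec_rel(E, t_prime):
    return {h for h in OMEGA
            if all(h2 in E for h2 in OMEGA if accessible_rel(h, h2, t_prime))}

E = {h for h in OMEGA if h[2] == 1}

# T' = {} makes R_{T'} the universal relation, so possibility and necessity
# collapse into the atemporal notions: non-emptiness and coincidence with Omega.
assert poss_rel(E, set()) == OMEGA        # E is non-empty, so atemporally possible
assert nec_rel(E, set()) == set()         # E is a proper subset of Omega
assert nec_rel(OMEGA, set()) == OMEGA     # only Omega itself is atemporally necessary
```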

Meillassoux’s Principle of Unreason Towards an Intuition of the Absolute In-itself. Note Quote.


The principle of reason such as it appears in philosophy is a principle of contingent reason: not only because philosophical reason concerns difference instead of identity, but also because the Principle of Sufficient Reason can no longer be understood in terms of absolute necessity. In other words, Deleuze disconnects the Principle of Sufficient Reason from the ontotheological tradition no less than from its Heideggerian deconstruction. What remains then of Meillassoux’s criticism in After Finitude: An Essay on the Necessity of Contingency that Deleuze, no less than Hegel, hypostatizes or absolutizes the correlation between thinking and being and thus brings back a vitalist version of speculative idealism through the back door?

At stake in Meillassoux’s criticism of the Principle of Sufficient Reason is a double problem: the conditions of possibility of thinking and knowing an absolute and, subsequently, the conditions of possibility of rational ideology critique. The first problem is primarily epistemological: how can philosophy justify scientific knowledge claims about a reality that is anterior to our relation to it and that is hence not given in the transcendental object of possible experience (the arche-fossil)? This is a problem for all post-Kantian epistemologies that hold that we can only ever know the correlate of being and thought. Instead of confronting this weak correlationist position head on, however, Meillassoux seeks a solution in the even stronger correlationist position that denies not only the knowability of the in-itself, but also its very thinkability or imaginability. Simplified: if strong correlationists such as Heidegger or Wittgenstein insist on the historicity or facticity (non-necessity) of the correlation of reason and ground in order to demonstrate the impossibility of thought’s self-absolutization, then the very force of their argument, if it is not to contradict itself, implies more than they are willing to accept: the necessity of the contingency of the transcendental structure of the for-itself. As a consequence, correlationism is incapable of demonstrating itself to be necessary. This is what Meillassoux calls the principle of factiality or the principle of unreason. It says that it is possible to think two things that exist independently of thought’s relation to them: contingency as such and the principle of non-contradiction. The principle of unreason thus enables the intellectual intuition of something that is absolutely in itself, namely the absolute impossibility of a necessary being. And this in turn implies the real possibility of the completely random and unpredictable transformation of all things from one moment to the next. Logically speaking, the absolute is thus a hyperchaos or something akin to Time in which nothing is impossible, except necessary beings or necessary temporal experiences such as the laws of physics.

There is, moreover, nothing mysterious about this chaos. Contingency, and Meillassoux consistently refers to this as Hume’s discovery, is a purely logical and rational necessity, since without the principle of non-contradiction not even the principle of factiality would be absolute. It is thus a rational necessity that puts the Principle of Sufficient Reason out of action, since it would be irrational to claim that it is a real necessity: everything that is, is devoid of any reason to be as it is. This leads Meillassoux to the surprising conclusion that “[t]he Principle of Sufficient Reason is thus another name for the irrational… The refusal of the Principle of Sufficient Reason is not the refusal of reason, but the discovery of the power of chaos harboured by its fundamental principle (non-contradiction)” (Meillassoux 2007: 61). The principle of factiality thus legitimates or founds the rationalist requirement that reality be perfectly amenable to conceptual comprehension at the same time that it opens up “[a] world emancipated from the Principle of Sufficient Reason” (Meillassoux) but founded only on that of non-contradiction.

This emancipation brings us to the practical problem Meillassoux tries to solve, namely the possibility of ideology critique. Correlationism is essentially a discourse on the limits of thought for which the deabsolutization of the Principle of Sufficient Reason marks reason’s discovery of its own essential inability to uncover an absolute. Thus if the Galilean-Copernican revolution of modern science meant the paradoxical unveiling of thought’s capacity to think what there is regardless of whether thought exists or not, then Kant’s correlationist version of the Copernican revolution was in fact a Ptolemaic counterrevolution. Since Kant, and even more since Heidegger, philosophy has been averse precisely to the speculative import of modern science as a formal, mathematical knowledge of nature. Its unintended consequence is therefore that questions of ultimate reasons have been dislocated from the domain of metaphysics into that of non-rational, fideist discourse. Philosophy has thus made the contemporary end of metaphysics complicit with the religious belief in the Principle of Sufficient Reason beyond its very thinkability. Whence Meillassoux’s counter-intuitive conclusion that the refusal of the Principle of Sufficient Reason furnishes the minimal condition for every critique of ideology, insofar as ideology cannot be identified with just any variety of deceptive representation, but is rather any form of pseudo-rationality whose aim is to establish that what exists as a matter of fact exists necessarily. In this way a speculative critique pushes skeptical rationalism’s relinquishment of the Principle of Sufficient Reason to the point where it affirms that there is nothing beneath or beyond the manifest gratuitousness of the given: nothing but the limitless and lawless power of its destruction, emergence, or persistence. Such an absolutizing, though no longer absolutist, approach would be the minimal condition for every critique of ideology: to reject dogmatic metaphysics means to reject all real necessity, and a fortiori to reject the Principle of Sufficient Reason, as well as the ontological argument.

On the one hand, Deleuze’s criticism of Heidegger bears many similarities to that of Meillassoux when he redefines the Principle of Sufficient Reason in terms of contingent reason or with Nietzsche and Mallarmé: nothing rather than something such that whatever exists is a fiat in itself. His Principle of Sufficient Reason is the plastic, anarchic and nomadic principle of a superior or transcendental empiricism that teaches us a strange reason, that of the multiple, chaos and difference. On the other hand, however, the fact that Deleuze still speaks of reason should make us wary. For whereas Deleuze seeks to reunite chaotic being with systematic thought, Meillassoux revives the classical opposition between empiricism and rationalism precisely in order to attack the pre-Kantian, absolute validity of the Principle of Sufficient Reason. His argument implies a return to a non-correlationist version of Kantianism insofar as it relies on the gap between being and thought and thus upon a logic of representation that renders Deleuze’s Principle of Sufficient Reason unrecognizable, either through a concept of time, or through materialism.

Financial Entanglement and Complexity Theory. An Adumbration on Financial Crisis.


The complex system approach in finance could be described through the concept of entanglement. The concept of entanglement bears the same features as the definition of a complex system given by a group of physicists working in the field of finance (Stanley et al.). As they defined it, in a complex system everything depends upon everything else. Just so, the notion of entanglement is a statement acknowledging the interdependence of all the counterparties in financial markets, including financial and non-financial corporations, the government and the central bank. How can entanglement be identified empirically? Stanley et al. formulated the process of scientific study in finance as a search for patterns. Such a search, going on under the auspices of “econophysics”, could exemplify a thorough analysis of a complex and unstructured assemblage of actual data, finalized in the discovery and experimental validation of an appropriate pattern. On the other side of the spectrum, some patterns underlying the actual processes might be discovered by synthesizing a vast amount of historical and anecdotal information and applying appropriate reasoning and logical deliberation. The Austrian School of Economic Thought, which in its extreme form rejects the application of any formalized systems, or modeling of any kind, could be viewed as an example. A logical question follows from this comparison: does there exist any intermediate way of searching for regular patterns in finance and economics?

Importantly, patterns could be discovered by developing rather simple models of money and debt interrelationships. Debt cycles have been studied extensively by many schools of economic thought (Akerlof, George A. and Shiller, Robert J. – Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism). The modern financial system worked by spreading risk, promoting economic efficiency and providing cheap capital. It had been formed over the years as bull markets in shares and bonds originated in the early 1990s. These markets were propelled by an abundance of money, falling interest rates and new information technology. Financial markets, by combining debt and derivatives, could originate and distribute huge quantities of risky structured products and sell them to different investors. Meanwhile, financial-sector debt, only a tenth of the size of non-financial-sector debt in 1980, became half as big by the beginning of the credit crunch in 2007. As liquidity grew, banks could buy more assets, borrow more against them, and enjoy their rising value. By 2007 financial services were making 40% of America’s corporate profits while employing only 5% of its private-sector workers. Thanks to cheap money, banks could take on more debt and, by designing complex structured products, make their investments more profitable, and riskier. Securitization, facilitating the emergence of the “shadow banking” system, simultaneously fomented bubbles in different segments of the global financial market.

Yet over the past decade this system, or a big part of it, began to lose touch with its ultimate purpose: to reallocate scarce resources in accordance with social priorities. Instead of writing, managing and trading claims on future cashflows for the rest of the economy, finance became increasingly a game for fees and speculation. Due to disastrously lax regulation, investment banks did not lay aside enough capital in case something went wrong, and, as the crisis began in the middle of 2007, credit markets started to freeze up. Qualitatively, after the spectacular Lehman Brothers disaster in September 2008, laminar flows of financial activity came to an end. Banks began to suffer losses on their holdings of toxic securities and became reluctant to lend to one another, which led to shortages of funding across the system. Funding strains had already intensified in late 2007 when Northern Rock, a British mortgage lender, experienced a bank run that started in the money markets. All of a sudden, liquidity was in short supply, debt was unwound, and investors were forced to sell and write down assets. For several years, up to now, market counterparties have no longer trusted each other. As Walter Bagehot, an authority on bank runs, once wrote:

Every banker knows that if he has to prove that he is worthy of credit, however good may be his arguments, in fact his credit is gone.

In an entangled financial system, his axiom should be stretched out to the whole market, and that means, precisely, financial meltdown, or the crisis. The most fascinating feature of the post-crisis era in financial markets has been the continuation of a ubiquitous liquidity expansion. To fight the market squeeze, all the major central banks greatly expanded their balance sheets, which rose roughly from about 10 percent to 25-30 percent of GDP for the respective economies. For several years after the credit crunch of 2007-09, central banks bought trillions of dollars of toxic and government debt, thus increasing money issuance without any precedent in modern history. Paradoxically, this enormous credit expansion, though accelerating for several years, has been accompanied by a stagnating and depressed real economy. Yet, until now, central bankers have been worried mainly about downside risks and threats of price deflation. Otherwise, the hectic financial activity that goes on alongside unbounded credit expansion could be transformed by herding into an autocatalytic process which, if subject to the accumulation of new debt, might drive the entire system to a total collapse. From a financial point of view, this systemic collapse appears to be a natural result of unbounded credit expansion ‘supported’ by zero real resources. Since the wealth of investors, as a whole, becomes nothing but ‘fool’s gold’, the financial process becomes a singular one, and the entire system collapses. In particular, three phases of investors’ behavior – hedge finance, speculation, and the Ponzi game – can be identified as a sequence of sub-cycles that ultimately unwind into total collapse.
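
The three phases mentioned here correspond to Minsky’s well-known taxonomy of financing regimes; the following sketch (an illustrative reading, not a model from the text; all figures are invented) classifies a borrower by comparing operating cash flow with the interest and principal falling due:

```python
# Illustrative sketch of Minsky's three financing regimes mentioned above.
# A borrower is classified by comparing cash flow with interest and principal due.

def financing_regime(cash_flow, interest_due, principal_due):
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # income covers interest and principal
    if cash_flow >= interest_due:
        return "speculative"  # income covers interest only; principal is rolled over
    return "ponzi"            # income covers neither; new debt is needed even to pay interest

# A stylized cycle: falling cash flows push the same borrower through the phases.
for cf in (120, 80, 30):
    print(cf, financing_regime(cf, interest_due=50, principal_due=60))
# 120 hedge, 80 speculative, 30 ponzi
```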

Meillassoux, Deleuze, and the Ordinal Relation Un-Grounding Hyper-Chaos. Thought of the Day 41.0


As Heidegger demonstrates in Kant and the Problem of Metaphysics, Kant limits the metaphysical hypostatization of the logical possibility of the absolute by subordinating the latter to a domain of real possibility circumscribed by reason’s relation to sensibility. In this way he turns the necessary temporal becoming of sensible intuition into the sufficient reason of the possible. Instead, the anti-Heideggerian thrust of Meillassoux’s intellectual intuition is that it absolutizes the a priori realm of pure logical possibility and disconnects the domain of mathematical intelligibility from sensibility (Ray Brassier, ‘The Enigma of Realism’, in Robin Mackay (ed.), Collapse: Philosophical Research and Development – Speculative Realism). Hence the chaotic structure of his absolute time: anything is possible. Whereas real possibility is bound to correlation and temporal becoming, logical possibility is bound only by non-contradiction. It is a pure or absolute possibility that points to a radical diachronicity of thinking and being: we can think of being without thought, but not of thought without being.

Deleuze clearly situates himself in the Kantian camp when he argues with Kant and Heidegger that time as pure auto-affection (folding) is the transcendental structure of thought. Whatever exists, in all its contingency, is grounded by the first two syntheses of time and ungrounded by the third, disjunctive synthesis in the implacable difference between past and future. For Deleuze, it is precisely the eternal return of the ordinal relation between what exists and what may exist that destroys necessity and guarantees contingency. As a transcendental empiricist, he thus agrees with the limitation of logical possibility to real possibility. On the one hand, he also agrees with Hume and Meillassoux that “[r]eality is not the result of the laws which govern it”. The law of entropy or degradation in thermodynamics, for example, is unveiled as nihilistic by Nietzsche’s eternal return, since it is based on a transcendental illusion in which difference [of temperature] is the sufficient reason of change only to the extent that the change tends to negate difference. On the other hand, Meillassoux’s “absolute capacity-to-be-other relative to the given” (Quentin Meillassoux, After Finitude: An Essay on the Necessity of Contingency, trans. Ray Brassier, preface by Alain Badiou) falls away in the face of what is actual here and now. This is because although Meillassoux’s hyper-chaos may be like time, it also contains a tendency to undermine or even reject the significance of time. Thus one may wonder with Jon Roffe (Time and Ground: A Critique of Meillassoux) how time, as the sheer possibility of any future or different state of affairs, can provide the (non-)ground for the realization of this state of affairs in actuality. The problem is less that Meillassoux’s contingency is highly improbable than that his ontology includes no account of actual processes of transformation or development. As Peter Hallward has noted (in Levi Bryant, Nick Srnicek and Graham Harman (eds.), The Speculative Turn: Continental Materialism and Realism), the abstract logical possibility of change is an empty and indeterminate postulate, completely abstracted from all experience and worldly or material affairs. For this reason, the difference between Deleuze and Meillassoux seems to come down to what is more important (rather than what is more originary): the ordinal sequences of sensible intuition or the logical lack of reason.

But for Deleuze, time as the creatio ex nihilo of pure possibility is not just irrelevant in relation to real processes of chaosmosis, which are both chaotic and probabilistic, molecular and molar. Rather, because it puts the Principle of Sufficient Reason as principle of difference out of real action, it is either meaningless with respect to the real or it can only have a negative or limitative function. This is why Deleuze replaces the possible/real opposition with that of virtual/actual. Whereas conditions of possibility always relate asymmetrically and hierarchically to any real situation, the virtual as sufficient reason is no less real than the actual since it is first of all its unconditioned or unformed potential of becoming-other.

The Locus of Renormalization. Note Quote.


Since symmetries and the related conservation properties have a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken or new ones are obtained after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is the passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to sudden common orientation of spins, up to the completely ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is, the iteration of a statistically invariant motif. Similarly for the formation of a snowflake . . . . In all these circumstances, a “new physical object of observation” is formed. Most of the current analyses deal with transitions at equilibrium; the less studied and more challenging case of far-from-equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions; that is, mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. relevant infinity or divergences) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interaction of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.
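
A minimal computational caricature of this idea (my own toy example, not drawn from the text): coarse-grain a one-dimensional chain of spins by a majority rule over blocks, so that an object of the same kind reappears at a larger scale, the block size playing the role of the arbitrary scale parameter:

```python
import random

# Toy real-space renormalization: coarse-grain a 1D chain of +/-1 spins by
# majority rule over blocks of size b. The block size b plays the role of the
# arbitrary scale parameter; iterating the map moves the description upward in scale.

def majority_block_spin(spins, b=3):
    return [1 if sum(spins[i:i + b]) > 0 else -1
            for i in range(0, len(spins) - len(spins) % b, b)]

random.seed(1)
chain = [random.choice((-1, 1)) for _ in range(3 ** 5)]  # 243 spins

level = chain
while len(level) >= 3:
    print(len(level), "spins, magnetization per spin:", sum(level) / len(level))
    level = majority_block_spin(level, b=3)
```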

In this situation, the intelligibility still has an “upward” flavor since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories, the parts are not really separable from the whole (this would mean separating an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot be summed over (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained through scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking only at molecules as the only pertinent observable of life is worse than reductionist; it goes against the history of physics and its audacious unifications and inventions of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system, a cell’s genetic regulatory networks, the brain and brain slices … are “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done for the profound mathematical frame for physics. This does not forbid a scientific analysis of life. This may just be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as in physics. The point is that, since the work of Noether and Weyl in the last century, these are better understood and correlated as symmetries in the intended equations, where they express the underlying invariants and invariant-preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and in cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expression, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections . . . . And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change their coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions within an organism, in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure; that is, it changes the internal symmetries: the fish with the new bladder will swim differently, its heart-vascular system will relevantly change ….

Organisms transform the ecosystem while transforming themselves, and they can stand it and do it because they have an internal preserved universe. Its stability is maintained also by slightly, yet constantly, changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human. It has some 150 different microbial species in the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not understand yet, the “community” in the intestines co-creates their worlds together, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained even if it may change by immigration of new microbial species into the community and extinction of old species in the community. With such community membership turnover, or community assembly, the phase space of the system is undergoing continual and open ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new unprestatable functions, where functions and their names make sense only in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One finds out a feature in a niche because one observes that if it goes away the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing these spaces and their internal relations over time. Mathematically, no symmetry (groups) nor (partial) order (semigroups) organizes the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical nor logical problems in contemporary logic (this is so also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own border, presents technical difficulties for a careful equational description of its dynamics, but no mathematical nor logical impossibility: one has to optimize a highly non-linear and large action/reaction system, yielding a dynamically constructed geodetic, the river path, in perfectly known phase spaces (momentum and space, or energy and time, say, as pertinent observables and variables).
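
To make the contrast concrete, here is a small sketch (my own illustration) of the benign, “in the definition” circularity: a quantity defined in terms of itself, such as the steady state of a toy cyclic process, is computed without any logical difficulty because the space of possible values is fixed in advance:

```python
# Benign circularity "in the definition": a steady state x defined by x = f(x),
# e.g. the stationary concentration of a toy metabolic-like cycle. The phase
# space (non-negative reals) is pre-given, so the circular definition is harmless.

def f(x, inflow=1.0, decay=0.5):
    # next concentration depends on the current one: production + what survives decay
    return inflow + (1.0 - decay) * x

x = 0.0
for _ in range(60):           # fixed-point iteration converges since |1 - decay| < 1
    x = f(x)

print(x)                      # approx. 2.0, the self-consistent value x = f(x)
assert abs(x - f(x)) < 1e-9
```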

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not just a matter of prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but of the ever changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.

Universal Inclusion of the Void. Thought of the Day 38.0


The universal inclusion of the void means that the intersection between any two sets whatsoever is comparable with the void set. That is to say, there is no multiple that does not include within it some part of the “inconsistency” that it structures. The diversity of multiplicity can exhibit multiple modes of articulation, but, as multiples, any two of them have nothing to do with one another: they are two absolutely heterogeneous presentations, and this is why this relation – of non-relation – can only be thought under the signifier of being (of the void), which indicates that the multiples in question have nothing in common apart from being multiples. The universal inclusion of the void thus guarantees the consistency of the infinite multiplicities immanent to its presentation. That is to say, it underlines the universal distribution of the ontological structure seized at the point of the axiom of the void set. The void does not merely constitute a consistency at a local point but also organises, from this point of difference, a universal structure that legislates on the structure of all sets, the universe of consistent multiplicity.
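
In ordinary set-theoretic terms the underlying fact is elementary: the empty set is a subset of every set, and hence of the intersection of any two sets. A trivial sketch, for illustration only:

```python
# The void (empty set) is included in every set, hence in the intersection
# of any two sets whatsoever -- the elementary fact behind "universal inclusion".
void = set()

A = {1, 2, 3}
B = {"a", (0, 1)}

assert void <= A and void <= B
assert void <= (A & B)          # even when the intersection is itself empty
assert all(void <= S for S in (A, B, A & B, void))
```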

This final step, the carrying over of the void seized as a local point of the presentation of the unpresentable, to a global field of sets provides us with the universal point of difference, applicable equally to any number of sets, that guarantees the universal consistency of ontological presentation. In one sense, the universal inclusion of the void demonstrates that, as a unit of presentation, the void anchors the set theoretical universe by its universal inclusion. As such, every presentation in ontological thought is situated in this elementary seizure of ontological difference. The void is that which “fills” ontological or set theoretical presentation. It is what makes common the universe of sets. It is in this sense that the “substance” or constitution of ontology is the void. At the same stroke, however, the universal inclusion of the void also concerns the consistency of set theory in a logical sense.

The universal inclusion of the void provides an important synthesis of the consistency of presentation. What is presented is necessarily consistent, but its consistency gives way to two distinct senses. Consistency can refer to its own “substance,” its immanent presentation. Distinct presentations constitute different presentations principally because “what” they present are different. Ontology’s particularity is its presentation of the void. On the other hand, a political site might present certain elements just as a scientific procedure might present yet others. The other sense of consistency is tied to presentation as such, the consistency of presentation in its generality. When one speaks loosely about the “world” being consistent, where natural laws are verifiable against a background of regularity, it is this consistency that is invoked and not the elements that constitute the particularity of their presentation. This sense of consistency, occurring across presentations, would certainly take us beyond the particularity of ontology. That is to say, ontological presentation presents a species of this consistency. However, the possibility of multiple approaches does not exclude an ontological treatment of this consistency.

Badiou’s Diagrammatic Claim of Democratic Materialism Cuts Grothendieck’s Topos. Note Quote.


Let us focus on the more abstract, elementary definition of a topos and discuss materiality in the categorical context. The materiality of being can, indeed, be defined in a way that makes no material reference to the category of Sets itself.

The stakes between being and materiality are thus reverted. From this point of view, a Grothendieck-topos is not one of sheaves over sets; instead, it is a topos which is not defined on the basis of a specific geometric morphism E → Sets – a materialization – but rather one for which such a materialization exists only when the topos itself is already intervened by an explicitly given topos similar to Sets. Therefore, there is no need to start with set-theoretic structures like sieves or Badiou’s ‘generic’ filters.

Strong Postulate, Categorical Version: For a given materialization the situation E is faithful to the atomic situation of truth (Sets^(γ∗(Ω)^op)) if the materialization morphism itself is bounded and thus logical.

In particular, this alternative definition suggests that materiality itself is not inevitably a logical question. Therefore, for this definition to make sense, let us look at the question of materiality from a more abstract point of view: what are topoi or ‘places’ of reason that are not necessarily material or where the question of materiality differs from that defined against the ‘Platonic’ world of Sets? Can we deploy the question of materiality without making any reference – direct or sheaf-theoretic – to the question of what the objects ‘consist of’, that is, can we think about materiality without crossing Kant’s categorical limit of the object? Elementary theory suggests that we can.

Elementary Topos:  An elementary topos E is a category which

  1. has finite limits, or equivalently E has so-called pull-backs and a terminal object 1,
  2. is Cartesian closed, which means that for each object X there is an exponential functor (−)^X : E → E which is right adjoint to the functor (−) × X, and finally
  3. satisfies the axiom of truth: E retains an object called the subobject classifier Ω, equipped with an arrow true : 1 → Ω such that for each monomorphism σ : Y ↪ X in E there is a unique classifying map φσ : X → Ω making σ : Y ↪ X a pull-back of φσ along the arrow true (the sketch below illustrates the classifier in the familiar case of sets).
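
In the familiar category of sets the subobject classifier is simply the two-element set Ω = {false, true}, the arrow true : 1 → Ω picks out true, and the classifying map of a subset is its characteristic function, the subset being recovered as the pullback of true. A minimal sketch (an illustration of the standard Set case, not of Badiou’s T-sets):

```python
# Subobject classifier in the category of sets: Omega = {False, True},
# true : 1 -> Omega picks out True, and a subset Y of X is classified by its
# characteristic function phi. Pulling phi back along "true" recovers Y.

X = {0, 1, 2, 3, 4}
Y = {1, 3}                       # a subobject (subset) of X

def phi(x):                      # the unique classifying map X -> Omega for Y
    return x in Y

# Pullback of "true" along phi: the elements of X sent to True.
pullback = {x for x in X if phi(x) is True}

assert pullback == Y
```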

Grothendieck-topos: In respect to this categorical definition, a Grothendieck-topos is a topos satisfying the following further conditions:

(1) E has all set-indexed coproducts, and they are disjoint and universal,

(2) equivalence relations in E have universal co-equalisers,

(3) every equivalence relation in E is effective, and every epimorphism in E is a coequaliser,

(4) E has ‘small hom-sets’, i.e. for any two objects X, Y , the morphisms of E from X to Y are parametrized by a set, and finally

(5) E has a set of generators (not necessarily monic with respect to 1 as in the case of locales).

Together the five conditions can be taken as an alternative definition of a Grothendieck-topos. We should still demonstrate that Badiou’s world of T-sets is actually the category of sheaves Shvs (T, J) and that it will, consequently, satisfy the conditions of a topos listed above. To shift to the categorical setting, one first needs to define a relation between objects. These relations, the so-called ‘natural transformations’ we encountered in relation to the Yoneda lemma, should satisfy conditions Badiou regards as ‘complex arrangements’.

Relation: A relation from the object (A, Idα) to the object (B,Idβ) is a map ρ : A → B such that

Eβ ρ(a) = Eα a and ρ(a / p) = ρ(a) / p.

It is a rather easy consequence of these two presuppositions that such a relation respects the order relation ≤, i.e. one retains Idα(a, b) ≤ Idβ(ρ(a), ρ(b)), and that if a ‡ b are two compatible elements, then also ρ(a) ‡ ρ(b). Thus such a relation is itself compatible with the underlying T-structures.

Given these definitions, regardless of Badiou’s confusion about the structure of the ‘power-object’, it is safe to assume that Badiou has demonstrated that there is at least a category of T-sets, if not yet a topos. Its objects are defined as T-sets situated in the ‘world m’ together with their respective equalization functions Idα. It is obviously Badiou’s ‘diagrammatic’ aim to demonstrate that this category is a topos and, ultimately, to reduce any ‘diagrammatic’ claim of ‘democratic materialism’ to the constituted, non-diagrammatic objects such as T-sets. That is, from showing that this particular collection of objects forms a category, he assumes that every category should take a similar form: a classical mistake of reasoning referred to as affirming the consequent.

Geach and Relative Identity


The Theory of Relative Identity is a logical innovation due to Peter Thomas Geach (P.T. Geach, Logic Matters), motivated by the same sort of mathematical examples as Frege’s definition by abstraction. Like Frege, Geach seeks to give a logical sense to mathematical talk “up to” a given equivalence E by replacing E by identity, but unlike Frege he purports, in doing so, to avoid the introduction of new abstract objects (which in his view causes unnecessary ontological inflation). The price of this ontological parsimony is Geach’s repudiation of Frege’s principle of a unique and absolute identity for the objects in the domain over which quantified variables range. According to Geach, things can be the same in one way while differing in others. For example, two printed letters aa are the same as a type but different as tokens. In Geach’s view this distinction does not commit us to a-tokens and a-types as entities but presents two different ways of describing the same reality. The unspecified (or “absolute”, in Geach’s terminology) notion of identity so important for Frege is, in Geach’s view, incoherent.

Geach’s proposal appears to account better for the way the notion of identity is employed in mathematics since it does not invoke “directions” or other mathematically redundant concepts. It captures particularly well the way the notion of identity is understood in Category theory. According to Baez & Dolan

In a category, two objects can be “the same in a way” while still being different.

So in Category theory the notion of identity is relative in exactly Geach’s sense. But from the logical point of view the notion of relative identity remains highly controversial. Let x,y be identical in one way but not in another, or in symbols: Id(x,y) & ¬Id'(x,y). The intended interpretation assumes that x in the left part of the formula and x in the right part have the same referent, where this last same apparently expresses absolute not relative identity. So talk of relative identity arguably smuggles in the usual absolute notion of identity anyway. If so, there seems good reason to take a standard line and reserve the term “identity” for absolute identity.

We see that Plato, Frege and Geach propose three different views of identity in mathematics. Plato notes that the sense of “the same” as applied to mathematical objects and to the Ideas is different: properly speaking, sameness (identity) applies only to Ideas, while in mathematics sameness means equality or some other equivalence relation. Although Plato certainly recognizes essential links between mathematical objects and Ideas (recall the “ideal numbers”), he keeps the two domains apart. Unlike Plato, Frege supposes that identity is a purely logical and domain-independent notion, which mathematicians must rely upon in order to talk about the sameness or difference of mathematical objects, or of objects of any other kind. Geach’s proposal has the opposite aim: to provide a logical justification for the way of thinking about the (relativized) notions of sameness and difference which he takes to be usual in mathematical contexts, and then to extend it to contexts outside mathematics. As Geach says:

Any equivalence relation … can be used to specify a criterion of relative identity. The procedure is common enough in mathematics: e.g. there is a certain equivalence relation between ordered pairs of integers by virtue of which we may say that x and y though distinct ordered pairs, are one and the same rational number. The absolute identity theorist regards this procedure as unrigorous but on a relative identity view it is fully rigorous.
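
Geach’s example translates directly into the standard construction: distinct ordered pairs of integers count as “one and the same rational number” under the equivalence (a, b) ~ (c, d) iff a·d = c·b. A small sketch, for illustration only (the Frege-style “canonical form” at the end is my own addition, marking the move Geach wants to avoid):

```python
from math import gcd

# Relative identity in Geach's example: distinct ordered pairs of integers
# are "the same rational number" under the equivalence (a, b) ~ (c, d) iff a*d == c*b.

def same_rational(p, q):
    (a, b), (c, d) = p, q
    return a * d == c * b        # the equivalence replacing absolute identity

p, q = (1, 2), (3, 6)
assert p != q                    # different as ordered pairs (different "tokens")
assert same_rational(p, q)       # yet one and the same rational number

# Frege-style alternative: introduce a new abstract object (a canonical form),
# the move whose ontological cost Geach wants to avoid.
def canonical(pair):
    a, b = pair
    g = gcd(a, b) * (-1 if b < 0 else 1)
    return (a // g, b // g)

assert canonical(p) == canonical(q) == (1, 2)
```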