Meillassoux, Deleuze, and the Ordinal Relation Un-Grounding Hyper-Chaos. Thought of the Day 41.0


As Heidegger demonstrates in Kant and the Problem of Metaphysics, Kant limits the metaphysical hypostatization of the logical possibility of the absolute by subordinating the latter to a domain of real possibility circumscribed by reason’s relation to sensibility. In this way he turns the necessary temporal becoming of sensible intuition into the sufficient reason of the possible. The anti-Heideggerian thrust of Meillassoux’s intellectual intuition, by contrast, is that it absolutizes the a priori realm of pure logical possibility and disconnects the domain of mathematical intelligibility from sensibility (Ray Brassier, “The Enigma of Realism”, in Robin Mackay (ed.), Collapse: Philosophical Research and Development). Hence the chaotic structure of his absolute time: anything is possible. Whereas real possibility is bound to correlation and temporal becoming, logical possibility is bound only by non-contradiction. It is a pure or absolute possibility that points to a radical diachronicity of thinking and being: we can think of being without thought, but not of thought without being.

Deleuze clearly situates himself in the Kantian-Heideggerian camp when he argues, with Kant and Heidegger, that time as pure auto-affection (folding) is the transcendental structure of thought. Whatever exists, in all its contingency, is grounded by the first two syntheses of time and ungrounded by the third, disjunctive synthesis in the implacable difference between past and future. For Deleuze, it is precisely the eternal return of the ordinal relation between what exists and what may exist that destroys necessity and guarantees contingency. As a transcendental empiricist, he thus agrees with the limitation of logical possibility to real possibility. On the one hand, he also agrees with Hume and Meillassoux that “[r]eality is not the result of the laws which govern it”. The law of entropy or degradation in thermodynamics, for example, is unveiled as nihilistic by Nietzsche’s eternal return, since it is based on a transcendental illusion in which difference [of temperature] is the sufficient reason of change only to the extent that the change tends to negate difference. On the other hand, Meillassoux’s absolute capacity-to-be-other relative to the given (Quentin Meillassoux, After Finitude: An Essay on the Necessity of Contingency, trans. Ray Brassier) falls away in the face of what is actual here and now. This is because although Meillassoux’s hyper-chaos may be like time, it also contains a tendency to undermine or even reject the significance of time. Thus one may wonder with Jon Roffe (“Time and Ground: A Critique of Meillassoux”) how time, as the sheer possibility of any future or different state of affairs, can provide the (non-)ground for the realization of this state of affairs in actuality. The problem is less that Meillassoux’s contingency is highly improbable than that his ontology includes no account of actual processes of transformation or development.
As Peter Hallward has noted (in Levi Bryant, Nick Srnicek and Graham Harman (eds.), The Speculative Turn: Continental Materialism and Realism), the abstract logical possibility of change is an empty and indeterminate postulate, completely abstracted from all experience and worldly or material affairs. For this reason, the difference between Deleuze and Meillassoux seems to come down to what is more important (rather than what is more originary): the ordinal sequences of sensible intuition or the logical lack of reason.

But for Deleuze, time as the creatio ex nihilo of pure possibility is not just irrelevant in relation to real processes of chaosmosis, which are both chaotic and probabilistic, molecular and molar. Rather, because it puts the Principle of Sufficient Reason as principle of difference out of real action, it is either meaningless with respect to the real or can only have a negative or limitative function. This is why Deleuze replaces the possible/real opposition with that of virtual/actual. Whereas conditions of possibility always relate asymmetrically and hierarchically to any real situation, the virtual as sufficient reason is no less real than the actual, since it is first of all its unconditioned or unformed potential of becoming-other.

The Womb of Cosmogony. Thought of the Day 30.0

Nowhere and by no people was speculation allowed to range beyond those manifested gods. The boundless and infinite UNITY remained with every nation a virgin forbidden soil, untrodden by man’s thought, untouched by fruitless speculation. The only reference made to it was the brief conception of its diastolic and systolic property, of its periodical expansion or dilatation, and contraction. In the Universe with all its incalculable myriads of systems and worlds disappearing and re-appearing in eternity, the anthropomorphised powers, or gods, their Souls, had to disappear from view with their bodies: — “The breath returning to the eternal bosom which exhales and inhales them,” says our Catechism. . . . In every Cosmogony, behind and higher than the creative deity, there is a superior deity, a planner, an Architect, of whom the Creator is but the executive agent. And still higher, over and around, within and without, there is the UNKNOWABLE and the unknown, the Source and Cause of all these Emanations. – The Secret Doctrine


Many are the names in the ancient literatures which have been given to the Womb of Being from which all issues, in which all forever is, and into the spiritual and divine reaches of which all ultimately returns, whether infinitesimal entity or macrocosmic spacial unit.

The Tibetans called this ineffable mystery Tong-pa-nnid, the unfathomable Abyss of the spiritual realms. The Buddhists of the Mahayana school describe it as Sunyata or the Emptiness, simply because no human imagination can figurate to itself the incomprehensible Fullness which it is. In the Eddas of ancient Scandinavia the Boundless was called by the suggestive term Ginnungagap – a word meaning yawning or uncircumscribed void. The Hebrew Bible states that the earth was formless and void, and darkness was upon the face of Tehom, the Deep, the Abyss of Waters, and therefore the great Deep of kosmic Space. It has the identical significance of the Womb of Space as envisioned by other peoples. In the Chaldaeo-Jewish Qabbalah the same idea is conveyed by the term ‘Eyn (or Ain) Soph, without bounds. In the Babylonian accounts of Genesis, it is Mummu Tiamatu which stands for the Great Sea or Deep. The archaic Chaldaean cosmology speaks of the Abyss under the name of Ab Soo, the Father or source of knowledge, and in primitive Magianism it was Zervan Akarana — in its original meaning of Boundless Spirit instead of the later connotation of Boundless Time.

In the Chinese cosmogony, Tsi-tsai, the Self-Existent, is the Unknown Darkness, the root of the Wuliang-sheu, Boundless Age. The wu wei of Lao-tse, often mistranslated as passivity and nonaction, imbodies a similar conception. In the sacred scriptures of the Quiches of Guatemala, the Popol Vuh or “Book of the Azure Veil,” reference is made to the “void which was the immensity of the Heavens,” and to the “Great Sea of Space.” The ancient Egyptians spoke of the Endless Deep; the same idea also is imbodied in the Celi-Ced of archaic Druidism, Ced being spoken of as the “Black Virgin” — Chaos — a state of matter prior to manvantaric differentiation.

The Orphic Mysteries taught of the Thrice-Unknown Darkness or Chronos, about which nothing could be predicated except its timeless Duration. With the Gnostic schools, as for instance with Valentinus, it was Bythos, the Deep. In Greece, the school of Democritus and Epicurus postulated To Kenon, the Void; the same idea was later voiced by Leucippus and Diagoras. But the two most common terms in Greek philosophy for the Boundless were Apeiron, as used by Plato, Anaximander and Anaximenes, and Apeiria, as used by Anaxagoras and Aristotle. Both words had the significance of frontierless expansion, that which has no circumscribing bounds.

The earliest conception of Chaos was that almost unthinkable condition of kosmic space or kosmic expanse, which to human minds is infinite and vacant extension of primordial Aether, a stage before the formation of manifested worlds, and out of which everything that later existed was born, including gods and men and all the celestial hosts. We see here a faithful echo of the archaic esoteric philosophy, because among the Greeks Chaos was the kosmic mother of Erebos and Nyx, Darkness and Night — two aspects of the same primordial kosmic stage. Erebos was the spiritual or active side corresponding to Brahman in Hindu philosophy, and Nyx the passive side corresponding to pradhana or mulaprakriti, both meaning root-nature. Then from Erebos and Nyx as dual were born Aether and Hemera, Spirit and Day — Spirit being here again in this succeeding stage the active side, and Day the passive aspect, the substantial or vehicular side. The idea was that just as in the Day of Brahma of Hindu cosmogony things spring into active manifested existence, so in the kosmic Day of the Greeks things spring from elemental substance into manifested light and activity, because of the indwelling urge of the kosmic Spirit.

Dissipations – Bifurcations – Synchronicities. Thought of the Day 29.0

Deleuze’s thinking expounds on Bergson’s adaptation of multiplicities in step with the catastrophe theory, chaos theory, dissipative systems theory, and quantum theory of his era. For Bergson, hybrid scientific/philosophical methodologies were not viable. He advocated tandem explorations, the two “halves” of the Absolute “to which science and metaphysics correspond” as a way to conceive the relations of parallel domains. The distinctive creative processes of these disciplines remain irreconcilable differences-in-kind, commonly manifesting in lived experience. Bergson: Science is abstract, philosophy is concrete. Deleuze and Guattari: Science thinks the function, philosophy the concept. Bergson’s Intuition is a method of division. It differentiates tendencies, forces. Division bifurcates. Bifurcations are integral to contingency and difference in systems logic.

A bifurcation is the branching of a solution into multiple solutions as a system is varied. This bifurcating principle is also known as contingency. Bifurcations mark a point or an event at which a system divides into two alternative behaviours. Each trajectory is possible. The line of flight actually followed is often indeterminate. This is the site of a contingency, were it a positionable “thing.” It is at once a unity, a dualism and a multiplicity:

Bifurcations are the manifestation of an intrinsic differentiation between parts of the system itself and the system and its environment. […] The temporal description of such systems involves both deterministic processes (between bifurcations) and probabilistic processes (in the choice of branches). There is also a historical dimension involved […] Once we have dissipative structures we can speak of self-organisation.


Figure: In a dynamical system, a bifurcation is a period doubling, quadrupling, etc., that accompanies the onset of chaos. It represents the sudden appearance of a qualitatively different solution for a nonlinear system as some parameter is varied. The illustration above shows bifurcations (occurring at the location of the blue lines) of the logistic map as the parameter r is varied. Bifurcations come in four basic varieties: flip bifurcation, fold bifurcation, pitchfork bifurcation, and transcritical bifurcation. 
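The period-doubling cascade described in the figure can be sketched numerically; a minimal sketch, assuming the standard form x_(n+1) = r·x_n·(1−x_n) of the logistic map, with illustrative values of the control parameter r:

```python
def attractor(r, n_transient=1000, n_sample=64):
    """Iterate the logistic map x -> r*x*(1-x) past its transient,
    then collect the distinct values the orbit settles onto."""
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(n_sample):
        x = r * x * (1.0 - x)
        seen.add(round(x, 6))
    return sorted(seen)

# Period doubling as r crosses successive bifurcation points:
print(len(attractor(2.8)))   # fixed point: 1 value
print(len(attractor(3.2)))   # period-2 cycle: 2 values
print(len(attractor(3.5)))   # period-4 cycle: 4 values
```

Here r plays exactly the role of the “control knob” discussed below: a smooth variation of r produces a sudden, qualitative change in the set of solutions.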

A bifurcation, according to Prigogine and Stengers, exhibits determinacy and choice. It pertains to critical points, to singular intensities and their division into multiplicities. The scientific term, bifurcation, can be substituted for differentiation when exploring processes of thought or, as Massumi explains, affect:

Affect and intensity […] is akin to what is called a critical point, or bifurcation point, or singular point, in chaos theory and the theory of dissipative structures. This is the turning point at which a physical system paradoxically embodies multiple and normally mutually exclusive potentials… 

The endless bifurcating division of progressive iterations, the making of multiplicities by continually differentiating binaries, by multiplying divisions of dualities – this is the ontological method of Bergson and Deleuze after him. Bifurcations diagram multiplicities, from monisms to dualisms, from differentiation to differenciation, creatively progressing. Manuel DeLanda offers this account, which describes the additional technicality of control parameters, analogous to higher-level computer technologies that enable dynamic interaction. These protocols and variable control parameters are later discussed in detail in terms of media objects in the metaphorical state space of an in situ technology:

[…] for the purpose of defining an entity to replace essences, the aspect of state space that mattered was its singularities. One singularity (or set of singularities) may undergo a symmetry-breaking transition and be converted into another one. These transitions are called bifurcations and may be studied by adding to a particular state space one or more ‘control knobs’ (technically control parameters) which determine the strength of external shocks or perturbations to which the system being modeled may be subject.

Another useful example of bifurcation with respect to research in the neurological and cognitive sciences is Francisco Varela’s theory of the emergence of microidentities and microworlds. The ready-for-action neuronal clusters that produce microidentities, from moment to moment, are what he calls bifurcating “breakdowns”. These critical events in which a path or microidentity is chosen are, by implication, creative:

The Political: NRx, Neoreactionism Archived.

This one is eclectic and for the record.


The techno-commercialists appear to have largely arrived at neoreaction via right-wing libertarianism. They are defiant free marketeers, sharing with other ultra-capitalists such as Randian Objectivists a preoccupation with “efficiency,” a blind trust in the power of the free market, private property, globalism and the onward march of technology. However, they are also believers in the ideal of small states, free movement and absolute or feudal monarchies with no form of democracy. The idea of “exit,” predominantly a techno-commercialist viewpoint but found among other neoreactionaries too, essentially comes down to the idea that people should be able to freely exit their native country if they are unsatisfied with its governance; essentially an application of market economics and consumer action to statehood. Indeed, countries are often described in corporate terms, with the King being the CEO and the aristocracy shareholders.

The “theonomists” place more emphasis on the religious dimension of neoreaction. They emphasise tradition, divine law, religion rather than race as the defining characteristic of “tribes” of peoples and traditional, patriarchal families. They are the closest group in terms of ideology to “classical” or, if you will, “palaeo-reactionaries” such as the High Tories, the Carlists and French Ultra-royalists. Often Catholic and often ultramontanist. Finally, there’s the “ethnicist” lot, who believe in racial segregation and have developed a new form of racial ideology called “Human Biodiversity” (HBD) which says people of African heritage are naturally less intelligent than people of Caucasian and east Asian heritage. Of course, the scientific community considers the idea that there are any genetic differences between human races beyond melanin levels in the skin and other cosmetic factors to be utterly false, but presumably this is because they are controlled by “The Cathedral.” They like “tribal solidarity,” tribes being defined by shared ethnicity, and distrust outsiders.


Overlap between these groups is considerable, but there are also vast differences not just between them but within them. What binds them together is common opposition to “The Cathedral” and to “progressive” ideology. Some of their criticisms of democracy and modern society are well-founded, and some of them make good points in defence of the monarchical system. However, I don’t much like them, and I doubt they’d much like me.

Whereas neoreactionaries are keen on the free market and praise capitalism, unregulated capitalism is something I am wary of. Capitalism saw the collapse of traditional monarchies in Europe in the 19th century, and the first revolutions were by capitalists seeking to establish democratic, capitalist republics where the bourgeoisie replaced the aristocratic elite as the ruling class; setting an example revolutionary socialists would later follow. Capitalism, when unregulated, leads to monopolies, exploitation of the working class, unsustainable practices in pursuit of increased short-term profits, globalisation and materialism. Personally, I prefer distributist economics, which embrace private property rights but emphasise widespread ownership of wealth, small partnerships and cooperatives replacing private corporations as the basic units of the nation’s economy. And although critical of democracy, the idea that any form of elected representation for the lower classes is anathema is not consistent with my viewpoint; my ideal government would not be absolute or feudal monarchy, but executive constitutional monarchy with a strong monarch exercising executive powers and the legislative role being at least partially controlled by an elected parliament, more like the Bourbon Restoration than the Ancien Régime, though I occasionally say “Vive l’Ancien Régime!” on forums or in comments to annoy progressive types. Finally, I don’t believe in racialism in any form. I tend to attribute preoccupations with racial superiority to deep insecurity which people find the need to suppress by convincing themselves that they are “racially superior” to others, in absence of any actual talent or especial ability to take pride in. The 20th century has shown us where dividing people up based on their genetics leads us, and it is not somewhere I care to return to.

I do think it is important to go into why Reactionaries think Cthulhu always swims left, because without that they’re vulnerable to the charge that they have no a priori reason to expect our society to have the biases it does, and then the whole meta-suspicion of the modern Inquisition doesn’t work or at least doesn’t work in that particular direction. Unfortunately (for this theory) I don’t think their explanation is all that great (though this deserves substantive treatment) and we should revert to a strong materialist prior, but of course I would say that, wouldn’t I.

And of course you could get locked up for wanting fifty Stalins! Just try saying how great Enver Hoxha was at certain places and times. Of course saying you want fifty Stalins is not actually advocating that Stalinism become more like itself – as Leibniz pointed out, a neat way of telling whether something is something is checking whether it is exactly like that thing, and nothing could possibly be more like Stalinism than Stalinism. Of course fifty Stalins is further in the direction that one Stalin is from our implied default of zero Stalins. But then from an implied default of 1.3 kSt it’s a plea for moderation among hypostalinist extremists. As Mayberry Mobmuck himself says, “sovereign is he who determines the null hypothesis.”

Speaking of Stalinism, I think it does provide plenty of evidence that policy can do wonderful things for life expectancy and so on, and I mean that in a totally unironic “hail glorious comrade Stalin!” way, not in a “ha ha Stalin sure did kill a lot of people” way. But this is a super-unintuitive claim to most people today, so I’ll try to get around to summarizing the evidence at some point.

‘Neath an eyeless sky, the inkblack sea
Moves softly, utters not save a quiet sound
A lapping-sound, not saying what may be
The reach of its voice a furthest bound;
And beyond it, nothing, nothing known
Though the wind the boat has gently blown
Unsteady on shifting and traceless ground
And quickly away from it has flown.

Allow us a map, and a lamp electric
That by instrument we may probe the dark
Unheard sounds and an unseen metric
Keep alive in us that unknown spark
To burn bright and not consume or mar
Has the unbounded one come yet so far
For night over night the days to mark
His journey — adrift, without a star?

Chaos is the substrate, and the unseen action (or non-action) against disorder, the interloper. Disorder is a mere ‘messing up of order’. Chaos is substantial where disorder is insubstantial. Chaos is the ‘quintessence’ of things, chaotic itself and yet always-begetting order. It breaks down disorder, since disorder is maladaptive. Exit is a way to induce bifurcation, to quickly reduce entropy through separation from the highly entropic system. If no immediate exit is available, Chaos will create one.

Conjuncted: Austrian Economics. Some Ruminations. Part 1.

Ludwig von Mises’ argument concerning the impossibility of economic calculation under socialism provides a hint as to what a historically specific theory of capital could look like. He argues that financial accounting based on business capital is an indispensable tool when it comes to the allocation and distribution of resources in the economy. Socialism, which has to do without private ownership of the means of production and therefore must also sacrifice the concepts of (business) capital and financial accounting, cannot rationally appraise the value of the factors of production. Without such an appraisal, production must necessarily result in chaos.

Purely Random Correlations of the Matrix, or Studying Noise in Neural Networks


Expressed in the most general form, in essentially all the cases of practical interest, the n × n matrices W used to describe the complex system are by construction designed as

W = XY^T —– (1)

where X and Y denote rectangular n × m matrices. Such, for instance, are the correlation matrices whose standard form corresponds to Y = X. In this case one thinks of n observations or cases, each represented by an m-dimensional row vector x_i (y_i), (i = 1, …, n), and typically m is larger than n. In the limit of purely random correlations the matrix W is then said to be a Wishart matrix. The resulting density ρ_W(λ) of eigenvalues is here known analytically, with the limits (λ_min ≤ λ ≤ λ_max) prescribed by

λ_max,min = 1 + 1/Q ± 2√(1/Q), with Q = m/n ≥ 1.

The variance of the elements of xi is here assumed unity.
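These bounds can be checked numerically; a sketch assuming unit-variance Gaussian data and the conventional 1/m normalization of W (so that W = XY^T/m with Y = X), with n, m and the seed chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 1000                  # n cases, m-dimensional rows; Q = m/n = 5
Q = m / n

X = rng.standard_normal((n, m))   # unit-variance entries, no true correlations
W = (X @ X.T) / m                 # empirical correlation-like Wishart matrix

eigs = np.linalg.eigvalsh(W)
lam_min = 1 + 1/Q - 2*np.sqrt(1/Q)
lam_max = 1 + 1/Q + 2*np.sqrt(1/Q)

# For purely random correlations the spectrum should, up to finite-size
# fluctuations, fall inside the band [lam_min, lam_max].
inside = np.mean((eigs > lam_min - 0.1) & (eigs < lam_max + 0.1))
print(inside)
```

Eigenvalues observed outside this band would then signal genuine (non-random) correlations in the data.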

The more general case, of X and Y different, results in asymmetric correlation matrices with complex eigenvalues λ. In this more general case a limiting distribution corresponding to purely random correlations seems not to be yet known analytically as a function of m/n. It indicates however that in the case of no correlations, quite generically, one may expect a largely uniform distribution of λ bound in an ellipse on the complex plane.

Further examples of matrices of similar structure, of great interest from the point of view of complexity, include the Hamiltonian matrices of strongly interacting quantum many body systems such as atomic nuclei. This holds true on the level of bound states, where the problem is described by Hermitian matrices, as well as for excitations embedded in the continuum. This latter case can be formulated in terms of an open quantum system, which is represented by a complex non-Hermitian Hamiltonian matrix. Several neural network models also belong to this category of matrix structure. In this domain the reference is provided by the Gaussian (orthogonal, unitary, symplectic) ensembles of random matrices with the semi-circle law for the eigenvalue distribution. For the irreversible processes there exists their complex version with a special case, the so-called scattering ensemble, which accounts for S-matrix unitarity.

As it has already been expressed above, several variants of ensembles of the random matrices provide an appropriate and natural reference for quantifying various characteristics of complexity. The bulk of such characteristics is expected to be consistent with Random Matrix Theory (RMT), and in fact there exists strong evidence that it is. Once this is established, even more interesting are however deviations, especially those signaling emergence of synchronous or coherent patterns, i.e., the effects connected with the reduction of dimensionality. In the matrix terminology such patterns can thus be associated with a significantly reduced rank k (thus k ≪ n) of a leading component of W. A satisfactory structure of the matrix that would allow some coexistence of chaos or noise and of collectivity thus reads:

W = Wr + Wc —– (2)

Of course, in the absence of Wr, the second term (Wc) of W generates k nonzero eigenvalues, and all the remaining ones (n − k) constitute the zero modes. When Wr enters as a noise (random-like matrix) correction, a trace of the above effect is expected to remain, i.e., k large eigenvalues and a bulk composed of n − k small eigenvalues whose distribution and fluctuations are consistent with an appropriate version of the random matrix ensemble. One likely mechanism that may lead to such a segregation of eigenspectra is that m in eq. (1) is significantly smaller than n, or that the number of large components makes it effectively small on the level of the large entries w of W. Such an effective reduction of m (M = m_eff) is then expressed by the following distribution P(w) of the large off-diagonal matrix elements, in the case they are still generated by random-like processes

P(w) = (|w|^((M−1)/2) K_((M−1)/2)(|w|)) / (2^((M−1)/2) Γ(M/2) √π) —– (3)

where K stands for the modified Bessel function. Asymptotically, for large w, this leads to P(w) ∼ e^(−|w|) |w|^(M/2−1), and thus reflects an enhanced probability of appearance of a few large off-diagonal matrix elements as compared to a Gaussian distribution. Consistent with the central limit theorem, the distribution quickly converges to a Gaussian with increasing M.
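This convergence can be checked by sampling: an entry w = Σ_k x_k y_k of XY^T built from independent standard normal vectors follows exactly the Bessel-type law of eq. (3), and its excess kurtosis, roughly 6/M, measures the enhanced tails. A minimal sketch (sample size and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def entry_samples(M, size=200_000):
    """Draw w = sum_k x_k*y_k for independent standard normal x, y:
    for small M this reproduces the heavy-tailed Bessel-type law."""
    x = rng.standard_normal((size, M))
    y = rng.standard_normal((size, M))
    return (x * y).sum(axis=1)

def excess_kurtosis(w):
    z = (w - w.mean()) / w.std()
    return (z**4).mean() - 3.0

# Each term x_k*y_k has excess kurtosis 6, so the sum has about 6/M:
# heavy tails for small M, Gaussian-like (near 0) as M grows.
for M in (1, 4, 32):
    print(M, round(excess_kurtosis(entry_samples(M)), 2))
```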

Based on several examples of natural complex dynamical systems, like the strongly interacting Fermi systems, the human brain and the financial markets, one could systematize evidence that such effects are indeed common to all the phenomena that intuitively can be qualified as complex.
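The segregation of eigenspectra described around eq. (2) can also be sketched numerically: a rank-k collective term Wc whose few eigenvalues split off from the bulk of a random-like correction Wr. The sizes, strengths and seed below are illustrative assumptions, not from the source:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 300, 3                      # matrix size and rank of the collective part

# Wc: rank-k "collective" component built from k outer products
U = rng.standard_normal((n, k))
Wc = 5.0 * (U @ U.T) / n           # k nonzero eigenvalues near 5, n-k zero modes

# Wr: random-like symmetric (noise) correction, GOE-normalized bulk in [-2, 2]
G = rng.standard_normal((n, n))
Wr = (G + G.T) / np.sqrt(2 * n)

eigs = np.sort(np.linalg.eigvalsh(Wr + Wc))
print(eigs[-k:])       # k large, collective eigenvalues, well above the bulk
print(eigs[-(k + 1)])  # top of the remaining bulk, near 2
```

With the collective strength above the bulk edge, the k large eigenvalues survive the noise essentially intact, which is the coexistence of collectivity and chaos the text describes.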

Catastrophe

Since natural phenomena are continuously battered by perturbations, the classification of critical points of smooth functions becomes the defining concern of catastrophe theory. If a natural system is defined by a function of state variables, then the perturbations are represented by control parameters on which the function depends. An unfolding of a function is such a family: a smooth function of the state variables and the parameters satisfying a specific condition. Catastrophe theory’s aim is then to detect properties of a function by studying its unfoldings.


Thom studied the continuous crossing from one variety (space) to another, the connections through common boundaries and points between spaces even endowed with different dimensions (research on so-called “cobordism” (1), which earned him the Fields Medal in 1958), until he singled out a few universal forms, that is, mathematical objects representing catastrophes: abrupt, although continuous, transitions of forms. These are specific singularities appearing when an object is submitted to bonds, such as restrictions with regard to its ordinary dimensions, which it accepts except at particular points where it offers resistance by concentrating there, so to say, its structure. The theory is used to classify how stable equilibria change when parameters are varied, with points in parameter space at which qualitative changes affect behavior termed catastrophe points. Catastrophe theory should apply to any gradient system, where the force can be written as the negative gradient of a potential, and the points where the gradient vanishes are what the theory terms degenerate points. There are seven elementary types of catastrophes, or generic singularities of a mapping, and Thom decided to study their applications in caustics: surfaces lit from different angles, reflections and refractions. Initially catastrophe theory was of use just to explain caustic formation and only afterwards many other phenomena, but without yielding quantitative solutions and exact predictions, rather qualitatively framing situations that were uncontrollable by merely reductionistic quantitative methods summing up elementary units. The study of forms in irregular, accidental and even chaotic situations had already led scientists like Poincaré and Hadamard to single out structurally invariant catastrophic evolutions in the most disparate phenomena, in terms of divergences due to sensitive dependence on small variations of the initial conditions.
In such cases there were no exact laws, rather evolutionary asymptotic tendencies, which did not allow exact predictions, at best only statistical ones. When exact predictions are possible, in terms of strict laws and explicit equations, the catastrophe ceases.
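As an illustration not drawn from Thom's own text, the simplest of the seven elementary catastrophes, the fold, already shows the pattern: for the potential V(x) = x³/3 + ax, the equilibria x = ±√(−a) merge and vanish as the control parameter a crosses zero, so a stable state disappears abruptly under a perfectly continuous variation. A minimal sketch:

```python
import math

def fold_equilibria(a):
    """Equilibria of V(x) = x**3/3 + a*x, i.e. roots of V'(x) = x**2 + a.
    The catastrophe point a = 0 is where V' and V'' vanish together."""
    if a < 0:
        r = math.sqrt(-a)
        return [-r, r]      # x=-r unstable (V''<0), x=+r stable (V''>0)
    elif a == 0:
        return [0.0]        # degenerate point: V'(0) = V''(0) = 0
    else:
        return []           # no equilibria: the stable state has vanished

# Sweeping the control parameter a through 0 enacts the fold catastrophe:
for a in (-1.0, 0.0, 1.0):
    print(a, fold_equilibria(a))
```

The number of equilibria, a qualitative feature, changes discontinuously even though both the potential and the parameter vary smoothly.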

For Thom, catastrophe was a methodology. He says,

Mathematicians should see catastrophe theory as just a part of the theory of local singularities of smooth morphisms, or, if they are interested in the wider ambitions of this theory, as a dubious methodology concerning the stability or instability of natural systems….the whole of qualitative dynamics, all the ‘chaos’ theories talked about so much today, depend more or less on it.

Thom gets more philosophical when it comes to the question of morphogenesis. Stability, for Thom, is a natural condition to place upon mathematical models of processes in nature, because the conditions under which such processes take place can never be duplicated; the models must therefore be invariant under small perturbations and hence stable. What makes morphogenesis interesting for Thom is the fact that locally, as the transition proceeds, the parameter varies from a stable state of a vector field to an unstable state and back to a stable state, by means of a process which locally models the system’s morphogenesis. Furthermore, what is observed in a process undergoing morphogenesis is precisely the shock wave and the resulting configuration of chreods (2) separated by strata of the shock wave, at each interval of time and over intervals of observation time. It then follows “that to classify an observed phenomenon or to support a hypothesis about the local underlying dynamic, we need in principle only observe the process, study the observed catastrophe or discontinuity set and try to relate it to one of the finitely many universal catastrophe sets”, which would then become our main object of interest. Even if a process depends on a large number of physical parameters, as long as it is described by the gradient model, its description would involve one of the seven elementary catastrophes; in particular, one can give a relatively simple mathematical description of such apparently complicated processes even if one does not know what the relevant physical parameters are or what the physical mechanism of the process is. According to Thom, “if we consider an unfolding, we can obtain a qualitative intelligence about the behaviors of a system in the neighborhood of an unstable equilibrium point.” This idea was not widely accepted and was criticized by applied mathematicians, because for them only numerical exactness allowed prediction and therefore efficient action.
After the work of Grothendieck, it is known that the theory of singularity unfolding is a particular case of a more general theory, that of flat deformations of an analytic set; and for flat local deformations of an analytic set, only the hypersurface case has a smooth unfolding of finite dimension. For Thom, this meant that if we wanted to extend the scientific domain of calculable exact laws, we would be justified in considering the instance where an analytic process leads to a singularity of codimension one in the internal variables. Might we then not expect the process to be diffused and subsequently propagated in the unfolding according to a mode that remains to be defined? Such an argument allows one to think that the Wignerian domain of exact laws can be extended into a region where physical processes are no longer calculable, but where analytic continuation remains qualitatively valid.
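The gradient picture behind the seven elementary catastrophes can be made concrete with the cusp, the simplest non-trivial case. The sketch below is my own illustration, not drawn from Thom's text: for the potential V(x) = x⁴/4 + a·x²/2 + b·x, equilibria solve x³ + ax + b = 0, and the discriminant of that cubic tells us whether the control point (a, b) lies inside the cusp region (three equilibria, bistability) or outside it (a single equilibrium).

```python
# Cusp catastrophe: equilibria of V(x) = x^4/4 + a*x^2/2 + b*x.
# Equilibria solve dV/dx = x^3 + a*x + b = 0; the discriminant of this
# depressed cubic, D = -4a^3 - 27b^2, partitions the control plane:
#   D > 0  -> three distinct equilibria (inside the cusp, bistable)
#   D = 0  -> the fold lines: two equilibria merge (the catastrophe set)
#   D < 0  -> a single equilibrium (outside the cusp)

def discriminant(a, b):
    return -4 * a**3 - 27 * b**2

def num_equilibria(a, b):
    d = discriminant(a, b)
    if d > 0:
        return 3
    if d == 0:
        # On the fold lines two roots coincide; at the cusp point
        # (a, b) = (0, 0) all three merge into one.
        return 2 if (a, b) != (0, 0) else 1
    return 1

if __name__ == "__main__":
    # Inside the cusp: a = -3, b = 0 gives equilibria at x = 0, +/- sqrt(3).
    print(num_equilibria(-3, 0))   # 3
    # On the fold line 4a^3 + 27b^2 = 0: a = -3, b = 2.
    print(num_equilibria(-3, 2))   # 2
    # Outside the cusp: a single stable equilibrium.
    print(num_equilibria(1, 1))    # 1
```

Sweeping b across a fold line at fixed a < 0 makes one equilibrium vanish, which is exactly the discontinuous "jump" that catastrophe theory reads as a qualitative change on a continuous substrate.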


Anyway, catastrophe theory studies forms as qualitative discontinuities on a continuous substrate. In any case, forms as mental facts are immersed in a matter which is itself still an object of thought. The more one tries to analyze this matter, the more it appears as a fog, revealing an ever more complex and inexhaustible weaving the more it refines itself through the forms it assumes. In fact its complexity is ascertained more and more, until a true enigma is reached once we want, once and for all, to define reality as a universe endowed with a high number of dimensions, an object of mental experience to which even objective phenomena are in the end concretely reduced. Concrete reality is nonetheless more evident than any scientific explanation, and naïve ontology appears more concrete than the scientific one: it is steady and universal, while the latter is always problematic and revisable. Besides, according to Bachelard, while naïve explanation is immediately reflected in ordinary language, which is accessible to everybody, the would-be scientific explanation moves with its jargon beyond immediate experience, away from the life-world which alone we can know immediately.

For example, the continuous character of reality, which Thom entrusts to a world-intuition as the frame of the phenomenological discontinuities themselves, is contradicted by the present tendency of modern computing to reduce everything to discrete units of information (bits). Of course discretization has a practical value: an animal individuating a prey perceives it as an entity absolutely distinct from its environment, just as we discretize linguistic phonemes in order to learn to speak without confounding them. Yet a continuous background remains, notwithstanding the tendency of our brains to discretize; such a background is constituted, for example, by space and time. The continuum is said to be an illusion, as exemplified by a film which appears continuous to us while it is made of discrete frames. It really is an illusion, but one with a true mental base, otherwise it would not arise at all, and that base is precisely the existence of the continuum. We really perceive the continuum, but we need discreteness and finiteness in order to keep things under control. Quantum mechanics, admittedly, seems to introduce discreteness in absolute terms, something we do not understand but which is operatively valid, as is shown by the possibility of localizing or delocalizing a wave packet simply by varying the value distributions of complementary variables such as position and momentum, or time and energy, according to Heisenberg’s uncertainty principle. Yet even the apparent quantum discontinuity hides a continuity which, again according to Heisenberg’s principle, may only be obscured, not cancelled, in several phenomena. This is difficult to conceive, but not monstrous. The hypothesis according to which we are finite and discrete in our internal structure is therefore false, for we are more than that.
We have hundreds of billions of neurons in continuous movement, constituted by molecules continuously vibrating in space, so giving rise to infinite possible variations in a considerable number of dimensions, even though we reduce ourselves to the smallest possible number of states and dimensions in order to deal with the system under study, according to a technical and algorithmic thought which is operatively effective, certainly practically motivated, but not identifiable with reality.

(1) Two manifolds M and N are said to be cobordant if their disjoint union is the boundary of some other manifold. Given the extreme difficulty of the classification of manifolds, it would seem very unlikely that much progress could be made in classifying manifolds up to cobordism. However, René Thom, in his remarkable, if unreadable, 1954 paper (in French), gave the full solution to this problem for unoriented manifolds, as well as many powerful insights into the methods for solving it in the cases of manifolds with additional structure. The key step was the reduction of the cobordism problem to a homotopy problem, although the homotopy problem is still far from trivial. This generalized earlier work of Lev Pontrjagin, and the construction is now known as the Thom–Pontrjagin construction.

(2) Every natural process decomposes into structurally stable islands, the chreods. The set of chreods and the multidimensional syntax controlling their positions constitute the semantic model. When the chreod is considered as a word of this multidimensional language, the meaning (signification) of this word is precisely that of the global topology of the associated attractor (or attractors) and of the catastrophes that it (or they) undergo. In particular, the signification of a given attractor is defined by the geometry of its domain of existence on the space of external variables and the topology of the regulation catastrophes bounding that domain. One result of this is that the signification of a form (chreod) manifests itself only by the catastrophes that create or destroy it. This gives the axiom dear to the formal linguists: that the meaning of a word is nothing more than the use of the word; this is also the axiom of the “bootstrap” physicists, according to whom a particle is completely defined by the set of interactions in which it participates.

Emergentic Philosophy or Defining Complexity


If the potential of emergence is not pregnant with what emerges from it, then emergence becomes just gobbledygook, a generally unintelligible mass of abstraction and obscurity. What is this differentiation all about? The origin of differentiation is to be located in what has already been actualized. Potential is thus not only abstract but relative: abstract, since potential could come to mean a host of things other than what it is meant for; and relative, since it is dependent on the intertwinings within which it could unfold. Potentiality is creative for philosophy through an expansive notion of unity through assemblages of multiple singularities, helping dislodge anthropocentric worldviews that insist on a rationale of the world as a solid and stable structure. A way out is to think in terms of liquid structures, where the power to self-organize, untouched by any static human control, allows for an existence at the edge of creative and flowing chaos. Such a position is tangible in history as a confluence of infinite variations, and rooted in a materialism of a revived form. Emergence is a diachronic construction of functional structures in complex systems attaining a synchronic coherence of systemic behavior while constraining the behavior of the individual components, and it is thus crucial in its ramifications for burning questions in the philosophy of science, especially those concerning reductionism. Complexity investigates emergent properties, certain regularities of behavior that somehow transcend the ingredients that make them up. Complexity argues against reductionism, against reducing the whole to the parts, and in doing so it transforms the scientific understanding of far-from-equilibrium structures, of irreversible times, and of non-Euclidean spaces.

Manifold(s) of Deleuzean/De Landian Intensity(ies): The Liquid Flowing Chaos of Complexity


The potential for emergence is pregnant with that which emerges from it, even if only as pure potential, lest emergence be just gobbledygook of abstraction and obscurity. Some aspects of emergence harness more potential, or even more intensity, in emerging. What would intensity mean here? Emergence in its most abstract form is described by differentiation, which is the perseverance of differing by extending itself into the world. Intensity or potentiality would thus be proportional to the intensity/quality and the degree/quantity of differentiation. The obvious question concerns the origin of this differentiation. It comes through what has already been actualized, which puts forth a twist: potential is not just abstract, but also relative. Abstract, because potential can come to mean anything other than what it has a potential for; and relative, since it is dependent upon the intertwining within which it can unfold. So, even if the intensity of the potential of emergence is proportional to differentiation, an added dimension of meta-differentiation is introduced, one that deals not only with the intensity of the potential emergence it actualizes, but also, in turn, with the potential to which its actualization gives rise. This complexification of emergence is termed complexity.

Complexity is that by which emergence intertwines itself with intensity, and is thus itself laden with potentiality. This could mean, in a way, that complexity is a sort of meta-emergence, in that it contains the potential for the emergence of a greater intensity of emergence. This implies that complexity and emergence require each other’s presence in order to co-exist and co-evolve. If emergence is that by which complexity manifests itself in actuality in the world, then complexity is that by which emergence surfaces as potential through intertwining. Where would Deleuze and Guattari fit in here? This is crucial, since complexity for these thinkers is different from the way it has been analyzed above, and we should note where the difference rests. To better cope with the ideas of Deleuze and Guattari, it is mandatory to invite Manuel De Landa, with his intense reading of the thinkers in question. The point is proved in these words of John Protevi:

According to Manuel DeLanda, in the late 60s, Gilles Deleuze began to formulate some of the philosophical significance of what is now sometimes referred to as “chaos/complexity theory,” the study of “open” matter/energy systems which move from simple to complex patterning and from complex to simple patterning. Though not a term used by contemporary scientists in everyday work (“non-linear dynamics” is preferred), it can be a useful term for a collection of studies of phenomena whose complexity is such that Laplacean determinism no longer holds beyond a limited time and space scale. Thus the formula of chaos/complexity might be “short-term predictability, long-term unpredictability.”

Here, potentiality is seen as creative for philosophy within materialism. An expansion of the notion of unity through assemblages of multiple singularities is on the cards, one that facilitates the dislodging of anthropocentric viewpoints, since such views are at best limited, with their over-insistence on a rationale of the world as a stable and solid structure. The solidity of structures is to be rethought in terms that open vistas for potential creation. The only way to accomplish this is in terms of liquid structures that are always vulnerable to chaos and disorder, considered a sine qua non for this creative potential to emerge. In this liquidity, De Landa witnesses the power to self-organize and, further, the ability to form an ethics of sorts, one untouched by static human control, which allows an existence at the edge of creative, flowing chaos. Such a position is tangible in history as a confluence of infinite variations, a rationale that does not exist when processes are dynamic, thus wanting history to be rooted in a materialism of a revived form. Such a history is one of flowing articulations determined not by linear and static constructions but by infinite bifurcations of the liquid unfolding, thus exposing a collective identity from a myriad of points and perspectives. This is complexity for Deleuze and Guattari, which enables a re-look at material systems through their powers of immanent autopoiesis or self-organization.

State-Space Trajectory and Basin of Attraction

A system that undergoes unexpected and/or violent upheaval is always said to be facing a rupture, of a kind that is supposedly comprehensible by analyzing the subsystems that make up the systemic whole. Although this could prove to be quite an important analytical tool, it is seldom free of aporetic situations, because of the unthought behavior the system exhibits. This behavior emanates from localized zones and sends out lines of rupture that prevent the whole system from being comprehended successfully. To overcome this predicament, one must legitimize the existence of what Bak and Chen refer to as autopoietic or self-organized criticality.

…composite systems naturally evolve to a critical state in which a minor event starts a chain reaction that can affect any number of elements in the system. Although composite systems produce more minor events than catastrophes, chain reactions of all sizes are an integral part of the dynamics. According to the theory, the mechanism that leads to minor events is the same one that leads to major events. Furthermore, composite systems never reach equilibrium but evolve from one meta-stable state to the next… self-organized criticality is a holistic theory: the global features, such as the relative number of large and small events, do not depend on the microscopic mechanisms. Consequently, global features of the system cannot be understood by analyzing the parts separately. To our knowledge, self-organized criticality is the only model or mathematical description that has led to a holistic theory for dynamic systems.
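Bak and Chen's paradigm case is the sandpile model, and the mechanism is simple enough to sketch directly. The version below is a minimal pure-Python illustration (grid size and grain count are my own arbitrary choices): grains are dropped on the centre of a grid, a cell holding four grains topples and sheds one grain to each neighbour, and topplings cascade into avalanches of any size, with grains falling off the boundary.

```python
# Bak-Tang-Wiesenfeld sandpile: grains dropped at the centre of an N x N
# grid; any cell reaching 4 grains topples, giving one grain to each of
# its four neighbours (grains falling off the edge are lost). The size
# of each avalanche (number of topplings) is recorded.

N = 11
grid = [[0] * N for _ in range(N)]

def relax(grid):
    """Topple until every cell holds fewer than 4 grains; return avalanche size."""
    topplings = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(N):
            for j in range(N):
                if grid[i][j] >= 4:
                    grid[i][j] -= 4
                    topplings += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < N and 0 <= nj < N:
                            grid[ni][nj] += 1
    return topplings

sizes = []
for _ in range(5000):
    grid[N // 2][N // 2] += 1       # drop one grain at the centre
    sizes.append(relax(grid))

# Many minor events, few catastrophes: the signature of the critical state.
print("max avalanche:", max(sizes))
print("zero-size avalanches:", sizes.count(0))
```

The same toppling rule produces both the minor events and the rare large cascades, which is exactly Bak and Chen's point: no separate mechanism is needed for the catastrophes.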

The acceptance of this criticality as existing has an affirmative impact on the autopoietic system as it moves toward what is aptly called the critical point, at which a single event is loaded with a plethora of effects. This multitude is grasped through state-space descriptions or diagrams, such as Wuensche’s, with their uncanny characteristic of assigning different dimensions to different, independent variables.

In all complex-systems simulations, the state of the system at each moment is described by a set of variables. As the system is updated over time, these variables undergo changes influenced by the previous state of the entire system. System dynamics can be viewed as tabular data depicting the changes in the variables over time. However, it is hard to analyze system dynamics just by looking at these changes, as causal relationships between variables are not readily apparent. By removing all the details about the actual state and the actual temporal information, we can view the dynamics as a graph, with nodes describing states and links describing transitions. For instance, software applications can have a very large number of states, and problems occur when they reach uncommon or unanticipated ones. Being able to visualize the entire state space, and to quickly comprehend the paths leading to any particular state, allows more targeted analysis: common states can be thoroughly tested, and uncommon states can be identified and artificially induced. State-space diagrams allow numerous insights into system behaviour; in particular, some states of the system can be shown to be unreachable, while others are unavoidable. They are applicable in any situation in which a model or system changes state over time and one wants to examine the abstract dynamical qualities of these changes: social network theory, gene regulatory networks, urban and agricultural water usage, concept maps in cognition, and language modeling.
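As an illustration, take a hypothetical three-node Boolean network of my own devising (not one of the systems cited above). Enumerating the whole state space lets us build the transition graph and read off both the unreachable "Garden of Eden" states (no predecessor anywhere) and the cycles on which every trajectory must eventually land:

```python
from itertools import product

# A toy synchronous Boolean network with three nodes A, B, C:
#   A' = B and C,  B' = A or C,  C' = A xor B
def step(state):
    a, b, c = state
    return (b & c, a | c, a ^ b)

states = list(product((0, 1), repeat=3))
graph = {s: step(s) for s in states}          # each state has one successor

# Unreachable ("Garden of Eden") states: no predecessor anywhere.
reachable = set(graph.values())
unreachable = [s for s in states if s not in reachable]

# Attractors: follow each trajectory until it revisits a state; the
# revisited tail is a cycle (a fixed point or a periodic attractor).
attractors = set()
for s in states:
    seen = []
    while s not in seen:
        seen.append(s)
        s = graph[s]
    attractors.add(frozenset(seen[seen.index(s):]))

print("unreachable states:", unreachable)
print("attractors:", attractors)
```

For this network, (1,0,0) and (1,0,1) turn out to have no predecessors, and every trajectory ends either at the fixed point (0,0,0) or on the two-state cycle between (0,0,1) and (0,1,0): unreachable and unavoidable states made visible at a glance.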

In such a scenario, every state of the system is represented by a unique point in the state-space, and the dynamics of the system are mapped by trajectories through that space. When trajectories converge on a point, that point is an attractor, and the set of states whose trajectories lead into it is its basin of attraction; it is at the attractor that the system reaches stability.
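A one-dimensional toy makes the picture concrete (the logistic map and the parameter value below are my own illustrative choices, not taken from the text): for x' = 2.5·x·(1 − x), every start inside the open interval (0, 1) is drawn to the same fixed point 0.6, so that interval is the fixed point's basin of attraction.

```python
# Basin of attraction for the logistic map x' = r*x*(1-x) with r = 2.5.
# The fixed point x* = 1 - 1/r = 0.6 is stable (the map's slope there is
# -0.5, of magnitude less than 1), and every trajectory starting inside
# (0, 1) converges to it.

def iterate(x, r=2.5, steps=200):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

if __name__ == "__main__":
    # Four starts scattered across the basin all land on the attractor.
    for x0 in (0.05, 0.3, 0.6, 0.95):
        print(x0, "->", round(iterate(x0), 6))
    # A start outside the basin behaves differently: x0 = 0 stays at 0.
    print(0.0, "->", iterate(0.0))
```

The trajectories differ wildly at first, but the attractor erases their initial differences, which is what makes basins useful for reading stability off a state-space diagram.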


But would this attractor phenomenon work for neural networks, where there are myriad nodes, each with its own corresponding state-space? The answer is in the affirmative, because in stable systems only a few attractor points are present, pulling the system towards stability. If the system is not stable, by contrast, utter chaos reigns supreme, and this is where autopoiesis comes to the rescue by balancing perfectly between chaotic and ordered states. This balancing act is extremely crucial and sensitive, for on the one hand a chaotic system is too disordered to be beneficial, while on the other a highly stable system suffers the handicap of dedicating a great many resources towards reaching and maintaining its attractor point(s). Moreover, even a transition from one stable state to another would entail sluggish adaptive responses to the environment, and at a heavy cost in perturbations. Self-organized criticality takes care of such high costs by optimally utilizing the available resources. And as the nodes possess unequal weights to begin with, a fight for superiority takes precedence, which gets reflected in the state-space as well. If the inputs are marked by variations, optimization through autopoietic criticality takes over; otherwise, the system settles down to a strong attractor or attractors.

The nodes that interact in local zones are responsible for the macro-level effects, and according to Kauffman this is possible in simple networks through inputs switching between the values “on” and “off”. In such cases, order is ensured by the formation of cores of stability that thereafter permeate the network and see to it that the system reaches stability by drawing other nodes into stability. In complex networks, nonlinearity is incorporated to adjust signals approaching the critical point. The adjustment sees to it that if the signals take higher values the system slides into stability, and otherwise into chaos. Adjustment is therefore an important mechanism by which complex systems self-organize by hovering around criticality, and this is what Lewin refers to as being “on the edge of chaos”.
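Kauffman's cores of stability can be seen in a tiny hand-built Boolean network. The four nodes and update rules below are my own illustrative construction, not Kauffman's: node A is forced to a constant value, the freeze propagates through the nodes that depend on it, and the core of stability grows, while a node decoupled from the core keeps oscillating.

```python
# A four-node synchronous Boolean network illustrating a frozen core:
#   A' = A and (not A)   -> constantly 0 after one step
#   B' = A or B          -> latches to 1 once B has been 1
#   C' = A and C         -> frozen to 0 once A freezes
#   D' = not D           -> oscillates forever (period 2)
def step(state):
    a, b, c, d = state
    return (a & (1 - a), a | b, a & c, 1 - d)

state = (1, 1, 1, 0)
history = [state]
for _ in range(10):
    state = step(state)
    history.append(state)

# A node belongs to the frozen core if its value is constant after the
# short transient (here, from the third recorded state onward).
tail = history[2:]
frozen = [name for k, name in enumerate("ABCD")
          if len({s[k] for s in tail}) == 1]

print("frozen core:", frozen)   # A, B and C freeze; D never does
```

Here the core {A, B, C} forms exactly as described above: stability spreads node by node as frozen values propagate through the wiring, while D, untouched by the core, stays dynamic.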