Excessive Subjective Transversalities. Thought of the Day 33.0

In other words, object and subject, in their mutual difference and reciprocal trajectories, emerge and re-emerge together, from transformation. The everything that has already happened is emergence, registered after its fact in a subject-object relation. Where there was classically and in modernity an external opposition between object and subject, there is now a double distinction internal to the transformation. 1) After-the-fact: subject-object is to emergence as stoppage is to process. 2) In-fact: “objective” and “subjective” are inseparable, as matter of transformation to manner of transformation… (Brian Massumi Deleuze Guattari and Philosophy of Expression)


Massumi makes the case, after Simondon and Deleuze and Guattari, for a dynamic process of subjectivity in which subject and object are distinct, yet their relation transforms its own terms. That relation is emergence. In Felix Guattari’s last book, Chaosmosis, he outlines the production of subjectivity as transversal. He states that subjectivity is

the ensemble of conditions which render possible the emergence of individual and/or collective instances as self-referential existential Territories, adjacent, or in a delimiting relation, to an alterity that is itself subjective.

This is the subject in excess (Simondon; Deleuze), overpowering the transcendental: the subject as constituted by all the forces that simultaneously impinge upon it and stand in relation to it. Similarly, Simondon characterises this subjectivity as the transindividual, which refers to

a relation to others, which is not determined by a constituted subject position, but by pre-individuated potentials only experienced as affect (Adrian Mackenzie, Transductions: Bodies and Machines at Speed).

Applying this proposition to technologically enabled relations exerts a strong attraction on the experience of felt presence and interaction in distributed networks. Simondon’s principle of individuation, an ontogenetic process similar to Deleuze’s morphogenetic process, is committed to the guiding principle

of the conservation of being through becoming. This conservation is effected by means of the exchanges made between structure and process… (Simondon).

Or think of this as structure and organisation, which is autopoietic process; the virtual organisation of the affective interval. These leanings best situate ideas circulating through collectives and their multiple individuations. These approaches reflect one of Bergson’s lasting contributions to philosophical practice: his anti-dialectical methodology that debunks duality and the synthesised composite for a differentiated multiplicity that is also a unified (yet heterogeneous) continuity of duration. Multiplicities replace the transcendental concept of essences.

Unformed Bodies Without Organs (BwO) and Protevi’s Version of Autopoiesis Spreading Rhizomatically. Thought of the Day 32.0


Protevi’s interpretation of autopoietic organisation as equivalent to the virtual, unformed, unorganised BwO is in many ways radical. For many, the theory of autopoiesis is a ‘closed’ system theory, in contrast to virtuality, which signals the third-wave cybernetics of open systems. One’s position on this issue, though the distinctions are indeed ‘fuzzy’, dictates the descriptives of discourse. The preference here favours the catalysis of human-machinic interplay as it veers towards the transductive and transversal. But these terms of fluidity should remain fluid. Despite a nearly universal theoretical disavowal of the Cartesian paradigm, is it still problematic to surrender the Enlightenment’s legacy of the liberal humanist subject? To surrender the notion of identity, of self and other as individually determined? Does the plausibility of the posthuman send silent shivers down the vertebrae of the elitist Homo sapiens? Are realities constructed from an always already individual being, or is it that “autonomous will is merely the story consciousness tells itself to explain results that actually come about through chaotic dynamics and emergent structures”? To in any way grasp the dimension of the collective through collaborative practice, a path must be traversed through the (trans)individual. The path explored here is selective. It begins with Bergson and spreads rhizomatically.

The Semiotic Theory of Autopoiesis, OR, New Level Emergentism


The dynamics of all the life-cycle meaning processes can be described in terms of basic semiotic components, algebraic constructions of the following forms:

Pn[мn : fn(Ξn) → Ξn+1]

where Ξn is a sign system corresponding to a representation of a (design) problem at time t1; Ξn+1 is a sign system corresponding to a representation of the problem at time t2, t2 > t1; fn is a composition of semiotic morphisms that specifies the interaction of variation and selection under the condition of information closure, which requires that no external elements be added to the current sign system; мn is a semiotic morphism; and Pn is the probability associated with мn, with ΣPn = 1, n = 1,…,M, where M is the number of meaningful transformations of the resultant sign system after fn. There is a partial ranking – an importance ordering – on the constraints of A in every Ξn, such that lower-ranked constraints can be violated in order for higher-ranked constraints to be satisfied. The morphisms of fn preserve the ranking.
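The component above can be read operationally. The following Python sketch is our own illustration, not part of the theory: the sign systems, the information-closed fn, the candidate morphisms мn with probabilities Pn summing to 1, and the retrospective collapse to a single mapping with Pn = 1 are all modelled with invented stand-ins.

```python
import random

# Illustrative sketch (ours; all names and values are assumptions) of one
# "basic semiotic component". A sign system Xi_n is first transformed by
# f_n under information closure (no external signs are added), then one of
# M candidate morphisms m_1..m_M is realised, with probabilities P_1..P_M
# summing to 1.

def f(sign_system):
    # variation/selection under information closure: rearrange existing
    # signs, never import new ones
    return sorted(set(sign_system))

def step(sign_system, morphisms, probs, rng):
    assert abs(sum(probs) - 1.0) < 1e-9            # sum of P_n must equal 1
    closed = f(sign_system)
    m = rng.choices(morphisms, weights=probs)[0]   # on-going process: a draw
    return m(closed), m                            # Xi_{n+1}, realised м_n

rng = random.Random(0)
xi0 = ["problem", "goal", "constraint"]            # Xi_n at time t1
morphisms = [lambda s: s + ["solution"],           # meaningful transformations
             lambda s: s[:-1]]
xi1, realised = step(xi0, morphisms, [0.7, 0.3], rng)   # Xi_{n+1} at time t2

# Retrospectively (for the past), the distribution collapses to the single
# realised mapping with P_n = 1, degenerating to a plain function:
collapsed = {realised: 1.0}
```

The ranking of constraints and its preservation by fn are omitted here for brevity; they would amount to an ordering check performed on each candidate morphism.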

The Semiotic Theory of Self-Organizing Systems postulates that in the scale hierarchy of dynamical organization, a new level emerges if and only if a new level in the hierarchy of semiotic interpretance emerges. As the development of a new product always and naturally causes the emergence of a new meaning, the above-cited Principle of Emergence directly leads us to the formulation of the first law of life-cycle semiosis as follows:

I. The semiosis of a product life cycle is represented by a sequence of basic semiotic components, such that at least one of the components is well defined in the sense that not all of its morphisms of м and f are isomorphisms, and at least one м in the sequence is not level-preserving in the sense that it does not preserve the original partial ordering on levels.

For the present (i.e. for an on-going process), there exists a probability distribution over the possible мn for every component in the sequence. For the past (i.e. retrospectively), each of the distributions collapses to a single mapping with Pn = 1, while the sequence of basic semiotic components is degenerated to a sequence of functions. For the future, the life-cycle meaning-making

Autopoiesis Revisited


Autopoiesis principally dealt with determining the essence of living beings, and thus called attention to a clarification between organization and structure. The distinction runs as follows: organization subtends the set of all possible relations of the autopoietic processes of an organism, while structure is a synchronic snapshot from that organizational set, the one active at any given instant. This distinction was tension-ridden, for it inhibited the possibility of producing a novel functional structure, especially when the system underwent perturbations vis-à-vis the environment that housed it. Thus, within the realm of autopoiesis, diachronic emergence was conceivable only as natural drift. John Protevi throws light on this perspective with his insistence on synchronic emergence as autonomous; and since autonomy is interest-directed, the question of autopoiesis in the social realm is ruled out. The rejection of extending autopoiesis to the social realm, especially Varela’s rejection, is best understood as a move beyond autopoiesis rather than beyond neocybernetics as concerned with the organizational closure of informational systems, lest a risk of slipping into polarization should loom large. Charges of fascistic and authoritarian tendencies in Varela were indeed ill-conceived. This polarity, which Varela later in his intellectual trajectory considered as comprising fragments that constitute a collectively constructed whole, was a launch pad for Luhmann to enter the fray and apply autopoiesis to social systems. Autopoiesis forms the central notion of his self-referential systems, where the latter are characterized by acknowledging their reference to themselves in every operation. An autopoietic system, while organizationally closed, nevertheless references an environment, background or context.
This indicates that pure auto-referentiality is generally lacking, replaced instead by a broader process of self-referentiality which comprises hetero-referentiality, with a reference to an environment. This process must be watchful of the distinction between itself and the environment, lest it fail to take off. As Luhmann says, if an autopoietic system did not have an environment, it would be forced to invent one as the horizon of its auto-referentiality.

A system distinguishes itself from the environment by boundaries: the environment is a zone of high-degree complexity, the system one of reduced complexity. Even Luhmann’s system is interest-driven, communicating selectively with the available information to the best of its efficiency. Luhmann likens the operation of autopoiesis to a program making a series of logical distinctions. Here, Luhmann refers to the British mathematician G. Spencer Brown’s logic of distinctions, which Maturana and Varela had identified as a model for the functioning of any cognitive process. The supreme criterion guiding the “self-creation” of any given system is a defining binary code. This binary code is taken by Luhmann to problematize the auto-referential system’s continuous confrontation with the dilemma of disintegration/continuation. Importantly, Luhmann treats systems on an ontological level – that is, systems exist – and he attempts to change this paradigm through the differential relations between the system and the environment.
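Luhmann’s image of a system operating like a program that draws Spencer Brown’s distinction via its binary code can be sketched minimally. The code below is our own toy, not Luhmann’s example: the legal/illegal code and the event dictionaries are invented for illustration. Each operation classifies events under the system’s defining code; whatever the code cannot mark falls to the environment.

```python
# Toy illustration (ours, not Luhmann's): a self-referential system "sees"
# only what its defining binary code can mark; the rest is environment.

def observe(events, code):
    marked, unmarked = [], []
    for e in events:
        (marked if code(e) else unmarked).append(e)   # drawing the distinction
    return marked, unmarked

# Hypothetical binary code of a legal system: legal/illegal. Events lacking
# that dimension cannot be marked at all.
legal_code = lambda e: e.get("legal") is not None

events = [{"legal": True}, {"price": 3}, {"legal": False}]
marked, unmarked = observe(events, legal_code)
# marked: both legally coded events (legal *and* illegal);
# unmarked: the economic event, which for this system is environment
```

Note that the code marks both values of the distinction (legal and illegal alike); what it excludes is not the negative value but everything outside the distinction itself.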

Philosophically, complexity and self-organizational principles shift trends towards interdisciplinarity. Take the case of holism: emergentism within complexity abhors study through reductionism. Scientifically, this notion of holism failed to stamp its authority for lack of any solid scientificity, and the hubristic Newtonian paradigm of reductionism as the panacea for all ills came to stay. Rapprochement was not possible until the German biologist Ludwig von Bertalanffy shocked the prevalent world view with his thesis on the openness of living systems, which interact with surrounding systems for their continual survival. This idea deliberated on a system embedded within an environment, separated by a boundary that lent the system its own identity. Input from the environment and output from the system could be conceived as a plurality of systems interacting with one another to form a network, which, if functionally coherent, is a system in its own right (a supersystem), with the initial systems as its subsystems. This strips the subsystems of any independence, but makes them determinable within the network via relations and/or mappings. This is in general termed constraint, which abhors independence from the relations between the coupled systems (supersystem/subsystem). If the coupling between the systems is tight enough, an organization with its own identity and autonomy results. Cybernetics deals precisely with such a formulation, where the autonomy in question is maintained through goal-directed, seemingly intelligent action, in line with the thoughts of Varela and Luhmann. This is significant because perturbations originating in the environment are actively compensated for by the system in order to maintain its preferred state of affairs, with a greater amount of perturbation implying greater compensatory action on the part of the system.
One consequence of such a systemic perspective is that it gets rid of the Cartesian mind-matter split by thinking of mind as nothing more than a special kind of relation. Such is the efficacy of autopoiesis in negotiating the dilemma surrounding the metaphysical question concerning the origin of order.
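The cybernetic compensation described above, where perturbations from the environment are met by proportionate corrective action to hold a preferred state, can be sketched as a minimal negative-feedback loop. Everything here (the state variable, the gain, the perturbation values) is an illustrative assumption of ours, not drawn from Varela or Luhmann:

```python
# Minimal negative-feedback sketch (illustrative values): the system
# counteracts each environmental perturbation with a compensatory action
# proportional to its deviation from the preferred state.

def compensate(state, preferred, perturbation, gain=0.5):
    perturbed = state + perturbation     # environment acts on the system
    error = preferred - perturbed        # deviation from the preferred state
    return perturbed + gain * error      # greater perturbation, greater compensation

state, preferred = 20.0, 20.0
for p in [3.0, -5.0, 1.0]:               # a series of perturbations
    state = compensate(state, preferred, p)
# the system ends close to its preferred state despite the perturbations
```

The gain parameter is what makes compensation proportionate: the larger the deviation a perturbation produces, the larger the corrective action taken in the same step.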

Could Complexity Rehabilitate Mo/PoMo Ethics?

A well-known passage from Marie Fleming could be invoked here to acquit complexity of the charges and accusations pertaining to relativism. She writes,

Anyone who argues against reason is necessarily caught up in a contradiction: she asserts at the locutionary level that reason does not exist, while demonstrating by way of her performance in argumentative processes that such reason does in fact exist.

Such an absolute statement about complexity would similarly consume itself along the way.


Taking the locutionary from the above quote, it can be used to distinguish the locutionary from the performative, or logic from rhetoric. Such a distinction gains credibility if one is able to locate an Archimedean point from which to share discourse/s, which, from the point of view of complexity theory, would be a space outside the autopoietic system, or, in other words, a meta-theoretical framework. Such a framework is looked upon skeptically by complexity, which has no qualms in acknowledging the performative tensions at work. Such tensions are generative of ethical choices and consequences, since any claim of access to the finality of knowledge is built upon the denial of critical perspective/s, thus shrouding the entire exercise in either a veil of ignorance or a hubristic pride, or rendering it illusory at best.

Morality gains significance, since its formulation is often ruptured for want of secure and certain knowledge, neither of which is provided by complexity theory or French theory, according to the accusations levelled against them. Yet, in making choices that are normative in nature, a clear formulation of the ethical is obligated. Lyotard’s underlying conditions of knowledge are often considered unethical, as he admits to the desire for justice being shrouded in an unknown intellectual territory. Lyotard has Habermas in mind here, since for the latter’s communication therapy, what is mandated is clearly consensual agreement on the part of the public to seek out these metaprescriptions as universally valid and as spanning all language games. Habermas is targeted for deliberately ignoring the diversity inherent in the post-modern society. For Lyotard,

It is a monster formed by the interweaving of various networks of heteromorphous classes of utterances (denotative, prescriptive, performative, technical, evaluative, etc.). There is no reason to think that it would be possible to determine metaprescriptives common to all of these language games or that a revisable consensus like the one in force at a given moment in the scientific community could embrace the totality of metaprescriptions regulating the totality of statements circulating in the social collectivity. As a matter of fact, the contemporary decline of narratives of legitimation – be they traditional or ‘modern’ (the emancipation of humanity, the realization of the idea) – is tied to the abandonment of this belief.

The fight over consensus, whether it can be achieved at all, is contentious between Lyotard and Habermas. It could be attained, but only locally, and should not even vie for universal validity. Lyotard scores a point over Habermas here because of his emphasis on the permeability of discursive practices dressed with paralogy. Justice, as a subset of ethics in the post-modern society, in order to overcome its status as a problematic, must recognize the heteromorphous nature of language games or phrase regimens on the one hand, while on the other any consensus reached must have a local space-time valuation, contingently subject to refutation or nullification. Such a diagnosis goes against the crux of modernism’s idea of ethics as founded upon a foundational and universal set of rules, and maybe imperatives. Modernism’s idea of ethics is no different, at least in formative structure, from rule-based analysis, since both demand strict adherence to the dictates of rules and guidelines. Liberation comes in the form of post-modernism. Bauman sees the post-modern society as not only setting us free but also pushing us towards a paradoxical situation in which agents have the fullness of moral choice and responsibility while simultaneously being deprived of the comfort of the universal guidance promised by modernism. Moral responsibility comes with the loneliness of moral choice. Such paradoxical situations facing man in the post-modern society only reinvest faith in the agonistics of the network. At the same time, such an aporetic position is too paradoxical to satisfy many. Taking cues from the field of jurisprudence, the works of Drucilla Cornell could help clear the muddy waters here, to the extent of a satisfactory resolution. Cornell aims to establish the relationship of the philosophy of the limit, or what she calls the post-structural theory of Derrida in principle, to questions of ethics, law and justice.
Cornell shows no inhibitions about accepting the complexity of relationships governing humans, and in the process accepts Hegel as the vantage point. Hegel criticizes Kant for his abstract idealism, and admits our constitution within a social structure that is teleologically headed for perfection. In short, the dialectical process is convergent for Hegel, since it operates within a social/historical system aiming towards organization. Adorno differs here, since for him dialectics is always divergent, with stress laid upon the differences between humans as always irreducible to a totalizing, organized system. This position of Adorno, with its sympathy for difference, is much closer to complexity than it at first would seem. Cornell carries on from there and introduces the work of Luhmann, a towering figure in sociology when it comes to bringing autopoiesis within the fold. Humans are never allowed to stand outside the system, which Luhmann thinks is not only complex but autopoietic as well. Therefore, on an individual level, the element of choice has no role to play except accepting the system, which undergoes organization to best suit its survival through a process of evolution, not transformation. Luhmann’s understanding still prioritizes the present, and has no place for the past or the uncertain future. Cornell considers this a drawback: she makes the past an ingredient in understanding the meaning of an event on the one hand, and, following Derrida, wants to take up responsibility for the future, even if it is unknown, on the other. With a structure like this in place, it is possible to evade both the rigidity of modernist claims on ethics and the fluidity of evasive tendencies towards responsibility. Instead, what Cornell calls for is an acceptance of present ethical principles in all seriousness: resistance to change, together with awareness of how the principles are applied, is what is called for.
Ethics involves calculation in a responsible manner. In a similar vein, complexity entails irreducibility to calculation, in the sense of coming out with novel tendencies involving a creativity that is not simply a flight of fancy, but an imagination laden with responsibility. Only in this regard could ethics mean not being subject to any normativity. And one of the ways to achieve this is, obviously, to shy away from intellectual arrogance.

Manifold(s) of Deleuzean/De Landian Intensity(ies): The Liquid Flowing Chaos of Complexity


The potential for emergence is pregnant with that which emerges from it, even if as pure potential, lest emergence be just gobbledygook of abstraction and obscurity. Some aspects of emergence harness more potential, or even more intensity, in emergence. What would intensity mean here? Emergence in its most abstract form is described by differentiation, which is the perseverance of differing by extending itself into the world. Thus intensity or potentiality would be proportional to the intensity/quality and degree/quantity of differentiation. The obvious question is the origin of this differentiation. It comes through what has already been actualized, thus putting forth a twist. The twist is that potential is not just abstract but also relative: abstract, because potential can come to mean anything other than what it has a potential for; and relative, since it is dependent upon the intertwining within which it can unfold. So, even if the intensity of the potential for emergence is proportional to differentiation, an added dimension of meta-differentiation is introduced that deals not only with the intensity of the potential emergence it actualizes, but also, in turn, with the potential to which its actualization gives rise. This complexification of emergence is termed complexity.

Complexity is that by which emergence intertwines itself with intensity, and is thus laden with potentiality itself. This could mean that complexity is a sort of meta-emergence, in that it contains the potential for the emergence of greater intensities of emergence. This implies that complexity and emergence require each other’s presence to co-exist and co-evolve. If emergence is that by which complexity manifests itself in actuality in the world, then complexity is that by which emergence surfaces as potential through intertwining. Where would Deleuze and Guattari fit in here? This is crucial, since complexity for these thinkers differs from the way it has been analyzed above; let us note where the difference rests. To better cope with the ideas of Deleuze and Guattari, it is mandated to invite Manuel De Landa with his intense reading of the thinkers in question. The point is made in these words of John Protevi:

According to Manuel DeLanda, in the late 60s, Gilles Deleuze began to formulate some of the philosophical significance of what is now sometimes referred to as “chaos/complexity theory,” the study of “open” matter/energy systems which move from simple to complex patterning and from complex to simple patterning. Though not a term used by contemporary scientists in everyday work (“non-linear dynamics” is preferred), it can be a useful term for a collection of studies of phenomena whose complexity is such that Laplacean determinism no longer holds beyond a limited time and space scale. Thus the formula of chaos/complexity might be “short-term predictability, long-term unpredictability.”
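Protevi’s formula, “short-term predictability, long-term unpredictability”, can be seen in the simplest non-linear system. The logistic map below is our own standard illustration, not an example from the text: two deterministic trajectories a billionth apart remain indistinguishable for a few steps, then diverge completely.

```python
# Logistic map at r = 4: fully deterministic yet chaotic. Two nearby initial
# conditions illustrate short-term predictability, long-term unpredictability.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-9      # trajectories one billionth apart
gaps = []
for _ in range(50):
    a, b = logistic(a), logistic(b)
    gaps.append(abs(a - b))

short_term_gap = gaps[4]         # after 5 steps: still negligible
long_term_gap = max(gaps[-20:])  # late in the run: order-one divergence
```

The divergence is exponential (the separation roughly doubles each step for this map), which is precisely why Laplacean determinism fails beyond a limited time scale even though every step is computed exactly.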

Here, potentiality is seen as creative for philosophy within materialism. An expansion of the notion of unity through assemblages of multiple singularities is on the cards, one that facilitates the dislodging of anthropocentric viewpoints, since such views are at best limited, with an over-insistence on the rationale of the world as a stable and solid structure. The solidity of structures is to be rethought in terms that open vistas for potential creation. The only way to accomplish this is in terms of liquid structures that are always vulnerable to chaos and disorder, considered a sine qua non for this creative potential to emerge. In this liquidity, De Landa witnesses the power to self-organize and, further, the ability to form an ethics of sorts, one untouched by static human control, which allows an existence at the edge of creative, flowing chaos. Such a position is tangible in history as a confluence of infinite variations, a rationale that doesn’t exist when processes are dynamic, thus wanting history to be rooted in a materialism of a revived form. Such a history is one of flowing articulations, determined not by linear and static constructions but by infinite bifurcations of the liquid unfolding, thus exposing a collective identity from a myriad of points and perspectives. This is complexity for Deleuze and Guattari, which enables a re-look at material systems through their powers of immanent autopoiesis or self-organization.

Representation as a Meaningful Philosophical Quandary


The deliberation on representation indeed becomes a meaningful quandary if most of the shortcomings are to be overcome without actually accepting the way they permeate scientific and philosophical discourse. The problem is more ideological than one could have imagined, since it is only within the space of this quandary that one can assume success in overthrowing it. Unless the classical theory of representation that guides expert systems is accepted as existing, there is no way to dislodge the relationship of symbols and meanings that builds up such systems, lest the predicament of falling prey to the Scylla of a metaphysically strong notion of meaningful representation as natural, or the Charybdis of an external designer, should gobble us up. If one somehow escapes these maliciously aporetic entities, representation as a metaphysical monster still stands to block our progress. Is it really viable, then, to think of machines that can survive this representational foe, a foe that gets no aid from the clusters of internal mechanisms? The answer is very much in the affirmative, provided we do away with considering such a non-representational system as continuous and homogeneous, and in its place have functional units that are no longer representational, deriving their efficiency and legitimacy through autopoiesis. What is required is to consider this notional representational critique of distributed systems against the objectivity of science, since objectivity as a property of science has an intrinsic value of independence from the subject who studies the discipline. Kuhn had some philosophical problems with this precise way of treating science as an objective discipline. For Kuhn, scientists operate under or within paradigms, thus obligating hierarchical structures.
Such hierarchical structures ensure the position of scientists to voice their authority on matters of dispute; and when there is a crisis within, or for, the paradigm, scientists do not, to begin with, outrightly reject the paradigm, but try their level best at its resolution. Where resolution proves too difficult, an outright rejection of the paradigm follows, effecting what is commonly called a paradigm shift. If such is the case, the objective tag for science obviously takes a hit, and Kuhn argues in favour of a shift in the social order that science undergoes, signifying the subjective element. Importantly, these paradigm shifts occur to benefit scientific progress, and in almost all cases occur non-linearly. Such a view no doubt slides Kuhn into a position of relativism, and this has been the main point of attack on paradigm shifting. At the forefront of the attacks was Michael Polanyi and his supporters, whose work on the epistemology of science had much the same ingredients but was eventually deprived of fame; Kuhn was even charged with plagiarism. The commonality of their arguments could be measured by a dissenting voice against objectivity in science. Polanyi thought of it as a false ideal, since for him the epistemological claims that defined science were based more on personal judgments, and therefore susceptible to fallibilism. The objective nature of science, which obligates scientists to see things as they really are, is thereby dislodged by the above principle of subjectivity. But if science were to be seen as objective, then human subjectivity would indeed create a rupture as far as the purified version of scientific objectivity is sought. The subject or observer undergoes what is termed the “observer effect”: the change that the act of observation imposes on the phenomenon being observed.
This effect is all but ubiquitous across the domains of science and technology, ranging from the Heisenbug(1) in computing, via particle physics and thermodynamics, to quantum mechanics. The quantum-mechanical observer effect is quite perplexing, and is a result of a phenomenon called “superposition”, which signifies existence in all possible states at once. Superposition is popularly credited to Schrödinger’s cat thought experiment, which entails a cat that is neither dead nor alive until observed. This has led physicists to take into account the acts of “observation” and “measurement” in order to comprehend the paradox in question, and thereby resolve it. But there is still a minority of quantum physicists who vouch for the supremacy of an observer, despite the quantum entanglement effects that go on to explain the impacts of “observation” and “measurement”.(2) Such a standpoint is indeed reflected in Derrida (9-10) as well, when he says (I quote him in full),

The modern dominance of the principle of reason had to go hand in hand with the interpretation of the essence of beings as objects, and object present as representation (Vorstellung), an object placed and positioned before a subject. This latter, a man who says ‘I’, an ego certain of itself, thus ensures his own technical mastery over the totality of what is. The ‘re-‘ of repraesentation also expresses the movement that accounts for – ‘renders reason to’ – a thing whose presence is encountered by rendering it present, by bringing it to the subject of representation, to the knowing self.

If Derridean deconstruction is to work on science and theory, the only way out is to relinquish the boundaries that define or divide the two disciplines. Moreover, if any looseness is encountered in objectivity, the ramifications are felt directly at the level of scientific activity. Even theory does not remain immune to these consequences. Importantly, as scientific objectivity starts to wane, the corresponding philosophical luxury of avoiding the contingent wanes with it. Such a loss of representation, congruent with a certain theory of meaning we live by, has serious ethical-political affectations.

(1) Heisenbug is a pun on Heisenberg’s uncertainty principle: a bug in computing that is characterized by the disappearance of the bug itself when an attempt is made to study it. One common example is a bug that occurs in a program that was compiled with an optimizing compiler, but not in the same program when compiled without optimization (e.g., for generating a debug-mode version). Another example is a bug caused by a race condition. A heisenbug may also appear in a system that does not conform to the command-query separation design guideline, since a routine called more than once could return different values each time, generating hard-to-reproduce bugs in a race condition scenario. One common reason for heisenbug-like behaviour is that executing a program in debug mode often cleans memory before the program starts, and forces variables onto stack locations, instead of keeping them in registers. These differences in execution can alter the effect of bugs involving out-of-bounds member access, incorrect assumptions about the initial contents of memory, or floating-point comparisons (for instance, when a floating-point variable in a 32-bit stack location is compared to one in an 80-bit register). Another reason is that debuggers commonly provide watches or other user interfaces that cause additional code (such as property accessors) to be executed, which can, in turn, change the state of the program. Yet another reason is a fandango on core, the effect of a pointer running out of bounds. In C++, many heisenbugs are caused by uninitialized variables. Another similar pun-intended bug encountered in computing is the Schrödinbug. A schrödinbug is a bug that manifests only after someone reading source code or using the program in an unusual way notices that it never should have worked in the first place, at which point the program promptly stops working for everybody until fixed.
The Jargon File adds: “Though… this sounds impossible, it happens; some programs have harbored latent schrödinbugs for years.”
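The observer-effect flavour of a heisenbug can be sketched in a few lines of Python (a contrived illustration, not drawn from any real codebase): the hypothetical `Counter.value` property has a side effect, so merely “watching” it — as a debugger watch window that evaluates property accessors would — changes what the program subsequently sees.

```python
class Counter:
    """A class whose reads are not innocent: inspecting `value`
    mutates internal state, much as a debugger watch that evaluates
    a property accessor might."""

    def __init__(self):
        self._n = 0

    @property
    def value(self):
        # Side effect inside the accessor: every observation
        # increments the counter.
        self._n += 1
        return self._n


# Run "without a debugger": a single read.
c = Counter()
undisturbed = c.value          # first (and only) observation

# Run "under observation": a watch window peeks at the property
# before our own code reads it, and the result differs.
d = Counter()
_watch = d.value               # the debugger's peek
observed = d.value             # our code's read, now disturbed

print(undisturbed, observed)
```

Here the bug is fully deterministic, but the same mechanism — observation executing code that alters state — is one of the reasons listed above why a bug can vanish the moment a debugger is attached.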

(2) There is a related issue in quantum mechanics concerning whether systems have pre-existing properties – prior to measurement, that is – corresponding to all measurements that could possibly be made on them. The assumption that they do is often referred to as “realism” in the literature, although it has been argued that the word “realism” is being used in a more restricted sense than philosophical realism. A recent experiment in quantum physics has been cited as meaning that we have to “say goodbye” to realism, although the author of the paper states only that “we would [..] have to give up certain intuitive features of realism”. These experiments demonstrate a puzzling relationship between the act of measurement and the system being measured, although it is clear from experiment that an “observer” consisting of a single electron is sufficient – the observer need not be a conscious observer. Also, note that Bell’s Theorem suggests strongly that the idea that the state of a system exists independently of its observer may be false. Note that the special role given to observation (the claim that it affects the system being observed, regardless of the specific method used for observation) is a defining feature of the Copenhagen Interpretation of quantum mechanics. Other interpretations resolve the apparent paradoxes from experimental results in other ways. For instance, the Many-Worlds Interpretation posits the existence of multiple universes in which an observed system displays all possible states to all possible observers. In this model, observation of a system does not change the behavior of the system – it simply answers the question of which universe(s) the observer(s) is (are) located in: in some universes the observer would observe one result from one state of the system, and in others the observer would observe a different result from a different state of the system.

State-Space Trajectory and Basin of Attraction

A system that undergoes unexpected and/or violent upheaval is said to be facing a rupture, of the kind comprehensible by analyzing the subsystems that make up the systemic whole. Although this can be an important analytical tool, it frequently runs into aporetic situations, because of the unanticipated behavior such systems exhibit. This behavior emanates from localized zones, and thereby sends out lines of rupture that prevent the whole system from being comprehended successfully. To overcome this predicament, one must legitimize the existence of what Bak and Chen refer to as autopoietic or self-organized criticality.

…composite systems naturally evolve to a critical state in which a minor event starts a chain reaction that can affect any number of elements in the system. Although composite systems produce more minor events than catastrophes, chain reactions of all sizes are an integral part of the dynamics. According to the theory, the mechanism that leads to minor events is the same one that leads to major events. Furthermore, composite systems never reach equilibrium but evolve from one meta-stable state to the next…self-organized criticality is a holistic theory: the global features, such as the relative number of large and small events, do not depend on the microscopic mechanisms. Consequently global features of the system cannot be understood by analyzing the parts separately. To our knowledge, self-organized criticality is the only model or mathematical description that has led to a holistic theory for dynamic systems.
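Bak and Chen’s canonical illustration is the sandpile: grains are dropped one at a time, and whenever a site exceeds a threshold it topples onto its neighbours, possibly setting off a chain reaction of any size. A minimal one-dimensional sketch — threshold, pile size, and toppling rule are arbitrary choices for illustration, not Bak and Chen’s exact model:

```python
import random

THRESHOLD = 4  # a site topples once it holds this many grains

def drop_grain(pile, site):
    """Add one grain and fully relax the pile; return the avalanche
    size (number of topplings triggered by this single grain)."""
    pile[site] += 1
    avalanche = 0
    unstable = [site] if pile[site] >= THRESHOLD else []
    while unstable:
        i = unstable.pop()
        if pile[i] < THRESHOLD:
            continue
        pile[i] -= 2           # topple: shed one grain to each neighbour
        avalanche += 1
        for j in (i - 1, i + 1):
            if 0 <= j < len(pile):      # grains fall off the open edges
                pile[j] += 1
                if pile[j] >= THRESHOLD:
                    unstable.append(j)
        if pile[i] >= THRESHOLD:        # site may still be overloaded
            unstable.append(i)
    return avalanche

random.seed(0)
pile = [0] * 50
sizes = [drop_grain(pile, random.randrange(50)) for _ in range(5000)]

# One and the same local rule produces mostly tiny avalanches and a
# few large ones: minor and major events share a single mechanism.
print("largest avalanche:", max(sizes))
```

The point of the sketch matches the quotation: no parameter distinguishes small events from catastrophes; the size distribution is a global feature that emerges from the local rule alone.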

The acceptance of this criticality as existing has an affirmative impact on the autopoietic system as it moves towards the aptly named critical point, at which a single event is loaded with a plethora of possible effects. This multitude is captured through state-space descriptions or diagrams, whose uncanny characteristic is to assign a separate dimension to each independent variable — as in Wuensche’s basin-of-attraction diagrams.

In all complex-systems simulations, the state of the system at each moment is described by a set of variables. As the system is updated over time, these variables undergo changes that are influenced by the previous state of the entire system. System dynamics can be viewed as tabular data depicting the changes in variables over time. However, it is hard to analyze system dynamics just by looking at the changes in these variables, as causal relationships between variables are not readily apparent. By removing all the details about the actual state and the actual temporal information, we can view the dynamics as a graph, with nodes describing states and links describing transitions. For instance, software applications can have a large number of states. Problems occur when software applications reach uncommon or unanticipated states. Being able to visualize the entire state space, and quickly comprehend the paths leading to any particular state, allows more targeted analysis. Common states can be thoroughly tested, and uncommon states can be identified and artificially induced. State-space diagrams allow for numerous insights into system behaviour; in particular, some states of the system can be shown to be unreachable, while others are unavoidable. Their applicability lies in any situation in which a model or system changes state over time and one wants to examine the abstract dynamical qualities of these changes: social network theory, gene regulatory networks, urban and agricultural water usage, and concept maps in cognition and language modeling, for example.

In such a scenario, every state of the system is represented by a unique point in the state-space, and the dynamics of the system are mapped by trajectories through it. When these trajectories converge on a point, that point is called an attractor, and the set of states whose trajectories lead into it is its basin of attraction; it is at the attractor that the system reaches stability.
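Attractors and their basins can be computed directly by following every state’s trajectory until it revisits itself. The toy deterministic update rule below is an arbitrary assumption for illustration, not any particular system from the text:

```python
def find_attractor(state, step):
    """Follow the trajectory from `state` until it enters a cycle;
    return that cycle (the attractor) as a frozenset of states."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    cycle_start = seen[state]
    return frozenset(s for s, i in seen.items() if i >= cycle_start)

# Toy system: 16 states updated by squaring modulo 16 (an arbitrary
# illustrative rule). Trajectories collapse onto two fixed points.
step = lambda s: (s * s) % 16

basins = {}
for s in range(16):
    attractor = find_attractor(s, step)
    basins.setdefault(attractor, set()).add(s)

for attractor, basin in sorted(basins.items(), key=lambda kv: min(kv[0])):
    print(sorted(attractor), "<- basin of", len(basin), "states")
```

For this rule every even state eventually falls into the fixed point 0 and every odd state into the fixed point 1, so the two attractors are single points while their basins each contain eight states — exactly the attractor/basin distinction drawn above.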


But would this attractor phenomenon work for neural networks, where there are myriad nodes, each with its own corresponding state-space? The answer is in the affirmative, because in stable systems only a few attractor points are present, pulling the system towards stability. If the system is not stable, on the contrary, utter chaos would reign supreme, and this is where autopoiesis as a phenomenon comes to the rescue by balancing perfectly between chaotic and ordered states. This balancing act is extremely crucial and sensitive: on the one hand, a chaotic system is too disordered to be beneficial; on the other, a highly stable system is handicapped by having to dedicate substantial resources to reaching and maintaining its attractor point(s). Moreover, even a transition from one stable state to another would entail sluggish adaptive responses to the environment, and at a heavy cost in perturbations. Self-organizing criticality, however, takes care of such high costs by optimally utilizing available resources. And as the nodes possess unequal weights to begin with, the fight for superiority takes precedence, and this gets reflected in the state-space as well. If inputs are marked by variations, optimization through autopoietic criticality takes over; otherwise, the system settles down onto one or more strong attractors.
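The pull of a few attractors on many nodes can be made concrete with a tiny Hopfield-style network — a standard construction offered here as an illustrative sketch, not as anything from the text. A stored pattern becomes a fixed point of the dynamics, and corrupted versions of it fall back into its basin:

```python
def train(patterns, n):
    """Hebbian weight matrix: strengthen links between units that
    are 'on' (or 'off') together across the stored patterns."""
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=10):
    """Synchronous +1/-1 updates; the state slides into an attractor."""
    n = len(state)
    for _ in range(steps):
        state = tuple(
            1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
            for i in range(n))
    return state

pattern = (1, -1, 1, -1, 1, -1)
W = train([pattern], len(pattern))

noisy = (1, -1, 1, -1, 1, 1)   # one unit flipped
print(recall(W, noisy) == pattern)
```

The stored pattern is one of the network’s few attractors; the corrupted state lies in its basin and is pulled back in — the “pulling in the system to stability” described above.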

The nodes that interact in local zones are responsible for the macro-level effects, and according to Kauffman this is possible in simple networks through nodes switching between “on” and “off” values at their inputs. In such cases, order is ensured by the formation of cores of stability that thereafter permeate the network and see to it that the system reaches stability by drawing other nodes into stability. In complex networks, nonlinearity is incorporated to adjust signals approaching the critical point: the adjustment ensures that as signals take on higher values, the system slides either into stability or into chaos. Adjustment is therefore an important mechanism by which complex systems self-organize while hovering around criticality, which is what Lewin refers to as being “on the edge of chaos”.
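Kauffman’s simple networks are usually modeled as random Boolean networks: each node reads the on/off values of K other nodes through a fixed Boolean function, and the whole system settles onto an attractor cycle. The wiring, lookup tables, sizes and seeds below are arbitrary assumptions for illustration:

```python
import random

def random_boolean_network(n, k, seed=1):
    """Each of n nodes reads k random inputs through a random
    Boolean lookup table (a Kauffman-style NK network)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

    def step(state):
        return tuple(
            tables[i][sum(state[src] << b for b, src in enumerate(inputs[i]))]
            for i in range(n))

    return step

step = random_boolean_network(n=12, k=2)

# Iterate from a random start until a state repeats: the repeating
# segment is the network's attractor cycle, the rest is transient.
state = tuple(random.Random(2).randint(0, 1) for _ in range(12))
seen = {}
while state not in seen:
    seen[state] = len(seen)
    state = step(state)

cycle_length = len(seen) - seen[state]
print("transient:", seen[state], "steps; attractor cycle:", cycle_length)
```

For K = 2, Kauffman found such networks typically freeze into small stable cores and short attractor cycles — the ordered side of the order/chaos boundary the paragraph describes.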

Autopoiesis & Pre-Determined Design: An Abhorrent Alliance

Self-organization has also been conflated with the idea of emergence, yet one can occur without the other, which nullifies the thesis of a strong reliance between the two. Moreover, western philosophical traditions have been quite vocal in their skepticism about emergence and order within a structure in the absence of an external agency, whether in the form of God or of some a priori principle. But these traditions are indeed in for a rude shock, since there is nothing mystical about emergence, or about self-organization (in the cases where the two are used as conflated). Not only is mysticism absent from self-organization; even stochasticity seems to be a missing link in the said principle. Although the supporting examples vary with the diverse environmental factors and the complexity inherent in the system, the ease of working through becomes apparent if self-organization or autopoiesis is viewed as the capacity exhibited by complex systems to change or develop their internal structure spontaneously, while adapting to and manipulating their environment in the ongoing process. This could very well serve as the starting point for a working definition of autopoiesis. A clear example of this kind is the human brain (although animal brains would serve equally well), which shows a great proclivity to learn and to remember in the midst of its development. Language is another instance: its development mandates a recognition of its structure, and this very structure, in its attempt to survive and develop further under variegated circumstances, must show allegiance to adaptability. Even if language is guided by social interactions between humans, the cultural space conducive to its development has a strong affinity to the generalized aspects of a linguistic system.
Now, let us build on the criteria determining what makes a system autopoietic, and thereby see which features are generally held to be common to autopoietic systems.


Self-organization abhors predetermined design, enabling the system to adapt dynamically and nonlinearly to regular and irregular changes in the environment. Even if emergence can occur without the underlying principle of self-organization, and vice versa, self-organization is in itself an emergent property of the system as a whole, with the individual components acting on local information and general principles. This is crucial, since the macroscopic behavior emerges out of microscopic interactions that in themselves carry scant information, yet have a direct component of complexity associated with them when viewed microscopically. This complexity is also reflected in their learning procedures, since for self-organizing systems it is only the experiential aspects of previous encounters, compared with recent ones, that help. And what would this increase in complexity entail? As a matter of fact, complexity is a reversal of entropy at the local level, putting the system at the mercy of a point of saturation. Moreover, since the systems are experiential, they are historical and hence based on memory. If such is the case, then it is safe to point out that these systems are diachronic in nature, and memory forms a vital component of emergence. Memory as anamnesis is unthinkable without selective amnesia, for piling up information trades off against the relevance of information: whatever goes under the name of irrelevant is simply jettisoned, and the space created in this manner is utilized for cases pertaining to representation. Not only does representation make a back-door entry here; it is also convenient for this space to undergo the systematic patterning that is the hallmark of these systems.
Despite patterns being the hallmark of self-organization, this should in no way be taken to mean that these systems are stringently teleological, because the nonlinear functions that guide them simultaneously introduce the shunning of any central authority, anthropomorphic centrality, or external designer. Linear functions may partake in localized situations, but at the macroscopic level they lose their vitality, and if complex systems cling to their loyalty towards linear functions, they fade away in trying hard to avoid negotiating this allegiance. Just as allegiance to nonlinearity is important for self-organization, so is an allegiance to anti-reductionism. That is because micro-level units have no knowledge of the macro-level effects, while at the same time these macro-level effects manifest themselves in clusters of micro-level units, ruling out any independent level-based descriptions. The levels are stacked and intertwined, and most importantly, resistance to reductionist discourse in explicating the emergence within the system in no way connotes resistance to materialist properties themselves emerging.


Clusters of information flow into the system from the external world, influence the internal makeup of the system, and in turn trigger interactions, in tune with Hebb’s law, that alter weights. With the process in full swing, two possibilities take shape: the formation of a stable system of weights based on the regularity of a stable cluster, and association between sets of these stable clusters as and when they are identified. This self-organizing principle is not only based on learning, but is at the same time cautious in sidelining whatever is potentially futile for the system. Now, when such information flows into the system, sensors and/or transducers are set up that obligate varying intensities of activity in some neurons and nodes over others. This is of course to be expected, and arriving at a regulated pattern of activity is the onus of the adjustment of weights associated with neurons/nodes. A very important factor is that the event denoting the flow of information from the external world into the system must occur regularly, or at least occasionally, lest the self-organizing or autopoietic system fail to record such occurrences in memory and eventually fade out. Strangely, the patterns are arrived at without any reliance upon differentiated micro-level units to begin with. In parallel with neural networks, the nodes and neurons possess random values for their weights. The levels housing these micro-level nodes or neurons are intertwined to increase their strength, and in the absence of self-persisting positive feedback, the autopoietic system can in no way move away from the dictates of the undifferentiated states it began with.
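Hebb’s law — units that fire together wire together — can be sketched as a plain weight-update rule. The rates, decay term, and input cluster below are illustrative assumptions, with the decay standing in for the “sidelining” of connections the input stream never reinforces:

```python
def hebbian_update(W, activity, rate=0.1, decay=0.01):
    """Strengthen weights between co-active units (Hebb's law) and
    let unreinforced weights decay -- the sidelining of futile links."""
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i][j] += rate * activity[i] * activity[j]
                W[i][j] *= (1 - decay)
    return W

n = 4
W = [[0.0] * n for _ in range(n)]

# A stable cluster in the input stream: units 0 and 1 repeatedly
# fire together, while units 2 and 3 stay silent.
for _ in range(100):
    W = hebbian_update(W, [1, 1, 0, 0])

print(W[0][1] > W[0][2])   # the co-active pair's link dominates
```

After repeated presentations, the weight between the co-active pair saturates at a strong value while the never-reinforced weights stay at zero — a stable system of weights formed from the regularity of a stable cluster, exactly as described above.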
As the nodes are associated with random values of weights, there is a race to show superiority, arresting the contingent limitless growth under the influence of limitless resources, and thereby giving the emerging structure some meaningful essence and existence. Intertwining of levels also results in consensus building, and therefore effectuates the meaning accorded to these emergent structures of autopoietic systems. But this consensus building could lead the system astray from complexity, and hence, to maintain the status quo, it is imperative for these autopoietic systems to have a correctional apparatus. The correctional apparatus spontaneously breaks the symmetry that leads the system away from complexity, either by introducing haphazard fault lines in connections, or through chaotic behaviors resulting from sensitivity to minor fluctuations, a consequence of banking on nonlinearity. Does this correctional apparatus in any way impact memory gained through the process of historicality? Apparently not. This is because of the distributed nature of memory storage, which is largely due to weights that are non-correspondingly symbolic. The weights that show their activity at the local scale are associated with memory storage through traces, and it is only due to this fact that information gets distributed over the system, generating robustness. With these characteristic features, autopoietic systems tend towards organizing their structures to the optimum, while safely securing the complexity expected within the system.

Complexity Theory and Philosophy: A Peace Accord


Complexity has impacted fields far from the one it originated in, i.e. science. It has touched the sociological domains and the organizational sciences, but sadly it has not had much of a say in mainstream academic philosophy. In sociology, John Urry (2003) examines the ideas of chaos and complexity in carrying out analyses of global processes. He does so because he believes that systems are balanced between order and chaos, and that there is no teleological move towards any state of equilibrium, as the events that pilot the system are not only unpredictable but also irreversible. Such events rupture, with their dimension of unpredictability, the space-time regularity that was thought to characterize hitherto known sociological discursive practices. A highly significant contribution that comes along with such an analysis is the distinction between what Urry aptly calls “global networks” and “global fluids”. Global fluids constitute a topographical space used to describe the de-territorialized movement of people, information, objects and finances in an undirected, nonlinear mode, and are in a way characteristic of emergentism and hybridization. The topographies of global networks and global fluids interact in complex ways to give rise to emergent properties that define systems as always on the edge of chaos, pregnant with unpredictability.


Cognitive science and evolutionary theory have been inspirational for a lot of philosophical investigations and have also benefited greatly from complexity theory. If such is the case, the perplexing thing is complexity theory’s limited impact on philosophy, into which it has made few inroads. Why could this be so? Let us ponder it.

Analytical philosophy has always been concerned with analysis and with logical constructs that are to be stringently followed. These rules and regulations take the domain of philosophical investigations falling under the rubric of the analytical tradition away from the holism, uncertainty, unpredictability and subjectivity that are characteristic of complexity. The reason this could be the case is that complexity theory was developed on the basis of mathematics and computational theories, which somehow is not the province of academic philosophy as it deals with the social sciences and cultural studies these days, but is confined to discussions and debates among philosophers of science (biology being an important branch here), mathematics and technology. Moreover, the debates and deliberations have concerned themselves with the unpredictable and uncertain implications derived from the vestiges of chaos theory, and not with complexity theory per se. This is symptomatic of the fact that much confusion rests upon viewing these two path-breaking theories as synonymous, which, incidentally, is a mistake, as the former happens at best to be a mere subset of the latter. An ironical fate encountered philosophy, since it dealt with complex notions of language without actually admitting to the jargon and technical parlance of complexity theory. If philosophy lets complexity make a meaningful intercourse into its discursive practices, the alliance could prove beneficial to both. And the branch of philosophy making use of this intervention and alliance at present is post-modern philosophy.

The works of Freud and Saussure, as furthered by Lacan and Derrida, not only accorded fecundity to a critique of modernity but also opened up avenues for a meaningful interaction with complexity. French theory at large was quite antagonistic to modernist claims of reducing the diverse world to essential features for better comprehensibility, and this essentially lent it its affinity to complexity. Even if Derrida never explicitly used the parlance of complexity in his corpus, there appears to be a strong sympathy towards the phenomenon via his take on post-structuralism. Lyotard, on the other hand, in setting out his arguments on the post-modern condition of knowledge, was ecstatic about paralogy as a defining feature, which is no different from what complexity, connectionism and distributed systems would harbor.


Even Deleuze and Guattari come close to the complex approach through their notion of rhizomes: non-reductive, non-hierarchical, multiplicity-oriented connections in data representations and interpretations, characterized by horizontal connectivities, as contrasted with arborescent models that find their characterization in vertical and linear determinations. The ideas are further developed by De Landa (2006), where the attempt is to define a new ontology that could be utilized by social scientists. The components that make up assemblages are characterized along two axes: the material, explicating the variable roles components might take on, and the territorializing/deterritorializing, explicating the processes components might be involved in.


Relations of exteriority define components, implying that components are self-subsistent, or that there is never a loss of identity for them, during the process of being unplugged from one assemblage to be plugged into another. This relationship between the assemblages and components is nonlinearly and complexly defined, since assemblages are affected by lower level ones, but could also potentially act on to these components affecting adaptations in them. This is so similar to the way distributed systems are principally modeled. Then why has philosophy at large not shown much impact from complexity despite the French theoretical affinities with the latter?

Chaos theory is partly to blame here, for it has twisted the way the structure of a complex system is understood. The systems have nonlinear operational tendencies, and this has obfuscated the notion of meaning as resting squarely on relativism. The robustness of these systems, when looked at in an illuminating manner from the French theoretical perspective, could be advantageous for getting rid of ideas of complex systems as balanced on a knife’s edge, despite being nonlinearly determinable. If the structure of the system was a problematic, then defining limits and boundaries was no easy job. What is the boundary between the system and the environment? Is it rigorously drawn and followed, or is it a mere theoretical choice and construct? These are valid questions, which philosophy found difficult to come to terms with. They gained intensity with the introduction of self-organizational and/or autopoietic systems. Classical and modern philosophies either had to dismiss these ideas as chimerical, or had to close off their own methods of analysis in dealing with these issues, and both approaches had the detrimental effect of isolating the discipline of philosophy from the cultural domains in which such notions were making positive interventions and inroads. It could safely be said that French theory in a way attempted a rescue mission, and gathered momentum in succeeding at it. The major contribution of continental philosophy post-60s was the framing of solutions. Framing, as a schema of interpretation, helped in comprehending and responding to events and enabled systems and contexts to constitute one another, thus offering a resolution of the boundary and limit issues that had plagued hitherto known philosophical doctrines.

The notion of difference, so central to modernism, was a problematic that needed to be resolved. It was never a problem within French theory, but rather a tonic to be consumed alongside complexity in addressing socio-economic and political issues. Deleuze (1994), for example, in his metaphysical treatise, sought a critique of representation and a systematic inversion of the traditional metaphysical notions of identity and difference. Identities are not metaphysically or logically prior to differences; identities, in whatever categories, are pronounced by their derivation from differences. In other words, forms, categories, apperception, and resemblances fail to attain their differences in themselves. And, as Deleuze (2003: 32) says,

If philosophy has a positive and direct relation to things, it is only insofar as philosophy claims to grasp the thing itself, according to what it is, in its difference from everything it is not, in other words, in its internal difference.

But Deleuzean thesis on metaphysics does make a political intervention, like when he says,

The more our daily life appears standardized, stereotyped, and subject to an accelerated reproduction of objects of consumption, the more art must be injected into it in order to extract from it that little difference which plays simultaneously between other levels of repetition, and even in order to make the two extremes resonate — namely, the habitual series of consumption and the instinctual series of destruction and death. (Deleuze 1994: 293).(1)

Tackling the complexity within the social realm head-on does not lie in extrapolating convenient generalities and thereafter trying to fathom how finely they fit together, but rather in apprehending the relational schema of the network within which individuals emerge as subjects, objects and systems capable of grasping the real things.(2)

One major criticism leveled against complexity is that it is sympathetic to relativism, just as most French theoretical thought is. Whether this accusation has any substance to it could be measured by the likes of circular, meaningless debates such as the Sokal hoax. The hoax was platitudinous to say the least, and vague at best. And why would this be so? Sokal, in his article “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”, incorporated the vocabulary of his specialized discipline to unearth the waywardness of its usage by the French theorists. This, for Sokal, was fashionable nonsense, or an act of making noise. He takes the French theorists to task for a liberal use of terms like chaos, complexity, quantum, relativity, gender, difference, topology, and deconstruction, without any proper insight. Who is being vague in the Sokal affair, the physicist, or the bunch of French theorists? Such an issue could be tackled as a concern of intelligibility. Intelligibility is a result of differentiation and not a guarantee of any truth-giving process (Cilliers 2005: 262).

Being clearly communicated does not give a concept any indisputable identity. The only way such a meaning can be meaningful is through limitations being set on such communications, once again an ethical choice. These limitations enable knowledge to come into existence, and this must be accepted de facto. In a parallel metaphor with complexity, these limitations or constraints are the sine qua non for autopoiesis to make an entry. Cilliers (2005: 264) is quite on target when he lays down the general schema for complexity: if it is aligned with notions of chaos, randomness and noise, the accusations of relativism and vagueness will start to hold water; if it is aligned with notions of structure as the result of contingent constraints, we can make claims about complex systems which are clear and comprehensible, despite the fact that the claims themselves are historically contingent.

Undoubtedly, complexity rides on modesty. But the accusations against this position only succeed in labeling complexity as weak, a gross mistake in itself. Let us take Derrida here, as read by Sweetman (1999). Sweetman cites Derrida as an ideal post-modernist, and thereafter launches an attack on his works: as confusing aesthetics with metaphysics, as mistakenly siding with assertions over arguments in philosophy, as committed to moral and epistemological relativism, and as self-contradictory with a tinge of intellectual arrogance. Such accusations, though addressed by Derrida and his scholars at various times, nevertheless find parallels in complexity, where the split is between proponents of mathematical certainty in dealing with complexity on the one hand, and proponents of metaphorical proclivities in dealing with the phenomenon on the other. So how would relativism make an entry here? Being a relativist is as good as swimming in paradoxical intellectual currents, and such a position is embraced for lack of a foundational basis for knowledge, if nothing more. The counter-argument against the relativistic stance of complexity could be framed simply, by citing the case of limited knowledge as not being relativistic knowledge. If these forms of knowledge were equated in any manner, it would only help close doors on investigations.

A look at Luhmann’s use of autopoiesis in social theory is obligatory here. It is necessitated by the fact that autopoiesis was imported directly from the biological sciences, to which even Varela had objections, though he later changed tracks intellectually. Luhmann considers the omission of self-referentiality a problem in the work of the Chileans (Maturana and Varela), since for Luhmann systems are characterized by general patterns which can just be described as making a distinction and crossing the boundary of the distinction, [which] enables us to ask questions about society as a self-observing system[s] (Hayles, K., Luhmann, N., Rasch, W., Knodt, E. & Wolfe, C., 1995 Autumn). Such a reaction from Luhmann forms part of his caution against any direct import from the biological and psychological sciences into descriptions of society and social theory. Reality is always distorted through the lens of perception, and this blinds humans from seeing things-in-themselves (the Kantian noumenon). One could visualize this within the analytical tradition as a problematic of language, involving oppositional thinking within the binary structure of linguistic terms themselves. What is required is an evolutionary explanation of how systems survive to the extent that they can learn to handle the inside/outside difference within the system, and within the context of their own operations, since they can never operate outside the system (Hayles, K., Luhmann, N., Rasch, W., Knodt, E. & Wolfe, C., 1995 Autumn). For social theory to be effective, what requires deconstruction is the grand tautological claim of autopoiesis, the unity of the system as produced by the system itself. Luhmann tells us that a methodology that undertakes such a task must do so empirically, by identifying the operations which produce and reproduce the unity of the system (Luhmann 1992).
This is a crucial point, since the classical/traditional questions regarding the problem of reference as conditioning meaning and truth turn on the distinction between subject and object. Luhmann treats these as quasi-questions, and recommends replacing them with the self-reference/external-reference distinction for any meaningful transformation to take effect. In his communications theory(3), he states flatly that as a system it depends upon “introducing the difference between system and environment into the system” as the internal split within the system itself that allows it to make the distinction needed to begin its operative procedures (Luhmann 1992: 1420). The self-reference/external-reference distinction is a contingent process, and is open to temporal forms of difference. How to define the operation that differentiates the system and organizes the difference between system and environment, while maintaining reciprocity between dependence and independence, is a question that demands resolution. The breakthrough for autopoietic systems is provided by the notion of structural coupling, which effects both the renunciation of the idea of an overarching causality and the retention of the idea of highly selective connections between systems and environments. Structural coupling maintains this reciprocity between dependence and independence. Moreover, autopoietic systems are defined by the way they are, by their mode of being in the world, and by the way they overcome or encounter entropy in the world. In other words, self-perpetuating systems that continuously perform operational closure are autopoietic systems that organize dynamic stability.

Even if the concepts of complexity have not traveled far and wide into the discipline of philosophy, the trends are positive. Developments in cognitive science and consciousness studies have far-reaching implications for the philosophy of mind, as does research that helps redefine the very notion of life. These investigations are carried out within the spectrum of complexity theory, and so there is considerable scope for optimism. Complexity theory is still at an embryonic stage, for it aspires to be a theory of the widest possible extent for our understanding of the world we inhabit. Though there are roadblocks along the way, this in no way means the end of the road for complexity, but only a beginning in a new and novel manner.

Complexity theory as embodied in adaptive systems has a major role in evolutionary doctrines. To add to this, the phenomenon of French Theory has incited creative and innovative ways of looking at philosophy, where residues of dualism and reductionism still rest and resist any challenge whatsoever. One way complexity and philosophy could come closer is for the latter to withdraw its investigations from the how-ness of something and start seriously incorporating the why-ness of it. The how-ness still seems arrested within the walls of reductionism, mechanicism, modernism, and the pillars of Newtonian science. So an ontological reduction of all phenomena to the governance of deterministic laws remains the indelible mark, even if, epistemologically, a certain guideline of objectivity seems apparent. What is really missed in this process is creativity, since the world in particular and the universe in general are described as a mechanism running like clockwork. Such a view held sway for most of the modern era, but with the advent of the scientific revolutions of the 20th century, things began to look awry. Relativity theory, quantum mechanics, chaos, complexity, and more recently string/M-theory were powerful enough in their insights to unsettle the hitherto promising picture of predictable scientific ventures. One look at quantum uncertainty and chaotic non-linear dynamics was enough to dislodge predictability from science. This was followed in succession by systems theory and cybernetics, which were instrumental in highlighting the scientific basis for holism and emergence, and in showing equally well that knowledge is intrinsically subjective. Not just that: autopoiesis clarified that regularity and organization are not given but depend on a dynamically emergent tangle of conflicting forces and random fluctuations, a process very aptly referred to by Prigogine and Stengers (1984) as “order out of chaos”.
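The dislodging of predictability mentioned above can be made concrete with the simplest of non-linear systems, the logistic map. The following sketch is purely illustrative and not part of the source argument; the parameter values are my own choice. The rule is fully deterministic, yet at r = 4 two trajectories starting a millionth apart soon diverge completely:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): a deterministic rule whose
# trajectories, at r = 4, separate exponentially from nearby starting points.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)  # perturbed by one part in a million

# The two runs start indistinguishably close, yet within a few dozen
# steps they are uncorrelated: determinism without long-run prediction.
divergence = max(abs(x - y) for x, y in zip(a, b))
print(divergence)
```

Nothing random appears anywhere in the code; the unpredictability is produced by the non-linearity itself, which is precisely the point Prigogine and Stengers press against the clockwork picture.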
In very insightful language, Heylighen, Cilliers and Gershenson (2007) pin their hopes on these different approaches, which are now starting to become integrated under the heading of “complexity science”. Its central paradigm is the multi-agent system: a collection of autonomous components whose local interactions give rise to a global order. Agents are intrinsically subjective and uncertain about the consequences of their actions, yet they generally manage to self-organize into an emergent, adaptive system. Thus uncertainty and subjectivity should no longer be viewed negatively, as the loss of the absolute order of mechanicism, but positively, as factors of creativity, adaptation and evolution…. Although a number of (mostly post-modern) philosophers have expressed similar sentiments, the complexity paradigm still needs to be assimilated by academic philosophy.
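The multi-agent paradigm that Heylighen, Cilliers and Gershenson describe can be sketched in a few lines. The model below is a minimal toy of my own devising, not theirs: each agent holds an opinion and repeatedly nudges it toward the average of its two neighbours on a ring. No agent ever sees the whole system, yet a global order, near-consensus, emerges from purely local interactions:

```python
import random

def step(opinions, rate=0.5):
    """One round of local interaction: each agent moves toward the
    average of its two ring neighbours. Purely local information."""
    n = len(opinions)
    return [
        (1 - rate) * opinions[i]
        + rate * (opinions[(i - 1) % n] + opinions[(i + 1) % n]) / 2
        for i in range(n)
    ]

random.seed(42)
agents = [random.random() for _ in range(20)]  # disordered start

spread_before = max(agents) - min(agents)
for _ in range(200):
    agents = step(agents)
spread_after = max(agents) - min(agents)

# Global order (agreement) emerges without any central coordinator.
print(spread_before, spread_after)
```

The averaging rule conserves the mean opinion, so the consensus the agents reach is not imposed from outside but produced by the interactions themselves, a toy instance of local interaction giving rise to global order.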

Meeting this need would help complexity science make its modeling techniques more robust, and help philosophy understand and resolve some hitherto unaddressed but perennial problems.

———————————————————–

1 The political implications of such a thesis are rarely drawn out, but forceful. To add to the quote above, there are other passages that deliberate on socio-political themes. For instance,

“We claim that there are two ways to appeal to ‘necessary destructions’: that of the poet, who speaks in the name of a creative power, capable of overturning all orders and representations in order to affirm Difference in the state of permanent revolution which characterizes eternal return; and that of the politician, who is above all concerned to deny that which ‘differs,’ so as to conserve or prolong an established historical order.” (Deleuze 1994: 53).

and,

“Real revolutions have the atmosphere of fétes. Contradiction is not the weapon of the proletariat but, rather, the manner in which the bourgeoisie defends and preserves itself, the shadow behind which it maintains its claim to decide what the problems are.” (Deleuze 1994: 268).

2 It should, however, be noted that only within immanent philosophies of the sort Deleuze propagates can the processes of individuation be accounted for. Moreover, once such an aim is attained, regularities in the world are denied any eternal and universal validation.

3 He defines communication as “a kind of autopoetic network of operations which continually organizes what we seek, the coincidence of self-reference (utterance) and external reference (information)” (1992: 1424). He elaborates:

“Communication comes about by splitting reality through a highly artificial distinction between utterance and information, both taken as contingent events within an ongoing process that recursively uses the results of previous steps and anticipates further ones”. (1992: 1424).

Bibliography

Cilliers, P. (2005) Complexity, Deconstruction and Relativism. In Theory, Culture & Society, Vol. 22 (5). pp. 255 – 267.

De Landa, M. (2006) New Philosophy of Society: Assemblage Theory and Social Complexity. London: Continuum.

Deleuze, G. (1994) Difference and Repetition. Translated by Patton, P. New York: Columbia University Press.

—————- (2003) Desert Islands and Other Texts (1953-1974). Translated by Taormina, M. Los Angeles: Semiotext(e).

Hayles, K., Luhmann, N., Rasch, W., Knodt, E. & Wolfe, C. (1995 Autumn) Theory of a Different Order: A Conversation with Katherine Hayles and Niklas Luhmann. In Cultural Critique, No. 31, The Politics of Systems and Environments, Part II. Minneapolis, MN: University of Minnesota Press.

Heylighen, F., Cilliers, P., and Gershenson, C. (2007) The Philosophy of Complexity. In Bogg, J. & Geyer, R. (eds), Complexity, Science and Society. Oxford: Radcliffe Publishing.

Luhmann, N. (1992) Operational Closure and Structural Coupling: The Differentiation of the Legal System. Cardozo Law Review, Vol. 13.

Lyotard, J-F. (1984) The Postmodern Condition: A Report on Knowledge. Translated by Bennington, G. & Massumi, B. Minneapolis, MN: University of Minnesota Press.

Prigogine, I. and Stengers, I. (1984) Order out of Chaos. New York: Bantam Books.

Sweetman, B. (1999) Postmodernism, Derrida and Différance: A Critique. In International Philosophical Quarterly XXXIX (1)/153. pp. 5 – 18.

Urry, J. (2003) Global Complexity. Cambridge: Polity Press.