Paradox of Phallocentrism. Thought of the Day 34.0

The paradox of phallocentrism in all its manifestations is that it depends on the image of the castrated woman to give order and meaning to its world. An idea of woman stands as lynch pin to the system: it is her lack that produces the phallus as a symbolic presence, it is her desire to make good the lack that the phallus signifies. The function of woman in forming the patriarchal unconscious is two-fold. She first symbolises the castration threat by her real absence of a penis, and second thereby raises her child into the symbolic. Once this has been achieved, her meaning in the process is at an end; it does not last into the world of law and language except as a memory which oscillates between memory of maternal plenitude and memory of lack. Both are posited on nature (or on anatomy, in Freud’s famous phrase). Woman’s desire is subjected to her image as bearer of the bleeding wound; she can exist only in relation to castration and cannot transcend it. She turns her child into the signifier of her own desire to possess a penis (the condition, she imagines, of entry into the symbolic). Either she must gracefully give way to the word, the Name of the Father and the Law, or else struggle to keep her child down with her in the half-light of the imaginary. Woman then stands in patriarchal culture as signifier for the male other, bound by a symbolic order in which man can live out his phantasies and obsessions through linguistic command by imposing them on the silent image of woman still tied to her place as bearer of meaning, not maker of meaning.

Biogrammatic Vir(Ac)tuality. Note Quote.

In Foucault’s most famous example, the prison acts as the confluence of content (prisoners) and expression (law, penal code) (Gilles Deleuze, Foucault, trans. Seán Hand). Informal diagrams proliferate. As abstract machines they contain the transversal vectors that cut across a panoply of features (such as institutions, classes, persons, economic formations, etc.), mapping, from point to relational point, the generalized features of power economies. The disciplinary diagram explored by Foucault imposes “a particular conduct upon a particular human multiplicity”. The imposition of force upon force affects and effectuates the felt experience of a life, a living. Deleuze has called the abstract machine “pure matter/function”, in which relations between forces are nonetheless very real.

[…] the diagram acts as a non-unifying immanent cause that is co-extensive with the whole social field: the abstract machine is like the cause of the concrete assemblages that execute its relations; and these relations between forces take place ‘not above’ but within the very tissue of the assemblages they produce.

The processual conjunction of content and expression; the cutting edge of deterritorialization:

The relations of power and resistance between theory and practice resonate – becoming-form; diagrammatics as praxis integrates and differentiates the immanent cause and quasi-cause of the actualized occasions of research/creation. What do we mean by immanent cause? It is a cause which is realized, integrated and distinguished in its effect. Or rather, the immanent cause is realized, integrated and distinguished by its effect. In this way there is a correlation or mutual presupposition between cause and effect, between abstract machine and concrete assemblages.

Memory is the real name of the relation to oneself, or the affect of self by self […] Time becomes a subject because it is the folding of the outside…forces every present into forgetting but preserves the whole of the past within memory: forgetting is the impossibility of return and memory is the necessity of renewal.

The figure on the left is Henri Bergson’s diagram of an infinitely contracted past that directly intersects with the body at point S – a mobile, sensorimotor present where memory is closest to action. Plane P represents the actual present, the plane of contact with objects. The AB segments represent repetitive compressions of memory. As memory contracts it gets closer to action; in its more expanded forms it is closer to dreams. The figure on the right extrapolates from Bergson’s memory model to describe the Biogrammatic ontological vector of the Diagram as it moves from the abstract (informal) machine in its most expanded form “A”, through the cone “tissue”, to the phase-shifting (formal), arriving at the Strata of the P plane to become artefact. The ontological vector passes through the stratified, through the interval of difference created in the phase shift (the same phase shift that separates and folds content and expression to move vertically, transversally, back through to the abstract diagram).

A spatio-temporal-material contracting-expanding of the abstract machine is the processual thinking-feeling-articulating of the diagram becoming-cartographic; synaesthetic conceptual mapping. A play of forces, a series of relays, affecting a tendency toward an inflection of the informal diagram becoming-form. The inflected diagram/biogram folds and unfolds perception, appearances; rides in the gap of becoming between content and expression; intuitively transduces the actualizing (thinking, drawing, marking, erasing) of matter-movement, of expressivity-movement. “To follow the flow of matter… is intuition in action.” A processual stage that prehends the process of the virtual actualizing; the creative construction of a new reality. The biogrammatic stage of the diagrammatic is paradoxically double in that it is both the actualizing of the abstract machine (contraction) and the recursive counter-actualization of the formal diagram (détournement); virtual and actual.

It is the event-dimension of potential – that is the effective dimension of the interrelating of elements, of their belonging to each other. That belonging is a dynamic corporeal “abstraction” – the “drawing off” (transductive conversion) of the corporeal into its dynamism (yielding the event) […] In direct channeling. That is, in a directional channeling: ontological vector. The transductive conversion is an ontological vector that in-gathers a heterogeneity of substantial elements along with the already-constituted abstractions of language (“meaning”) and delivers them together to change. (Brian Massumi, Parables for the Virtual: Movement, Affect, Sensation)

Skin is the space of the body, the BwO, that is both interior and exterior: interstitial matter of the space of the body.

The material markings and traces of a diagrammatic process, a ‘capturing’ becoming-form. A diagrammatic capturing involves a transductive process between a biogrammatic form of content and a form of expression. The formal diagram is thus an individuating phase-shift as Simondon would have it, always out-of-phase with itself. A becoming-form that inhabits the gap, the difference, between the wave phase of the biogrammatic that synaesthetically draws off the intermix of substance and language in the event-dimension and the drawing of wave phase in which partial capture is formalized. The phase shift difference never acquires a vectorial intention. A pre-decisive, pre-emptive drawing of phase-shifting with a “drawing off” the biogram.

If effects realize something this is because the relations between forces, or power relations, are merely virtual, potential, unstable, vanishing and molecular, and define only possibilities of interaction so long as they do not enter a macroscopic whole capable of giving form to their fluid matter and diffuse function. But realization is equally an integration, a collection of progressive integrations that are initially local and then become or tend to become global, aligning, homogenizing and summarizing relations between forces: here law is the integration of illegalisms.

 

Duqu 2.0

unsigned int __fastcall xor_sub_10012F6D(int encrstr, int a2)
{
  unsigned int result; // eax@2
  int v3;              // ecx@4

  if ( encrstr )
  {
    // decrypt the first 4-byte block with the fixed XOR key
    result = *(_DWORD *)encrstr ^ 0x86F186F1;
    *(_DWORD *)a2 = result;
    if ( (_WORD)result )                       // first UTF-16 code unit non-zero?
    {
      v3 = encrstr - a2;                       // offset between input and output buffers
      do
      {
        if ( !*(_WORD *)(a2 + 2) )             // second code unit of previous block is '\0'
          break;
        a2 += 4;
        result = *(_DWORD *)(v3 + a2) ^ 0x86F186F1;
        *(_DWORD *)a2 = result;
      }
      while ( (_WORD)result );
    }
  }
  else
  {
    result = 0;
    *(_WORD *)a2 = 0;
  }
  return result;
}

A closer look at the above C code reveals that the string decryptor routine has two parameters: “encrstr” and “a2”. First, the decryptor function checks whether the input buffer (the pointer to the encrypted string) points to a valid memory area (i.e., it is not a NULL pointer). After that, the first 4 bytes of the encrypted string buffer are XORed with the key “0x86F186F1” and the result of the XOR operation is stored in the variable “result”. The first DWORD (first 4 bytes) of the output buffer a2 is then populated with this resulting value (*(_DWORD *)a2 = result;). Therefore, the first 4 bytes of the output buffer will contain the first 4 bytes of the cleartext string.

If the first two bytes (first WORD) of the current value stored in the variable “result” contain ’\0’ characters, the original cleartext string was an empty string and the output buffer will be populated with a 2-byte zero value. If the first half of the decrypted block (the “result” variable) contains something else, the decryptor routine checks the second half of the block (“if ( !*(_WORD *)(a2 + 2) )”). If this WORD value is NULL, decryption ends and the output buffer will contain only one Unicode character followed by two closing ’\0’ bytes.

If the first decrypted block doesn’t contain a zero character (which is generally the case), the decryption cycle continues with the next 4-byte encrypted block. The pointer of the output buffer is incremented by 4 bytes to be able to store the next cleartext block (“a2 += 4;”). After that, the following 4-byte block of the “ciphertext” is decrypted with the fixed decryption key (“0x86F186F1”). The result is then stored within the next 4 bytes of the output buffer. Now, the output buffer contains 2 blocks of the cleartext string.

The condition of the cycle checks whether the decryption has reached its end by checking the first half of the current decrypted block. If it has not reached the end, the cycle continues with the decryption of the next input block, as described above. Before the decryption of each 4-byte “ciphertext” block, the routine also checks the second half of the previous cleartext block to decide whether the decoded string has ended or not.
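
To make the scheme concrete, here is a minimal standalone sketch of the same decryption logic (a reimplementation for illustration, not the original Duqu 2.0 code; the function name, the explicit length bound and the buffer types are our own assumptions):

/*
 * Illustrative sketch of the Duqu 2.0-style string decryption: every 4-byte
 * block of the ciphertext is XORed with the fixed key 0x86F186F1, yielding a
 * zero-terminated UTF-16 string. The name duqu2_style_decrypt and the bound
 * "max_dwords" are hypothetical additions for clarity and safety.
 */
#include <stdint.h>
#include <stddef.h>

static void duqu2_style_decrypt(const uint32_t *enc, uint32_t *out, size_t max_dwords)
{
    const uint32_t key = 0x86F186F1u;

    for (size_t i = 0; i < max_dwords; ++i) {
        uint32_t block = enc[i] ^ key;          /* decrypt one 4-byte block        */
        out[i] = block;                         /* two UTF-16 code units at a time */

        /* stop once either 16-bit half of the block is the L'\0' terminator */
        if ((uint16_t)block == 0 || (uint16_t)(block >> 16) == 0)
            break;
    }
}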

The original Duqu used a very similar string decryption routine, which is shown below. We can see that this routine is essentially a copy of the previously discussed routine (the variable “a1” is analogous to the “encrstr” argument). The only difference between the Duqu 2.0 (duqu2) and Duqu string decryptor routines is that the XOR keys differ (in Duqu, the key is “0xB31FB31F”).

We can also see that the decompiled code of Duqu expresses the decryptor routine more compactly (within a “for” loop instead of a “while”), but the two routines are essentially the same. For example, the two boundary checks in the Duqu 2.0 routine (“if ( !*(_WORD *)(a2 + 2) )” and “while ( (_WORD)result );”) are analogous to the boundary check at the end of the “for” loop in the Duqu routine (“if ( !(_WORD)v4 || !*(_WORD *)(result + 2) )”). Similarly, the increment operation within the head of the for loop in the Duqu sample (“result += 4”) is analogous to the increment operation “a2 += 4;” in the Duqu 2.0 sample.

int __cdecl b31f_decryptor_100020E7(int a1, int a2)
{
  _DWORD *v2;      // edx@1
  int result;      // eax@2
  unsigned int v4; // edi@6

  v2 = (_DWORD *)a1;
  if ( a1 )
  {
    for ( result = a2; ; result += 4 )
    {
      v4 = *v2 ^ 0xB31FB31F;
      *(_DWORD *)result = v4;
      if ( !(_WORD)v4 || !*(_WORD *)(result + 2) )
        break;
      ++v2;
    }
  }
  else
  {
    result = 0;
    *(_WORD *)a2 = 0;
  }
  return result;
}

Leibnizian Mnemonics

By any standard, Leibniz’s effort to create a “universal calculus” should be considered one of the most ambitious intellectual agendas ever conceived. Building on his previous successes in developing the infinitesimal calculus, Leibniz aimed to extend the notion of a symbolic calculus to all domains of human thought, from law, to medicine, to biology, to theology. The ultimate vision was a pictorial language which could be learned by anyone in a matter of weeks and which would transparently represent the factual content of all human knowledge. This would be the starting point for developing a logical means for manipulating the associated symbolic representation, thus giving rise to the ability to model nature and society, to derive new logical truths, and to eliminate logical contradictions from the foundations of Christian thought.

Astonishingly, many elements of this agenda are quite familiar when examined from the perspective of modern computer science. The starting point for this agenda would be an encyclopedia of structured knowledge, not unlike our own contemporary efforts related to the Semantic Web, Web 2.0, or LinkedData. Rather than consisting of prose descriptions, this encyclopedia would consist of taxonomies of basic concepts extending across all subjects.

Leibniz then wanted to create a symbolic representation of each of the fundamental concepts in this repository of structured information. It is the choice of the symbolic representation that is particularly striking. Unlike the usual mathematical symbols that comprise the differential calculus, Leibniz’s effort would rely on mnemonic images which were useful for memorizing facts.

Whereas modern thinkers usually imagine memorization to be a task accomplished through pure repetition, 16th and 17th-century Europe saw fundamental innovation in the theory and practice of memory. During this period, practitioners of the memory arts relied on a diverse array of visualization techniques that allowed them to recall massive amounts of information with extraordinary precision. These memory techniques were part of a broader intellectual culture which viewed memorization as a foundational methodology for structuring knowledge.

The basic elements of this methodology were mnemonic techniques. Not the simple catch phrases that we typically associate with mnemonics, but rather, elaborate visualized scenes or images that represented what was to be remembered. It is these same memory techniques that are used in modern memory competitions and which allow competitors to perform such superhuman feats as memorizing the order of a deck of cards in under 25 seconds, or thousands of random numbers in an hour. The basic principle behind these techniques is the same, namely, that a striking and inventive visual image can dramatically aid the memory.

Leibniz and many of his contemporaries had a much more ambitious vision for mnemonics than our modern day competitive memorizers. They believed that the process of memorization went hand in hand with structuring knowledge, and furthermore, that there were better and worse mnemonics and that the different types of pictorial representations could have different philosophical and scientific implications.

For instance, if the purpose was merely to memorize, one might create the most lewd and absurd possible images in order to remember some list of facts. Indeed, this was recommended by enterprising memory theorists of the day trying to make money by selling pamphlets on how to improve one’s memory. Joshua Foer’s memoir Moonwalking with Einstein is an engaging and insightful first-person account of the “competitive memory circuit,” where techniques such as this one are the bread and butter of how elite competitors are able to perform feats of memory that boggle the mind.

But whereas in the modern world mnemonic techniques have been relegated to learning vocabulary words and the competitive memory circuit, elite intellectuals several centuries ago had a much more ambitious vision of the ultimate implications of this methodology. In particular, Leibniz hoped that through a rigorous process of notation engineering one might be able to preserve the memory-aiding properties of mnemonics while eliminating the inevitable conceptual interference that arises in creating absurdly comical, lewd, or provocative mnemonics. By drawing inspiration from Chinese characters and Egyptian hieroglyphics, he hoped to create a language that could be learned by anyone in a short period of time and which would transparently – through the pictorial dimension – represent the factual content of a curated encyclopedia. Furthermore, by building upon his successes in developing the infinitesimal calculus, Leibniz hoped that a logical structure would emerge which would allow novel insights to be derived by manipulating the associated symbolic calculus.

Leibniz’s motivations extended far beyond the realm of the natural sciences. Using mnemonics as the core alphabet to engineer a symbolic system with complete notational transparency would mean that all people would be able to learn this language, regardless of their level of education or cultural background. It would be a truly universal language, one that would unite the world, end religious conflict, and bring about widespread peace and prosperity. It was a beautiful and humane vision, although it goes without saying that it did not materialize.

Theosophical Panpsychism

Where does mind individually, and consciousness ultimately, originate? In the cosmos there is only one life, one consciousness, which masquerades under all the different forms of sentient beings. This one consciousness pierces up and down through all the states and planes of being and serves to uphold the memory, whether complete or incomplete, of each state’s experience. This suggests that our self-conscious mind is really a ray of cosmic mind. There is a mysterious vital life essence and force involved in the interaction of spirit or consciousness with matter. The cosmos has its memory and follows general pathways of formation based on previous existences, much as everything else does. Aided by memory, it somehow selects out of the infinite possibilities a new and improved imbodiment. When the first impulse emerges, we have cosmic ideation vibrating the first matter, manifesting in countless hierarchies of beings in endless gradations. Born of the one cosmic parent, monadic centers emerge as vital seeds of consciousness, as germs of its potential. They are little universes in the one universe.

Theosophy does not separate the world into organic and inorganic, for even the atoms are considered god-sparks. All beings are continuously their own creators and recorders, forming more perishable outer veils while retaining the indestructible thread-self that links all their various principles and monads through vast cycles of experience. We are monads or god-sparks currently evolving throughout the human stage. The deathless monad runs through all our imbodiments, for we have repeated many times the processes of birth and death. In fact, birth and death for most of humanity are more or less automatic, unconscious experiences as far as our everyday awareness is concerned. How do we think? We can start, for example, with desire which provides the impulse that causes the mind through will and imagination to project a stream of thoughts, which are living elemental beings. These thoughts take various forms which may result in different kinds of actions or creative results. This is another arena of responsibility, for in the astral light our thoughts circulate through other minds and affect them, but those that belong to us have our stamp and return to us again and again. So through these streams of thought we create habits of mind, which build our character and eventually our self-made destiny. The human mind is an ideator resonating with its past, selecting thoughts and making choices, anticipating and creating a pattern of unfolding. Perhaps we are reflecting in the small the operations of the divine mind which acts as the cosmic creator and architect. Some thoughts or patterns we create are limiting; others are liberating. The soul grows, and thoughts are reused and transformed by the mind, perhaps giving them a superior expression. Plato was right: with spiritual will and worthiness we can recollect the wisdom of the past and unlock the higher mind. We have the capacity of identifying with all beings, experiencing the oneness we share together in our spiritual consciousness, that continuous stream that is the indestructible thread-self. All that it was, is, or is becoming is our karma. Mind and memory are a permanent part of the reincarnating ego or human soul, and of the universe as well.

In the cosmos there are many physical, psychic, mental, and spiritual fields — self-organizing, whole, living systems. Every such field is holographic in that it contains the characteristics of every other field within itself. Rupert Sheldrake’s concepts of morphic fields and morphic resonance, for instance, are in many ways similar to some phenomena attributed to the astral light. All terrestrial entities can be considered fields belonging to our living earth, Gaia, and forming part of her constitution. The higher akasic fields resonate with every part of nature. Various happenings within the earth’s astral light are said to result in physical effects which include all natural and human phenomena, ranging from epidemics and earthquakes to wars and weather patterns. Gaia, again, is part of the fields which form the solar being and its constitution, and so on throughout the cosmos.

Like the earth, human beings each have auric fields and an astral body. The fifty trillion cells in our body, as well as the tissues and organs they form, each have their own identity and memory. Our mental and emotional fields influence every cell and atom of our being for better or worse. How we think and act affects not only humanity but Gaia as well through the astral light, the action of which is guided by active creative intelligences. For example, the automatic action of divine beings restores harmony, balancing the inner with the outer throughout nature.

Drift City #Accelerate. Note Quote.

What begins as Utopia likely fades into Dystopia in time. Constant Nieuwenhuys, the Dutch sculptor and artist, is best known for his “New Babylon”, a drift city in the post-capitalist world. This city would be suspended in the air, covering the entire globe, a shell around industrial units driven by cybernetics and automation in order to finally liberate humanity from the horrors of labour. He called its inhabitants homo ludens, freed from labour. In short, his Utopia consisted in an environment created by the activity of life, rather than the other way round. The drift city would be an endless array of mutable networks, no longer tied down to the factory or bureaucracy, with ample opportunities for individuals to wander or drift through the gigantic architectural spaces. No longer tied down, there would ensue liberation from any attachments, the social or the familial, in tracking this endless voyage of nomadism, eventually reaching a point of locking/unlocking memory to correspondingly effectuate a sense of self. Undoubtedly, an anticipation of schizoanalysis.

Stochastic Quantum Walks and Artificial Neural Networks

Schuld et al. propose using quantum walks to construct a quantum ANN algorithm, specifically with an eye to demonstrating associative memory capabilities. This is a sensible idea, as both discrete-time and continuous-time quantum walks are universal for quantum computation. In associative memories, a previously-seen complete input is retrieved upon presentation of an incomplete or noisy input.

The quantum walker position represents the pattern of the “active” neurons (the firing pattern). That is, on an n-dimensional hypercube, if the walker is in a specific corner labelled with an n-bit string, then this string will have n corresponding neurons, each of which is “active” if the corresponding bit is 1. In a Hopfield network, for a given input state x, the outputs are the minima of the energy function

E(x_1, \ldots, x_n) = -\frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij}\, x_i x_j + \sum_{i=1}^{n} \theta_i x_i

where x_i is the state of the i-th neuron, w_{ij} is the strength of the inter-neuron link and θ_i is the activation threshold. Their idea is to construct a quantum walker such that one of these minima (dynamic attractor state) is the desired final state with high probability.
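
For illustration, a minimal sketch (not taken from Schuld et al.; the weights, thresholds and the bit-string encoding of the walker’s corner are hypothetical values) that reads a firing pattern off an n-bit hypercube corner and evaluates the energy function above:

/*
 * Minimal sketch: evaluate the Hopfield energy
 *   E(x) = -1/2 * sum_ij w_ij x_i x_j + sum_i theta_i x_i
 * for a firing pattern x read off an n-bit hypercube corner.
 * The weights and thresholds below are illustrative, not from the paper.
 */
#include <stdio.h>

#define N 4

static double hopfield_energy(const int x[N], const double w[N][N], const double theta[N])
{
    double e = 0.0;
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j)
            e -= 0.5 * w[i][j] * x[i] * x[j];
        e += theta[i] * x[i];
    }
    return e;
}

int main(void)
{
    /* corner 0b1010 of the 4-cube: neurons 1 and 3 are "active" */
    unsigned corner = 0xAu;
    int x[N];
    for (int i = 0; i < N; ++i)
        x[i] = (int)((corner >> i) & 1u);

    /* illustrative symmetric weights (w_ii = 0) and thresholds */
    const double w[N][N] = {
        { 0.0,  1.0, -0.5,  0.0},
        { 1.0,  0.0,  0.0,  0.5},
        {-0.5,  0.0,  0.0,  1.0},
        { 0.0,  0.5,  1.0,  0.0}
    };
    const double theta[N] = {0.1, 0.1, 0.1, 0.1};

    printf("E = %f\n", hopfield_energy(x, w, theta));
    return 0;
}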

The paper examines two different approaches. First is the naïve case, where activation of a Hopfield network neuron is done using a biased coin. However, they prove that this cannot work, as the required neuron updating process is not unitary. Instead, a non-linearity is introduced through stochastic quantum walks (SQW) on a hypercube. To inject attractors into the walker’s hypercube graph, they remove all edges leading to/from the corners which represent them. This means that the coherent part of the walk can’t reach/leave these states, so they become sink states of the graph. The decoherent part, represented by jump operators, adds paths leading to the sinks. A few successful simulations were run, illustrating the possibility of building an associative memory using SQW, and showing that the walker ends up in the sink in a time dependent on the decoherent dynamics. This might be a result in the right direction, but it is not a definitive answer to the ANN problem since Schuld et al. only demonstrate some associative memory properties of the walk. Their suggestion for further work is to explore open quantum walks for training feed-forward ANNs.
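
The graph surgery described above can be sketched in a few lines (purely illustrative and not from the paper; the cube size and the stored pattern are arbitrary): build the hypercube adjacency used by the coherent part of the walk, then delete every edge touching an attractor corner so that it becomes a sink (the decoherent jump operators, not modelled here, would supply the one-way paths into it).

/*
 * Illustrative sketch: n-cube adjacency for the coherent dynamics, with the
 * attractor corners turned into sinks by removing all incident edges.
 * NBITS and the stored pattern are arbitrary example values.
 */
#include <stdio.h>

#define NBITS 3
#define NODES (1u << NBITS)   /* 2^n corners of the hypercube */

int main(void)
{
    double adj[NODES][NODES] = {{0}};
    unsigned attractors[] = {0x5u};   /* illustrative stored pattern: 101 */

    /* corners differing in exactly one bit are connected */
    for (unsigned u = 0; u < NODES; ++u)
        for (unsigned b = 0; b < NBITS; ++b)
            adj[u][u ^ (1u << b)] = 1.0;

    /* make each attractor a sink for the coherent part of the walk */
    for (unsigned k = 0; k < sizeof attractors / sizeof attractors[0]; ++k) {
        unsigned s = attractors[k];
        for (unsigned v = 0; v < NODES; ++v)
            adj[s][v] = adj[v][s] = 0.0;
    }

    /* print the resulting adjacency matrix */
    for (unsigned u = 0; u < NODES; ++u) {
        for (unsigned v = 0; v < NODES; ++v)
            printf("%.0f ", adj[u][v]);
        printf("\n");
    }
    return 0;
}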

Permeability of Autopoietic Principles (revisited) During Cognitive Development of the Brain

Distinctions and binaries have their problematics, and neural networks are no different when one such attempt is made regarding the information that flows from the outside into the inside, where interactions occur. The inside of the system has to cope with the outside of the system through mechanisms that are either predefined for the system under consideration, or have no independent internal structure at all to begin with. The former mechanism results in a loss of adaptability, since all possible eventualities would have to be catered for in the fixed, internal structure of the system. The latter is guided by conditions prevailing in the environment. In either case, learning to cope with the environmental conditions is the key to the system’s reaching any kind of stability. But how would a system respond to its environment? According to the ideas propounded by Changeux et al., this is possible in two ways, viz.,

  1. An instructive mechanism, directly imposed by the environment on the system’s structure; and
  2. A selective mechanism, Darwinian in its import, which helps maintain order as a result of interactions between the system and environment. The environment facilitates reinforcement, stabilization and development of the structure, without in any way determining it.

These two distinct ways, when exported to neural networks, take on the connotations of supervised and unsupervised learning. The position of Changeux et al. is rooted in rule-based, formal and representational formats, and is thus criticized by the likes of Edelman. According to him, in a nervous system (his analysis is based upon nervous systems) neural signals in an information-processing model are taken in from the periphery, and thereafter encoded in various ways to be subsequently transformed and retransformed during processing and generating an output. This not only puts extreme emphasis on formal rules, but also makes a claim about the nature of memory, which is considered to occur through the representation of events by recording or replicating their informational details. Although Edelman’s analysis takes the nervous system as its centre, the informational modeling approach that he undertakes is blanketed over the ontological basis that forms the fabric of the universe. Connectionists have no truck with this approach, as can easily be discerned from a long quote Edelman provides:

The notion of information processing tends to put a strong emphasis on the ability of the central nervous system to calculate the relevant invariance of a physical world. This view culminates in discussions of algorithms and computations, on the assumption that brain computes in an algorithmic manner…Categories of natural objects in the physical world are implicitly assumed to fall into defined classes or typologies that are accessible to a program. Pushing the notion even further, proponents of certain versions of this model are disposed to consider that the rules and representation (Chomsky) that appear to emerge in the realization of syntactical structures and higher semantic functions of language arise from corresponding structures at the neural level.

Edelman is aware of the shortcomings in information-processing models, and therefore takes a leap into the connectionist fold with his proposal of a brain consisting of a large number of undifferentiated, but connected, neurons. At the same time, he gives a lot of credence to organization occurring during the developmental phases of the brain. He lays out the following principles of this population thinking in his Neural Darwinism: The Theory of Neuronal Group Selection:

  1. The homogeneous, undifferentiated population of neurons is epigenetically diversified into structurally variant groups through a number of selective processes, called the “primary repertoire”.
  2. Connections among the groups are modified due to signals received during the interactions between the system and the environment housing the system. Such modifications, which occur during the post-natal period, become functionally active for future use, and form the “secondary repertoire”.
  3. With the setting up of “primary” and “secondary” repertoires, groups engage in interactions by means of feedback loops as a result of various sensory/motor responses, enabling the brain to interpret conditions in its environment and thus act upon them.

“Degenerate” is what Edelman calls the neural groups in the primary repertoire to begin with. This entails the possibility of a significant number of non-identical variant groups. This has another dimension to it as well, in that non-identical variant groups are distributed uniformly across the system. Within Edelman’s nervous system case study, degeneracy and distributedness are crucial features for denying the localization of cortical functions on the one hand, and the existence of hierarchical processing structures in a narrow sense on the other. Edelman’s cortical map formation incorporates the generic principles of autopoiesis. Cortical maps are collections (areas) of minicolumns in the brain cortex that have been identified as performing a specific information-processing function.

In Edelman’s theory, neural groups have an optimum size that is not known a priori, but develops spontaneously and dynamically. Within the cortex, this is achieved by means of inhibitory connections spread over a horizontal plane, while excitatory ones are vertically laid out, thus enabling neuronal activity to be concentrated on the vertical plane rather than the horizontal one. Hebb’s rule facilitates the utility function of this group. Impulses are carried on to neural groups, thereby activating them and subsequently altering synaptic strengths. During the ensuing process, a correlation gets formed between neural groups, with possible overlapping of messages as a result of the synaptic activity generated within each neural group. This correlational activity could be selected through frequent exposure to such overlaps, and once selected, the group might start to exhibit its activity even in the absence of inputs or impulses. The selection is nothing but memory, and is always used in learning procedures. A lot depends upon the frequency of exposure: if this is on the lower side, memory, or selection, could simply fade away, and be made available for a different procedure. No wonder forgetting is always referred to as a precondition for memory. Fading away might be a useful criterion for reusing the freed memory storage space during the developmental process, but at the stage when groups of the right size are in place and ready for selection, weakly interacting groups would meet the fate of elimination. Elimination and retention of groups depend upon what Edelman refers to as the vitality principle, wherein sensitivity to historical process finds more legitimacy, and extant groups find takers in influencing the formation of new groups. The reason for including Edelman’s case was specifically to highlight the permeability of self-organizing principles during the cognitive development of the brain, and also to pit the superiority of neural network/connectionist models in comprehending brain development against the traditional rule-based, expert and formal systems of modeling techniques.
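
A minimal sketch of the Hebbian strengthening and fading invoked above (an illustration only; the learning rate, decay constant and network size are hypothetical, not Edelman’s): correlated activity increments the weight linking two neurons, while a slow decay lets rarely reinforced connections fade.

/*
 * Toy Hebbian step: weights between co-active neurons are strengthened,
 * all weights decay slowly, so rarely reinforced connections fade away.
 * The learning rate and decay constant are illustrative values.
 */
#include <stdio.h>

#define N 4

static void hebb_step(double w[N][N], const int activity[N], double lr, double decay)
{
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            if (i == j)
                continue;                        /* no self-connections */
            w[i][j] = (1.0 - decay) * w[i][j] + lr * activity[i] * activity[j];
        }
}

int main(void)
{
    double w[N][N] = {{0}};
    int pattern[N] = {1, 1, 0, 0};   /* neurons 0 and 1 repeatedly co-active */

    for (int t = 0; t < 50; ++t)
        hebb_step(w, pattern, 0.1, 0.01);

    printf("w[0][1] = %.3f (reinforced), w[2][3] = %.3f (never formed)\n",
           w[0][1], w[2][3]);
    return 0;
}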

In order to understand the nexus between brain development and environment, it is instructive to carry Edelman’s analysis further. It is a commonsense belief to link structural changes in the brain with environmental effects. Even if one takes recourse to Darwinian evolution, these changes are either delayed, due to systemic resistance to letting these effects take over, or, in not so Darwinian a fashion, the effects are a compounded resultant of groups embedded within the network. On the other hand, Edelman’s cortical map formation is not just confined to the processes occurring within the brain’s structure alone, but is also realized by how the brain explores its environment. This aspect is nothing but motor behavior in its nexus between the brain and environment, and is strongly voiced by Cilliers when he calls attention to it:

The role of active motor behavior forms the first half of the argument against abstract, solipsistic intelligence. The second half concerns the role of communication. The importance of communication, especially the use of symbol systems (language), does not return us to the paradigm of objective information-processing. Structures for communication remain embedded in a neural structure, and therefore will always be subjected to the complexities of network interaction. Our existence is both embodied and contingent.

Edelman is criticized for showing no respect to replication in his theory, which is a strong pillar for natural selection and learning. Recently, attempts to incorporate replication in the brain have been undertaken, and strong indicators for neuronal replicators, with Hebb’s learning mechanism showing more promise when compared with natural selection, are in the limelight (Fernando, Goldstein and Szathmáry). These autopoietic systems, when given a mathematical description and treatment, could be modeled on a computer or a digital system, thus helping give insights into a world pregnant with complexity.

Autopoiesis goes directly to the heart of anti-foundationalism. This is because the epistemological basis of basic beliefs is not paid any due respect or justificatory support in the autopoietic system’s insistence on internal interactions and external contingent factors obligating the system to undergo continuous transformations. If autopoiesis could survive wonderfully well without any transcendental intervention, or a priori definition, it has parallels running within French theory. If anti-foundationalism is the hallmark of autopoiesis, so is anti-reductionism, since it is well nigh impossible to have meaning explicated in terms of atomistic units, and especially so when the systems are already anti-foundationalist. Even in biologically contextual terms, a mereology, according to Garfinkel, is emergent as a result of the complex interactions that go on within the autopoietic system. Garfinkel says,

We have seen that modeling aggregation requires us to transcend the level of the individual cells to describe the system by holistic variables. But in classical reductionism, the behavior of holistic entities must ultimately be explained by reference to the nature of their constituents, because those entities ‘are just’ collections of the lower-level objects with their interactions. Although, it may be true in some sense that systems are just collections of their elements, it does not follow that we can explain the system’s behavior by reference to its parts, together with a theory of their connections. In particular, in dealing with systems of large numbers of similar components, we must make recourse to holistic concepts that refer to the behavior of the system as a whole. We have seen here, for example, concepts such as entrainment, global attractors, waves of aggregation, and so on. Although these system properties must ultimately be definable in terms of the states of individuals, this fact does not make them ‘fictions’; they are causally efficacious (hence, real) and have definite causal relationships with other system variables and even to the states of the individuals.

Autopoiesis gains vitality when systems thinking opens up avenues for accepting contradictions and opposites rather than merely trying to get rid of them. Vitality is centered around a conflict, and ideally comes into a balanced existence when such a conflict, or strife, helps facilitate consensus building, or cooperation. If such goals are achieved, analyzing complexity theory gets a boost, and moreover, by being sensitive to autopoiesis, an appreciation of the sort of real lebenswelt gets underlined. Memory and history are essential for complex autopoietic systems, whether they be biological and/or social, and this can be fully comprehended in some quite routine situations where systems that are quite identical in most respects, if differing in their histories, would have different trajectories in responding to situations they face. Memory does not determine the final description of the system, since it is itself susceptible to transformations, and what really gets passed on are the traces. The same susceptibility to transformations would apply to traces as well. But memory is not stored in the brain as discrete units, but rather in a distributed pattern, and this is the pivotal characteristic of self-organizing complex systems over any other form of iconic representation. This property of transformation as associated with autopoietic systems is enough to suspend the process in between activity and passivity, in that the one is being determined by the environment and the other is impacting upon it. This is really important in autopoiesis, since the distinction between inside and outside, and active and passive, is difficult to discern, and moreover this disappearance of distinction is a sufficient case to vouch against any authoritative control residing within the system, and/or emanating from any single source. Autopoiesis scores over other representational modeling techniques by its ability to self-reflect, or by the system’s ability to act upon itself. For Lawson, reflexivity disallows any static description of the system, since it is not possible to intercept the reflexive moment, and it also disallows a complete description of the system at a meta-level. Even though a meta-level description can be construed, it yields only frozen frames or snapshots of the system at any given particular instant, and hence ignores the temporal dimensions the system undergoes. For that to be taken into account, and for the complexity within the system to be measured, the role of activity and passivity cannot be ignored at any cost, despite the great difficulties they present for modeling. But is this not really a blessing in disguise, for the model of a complex system should be retentive of the complexity in the real world? Well, the answer is yes, it is.

Somehow, the discussion till now still smells of anarchy within autopoiesis, and if there is no satisfactory account of predictability and stability within the self-organizing system, the fears only get aggravated. A system which undergoes huge effects when small changes or alterations are made in the causes is definitely not a candidate for stability. And autopoietic systems are precisely such. Does this mean that they are unstable, or does it call for a reworking of the notion of stability? This is philosophically contentious, and there is no doubt regarding this. Instability could be a result of probabilities, but complex systems have to fall outside the realm of such probabilities. What happens in complex systems is a result of complex interactions due to a large number of factors that need not be logically compatible. At the same time, stochasticity has no room here, for it serves as an escape route from the annals of classical determinism, and hence a theory based on such escape routes could never be a theory of self-organization (Pattee). Stability is closely related to the ability to predict, and if stability is something very different from what classical determinism tells us it is, the case for predictability should be no different. The problems in prediction are gross, as echoed in the words of Krohn and Küppers,

In the case of these ‘complex systems’ (Nicolis and Prigogine), or ‘non-trivial’ machines, a functional analysis of input-output correlations must be supplemented by the study of ‘mechanisms’, i.e. by causal analysis. Due to the operational conditions of complex systems it is almost impossible to make sense of the output (in terms of the functions or expected effects) without taking into account the mechanisms by which it is produced. The output of the system follows the ‘history’ of the system, which itself depends on its previous output taken as input (operational closure). The system’s development is determined by its mechanisms, but cannot be predicted, because no reliable rule can be found in the output itself. Even more complicated are systems in which the working mechanisms themselves can develop according to recursive operations (learning of learning; invention of inventions, etc.).

The quote above is clearly indicative of the predicaments encountered while attempting to provide explanations of predictability. Although it is quite difficult to get rid of these predicaments, attempts to mitigate them, so as to keep noise from distorting or disturbing the stability and predictability of the systems, are always in the pipeline. One such attempt lies in collating or mapping constraints onto a real epistemological fold of history and environment, and thereafter applying them to studies of the social and the political. This is voiced very strongly as a parallel metaphoric in Luhmann, when he draws attention to the following,

Autopoietic systems, then, are not only self organizing systems. Not only do they produce and eventually change their own structures but their self-reference applies to the production of other components as well. This is the decisive conceptual innovation. It adds a turbo charger to the already powerful engine of self-referential machines. Even elements, that is, last components (individuals), which are, at least for the system itself, undecomposable, are produced by the system itself. Thus, everything which is used as a unit by the system is produced as a unit by the system itself. This applies to elements, processes, boundaries and other structures, and last but not least to the unity of the system itself. Autopoietic systems, of course, exist within an environment. They cannot exist on their own. But there is no input and no output of unity.

What this entails for social systems is that they are autopoietically closed, in that, while they rely on resources from their environment, the resources in question do not become part of the systemic operation. So the system never tries its luck at adjusting to changes that are brought about superficially, frittering away its available resources in the process, instead of going for trends that do not appear to be superficial. Were a system ever to attempt a fall from grace by acclimatizing to these fluctuations, a choice that is ethical in nature and contextual at the same time is resorted to. Within distributed systems as such, a central authority is paid no heed, since such a scenario could result in a general degeneracy of the system as a whole. Instead, what gets highlighted is the ethical choice of decentralization, to ensure the system’s survivability and dynamism. Such an ethical treatment is no less altruistic.

Autopoiesis & Pre-Determined Design: An Abhorrent Alliance

Self-organization has also been conflated with the idea of emergence, and indeed one can occur without the other, thus nullifying the thesis of a strong reliance between the two. Moreover, western philosophical traditions have been quite vocal in their skepticism about emergence and order within a structure if there isn’t the presence of an external agency, either in the form of God, or in some a priori principle. But these traditions are indeed in for a rude shock, since there is nothing mystical about emergence, or even self-organization (in cases where the two are used interchangeably). Not only is mysticism absent from self-organization; even stochasticity seems to be a missing link in the said principle. Although examples supporting the case vary according to the diverse environmental factors and the complexity inherent in the system, the ease of working through becomes apparent if self-organization or autopoiesis is viewed as the capacity exhibited by complex systems to change or develop their internal structure spontaneously, while adapting to and manipulating their environment in the ongoing process. This could very well be the starting point for a working definition of autopoiesis. A clear example of this kind would be the human brain (although the brains of animals would suffice equally well), which shows a great proclivity to learn and to remember in the midst of its development. Language is another instance, since in its development a recognition of its structure is mandated, and this very structure, in its attempt to survive and develop further under variegated circumstances, must show allegiance to adaptability. Even if language is guided by social interactions between humans, the cultural space conducive to its development would have a strong affinity to a generalized aspect of the linguistic system. Now, let us build up the criteria for determining what makes a system autopoietic, and thereby see which features are generally held in common by autopoietic systems.

Self-organization abhors predetermined design, thus enabling the system to dynamically adapt to regular/irregular changes in the environment in a nonlinear fashion. Even if emergence can occur without the underlying principle of self-organization, and vice versa, self-organization is in itself an emergent property of the system as a whole, with the individual components acting on local information and general principles. This is crucial, since the macroscopic behavior emerges out of microscopic interactions that in themselves are carriers of scant information, and in turn have a direct component of complexity associated with them when viewed microscopically. This complexity also gets reflected in their learning procedures, since for systems that are self-organizing, it is only the experiential aspects of previous encounters compared with recent ones that help. And what would this increase in complexity entail? As a matter of fact, complexity is a reversal of entropy at the local level, thus putting the system at the mercy of a point of saturation. Moreover, since the systems are experiential, they are historical and hence based on memory. If such is the case, then it is safe to point out that these systems are diachronic in nature, and hence memory forms a vital component of emergence. Memory as anamnesis is unthinkable without selective amnesia, for piling up information trades off with the relevance of information simultaneously. For information that goes under the name of irrelevant is simply jettisoned, and the space created in this manner is utilized for cases pertaining to representation. Not only does representation make a sort of back-door entry here; it is also convenient for this space to undergo the systematic patterning that is the hallmark of these systems. Despite patterns being the hallmark of self-organization, this should in no way be taken to mean that these systems are stringently teleological, because the nonlinear functions that guide these systems introduce at the same time the shunning of a central authority, of anthropomorphic centrality, or of any external designer. Linear functions could partake in localized situations, but at the macroscopic level they lose their vitality, and if complex systems stick to their loyalty towards linear functions, they fade away in the process of trying hard to avoid negotiating this allegiance. Just as allegiance to nonlinearity is important for self-organization, so is an allegiance to anti-reductionism. That is due to the fact that micro-level units have no knowledge about macro-level effects, while at the same time these macro-level effects manifest themselves in clusters of micro-level units, thus ruling out any sort of independent level-based descriptions. The levels are stacked, intertwined, and most importantly, any resistance to reductionist discourse in trying to explicate the emergence within the system has no connotation of resistance to materialist properties themselves emerging.

Clusters of information flow into the system from the external world, have an influencing impact on the internal makeup of the system, and in turn trigger off interactions in tune with Hebb’s law to alter weights. With the process in full swing, two possibilities could take shape, viz., the formation of a stable system of weights based on the regularity of a stable cluster, and association between sets of these stable clusters as and when they are identified. This self-organizing principle is not only based on learning, but is at the same time also cautious in sidelining those clusters that are potentially futile for the system. Now, when such information flows into the system, sensors and/or transducers are set up that assign varying levels of intensity of activity to some neurons and nodes over others. This is of course to be expected, and the way to come to terms with a regulated pattern of activity is the onus of adjustments of the weights associated with neurons/nodes. A very important factor lies in the fact that the event denoting the flow of information from the external world into the system must occur regularly, or at least occasionally, lest the self-organizing or autopoietic system should fail to record such occurrences in memory and eventually fade out. Strangely, the patterns are arrived at without any reliance upon differentiated micro-level units to begin with. In parallel with neural networks, the nodes and neurons possess random values for their weights. The levels housing these micro-level nodes or neurons are intertwined to increase their strength, and in the absence of self-persisting positive feedback, the autopoietic system can in no way move away from the dictates of the undifferentiated state it began with. As the nodes are associated with random values of weights, there is a race to show superiority, thus arresting the contingent limitless growth under the influence of limitless resources, thereby giving the emerging structure some meaningful essence and existence. Intertwining of levels also results in consensus building, and therefore effectuates the meaning accorded to these emergent structures of autopoietic systems. But this consensus building could lead the system astray from complexity, and hence, to maintain the status quo, it is imperative for these autopoietic systems to have a correctional apparatus. The correctional apparatus spontaneously breaks the symmetry that leads the system away from complexity, by either introducing haphazard fault lines in connections, or chaotic behaviors resulting from sensitivity to minor fluctuations as a result of banking on nonlinearity. Does this correctional apparatus in any way impact memory gained through the process of historicality? Apparently not. This is because of the distributed nature of memory storage, which is largely due to weights that are non-correspondingly symbolic. The weights that show their activity at the local scale are associated with memory storage through traces, and it is only due to this fact that information gets distributed over the system, generating robustness. With these characteristic features, autopoietic systems only tend towards organizing their structures to the optimum, while safely securing the complexity expected within the system.
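
As a toy illustration of the weight dynamics just described (random undifferentiated starting weights, a “race to show superiority” among nodes, and Hebbian-style reinforcement of the winner; the dimensions, learning rate and input clusters are entirely hypothetical), a minimal winner-take-all sketch:

/*
 * Toy winner-take-all sketch: nodes start with random weights, the node whose
 * weight vector best matches an incoming cluster "wins" the race and is pulled
 * further towards it, so stable clusters of inputs carve out stable systems of
 * weights. All values are illustrative.
 */
#include <stdio.h>
#include <stdlib.h>

#define NODES 3
#define DIM   2

int main(void)
{
    double w[NODES][DIM];
    double inputs[4][DIM] = { {0.9, 0.1}, {1.0, 0.0},   /* cluster A */
                              {0.1, 0.9}, {0.0, 1.0} }; /* cluster B */
    double lr = 0.2;

    srand(7);                                   /* random, undifferentiated start */
    for (int n = 0; n < NODES; ++n)
        for (int d = 0; d < DIM; ++d)
            w[n][d] = (double)rand() / RAND_MAX;

    for (int epoch = 0; epoch < 100; ++epoch)
        for (int i = 0; i < 4; ++i) {
            /* the winner is the node whose weights lie closest to the input */
            int best = 0;
            double bestdist = 1e9;
            for (int n = 0; n < NODES; ++n) {
                double dist = 0.0;
                for (int d = 0; d < DIM; ++d) {
                    double diff = inputs[i][d] - w[n][d];
                    dist += diff * diff;
                }
                if (dist < bestdist) { bestdist = dist; best = n; }
            }
            /* only the winner is reinforced, pulled towards the input cluster */
            for (int d = 0; d < DIM; ++d)
                w[best][d] += lr * (inputs[i][d] - w[best][d]);
        }

    for (int n = 0; n < NODES; ++n)
        printf("node %d settles at (%.2f, %.2f)\n", n, w[n][0], w[n][1]);
    return 0;
}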