Permeability of Autopoietic Principles (revisited) During Cognitive Development of the Brain

Distinctions and binaries have their problematics, and neural networks are no different when one such attempt is made regarding the information that flows from the outside into the inside, where interactions occur. The inside of the system has to cope with the outside through mechanisms that are either predefined for the system under consideration, or absent altogether, leaving no independent internal structure to begin with. The former mechanism results in a loss of adaptability, since all possible eventualities would have to be catered for in the fixed, internal structure of the system. The latter is guided by the conditions prevailing in the environment. In either case, learning to cope with environmental conditions is the key to the system’s reaching any kind of stability. But how would a system respond to its environment? According to the ideas propounded by Changeux et al., this is possible in two ways, viz.,

  1. An instructive mechanism, directly imposed by the environment on the system’s structure, and
  2. a selective mechanism, Darwinian in its import, which helps maintain order as a result of interactions between the system and the environment. The environment facilitates the reinforcement, stabilization and development of the structure, without in any way determining it. (A minimal sketch contrasting the two mechanisms follows this list.)
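
To make the contrast concrete, here is a minimal sketch, assuming a single linear unit with a handful of synapses; the learning rate, sizes and variable names are illustrative inventions, not drawn from Changeux or Edelman. The instructive route imposes an environmental target through an explicit error signal (supervised learning), while the selective route merely reinforces correlations the system already produces (unsupervised, Hebbian learning).

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)   # synaptic weights, randomly initialized
x = np.array([0.5, -0.2, 0.8])      # an input pattern from the environment
eta = 0.1                           # learning rate (illustrative)

# 1. Instructive mechanism ~ supervised learning: the environment supplies
#    an explicit target, and the error is imposed directly on the structure.
target = 1.0
y = w @ x
w_supervised = w + eta * (target - y) * x   # delta rule

# 2. Selective mechanism ~ unsupervised learning: no target is given; the
#    environment reinforces the correlations the system itself produces.
w_unsupervised = w + eta * y * x            # plain Hebbian reinforcement
```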

These two distinct ways, when exported to neural networks, take on the connotations of supervised and unsupervised learning respectively. The position of Changeux et al. is rooted in rule-based, formal and representational formats, and is thus criticized by the likes of Edelman. According to him, in information-processing models of the nervous system (his analysis is based upon nervous systems), neural signals are taken in from the periphery, and thereafter encoded in various ways to be subsequently transformed and retransformed during processing, before generating an output. This not only puts extreme emphasis on formal rules, but also makes a claim about the nature of memory, which is held to occur through the representation of events by recording or replicating their informational details. Although Edelman’s analysis takes the nervous system as its centrality, the informational modeling approach he criticizes is blanketed over the ontological basis that forms the fabric of the universe. Connectionists have no truck with this approach, as can easily be discerned from a long quote Edelman provides:

The notion of information processing tends to put a strong emphasis on the ability of the central nervous system to calculate the relevant invariance of a physical world. This view culminates in discussions of algorithms and computations, on the assumption that brain computes in an algorithmic manner…Categories of natural objects in the physical world are implicitly assumed to fall into defined classes or typologies that are accessible to a program. Pushing the notion even further, proponents of certain versions of this model are disposed to consider that the rules and representation (Chomsky) that appear to emerge in the realization of syntactical structures and higher semantic functions of language arise from corresponding structures at the neural level.

Edelman is aware of the shortcomings of information-processing models, and therefore takes a leap into the connectionist fold with his proposal of a brain consisting of a large number of undifferentiated but connected neurons. At the same time, he gives a lot of credence to the organization occurring during the developmental phases of the brain. He lays out the following principles of this population thinking in his Neural Darwinism: The Theory of Neuronal Group Selection:

  1. The homogeneous, undifferentiated population of neurons is epigenetically diversified into structurally variant groups through a number of selective processes, called the “primary repertoire”.
  2. Connections among the groups are modified by signals received during interactions between the system and the environment housing it. Such modifications, which occur during the post-natal period, become functionally active for future use and form the “secondary repertoire”.
  3. With the “primary” and “secondary” repertoires in place, groups engage in interactions by means of feedback loops arising from various sensory/motor responses, enabling the brain to interpret conditions in its environment and thus act upon them. (A toy simulation of these three principles follows the list.)
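
The three principles lend themselves to a toy simulation. The sketch below is a hedged reading, not Edelman’s own model: “groups” are random connection vectors (the primary repertoire), environmental signals reinforce, without prescribing, the strengths of responsive groups, and the reinforced subset stands in for the secondary repertoire. All sizes and constants are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Principle 1 -- primary repertoire: epigenetic diversification yields
# structurally variant groups (here, random connection vectors).
n_groups, dim = 50, 8
primary = rng.normal(size=(n_groups, dim))

# Principle 2 -- signals received during system/environment interaction
# modify the groups' functional strengths (reinforcement, not instruction).
signals = rng.normal(size=(200, dim))
strength = np.ones(n_groups)
for s in signals:
    response = primary @ s
    strength += 0.01 * np.maximum(response, 0.0)

# Principle 3 -- the functionally stabilized subset (secondary repertoire)
# is what subsequent sensory/motor feedback loops would operate over.
secondary = primary[strength > np.median(strength)]
```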

“Degenerate” is what Edelman calls the neural groups in the primary repertoire to begin with. This entails the possibility of a significant number of non-identical variant groups. There is another dimension to this as well: the non-identical variant groups are distributed uniformly across the system. Within Edelman’s nervous-system case study, degeneracy and distributedness are crucial features for denying the localization of cortical functions on the one hand, and the existence of hierarchical processing structures in a narrow sense on the other. Edelman’s cortical map formation incorporates the generic principles of autopoiesis. Cortical maps are collections (areas) of minicolumns in the cerebral cortex that have been identified as performing a specific information-processing function.

In Edelman’s theory, neural groups have an optimum size that is not known a priori, but develops spontaneously and dynamically. Within the cortex, this is achieved by means of inhibitory connections spread over the horizontal plane, while excitatory ones are laid out vertically, thus concentrating neuronal activity in the vertical plane rather than the horizontal one. Hebb’s rule governs how such a group functions: impulses carried to a neural group activate it, and subsequently alter its synaptic strengths. In the ensuing process, a correlation gets formed between neural groups, with possible overlapping of messages as a result of the synaptic activity generated within each group. This correlational activity can be selected for through frequent exposure to such overlaps, and once selected, a group might start to exhibit its activity even in the absence of inputs or impulses. This selection is nothing but memory, and is always used in learning procedures. A lot depends upon the frequency of exposure: if it is on the lower scale, the memory, or selection, could simply fade away and be made available for a different procedure. No wonder forgetting is often referred to as a precondition for memory. Fading away might be a useful criterion for reusing the freed storage capacity during the developmental process, but at the stage when groups of the right size are in place and ready for selection, weakly interacting groups meet the fate of elimination. The elimination and retention of groups depend upon what Edelman refers to as the vitality principle, wherein sensitivity to the historical process finds more legitimacy, and extant groups find takers in influencing the formation of new ones. The reason for including Edelman’s case was specifically to highlight the permeability of self-organizing principles during the cognitive development of the brain, and also to pit the superiority of neural-network/connectionist models in comprehending brain development against traditional rule-based, expert and formal systems of modeling. A minimal sketch of this selection-with-forgetting dynamic follows.
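
The sketch below renders the dynamic under entirely illustrative assumptions (group count, rates and thresholds are invented; this is not Edelman’s actual formalism): co-active groups strengthen their mutual couplings by Hebb’s rule, unused couplings decay so that unrefreshed traces fade, and persistently weak groups stand in for those facing elimination.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 20                              # number of neural groups
coupling = np.zeros((n, n))         # inter-group synaptic strengths
eta, decay = 0.1, 0.02              # learning rate and forgetting rate

for _ in range(500):
    # Impulses activate a random subset of groups on each exposure.
    activity = (rng.random(n) < 0.2).astype(float)
    # Hebb's rule: co-active groups strengthen their mutual couplings,
    # so frequently overlapping exposures get "selected" as a trace.
    coupling += eta * np.outer(activity, activity)
    np.fill_diagonal(coupling, 0.0)
    coupling *= (1.0 - decay)       # infrequently refreshed traces fade away

# Groups whose strongest coupling stays below the average interaction
# level model the weakly interacting groups that face elimination.
weak = coupling.max(axis=1) < coupling.mean()
```

A group that ends up strongly coupled can in principle reactivate its partners through the learned connections alone, which is the sense in which selected activity may persist even in the absence of external impulses.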

In order to understand the nexus between brain development and environment, it would be safe to carry Edelman’s analysis further. Linking structural changes in the brain with environmental effects is a commonsense belief. Even if one takes recourse to Darwinian evolution, these changes are either delayed due to systemic resistance to letting the effects take over, or, in not so Darwinian a fashion, the effects are a compounded resultant of groups embedded within the network. Edelman’s cortical map formation, on the other hand, is not confined to processes occurring within the brain’s structure alone, but is also realized by how the brain explores its environment. This aspect is nothing but motor behavior in its nexus between brain and environment, and is strongly voiced by Cilliers, when he calls to attention,

The role of active motor behavior forms the first half of the argument against abstract, solipsistic intelligence. The second half concerns the role of communication. The importance of communication, especially the use of symbol systems (language), does not return us to the paradigm of objective information-processing. Structures for communication remain embedded in a neural structure, and therefore will always be subjected to the complexities of network interaction. Our existence is both embodied and contingent.

Edelman is criticized for according no place to replication in his theory, replication being a strong pillar of natural selection and learning. Recently, attempts to incorporate replication in the brain have been undertaken, and strong indications of neuronal replicators, with Hebb’s learning mechanism showing more promise than natural selection, are in the limelight (Fernando, Goldstein and Szathmáry). Such autopoietic systems, when given a mathematical description and treatment, could be modeled on a computer or digital system, thus helping to give insights into a world pregnant with complexity.

Autopoiesis goes directly to the heart of anti-foundationalism. This is because the epistemological basis of basic beliefs is not paid any due respect or justificatory support in the autopoietic system’s insistence on internal interactions and external contingent factors obligating the system to undergo continuous transformations. If autopoiesis can survive wonderfully well without any transcendental intervention, or a priori definition, it has parallels running within French theory. If anti-foundationalism is the hallmark of autopoiesis, so is anti-reductionism, since it is well-nigh impossible to have meaning explicated in terms of atomistic units, especially when the systems are already anti-foundationalist. Even in biologically contextual terms, a mereology, according to Garfinkel, emerges as a result of the complex interactions that go on within the autopoietic system. Garfinkel says,

We have seen that modeling aggregation requires us to transcend the level of the individual cells to describe the system by holistic variables. But in classical reductionism, the behavior of holistic entities must ultimately be explained by reference to the nature of their constituents, because those entities ‘are just’ collections of the lower-level objects with their interactions. Although, it may be true in some sense that systems are just collections of their elements, it does not follow that we can explain the system’s behavior by reference to its parts, together with a theory of their connections. In particular, in dealing with systems of large numbers of similar components, we must make recourse to holistic concepts that refer to the behavior of the system as a whole. We have seen here, for example, concepts such as entrainment, global attractors, waves of aggregation, and so on. Although these system properties must ultimately be definable in terms of the states of individuals, this fact does not make them ‘fictions’; they are causally efficacious (hence, real) and have definite causal relationships with other system variables and even to the states of the individuals.

Autopoiesis gains vitality when systems thinking opens up avenues for accepting contradictions and opposites rather than merely trying to get rid of them. Vitality is centered around a conflict, and ideally comes into balanced existence when such a conflict, or strife, helps facilitate consensus building, or cooperation. If such goals are achieved, the analysis of complexity theory gets a boost, and moreover, by being sensitive to autopoiesis, an appreciation of the real lebenswelt gets underlined. Memory and history are essential for complex autopoietic systems, whether biological and/or social, and this can be fully comprehended in some quite routine situations: systems that are identical in most respects, if differing in their histories, will have different trajectories in responding to the situations they face. Memory does not determine the final description of the system, since it is itself susceptible to transformations; what really gets passed on are traces, and the same susceptibility to transformations applies to the traces as well. Memory is not stored in the brain as discrete units, but rather in a distributed pattern, and this is the pivotal characteristic of self-organizing complex systems over any other form of iconic representation. This property of transformation as associated with autopoietic systems suspends the process between activity and passivity, in that the system is at once determined by its environment and has an impact upon it. This is really important in autopoiesis, since the distinction between inside and outside, active and passive, is difficult to discern, and this disappearance of distinction is a sufficient case to vouch against any authoritative control residing within the system and/or emanating from any single source. Autopoiesis scores over other representational modeling techniques by its ability to self-reflect, that is, by the system’s ability to act upon itself. For Lawson, reflexivity disallows any static description of the system, since it is not possible to intercept the reflexive moment, and it likewise disallows a complete description of the system at a meta-level. Even though a meta-level description can be construed, it yields only frozen frames or snapshots of the system at a given instance, and hence ignores the temporal dimension the system undergoes. For that to be taken into account, and for the complexity within the system to be measured, the roles of activity and passivity cannot be ignored at any cost, despite the great difficulties they present for modeling. But is that not really a blessing in disguise, for should the model of a complex system not be retentive of the complexity in the real world? Well, the answer is yes, it is. A toy illustration of distributed, trace-based storage follows.
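
The sketch below uses a Hopfield-style associative network, a standard connectionist example offered here as an analogy rather than a claim about the authors discussed. Each stored pattern is spread across the entire weight matrix, no single unit “contains” the memory, and recall proceeds from a degraded trace rather than a replica; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Store two random +/-1 patterns; each memory lives in the whole
# weight matrix, not in any discrete storage location.
n = 16
patterns = rng.choice([-1, 1], size=(2, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

# Recall from a corrupted trace: damage part of the pattern and let
# the distributed weights reconstruct the original.
state = patterns[0].copy()
state[:4] *= -1                     # a trace, not the memory itself, is passed on
for _ in range(10):
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, patterns[0]))   # typically True for light damage
```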

Somehow, the discussion till now still smells of anarchy within autopoiesis, and if there is no satisfactory account of predictability and stability within the self-organizing system, the fears only get aggravated. A system which undergoes huge effects when small changes or alterations are made in the causes is definitely not a candidate for stability, and autopoietic systems are precisely such. Does this mean that they are unstable, or does it call for a reworking of the notion of stability? This is philosophically contentious, and there is no doubt about it. Instability could be a result of probabilities, but complex systems have to fall outside the realm of such probabilities. What happens in complex systems is a result of complex interactions among a large number of factors that need not be logically compatible. At the same time, stochasticity has no room here, for it serves as an escape route from the annals of classical determinism, and hence a theory based on such escape routes could never be a theory of self-organization (Pattee). Stability is closely related to the ability to predict, and if stability is something very different from what classical determinism says it is, the case of predictability should be no different. The problems with prediction are gross, as is echoed in the words of Krohn and Küppers,

In the case of these ‘complex systems’ (Nicolis and Prigogine), or ‘non-trivial’ machines, a functional analysis of input-output correlations must be supplemented by the study of ‘mechanisms’, i.e. by causal analysis. Due to the operational conditions of complex systems it is almost impossible to make sense of the output (in terms of the functions or expected effects) without taking into account the mechanisms by which it is produced. The output of the system follows the ‘history’ of the system, which itself depends on its previous output taken as input (operational closure). The system’s development is determined by its mechanisms, but cannot be predicted, because no reliable rule can be found in the output itself. Even more complicated are systems in which the working mechanisms themselves can develop according to recursive operations (learning of learning; invention of inventions, etc.).
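
A toy “non-trivial machine” in the above sense can be had in a few lines: a fully deterministic mechanism whose output is taken as its next input (operational closure), yet whose output stream yields no reliable predictive rule. The logistic map is used here purely as an illustration; the parameter values are the usual textbook ones, not anything from Krohn and Küppers.

```python
def run(x, steps=60, r=3.9):
    """Iterate x -> r*x*(1-x); each output is fed back as the next input."""
    history = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        history.append(x)
    return history

# Two histories differing by one part in ten billion at the start soon
# disagree completely: the mechanism determines the development, but no
# rule recoverable from the output alone predicts it.
a = run(0.2)
b = run(0.2 + 1e-10)
print(a[-1], b[-1])
```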

The quote from Krohn and Küppers is clearly indicative of the predicaments faced while attempting to provide explanations of predictability. Although it is quite difficult to get rid of these predicaments, attempts to mitigate them, so as to keep noise from distorting or disturbing the stability and predictability of the systems, are always in the pipeline. One such attempt lies in collating or mapping constraints onto a real epistemological fold of history and environment, and thereafter applying it to studies of the social and the political. This is voiced very strongly, as a parallel metaphoric, in Luhmann, when he draws attention to,

Autopoietic systems, then, are not only self organizing systems. Not only do they produce and eventually change their own structures but their self-reference applies to the production of other components as well. This is the decisive conceptual innovation. It adds a turbo charger to the already powerful engine of self-referential machines. Even elements, that is, last components (individuals), which are, at least for the system itself, undecomposable, are produced by the system itself. Thus, everything which is used as a unit by the system is produced as a unit by the system itself. This applies to elements, processes, boundaries and other structures, and last but not least to the unity of the system itself. Autopoietic systems, of course, exist within an environment. They cannot exist on their own. But there is no input and no output of unity.

What this entails for social systems is that they are autopoietically closed, in that, while they rely on resources from their environment, the resources in question do not become part of the systemic operation. The system therefore never tries its luck at adjusting to changes that are brought about superficially, frittering away its available resources, instead of attending to trends that are not superficial. Were a system ever to attempt such a fall from grace by acclimatizing itself to these fluctuations, a choice that is at once ethical and contextual would be resorted to. Within distributed systems as such, a central authority is paid no heed, since such a scenario could result in a general degeneracy of the system as a whole. What gets highlighted instead is the ethical choice of decentralization, to ensure the system’s survivability and dynamism. Such an ethical treatment is no less altruistic.
