Distributed Representation Revisited

Figure 132: The distributed representation of language meaning in neural networks

If the conventional symbolic model mandates the construction of a theory to address the issues pertaining to a problem, this mandatory theory construction is bypassed in distributed representational systems, since the latter are characterized by a large number of interactions occurring in a nonlinear fashion. No such attempt at theoretical construction is to be made in distributed representational systems, for fear that high-end abstraction would drain off the nutrient that is the hallmark of the model. Distributed representation is likely to encounter onerous issues as the size of the network inflates, but the issue is addressed through what is commonly known as the redundancy technique, whereby a simultaneous encoding of the information generated by numerous interactions takes place, thus ameliorating the adequacy with which the information is presented to the network. In the words of Paul Cilliers, this is an important point, for,

the network used for the model of a complex system will have to have the same level of complexity as the system itself…. However, if the system is truly complex, a network of equal complexity may be the simplest adequate model of such a system, which means that it would be just as difficult to analyze as the system itself.

Following this, he also presents a caveat,

This has serious methodological implications for the scientists working with complex systems. A model which reduces the complexity may be easier to implement, and may even provide a number of economical descriptions of the system, but the price paid for this should be considered carefully.
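The redundancy technique mentioned above can be made concrete with a toy sketch (my own illustration, not from the text): a concept stored as a pattern over many units survives damage to a sizeable fraction of those units, because the information is encoded redundantly across the whole network rather than in any single node.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: three concepts stored as distributed,
# redundant patterns over many units.
n_units = 64
concepts = {name: rng.choice([-1.0, 1.0], size=n_units)
            for name in ["tiger", "cat", "dog"]}

def recover(pattern):
    # Nearest stored pattern by inner product (correlation).
    return max(concepts, key=lambda n: np.dot(concepts[n], pattern))

# Knock out a quarter of the units ("damage" the network).
damaged = concepts["tiger"].copy()
damaged[rng.choice(n_units, size=n_units // 4, replace=False)] = 0.0

print(recover(damaged))  # the distributed code degrades gracefully
```

A local, one-unit-per-concept code would lose the concept entirely if its single unit were knocked out; here the remaining units still carry enough of the pattern.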

One of the outstanding qualities of distributed representational systems is their adaptability: the network can be reused to offer solutions to other problems. What this connotes is that the learning a network has undergone for a problem ‘A’ can be shared with a problem ‘B’, since many of the input neurons are bound by information learned through ‘A’ that is applicable to ‘B’. In other words, the weights are the dictators of how issues are solved or resolved, no matter when or for which problem the learning took place. There is a slight hitch here: this capacity for generalizing solutions could suffer if the level of abstraction starts to shoot up. This itself can be arrested if, in the initial stages, the right kind of framework is decided upon, reducing the hitch to an almost negligible factor. The very notion of weights is considered problematic by Sterelny, who uses it to attack distributed representation in general and connectionism as a whole in particular. In an analogically witty paragraph, Sterelny says,

There is no distinction drawable, even in principle, between functional and non-functional connections. A positive linkage between two nodes in a distributed network might mean a constitutive link (e.g. catlike, in a network for tiger); a nomic one (carnivore, in the same network), or a merely associative one (in my case, a particular football team that play in black and orange).

It should be noted that this criticism of weights arises because, for Sterelny, the relationship between distributed representations and the micro-features that compose them is deeply problematic. If such is the criticism, then no doubt Sterelny still seems to be ensconced within the conventional semantic/symbolic model. And since all weights can take part in information processing, there is a sort of democratic liberty accorded to the weights within a distributed representation, and hence any talk of the constitutive, the nomic, or for that matter the associative is mere humbug. Even if there is a prevailing disagreement that a large pattern of weights is not convincing enough as an explanation, since such patterns tend to complicate matters, distributed representational systems work consistently enough compared with an alternative system that offers explanation through reasoning, and it is therefore quite foolhardy to jettison distributed representation by the sheer force of criticism. If a neural network can be adapted to produce the correct answer for a number of training cases that is large compared with the size of the network, it can be trusted to respond correctly to previously unseen cases, provided they are drawn from the same population using the same distribution as the training cases, thus undermining the commonly held idea that explanations are a necessary feature of trustworthy systems (Baum and Haussler). Another objection that distributed representation faces is that, if representations are distributed, then the probability of two representations of the same thing differing from one another cannot be ruled out.
So one of them is the true representation, while the other is only an approximation of it.(1) This is a criticism of merit and is attributed to Fodor, in his influential book titled Psychosemantics.(2) For, if there is only one representation, Fodor would not shy away from saying that this is the yucky solution the folk project believes in. But since connectionism believes in the plausibility of indeterminate representations, the question of flexibility scores well and high over the conventional semantic/symbolic models, and is it not common sense to encounter flexibility in daily life? The other response to this objection comes from post-structuralist theories (Baudrillard is quite important here; see the first footnote below). The objection of a true representation versus a copy of the true representation meets its pharmacy in post-structuralism, where meaning is constituted by synchronic as well as diachronic contextualities, thereby supplying distributed representation with a no-need-for concept and context, as these are inherent in the idea of such a representation itself. Sterelny still seems to ride on his obstinacy, and in a vitriolic tone demands to know why distributed representations should be regarded as states of the system at all. Moreover, he says,

It is not clear that a distributed representation is a representation for the connectionist system at all…given that the influence of node on node is local, given that there is no processor that looks at groups of nodes as a whole, it seems that seeing a distributed representation in a network is just an outsider’s perspective on the system.

This is moving around in circles, if nothing more. Or maybe he was anticipating what G. F. Marcus would write and echo to some extent in his book The Algebraic Mind. In the words of Marcus,

…I agree with Stemberger(3) that connectionism can make a valuable contribution to cognitive science. The only place we differ is that, first, he thinks that the contribution will be made by providing a way of eliminating symbols, whereas I think that connectionism will make its greatest contribution by accepting the importance of symbols, seeking ways of supplementing symbolic theories and seeking ways of explaining how symbols could be implemented in the brain. Second, Stemberger feels that symbols may play no role in cognition; I think that they do.

Whatever Sterelny claims, after most of the claims and counter-claims have been taken into account, the only conclusion for the time being is that distributed representation has not been undermined, his adamant position notwithstanding.
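The adaptability discussed earlier, whereby weights learned for problem ‘A’ are reused for a related problem ‘B’, can be sketched minimally. This is a hypothetical toy of my own: two logistic-regression "networks" stand in for a full model, and `train`, `accuracy`, and the two tasks are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def train(X, y, w, epochs=50, lr=0.5):
    # Plain logistic-regression gradient descent; the point is only
    # that the learned weights carry over between problems.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return np.mean(((X @ w) > 0) == y)

X = rng.normal(size=(200, 3))
y_a = (X[:, 0] > 0)                      # problem 'A'
y_b = (X[:, 0] + 0.3 * X[:, 1] > 0)      # related problem 'B'

w_a = train(X, y_a, np.zeros(3))         # learn 'A' from scratch
w_b = train(X, y_b, w_a, epochs=5)       # brief fine-tuning for 'B'

print(accuracy(X, y_b, w_b))
```

Because ‘B’ overlaps with what was learned for ‘A’, the reused weights already solve most of ‘B’ before any fine-tuning takes place.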

(1) This notion finds its parallel in Baudrillard’s Simulation, and the notion will subsequently be invoked in studying this parallel nature. Of special interest is the order of simulacra in the period of post-modernity, where the simulacrum precedes the original and the distinction between reality and representation vanishes. There is only the simulacrum, and originality becomes a totally meaningless concept.

(2) This book is known for putting folk psychology firmly on the theoretical ground by rejecting any external, holist and existential threat to its position.

(3) Joseph Paul Stemberger is a professor in the Department of Linguistics at The University of British Columbia in Vancouver, British Columbia, Canada, with primary interests in phonology, morphology, and their interactions. His theoretical orientations are towards Optimality Theory, employing his own version of the theory, and towards connectionist models.

 


Permeability of Autopoietic Principles (revisited) During Cognitive Development of the Brain

Distinctions and binaries have their problematics, and neural networks are no different when one such attempt is made regarding the information that flows from the outside into the inside, where interactions occur. The inside of the system has to cope with the outside through mechanisms that are either predefined for the system under consideration, or that have no independent internal structure at all to begin with. The former mechanism results in a loss of adaptability, since all possible eventualities would have to be catered for in the fixed, internal structure of the system. The latter is guided by conditions prevailing in the environment. In either case, learning to cope with environmental conditions is the key to the system’s reaching any kind of stability. But how would a system respond to its environment? According to the ideas propounded by Changeux et al., this is possible in two ways, viz.,

  1. An instructive mechanism, directly imposed by the environment on the system’s structure, and
  2. a selective mechanism, Darwinian in its import, which helps maintain order as a result of interactions between the system and the environment. The environment facilitates reinforcement, stabilization and development of the structure, without in any way determining it.

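The contrast between the two mechanisms can be sketched in a few lines. This is my own toy, in which a delta-rule perceptron stands in for the instructive mechanism (the environment supplies explicit targets) and competitive learning stands in for the selective one (pre-existing variant units compete, and interaction merely reinforces the winner).

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(100, 4))

# 1. "Instructive" mechanism: the environment hands the system explicit
#    targets and directly imposes structure (supervised, delta-rule style).
targets = (x[:, 0] > 0).astype(float)
w = np.zeros(4)
for _ in range(10):
    for xi, t in zip(x, targets):
        y = 1.0 if xi @ w > 0 else 0.0
        w += 0.1 * (t - y) * xi      # correction dictated from outside

# 2. "Selective" mechanism: no targets; pre-existing variant units
#    compete, and exposure merely reinforces and stabilizes the winner.
units = rng.normal(size=(3, 4))
for _ in range(10):
    for xi in x:
        k = np.argmax(units @ xi)          # selection among variants
        units[k] += 0.1 * (xi - units[k])  # stabilized, not instructed

print(np.mean(((x @ w) > 0) == targets))
```

Only the first loop receives an error signal from outside; the second is shaped entirely by which variants the environment happens to reinforce.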
These two distinct ways, when exported to neural networks, take on the connotations of supervised and unsupervised learning. The position of Changeux et al. is rooted in rule-based, formal and representational formats, and is thus criticized by the likes of Edelman. According to him, in a nervous system (his analyses are based upon nervous systems) neural signals in an information-processing model are taken in from the periphery, and thereafter encoded in various ways to be subsequently transformed and retransformed during processing, generating an output. This not only puts extreme emphasis on formal rules, but also makes a claim about the nature of memory, which is considered to occur through the representation of events via the recording or replication of their informational details. Although Edelman’s analysis takes the nervous system as its center, the informational modeling approach he examines is blanketed over the ontological basis that forms the fabric of the universe. Connectionists have no truck with this approach, as can easily be discerned from a long quote Edelman provides:

The notion of information processing tends to put a strong emphasis on the ability of the central nervous system to calculate the relevant invariance of a physical world. This view culminates in discussions of algorithms and computations, on the assumption that brain computes in an algorithmic manner…Categories of natural objects in the physical world are implicitly assumed to fall into defined classes or typologies that are accessible to a program. Pushing the notion even further, proponents of certain versions of this model are disposed to consider that the rules and representation (Chomsky) that appear to emerge in the realization of syntactical structures and higher semantic functions of language arise from corresponding structures at the neural level.

Edelman is aware of the shortcomings in information-processing models, and therefore takes a leap into the connectionist fold with his proposal of a brain consisting of a large number of undifferentiated, but connected, neurons. At the same time, he gives a lot of credence to the organization occurring during the developmental phases of the brain. He lays out the following principles of this population thinking in his Neural Darwinism: The Theory of Neuronal Group Selection:

  1. The homogeneous, undifferentiated population of neurons is epigenetically diversified into structurally variant groups through a number of selective processes, called the “primary repertoire”.
  2. Connections among the groups are modified by signals received during interactions between the system and the environment housing it. Such modifications, occurring during the post-natal period, become functionally active for future use, and form the “secondary repertoire”.
  3. With the “primary” and “secondary” repertoires in place, groups engage in interactions by means of feedback loops arising from various sensory/motor responses, enabling the brain to interpret conditions in its environment and thus act upon them.

“Degenerate” is what Edelman calls the neural groups in the primary repertoire to begin with. This entails the possibility of a significant number of non-identical variant groups. There is another dimension to this as well, in that non-identical variant groups are distributed uniformly across the system. Within Edelman’s nervous-system case study, degeneracy and distributedness are crucial features for denying the localization of cortical functions on the one hand, and the existence of hierarchical processing structures in a narrow sense on the other. Edelman’s cortical map formation incorporates the generic principles of autopoiesis. Cortical maps are collections (areas) of minicolumns in the brain cortex that have been identified as performing a specific information-processing function.


In Edelman’s theory, neural groups have an optimum size that is not known a priori, but develops spontaneously and dynamically. Within the cortex, this is achieved by means of inhibitory connections spread over the horizontal plane, while excitatory ones are laid out vertically, thus enabling neuronal activity to be concentrated on the vertical plane rather than the horizontal one. Hebb’s rule facilitates the utility function of such a group. Impulses are carried to neural groups, activating them and subsequently altering synaptic strengths. During the ensuing process, a correlation gets formed between neural groups, with possible overlapping of messages as a result of the synaptic activity generated within each group. This correlational activity could be selected through frequent exposure to such overlaps, and once selected, the group might start to exhibit its activity even in the absence of inputs or impulses. This selection is nothing but memory, and is always used in learning procedures. A lot depends upon the frequency of exposure: if it is on the lower scale, the memory, or selection, could simply fade away and be made available for a different procedure. No wonder forgetting is always referred to as a precondition for memory. Fading away might be a useful criterion for reusing the freed memory storage space during the developmental process, but at the stage when groups of the right size are in place and ready for selection, weakly interacting groups meet the fate of elimination. Elimination and retention of groups depend upon what Edelman refers to as the vitality principle, wherein sensitivity to the historical process finds more legitimacy, and extant groups find takers in influencing the formation of new groups.
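The selection-and-fading dynamic described above can be caricatured in a few lines. This is an illustration of my own, not Edelman's model: each exposure reinforces the exposed group's trace in Hebb-like fashion, while every trace decays slightly on each step, so infrequently exposed groups fade and free their capacity.

```python
import numpy as np

# Two neural "groups"; exposure reinforces the active group's trace,
# while all traces decay a little at every step (forgetting).
strength = np.zeros(2)
decay = 0.05

for group, times in [(0, 30), (1, 3)]:   # group 0 is frequently exposed
    for _ in range(times):
        strength *= (1.0 - decay)        # all traces fade slightly
        strength[group] += 1.0           # the active group is reinforced

print(strength)  # the frequently exposed group retains a much stronger trace
```

The frequency of exposure, not any stored instruction, decides which group survives selection and which fades away.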
The reason for including Edelman’s case was specifically to highlight the permeability of self-organizing principles during the cognitive development of the brain, and also to pit the superiority of neural-network/connectionist models in comprehending brain development against the traditional rule-based expert and formal systems of modeling techniques.

In order to understand the nexus between brain development and environment, it is safe to carry Edelman’s analysis further. It is a commonsense belief to link structural changes in the brain with environmental effects. Even if one takes recourse to Darwinian evolution, these changes are either delayed due to systemic resistance to letting these effects take over, or, in a not so Darwinian fashion, the effects are a compounded resultant of embedded groups within the network. On the other hand, Edelman’s cortical map formation is not confined to processes occurring within the brain’s structure alone, but is also realized by how the brain explores its environment. This aspect is nothing but motor behavior in its nexus between the brain and environment, and is strongly voiced by Cilliers, when he calls to attention,

The role of active motor behavior forms the first half of the argument against abstract, solipsistic intelligence. The second half concerns the role of communication. The importance of communication, especially the use of symbol systems (language), does not return us to the paradigm of objective information-processing. Structures for communication remain embedded in a neural structure, and therefore will always be subjected to the complexities of network interaction. Our existence is both embodied and contingent.

Edelman is criticized for giving no place to replication in his theory, even though replication is a strong pillar of natural selection and learning. Recently, attempts to incorporate replication in the brain have been undertaken, and strong indicators for neuronal replicators, with Hebb’s learning mechanism showing more promise than natural selection, are in the limelight (Fernando, Goldstein and Szathmáry). These autopoietic systems, when given a mathematical description and treatment, could be modeled on a computer or digital system, thus helping give insights into a world pregnant with complexity.

Autopoiesis goes directly to the heart of anti-foundationalism. This is because the epistemological basis of basic beliefs is paid no due respect or justificatory support in the autopoietic system’s insistence on internal interactions and external contingent factors obligating the system to undergo continuous transformations. If autopoiesis can survive wonderfully well without any transcendental intervention or a priori definition, it has parallels running within French theory. If anti-foundationalism is the hallmark of autopoiesis, so is anti-reductionism, since it is well-nigh impossible to have meaning explicated in terms of atomistic units, especially when the systems are already anti-foundationalist. Even in biologically contextual terms, a mereology, according to Garfinkel, emerges as a result of the complex interactions that go on within the autopoietic system. Garfinkel says,

We have seen that modeling aggregation requires us to transcend the level of the individual cells to describe the system by holistic variables. But in classical reductionism, the behavior of holistic entities must ultimately be explained by reference to the nature of their constituents, because those entities ‘are just’ collections of the lower-level objects with their interactions. Although, it may be true in some sense that systems are just collections of their elements, it does not follow that we can explain the system’s behavior by reference to its parts, together with a theory of their connections. In particular, in dealing with systems of large numbers of similar components, we must make recourse to holistic concepts that refer to the behavior of the system as a whole. We have seen here, for example, concepts such as entrainment, global attractors, waves of aggregation, and so on. Although these system properties must ultimately be definable in terms of the states of individuals, this fact does not make them ‘fictions’; they are causally efficacious (hence, real) and have definite causal relationships with other system variables and even to the states of the individuals.

Autopoiesis gains vitality when systems thinking opens up avenues for accepting contradictions and opposites rather than merely trying to get rid of them. Vitality is centered around a conflict, and ideally comes into balanced existence when such a conflict, or strife, helps facilitate consensus building or cooperation. If such goals are achieved, the analysis of complexity theory gets a boost, and moreover, by being sensitive to autopoiesis, an appreciation of the real lebenswelt gets underlined. Memory† and history are essentials for complex autopoietic systems, whether biological and/or social, and this can be fully comprehended in quite routine situations: systems that are identical in most respects but differ in their histories will have different trajectories in responding to the situations they face. Memory does not determine the final description of the system, since it is itself susceptible to transformations, and what really gets passed on are traces. The same susceptibility to transformations applies to the traces as well. Memory is not stored in the brain as discrete units, but rather in a distributed pattern, and this is the pivotal characteristic of self-organizing complex systems over any other form of iconic representation. This property of transformation associated with autopoietic systems is enough to suspend the process between activity and passivity, in that the former is determination by the environment and the latter is impact on the environment. This is really important in autopoiesis, since the distinction between inside and outside, active and passive, is difficult to discern, and this disappearance of distinction is a sufficient case against any authoritative control residing within the system and/or emanating from any single source.
Autopoiesis scores over other representational modeling techniques by its ability to self-reflect, that is, by the system’s ability to act upon itself. For Lawson, reflexivity disallows any static description of the system, since it is not possible to intercept the reflexive moment, and it likewise disallows a complete description of the system at a meta-level. Even if a meta-level description can be construed, it yields only frozen frames or snapshots of the system at given instants, and hence ignores the temporal dimension the system undergoes. For that to be taken into account, and for the complexity within the system to be measured, the roles of activity and passivity cannot be ignored at any cost, despite the great difficulties they pose for modeling. But is this not really a blessing in disguise, for should the model of a complex system not be retentive of complexity in the real world? Well, the answer is yes, it is.

Somehow, the discussion till now still smells of anarchy within autopoiesis, and if there is no satisfactory account of predictability and stability within the self-organizing system, the fears only get aggravated. A system which undergoes huge effects when small changes or alterations are made in the causes is definitely not a candidate for stability. And autopoietic systems are precisely such. Does this mean that they are unstable, or does it call for a reworking of the notion of stability? This is philosophically contentious, and there is no doubt regarding that. Instability could be a result of probabilities, but complex systems have to fall outside the realm of such probabilities. What happens in complex systems is a result of complex interactions due to a large number of factors that need not be logically compatible. At the same time, stochasticity has no room here, for it serves as an escape route from the annals of classical determinism, and hence a theory based on such escape routes could never be a theory of self-organization (Pattee). Stability is closely related to the ability to predict, and if stability is something very different from what classical determinism tells us it is, the case for predictability should be no different. The problems with prediction are gross, as echoed in the words of Krohn and Küppers,

In the case of these ‘complex systems’ (Nicolis and Prigogine), or ‘non-trivial’ machines, a functional analysis of input-output correlations must be supplemented by the study of ‘mechanisms’, i.e. by causal analysis. Due to the operational conditions of complex systems it is almost impossible to make sense of the output (in terms of the functions or expected effects) without taking into account the mechanisms by which it is produced. The output of the system follows the ‘history’ of the system, which itself depends on its previous output taken as input (operational closure). The system’s development is determined by its mechanisms, but cannot be predicted, because no reliable rule can be found in the output itself. Even more complicated are systems in which the working mechanisms themselves can develop according to recursive operations (learning of learning; invention of inventions, etc.).
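The operational closure in the passage above, where output is recursively taken as input, can be illustrated with a tiny iterated map. This is my own toy, not from Krohn and Küppers: two histories that differ microscopically soon part company, so no reliable rule can be read off the output alone.

```python
# A toy "non-trivial machine": each output is fed back as the next
# input (operational closure), here via the logistic map.
def trajectory(x, steps=80, r=3.9):
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)   # previous output taken as input
        out.append(x)
    return out

a = trajectory(0.2)
b = trajectory(0.2000001)       # an almost identical history
print(abs(a[-1] - b[-1]))       # compare the two endpoints
```

The system's development is fully determined by its mechanism, yet the two trajectories diverge beyond recovery: determinism without predictability.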

The quote above is clearly indicative of the predicaments encountered while attempting to provide explanations of predictability. Although it is quite difficult to get rid of these predicaments, attempts to mitigate them, so as to keep noise from distorting or disturbing the stability and predictability of the systems, are always in the pipeline. One such attempt lies in collating or mapping constraints onto a real epistemological fold of history and environment, and thereafter applying it to studies of the social and the political. This is voiced very strongly as a parallel metaphoric in Luhmann, when he draws attention to,

Autopoietic systems, then, are not only self organizing systems. Not only do they produce and eventually change their own structures but their self-reference applies to the production of other components as well. This is the decisive conceptual innovation. It adds a turbo charger to the already powerful engine of self-referential machines. Even elements, that is, last components (individuals), which are, at least for the system itself, undecomposable, are produced by the system itself. Thus, everything which is used as a unit by the system is produced as a unit by the system itself. This applies to elements, processes, boundaries and other structures, and last but not least to the unity of the system itself. Autopoietic systems, of course, exist within an environment. They cannot exist on their own. But there is no input and no output of unity.

What this entails for social systems is that they are autopoietically closed: while they rely on resources from their environment, the resources in question do not become part of the systematic operation. So the system never tries its luck at adjusting to changes that are brought about superficially, frittering away its available resources in the process, instead of attending to trends that are not superficial. Were a system ever to attempt such a fall from grace by acclimatizing to these fluctuations, a choice that is at once ethical and contextual is resorted to. Within distributed systems as such, a central authority is paid no heed, since such a scenario could result in a general degeneracy of the system as a whole. Instead, what gets highlighted is the ethical choice of decentralization, to ensure the system’s survivability and dynamism. Such an ethical treatment is no less altruistic.

Connectionism versus Representational Theory of Mind


Although there are some promises shown by the representational theory of mind (functionalism) with its insistence on rationalistic tendencies, there are objections that aim to derail this theory. Since the language of thought (representational theory of mind) believes in the existence of folk psychology, opponents of folk psychology dismiss the approach. As was discussed in the first chapter, folk psychology is not a very successful guide for explaining the workings of the mind. Since representational theory explains how mental states are responsible for causing behaviors, it believes in folk psychology, and therefore the most acrimonious criticisms of this approach come from the eliminative materialist camp. For the eliminative materialist, there is a one-to-one mapping between psychological states and neurophysiological states in the brain, expositing the idea that mental behavior is better explained by neurophysiology, since the vistas for doing so are quantifiably and qualifiably greater. Behaviorism, with its insistence on the absence of linkages between mental states and the effects of behavior, would be another objection. Importantly, even Searle refuses to be a part of representational theory, with his biological naturalism (as discussed in the second chapter), which is majorly non-representational and invests faith in the causal efficacy of mental states. Another objection is the homunculi regress, according to which there is an infinite regress of explanation about how sentences get their meanings. For Searle, even if this is true, it is only partly true, since it is only at the bottom-level homunculi that manipulation of symbols takes place, after which there is an aporetic situation. Daniel Dennett, on the other hand, talks about a “no need for interpretation” at this bottom level, since at this level simplicity crops up.
Searle is a monist, but divides intentional states into low-level brain activity and high-level mental activity. So it is the lower-level, nonrepresentational neurophysiological processes that have causal powers in intention and behavior, rather than some higher-level mental representation. Yet another form of challenge comes from within the camp of representational theory of mind itself, suggestive of cognitive-scientific research showing the amount of intelligent action generated by complex interactions involving neural, bodily and environmental factors. This threat to representational theory(1) is prosaically worded by Wheeler and Clark, when they say,

These are hard times for the notion of internal representation. Increasingly, theorists are questioning the explanatory value of the appeal to internal representation in the search for a scientific understanding of the mind and of intelligent action. What is in dispute is not, of course, the status of certain intelligent agents as representers of their worlds…What is in doubt is not our status as representers, but the presence within us of identifiable and scientifically well-individuated vehicles of representational content. Recent work in neuroscience(2), robotics, philosophy, and developmental psychology suggests, by way of contrast, that a great deal of (what we intuitively regard as) intelligent action may be grounded not in the regimented activity of inner content-bearing vehicles, but in complex interactions involving neural, bodily, and environmental factors and forces.

There is a growing sense of skepticism about representational theory, even from within the camp, though not as hostile as the above quote. Speculation is rife that this may be due to an embrace of the continental tradition of phenomenology and Gibsonian psychology, which could partly indicate a move away from the strictures of the rule-based approach. This critique from within the camp of internal representation comes in very handy in approaching connectionism and in prioritizing its utility as a substitute for representation. What is of crucial importance here is the notion of continuous reciprocal causation, which involves multiple simultaneous interactions alongside complex dynamical feedback loops, such that the causal contribution of each component in the system both determines and is determined by the causal contributions of large numbers of other components, and such that these contributions can change radically over time. When the complexity of the causal interactions shoots up, the difficulty of representation’s explanatory task rises with it. But the real threat to representational theory comes from connectionism, which, despite agreeing with some of the premises of representational theory, deviates greatly when it comes to creating machines that could be said to think. With neural networks and the weights attached to them, a learning algorithm makes it possible for modifications to occur within the network over time. Fodor, though, defends his language of thought, or representational theory in general, against connectionism by claiming that the neural network is just a realization of the computational theory of mind, which necessarily employs symbol manipulation. He does this through his use of cognitive architecture.
Campers in connectionism, however, deny that connectionism is a mere implementation of representational theory. They further claim that the laws of nature exhibit no systematicity resting on representation and, most importantly, reject Fodor's thesis that cognition is essentially a function over representational inputs and outputs, in favor of eliminative connectionism. Much of the current debate between the two camps revolves around the connectionist's denial that connectionism merely implements the representational theory's cognitive architecture, which is a truism in the case of the classicist model. The connectionist response is to build a representational system that agrees that mental representations constitute the direct objects of propositional attitudes and possess a combinatorial syntax and semantics, with mental processes causally sensitive to the syntactic/formal structure of representations so defined, while relying on a non-concatenative realization of the syntactic/structural complexity of representations, in turn yielding a non-classical system.


—————————

(1) As an aside, on the threat to representational theory: I see a strong parallel between the threat to internal representation and Object Oriented Ontology (in L. Bryant's flavor), where objects are treated as black boxes and hence withdrawn, implying that no one, ourselves included, can claim direct access to their inner world, thereby shutting that inner world off from any kind of representation. Though it sounds paradoxical, it is because of withdrawal that knowledge of objects becomes thoroughly relational. When we know an object, we do not know it per se but through a relation; knowledge thus shifts from the register of representation to that of performance, meeting its fellow travelers in Deleuze and Guattari, who defend a performative ontology of the world à la Andrew Pickering by claiming that what a language does is more interesting than what it represents.

(2) Not a part of the quotation, but a brief note on recent work on neurons, which are prone to throwing up surprises. Neurons are getting complicated, but the basic functional picture still holds: synapses transmit electrical signals to the dendrites and the cell body (input), whereas the axon carries signals away (output). What is surprising is the finding by scientists at Northwestern University that axons can act as input agents too; in other words, axons talk to one another. Before sending signals in reverse, axons carry out their own neural computations without any aid from the cell body or dendrites. This is in contrast to typical neuronal communication, where the axon of one neuron contacts another neuron's cell body or dendrite, not its axon. The computations in axons are slower than those in dendrites by a factor of about 10³, potentially giving neurons a means to compute fast things in dendrites and slow ones in axons. Nelson Spruston, senior author of the paper (“Slow Integration Leads to Persistent Action Potential Firing in Distal Axons of Coupled Interneurons”) and professor of neurobiology and physiology in the Weinberg College of Arts and Sciences, says,

“We have discovered a number of things fundamental to how neurons work that are contrary to the information you find in neuroscience textbooks. Signals can travel from the end of the axon toward the cell body, when it typically is the other way around. We were amazed to see this.”

He and his colleagues first discovered that individual nerve cells can fire off signals even in the absence of electrical stimulation in the cell body or dendrites. It is not always stimulus in, immediate action potential out. (Action potentials are the fundamental electrical signaling elements used by neurons; they are very brief changes in the membrane voltage of the neuron.) Much as our working memory holds a telephone number for later use, the nerve cell can store and integrate stimuli over a long period of time, from tens of seconds to minutes (a very long time for neurons). Then, when the neuron reaches a threshold, it fires off a long series of signals, or action potentials, even in the absence of stimuli. The researchers call this persistent firing, and it all seems to be happening in the axon. Spruston further says,
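A crude way to picture this slow integration and persistent firing is a leaky integrator that keeps spiking after its input ceases. The sketch below is a toy of my own devising, not the model from the paper, and every parameter in it is invented for illustration:

```python
def simulate(stimulus, threshold=5.0, leak=0.99, burst=10):
    """Accumulate weak inputs slowly; once a threshold is crossed,
    emit a fixed burst of spikes regardless of further input."""
    v = 0.0
    spikes = []
    firing_left = 0
    for t, s in enumerate(stimulus):
        v = v * leak + s          # slow integration: leak is close to 1
        if firing_left == 0 and v >= threshold:
            firing_left = burst   # persistent firing, independent of input
            v = 0.0               # reset the integrator
        if firing_left > 0:
            spikes.append(t)
            firing_left -= 1
    return spikes

# ten weak pulses, then silence; spiking begins only after accumulation
# and continues well into the silent period
stim = [0.6] * 10 + [0.0] * 20
times = simulate(stim)
```

The cartoon captures only the logic of "store now, fire later"; the actual axonal mechanism, as Spruston says below, remains a mystery.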

“The axons are talking to each other, but it’s a complete mystery as to how it works. The next big question is: how widespread is this behavior? Is this an oddity or does it happen in lots of neurons? We don’t think it’s rare, so it’s important for us to understand under what conditions it occurs and how this happens.”

 

Churchlands, Representational Disconcert via State-Space Physics & Phase-Space Mathematics

Complexity Theory and Philosophy: A Peace Accord


Complexity has impacted fields far from the one in which it originated, i.e. science. It has touched sociological domains and the organizational sciences, but sadly it has not had much of a say in mainstream academic philosophy. In sociology, John Urry (2003) examines the ideas of chaos and complexity in carrying out analyses of global processes. He does so because he believes that systems are balanced between order and chaos, and that there is no teleological move towards any state of equilibrium, as the events that pilot the system are not only unpredictable but also irreversible. Such events rupture the space-time regularity that was thought to characterize hitherto known sociological discursive practices. A highly significant contribution of this analysis is the distinction between what Urry aptly calls “global networks” and “global fluids”. Global fluids are a topographical space used to describe the de-territorialized movement of people, information, objects and finances in an undirected, nonlinear mode, and are in a way characteristic of emergentism and hybridization. The topographies of global networks and global fluids interact in complex ways to give rise to emergent properties that define systems as always on the edge of chaos, pregnant with unpredictability.


Cognitive science and evolutionary theory have been inspirational for many philosophical investigations and have also benefited greatly from complexity theory. If that is so, the perplexing thing is that complexity theory has made no major inroads into philosophy itself. Why could this be so? Let us ponder it.

Analytical philosophy has always been concerned with analysis and with logical constructs that are to be stringently followed. These rules and regulations take philosophical investigation in the analytical tradition away from the holism, uncertainty, unpredictability and subjectivity that are characteristic of complexity. One reason may be that complexity theory was developed on a base of mathematics and computational theory, which is not the usual terrain of academic philosophy dealing with social sciences and cultural studies these days, but is confined to discussions and debates among philosophers of science (biology being an important branch here), mathematics and technology. Moreover, those debates and deliberations have concerned themselves with the unpredictable and uncertain implications derived from the vestiges of chaos theory, not complexity theory per se. This is symptomatic of a widespread confusion that treats the two path-breaking theories as synonymous, which is a mistake, as the former is at best a mere subset of the latter. An ironic fate befell philosophy: it dealt with complex notions of language without ever admitting the jargon and technical parlance of complexity theory. If philosophy lets complexity make a meaningful intercourse into its discursive practices, the alliance could be beneficial to both. The branch of philosophy making use of this intervention and alliance at present is post-modern philosophy.

The works of Freud and Saussure, as furthered by Lacan and Derrida, not only accorded fecundity to a critique of modernity but also opened avenues for a meaningful interaction with complexity. French theory at large was quite antagonistic to modernist claims of reducing the diverse world to essential features for better comprehensibility, and this essentially accounts for its affinity with complexity. Even if Derrida never explicitly used the parlance of complexity in his corpus, there appears to be a strong sympathy towards the phenomenon in his take on post-structuralism. Lyotard, for his part, in setting out his arguments on the post-modern condition of knowledge, was ecstatic about paralogy as a defining feature, which is no different from what complexity, connectionism and distributed systems would harbor.


Even Deleuze and Guattari come close to the complexity approach through their notion of the rhizome: non-reductive, non-hierarchical, multiplicity-oriented connections in data representation and interpretation, characterized by horizontal connectivities, as contrasted with arborescent models characterized by vertical and linear determinations. These ideas are further developed by De Landa (2006), whose attempt is to define a new ontology that could be utilized by social scientists. The components that make up assemblages are characterized along two axes: the material, explicating the variable roles components might play, and the territorializing/deterritorializing, explicating the processes in which components might be involved.


Components are defined by relations of exteriority, implying that they are self-subsistent: a component never loses its identity when unplugged from one assemblage and plugged into another. The relationship between assemblages and components is nonlinear and complex, since assemblages are affected by their lower-level components but can also act back on those components, inducing adaptations in them. This is strikingly similar to the way distributed systems are modeled in principle. Why, then, has philosophy at large not registered much impact from complexity, despite French theory's affinities with it?

Chaos theory is partly to blame here, for it has twisted the way the structure of a complex system is understood. Such systems have nonlinear operational tendencies, and this has been taken to mean that meaning rests squarely on relativism. The robustness of these systems, when viewed in an illuminating manner from the French theoretical perspective, helps dispel the idea that complex systems balance on a knife's edge merely because they are nonlinearly determinable. And if the structure of the system was a problem, defining its limits and boundaries was no easy job either. What is the boundary between the system and the environment? Is it rigorously drawn and followed, or a mere theoretical choice and construct? These are valid questions, which philosophy found difficult to come to terms with, and they gained intensity with the introduction of self-organizing and/or autopoietic systems. Classical and modern philosophy either had to dismiss these ideas as chimerical or had to close off its own methods of analysis in dealing with them, and both approaches had the detrimental effect of isolating the discipline from the cultural domains in which such notions were making positive interventions and inroads. It could safely be said that French theory attempted a rescue mission, and its momentum brought some success. The major contribution of continental philosophy after the 1960s was the framing of solutions: framing, as a schema of interpretation, helped in comprehending and responding to events and enabled systems and contexts to constitute one another, thus offering a resolution of the boundary and limit issues that had plagued hitherto known philosophical doctrines.

The notion of difference, so central to modernism, was a problem that needed resolving. It was never a problem within French theory; rather, it was a tonic to be consumed alongside complexity in addressing socio-economic and political issues. Deleuze (1994), for example, in his metaphysical treatise, sought a critique of representation and a systematic inversion of the traditional metaphysical priority of identity over difference. Identities are not metaphysically or logically prior to differences; identities, in whatever category, are derived from differences. In other words, forms, categories, apperception and resemblance fail to reach difference in itself. As Deleuze (2003: 32) says,

If philosophy has a positive and direct relation to things, it is only insofar as philosophy claims to grasp the thing itself, according to what it is, in its difference from everything it is not, in other words, in its internal difference.

But Deleuzean thesis on metaphysics does make a political intervention, like when he says,

The more our daily life appears standardized, stereotyped, and subject to an accelerated reproduction of objects of consumption, the more art must be injected into it in order to extract from it that little difference which plays simultaneously between other levels of repetition, and even in order to make the two extremes resonate — namely, the habitual series of consumption and the instinctual series of destruction and death. (Deleuze 1994: 293).(1)

Tackling the complexity of the social realm head-on does not lie in extrapolating convenient generalities and then trying to fathom how finely they fit together, but rather in apprehending the relational schema of the network within which individuals emerge as subjects, objects and systems capable of grasping real things.(2)

One major criticism leveled against complexity is that it is sympathetic to relativism, just as most French theoretical thought is said to be. Whether this accusation has any substance can be measured by circular, meaningless debates like the Sokal hoax. The hoax was platitudinous to say the least, and vague at best. Why so? Sokal, in his article “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”, deployed the vocabulary of his specialized discipline to expose the wayward usage of the French theorists. This, for Sokal, was fashionable nonsense, an act of making noise. He takes the French theorists to task for their liberal use of terms like chaos, complexity, quantum, relativity, gender, difference, topology and deconstruction without any proper insight. But who is being vague in the Sokal affair, the physicist or the French theorists? Such an issue could be tackled as a question of intelligibility, and intelligibility is a result of differentiation, not a guarantee of a truth-giving process (Cilliers 2005: 262).

Being clearly communicated does not give a concept any indisputable identity. The only way such a meaning can be meaningful is through limitations being set on communication, once again an ethical choice. These limitations enable knowledge to come into existence, and this must be accepted de facto. In a metaphorical parallel with complexity, such limitations or constraints are a sine qua non for autopoiesis to make its entry. Cilliers (2005: 264) is on target when he lays down the general schema: if complexity is aligned with notions of chaos, randomness and noise, the accusations of relativism and vagueness begin to hold water; if it is aligned with the notion of structure as the result of contingent constraints, we can make claims about complex systems that are clear and comprehensible, despite the fact that the claims themselves are historically contingent.

Undoubtedly, complexity rides on modesty. But the accusations against this position only succeed in painting complexity as weak, which is itself a gross mistake. Take Derrida, as read by Sweetman (1999). Sweetman cites Derrida as the exemplary post-modernist and then attacks his works for confusing aesthetics with metaphysics, for mistakenly siding with assertion over argument in philosophy, for moral and epistemological relativism, and for being self-contradictory with a tinge of intellectual arrogance. Such accusations, though addressed by Derrida and his scholars at various times, find parallels in complexity, where the split is between proponents of mathematical certainty in dealing with the phenomenon on the one hand and proponents of metaphorical proclivities on the other. So how would relativism enter here? Being a relativist is as good as swimming in paradoxical intellectual currents, and such a position is embraced, if nothing more, for lack of a foundational basis for knowledge. The counter-argument against the relativistic reading of complexity can be framed simply: limited knowledge is not relativistic knowledge. Equating the two in any manner would only close doors on investigation.

A look at Luhmann's use of autopoiesis in social theory is warranted here, necessitated by the fact that autopoiesis was imported directly from the biological sciences, to which even Varela had objections, though he later changed intellectual tracks. Luhmann considers the omission of self-referentiality a problem in the work of the Chileans (Maturana and Varela), since for Luhmann systems are characterized by general patterns which can just be described as making a distinction and crossing the boundary of the distinction, [which] enables us to ask questions about society as a self-observing system (Hayles, Luhmann, Rasch, Knodt & Wolfe 1995). Such a reaction from Luhmann reflects his caution about any direct import from the biological and psychological sciences into descriptions of society and social theory. Reality is always distorted through the lens of perception, and this blinds humans to things-in-themselves (the Kantian noumenon). Within the analytical tradition one could frame this as a problem of language, involving oppositional thinking within the binary structure of linguistic terms themselves. What is required is an evolutionary explanation of how systems survive to the extent that they can learn to handle the inside/outside difference within the system, and within the context of their own operations, since they can never operate outside the system (Hayles, Luhmann, Rasch, Knodt & Wolfe 1995). For social theory to be effective, what requires deconstruction is the grand tautological claim of autopoiesis, the unity of the system as produced by the system itself. Luhmann tells us that a methodology undertaking such a task must do so empirically, by identifying the operations which produce and reproduce the unity of the system (Luhmann 1992).
This is a crucial point, since the classical/traditional questions regarding the problem of reference as a condition of meaning and truth turn on the distinction between subject and object. Luhmann treats these as quasi-questions and urges their replacement by the self-reference/external-reference distinction for any meaningful transformation to take effect. In his communications theory(3), he states flatly that a system depends upon “introducing the difference between system and environment into the system” as the internal split that allows it to make the distinction with which its operative procedures begin (Luhmann 1992: 1420). The self-reference/external-reference distinction is a contingent process, open to temporal forms of difference. How to define the operation that differentiates the system and organizes the difference between system and environment, while maintaining reciprocity between dependence and independence, is a question that demands resolution. The breakthrough for autopoietic systems is provided by the notion of structural coupling, which renounces the idea of overarching causality while retaining the idea of highly selective connections between systems and environments; structural coupling is what maintains the reciprocity between dependence and independence. Moreover, autopoietic systems are defined by the way they are, by their mode of being in the world, and by the way they overcome or encounter entropy in the world. In other words, autopoietic systems are self-perpetuating systems that continuously perform operational closure and thereby organize a dynamic stability.


Even if the concepts of complexity have not traveled far and wide into the discipline of philosophy, the trends are positive. Developments in the cognitive sciences and consciousness studies have far-reaching implications for the philosophy of mind, as does research in the sciences that helps redefine the very notion of life. These researches are carried out within the spectrum of complexity theory, and so there is much scope for optimism. Complexity theory is still in an embryonic stage, for it aspires to be a theory of the widest possible extent for understanding the world we inhabit. Though there are roadblocks along the way, this in no way means the end of the road for complexity, but only a beginning in a new and novel manner.

Complexity theory, as imbibed within adaptive systems, has a major role in evolutionary doctrines. To this, the phenomenon of French Theory has added creative and innovative ways of looking at philosophy, where residues of dualism and reductionism still rest and resist any challenge whatsoever. One way complexity and philosophy could come closer is for the latter to withdraw somewhat from its investigations into the how-ness of things and seriously incorporate the why-ness of them. The how-ness still seems arrested within the walls of reductionism, mechanicism, modernism and the pillars of Newtonian science. Thus an ontological reduction of all phenomena to the governance of deterministic laws remains the indelible mark, even if, epistemologically, a certain guideline of objectivity seems apparent. What is really missed in this process is creativity, since the world in particular and the universe in general are described as a mechanism following clockwork. Such a view held sway for most of the modern era, but with the scientific revolutions of the 20th century things began to look awry. Relativity theory, quantum mechanics, chaos, complexity and, more recently, string/M-theory were powerful enough in their insights to sweep away the hitherto promising and predictable scientific ventures. One look at quantum mechanics/uncertainty and chaos/non-linear dynamics was enough to dislodge predictability from science. These were followed by systems theory and cybernetics, which were instrumental in highlighting the scientific basis of holism and emergence, and in showing equally well that knowledge is intrinsically subjective. Beyond that, autopoiesis clarified that regularity and organization are not given but depend on a dynamically emergent tangle of conflicting forces and random fluctuations, a process aptly referred to by Prigogine and Stengers (1984) as “order out of chaos”.
In very insightful language, Heylighen, Cilliers and Gershenson (2007) pin their hopes on these different approaches, which are now starting to become integrated under the heading of “complexity science”. Its central paradigm is the multi-agent system: a collection of autonomous components whose local interactions give rise to a global order. Agents are intrinsically subjective and uncertain about the consequences of their actions, yet they generally manage to self-organize into an emergent, adaptive system. Thus uncertainty and subjectivity should no longer be viewed negatively, as the loss of the absolute order of mechanicism, but positively, as factors of creativity, adaptation and evolution…. Although a number of (mostly post-modern) philosophers have expressed similar sentiments, the complexity paradigm still needs to be assimilated by academic philosophy.
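The multi-agent paradigm described above can be sketched in a few lines: agents that know only their immediate neighbours nonetheless self-organize a global order no individual agent computed. The following minimal illustration is my own construction, not from the cited text, and its parameters are arbitrary:

```python
import random

def consensus(n=20, steps=200, seed=42):
    """Agents on a ring repeatedly average their opinion with their
    two neighbours; purely local interaction yields global agreement."""
    random.seed(seed)
    opinions = [random.random() for _ in range(n)]
    for _ in range(steps):
        # synchronous update: each agent sees only itself and 2 neighbours
        opinions = [
            (opinions[i - 1] + opinions[i] + opinions[(i + 1) % n]) / 3.0
            for i in range(n)
        ]
    return opinions

final = consensus()
spread = max(final) - min(final)   # near zero: emergent global order
```

No agent holds a picture of the whole ring, yet the system converges toward a shared value, which is the sense in which local interactions "give rise to a global order" in the quoted passage.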

Meeting this need would make complexity more aware of how its modeling techniques could be made robust, and would help philosophy understand and resolve some hitherto unaddressed but perennial problems.

———————————————————–

1 The political implications of such a thesis are rare, but forceful. To add to the quote above, there are other quotes that deliberate on socio-political themes, like,

“We claim that there are two ways to appeal to ‘necessary destructions’: that of the poet, who speaks in the name of a creative power, capable of overturning all orders and representations in order to affirm Difference in the state of permanent revolution which characterizes eternal return; and that of the politician, who is above all concerned to deny that which ‘differs,’ so as to conserve or prolong an established historical order.” (Deleuze 1994: 53).

and,

“Real revolutions have the atmosphere of fétes. Contradiction is not the weapon of the proletariat but, rather, the manner in which the bourgeoisie defends and preserves itself, the shadow behind which it maintains its claim to decide what the problems are.” (Deleuze 1994: 268).

2 It should, however, be noted that only in immanent philosophies of the sort Deleuze propagates can the processes of individuation be accounted for. Moreover, once such an aim is attained, regularities in the world are denied any eternal and universal validation.

3 He defines communication as “a kind of autopoetic network of operations which continually organizes what we seek, the coincidence of self-reference (utterance) and external reference (information)” (1992: 1424). He elaborates, saying,

“Communication comes about by splitting reality through a highly artificial distinction between utterance and information, both taken as contingent events within an ongoing process that recursively uses the results of previous steps and anticipates further ones”. (1992: 1424).

Bibliography

Cilliers, P. (2005) Complexity, Deconstruction and Relativism. In Theory, Culture & Society, Vol. 22 (5). pp. 255 – 267.

De Landa, M. (2006) New Philosophy of Society: Assemblage Theory and Social Complexity. London: Continuum.

Deleuze, G. (1994) Difference and Repetition. Translated by Patton, P. New York: Columbia University Press.

—————- (2003) Desert Islands and Other Texts (1953-1974). Translated by Taormina, M. Los Angeles: Semiotext(e).

Hayles, K., Luhmann, N., Rasch, W., Knodt, E. & Wolfe, C. (1995 Autumn) Theory of a Different Order: A Conversation with Katherine Hayles and Niklas Luhmann. In Cultural Critique, No. 31, The Politics of Systems and Environments, Part II. Minneapolis, MN: University of Minnesota Press.

Heylighen, F., Cilliers, P., and Gershenson, C. (2007) The Philosophy of Complexity. In Bogg, J. & Geyer, R. (eds), Complexity, Science and Society. Oxford: Radcliffe Publishing.

Luhmann, N (1992) Operational Closure and Structural Coupling: The Differentiation of the Legal System. Cardoza Law Review Vol. 13.

Lyotard, J-F. (1984) The Postmodern Condition: A Report on Knowledge. Translated by Bennington, G. & Massumi, B. Minneapolis, MN: University of Minnesota Press.

Prigogine, I. and Stengers, I. (1984) Order out of Chaos. New York: Bantam Books.

Sweetman, B. (1999) Postmodernism, Derrida and Différance: A Critique. In International Philosophical Quarterly XXXIX (1)/153. pp. 5 – 18.

Urry, J. (2003) Global Complexity. Cambridge: Polity Press.