Connectionism versus Representational Theory of Mind


Although the representational theory of mind (functionalism), with its insistence on rationalistic tendencies, shows some promise, there are objections that aim to derail it. Since the language-of-thought hypothesis (the representational theory of mind) presupposes the existence of folk psychology, opponents of folk psychology dismiss the approach outright. As was discussed in the first chapter, folk psychology is not a very successful guide to the workings of the mind. Because the representational theory explains how mental states cause behavior, it is committed to folk psychology, and therefore the most acrimonious criticisms of the approach come from the eliminative materialist camp. For the eliminative materialist, psychological states map onto neurophysiological states in the brain, and mental behavior is better explained by neurophysiology than by folk psychology, since the explanatory vistas of neurophysiology are quantitatively and qualitatively greater. Behaviorism, with its denial of any linkage between inner mental states and behavior, supplies another objection. Importantly, even Searle refuses to be part of the representational camp: his biological naturalism (discussed in the second chapter) is largely non-representational while still investing faith in the causal efficacy of mental states. Another objection is the homunculus regress, according to which the explanation of how sentences get their meanings regresses infinitely. For Searle, even if this is true, it is only partly true, since symbol manipulation takes place only at the bottom-level homunculi, below which the explanation reaches an aporia. Daniel Dennett, on the other hand, holds that no interpretation is needed at this bottom level, since the homunculi there are simple enough to need none. 
Searle is a monist, but he divides intentional states into low-level brain activity and high-level mental activity; it is the lower-level, non-representational neurophysiological processes that have causal powers in intention and behavior, rather than some higher-level mental representation. Yet another challenge comes from within the camp of representational theory itself, where cognitive-scientific research suggests that a great deal of intelligent action is generated by complex interactions involving neural, bodily, and environmental factors. This threat to representational theory(1) is prosaically worded by Wheeler and Clark, when they say,

These are hard times for the notion of internal representation. Increasingly, theorists are questioning the explanatory value of the appeal to internal representation in the search for a scientific understanding of the mind and of intelligent action. What is in dispute is not, of course, the status of certain intelligent agents as representers of their worlds…What is in doubt is not our status as representers, but the presence within us of identifiable and scientifically well-individuated vehicles of representational content. Recent work in neuroscience(2), robotics, philosophy, and developmental psychology suggests, by way of contrast, that a great deal of (what we intuitively regard as) intelligent action may be grounded not in the regimented activity of inner content-bearing vehicles, but in complex interactions involving neural, bodily, and environmental factors and forces.

There is a growing sense of skepticism about representational theory, from within the camp as well, though not as hostile as the quote above suggests. Speculations are rife that this may be due to an embrace of the continental tradition of phenomenology and of Gibsonian psychology, which could partly indicate a move away from the strictures of the rule-based approach. This critique from within the camp of internal representation comes in very handy in approaching connectionism and in prioritizing its utility as a substitute for representation. Of crucial importance here is the notion of continuous reciprocal causation, which involves multiple simultaneous interactions alongside complex dynamical feedback loops: the causal contribution of each component in the system both determines and is determined by the causal contributions of a large number of other components, and these contributions may themselves change radically over time. As the complexity of the causal interactions shoots up, the difficulty of representation's explanatory task rises with it. But the real threat to representational theory comes from connectionism, which, despite agreeing with some of the premises of representational theory, deviates greatly when it comes to creating machines that could be said to think. In a neural network, a learning algorithm modifies the weights attached to the connections, so the network changes over time. Fodor defends his language of thought, and representational theory in general, against connectionism by claiming that a neural network is just a realization of the computational theory of mind, which necessarily employs symbol manipulation; he argues this through his account of cognitive architecture. 
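The learning process mentioned above, weights modified over time by an algorithm, can be sketched in a minimal toy network. This is a hypothetical illustration only: the single-layer perceptron trained with the delta rule stands in for "a learning algorithm", and every name and parameter here is my own, not something from the text.

```python
import random

def train_perceptron(samples, epochs=50, lr=0.1, seed=0):
    """Train a single-layer perceptron: the weights are adjusted
    over time by a simple error-driven rule (the delta rule)."""
    rng = random.Random(seed)
    n = len(samples[0][0])
    weights = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Activation: weighted sum passed through a hard threshold.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # Weight modification over time: the "learning" in the network.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Example: learn logical OR, a linearly separable mapping.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

Nothing in this sketch is a symbol in Fodor's sense: the "knowledge" acquired is distributed across the numerical weights, which is exactly the point at issue between the two camps.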
Connectionists, however, deny that connectionism is a mere implementation of representational theory; they further claim that the systematicity of thought does not rest on representation as a law of nature, and, most importantly, they reject Fodor's thesis that cognition is essentially a function from representational input to representational output, in favor of eliminative connectionism. Much of the current debate between the two camps revolves around the connectionists' denial that connectionism merely implements a classical cognitive architecture, which is a truism for the classicist model. One connectionist response is to build a representational system that accepts mental representations as the direct objects of propositional attitudes, possessing a combinatorial syntax and semantics, with mental processes causally sensitive to the syntactic/formal structure of representations so defined, but relying on a non-concatenative realization of the syntactic/structural complexity of those representations, which yields a non-classical system.



(1) As an aside, on the threat to representational theory: I see a strong parallel between the threat to internal representation and Object-Oriented Ontology (in L. Bryant's flavor), where objects are treated as black boxes and hence withdrawn, implying that no one, ourselves included, can claim direct access to their inner worlds, thereby shutting the inner world off from any kind of representation. Though it sounds paradoxical, it is precisely because of withdrawal that knowledge of objects becomes thoroughly relational. When we know an object, we do not know it per se, but through a relation; knowledge thus shifts from the register of representation to that of performance, meeting its fellow travelers in Deleuze and Guattari, who defend a performative ontology of the world à la Andrew Pickering by claiming that what a language does is more interesting than what it represents.

(2) Not part of the quotation, but a brief note on recent work on neurons, which are prone to throwing up surprises. Neurons are getting complicated, but the basic functional picture still holds: synapses transmit electrical signals to the dendrites and the cell body (input), while the axon carries signals away (output). The surprise, in a finding by scientists at Northwestern University, is that axons can act as input agents too. Put another way: axons talk with one another. Before sending signals in reverse, axons carry out their own neural computations without aid from the cell body or dendrites. This contrasts with typical neuronal communication, in which the axon of one neuron contacts another neuron's cell body or dendrite, not its axon. The computations in axons are slower than those in dendrites by a factor of about 10³, potentially giving neurons a means to compute fast things in dendrites and slow things in axons. Nelson Spruston, senior author of the paper ("Slow Integration Leads to Persistent Action Potential Firing in Distal Axons of Coupled Interneurons") and professor of neurobiology and physiology in the Weinberg College of Arts and Sciences, says,

“We have discovered a number of things fundamental to how neurons work that are contrary to the information you find in neuroscience textbooks. Signals can travel from the end of the axon toward the cell body, when it typically is the other way around. We were amazed to see this.”

He and his colleagues first discovered that individual nerve cells can fire off signals even in the absence of electrical stimulation in the cell body or dendrites. It is not always stimulus in, immediate action potential out. (Action potentials are the fundamental electrical signaling elements used by neurons; they are very brief changes in the membrane voltage of the neuron.) Much as our working memory holds a telephone number for later use, the nerve cell can store and integrate stimuli over a long period, from tens of seconds to minutes. (That is a very long time for neurons.) Then, when the neuron reaches a threshold, it fires off a long series of signals, or action potentials, even in the absence of stimuli. The researchers call this persistent firing, and it all seems to be happening in the axon. Spruston further says,

“The axons are talking to each other, but it’s a complete mystery as to how it works. The next big question is: how widespread is this behavior? Is this an oddity or does it happen in lots of neurons? We don’t think it’s rare, so it’s important for us to understand under what conditions it occurs and how this happens.”
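The slow integration and persistent firing described above can be caricatured in a toy leaky-integrator model. This is entirely hypothetical: the threshold, leak rate, and burst length are illustrative parameters of my own, not quantities from the paper; the sketch only shows the qualitative pattern of sub-threshold accumulation followed by firing that outlasts the stimulus.

```python
def simulate_axon(stimuli, threshold=5.0, leak=0.99, burst_length=10):
    """Toy model of slow integration and persistent firing.

    The unit accumulates sub-threshold stimuli over a long window
    (a slow leak), and once the accumulated value crosses threshold
    it emits a sustained burst of action potentials, continuing even
    after stimulation has stopped.
    """
    accumulated = 0.0
    spikes = []
    firing = 0  # spikes remaining in the current persistent burst
    for s in stimuli:
        accumulated = accumulated * leak + s
        if firing == 0 and accumulated >= threshold:
            firing = burst_length  # trigger persistent firing
            accumulated = 0.0      # reset the integrator
        if firing > 0:
            spikes.append(1)
            firing -= 1
        else:
            spikes.append(0)
    return spikes
```

With these parameters, six unit stimuli are enough to cross the threshold, after which the model keeps firing for ten steps with no further input at all, a crude analogue of "stimulus stored now, action potentials later".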



