Sellarsian Intentionality. Thought of the Day 59.0


Sellars developed a theory of intentionality that seems calculated to construe intentional phenomena so as to make them compatible with developments in the sciences.

Now if thoughts are items which are conceived in terms of the roles they play, then there is no barrier in principle to the identification of conceptual thinking with neurophysiological process. There would be no “qualitative” remainder to be accounted for. The identification, curiously enough, would be even more straightforward than the identification of the physical things in the manifest image with complex systems of physical particles. And in this key, if not decisive, respect, the respect in which both images are concerned with conceptual thinking (which is the distinctive trait of man), the manifest and scientific images could merge without clash in the synoptic view. (Philosophy and the Scientific Image of Man).

The first thing to notice is that Sellars maintains that intentionality is irreducible in the sense that we cannot define, in any of the vocabularies of the natural sciences, concepts equivalent to the concepts of intentionality. The language of intentionality is introduced as an autonomous explanatory vocabulary tied, of course, to the vocabulary of empirical behavior, but not reducible to that language. The autonomy of mentalistic discourse surely commits us to a new ideology, a new set of basic predicates, above and beyond what can be constructed in the vocabularies of the natural sciences. What we get from the sciences can be the whole truth about the world, including intentional phenomena, then, only if there is some way to construct, using proper scientific methodology, concepts in the scientific image that are legitimate successors to the concepts of intentionality present in the manifest image. That there is such a rigorous construction of successors to the concepts of intentionality is a clear commitment on Sellars’s part. The only real alternative is some form of eliminativism, an alternative that some of his students adopted and some of his critics thought Sellars was committed to, but which never held any real attraction for Sellars.

The second thing to notice is that the concepts of intentionality, especially the concepts of agency, differ in some significant ways from the normal concepts of the natural sciences. Sellars puts it this way:

To say that a certain person desired to do A, thought it his duty to do B but was forced to do C, is not to describe him as one might describe a scientific specimen. One does, indeed, describe him, but one does something more. And it is this something more which is the irreducible core of the framework of persons.

Here the focus is explicitly on the language of agency, but the point is fundamentally the same as in Sellars’s well-known dictum from Empiricism and the Philosophy of Mind:

in characterizing an episode or a state as that of knowing, we are not giving an empirical description of that episode or state; we are placing it in the logical space of reasons, of justifying and being able to justify what one says.

In both epistemic and agential language something extra-descriptive is going on. In order to accommodate this important aspect of such phenomena, Sellars tells us, we must add to the purely descriptive/explanatory vocabulary of the sciences “the language of individual and community intentions”. He points to intentions here because the point is that epistemic and agential language – mentalistic language in general – is ineluctably normative; it always contains a prescriptive, action-oriented dimension and engages in direct or indirect assessment against normative standards. In Sellars’s own theory, norms are grounded in the structure of intentions, particularly community intentions, so any truly complete image must contain the language of intentions.

HumanaMente 


Financial Entanglement and Complexity Theory. An Adumbration on Financial Crisis.


The complex system approach in finance could be described through the concept of entanglement. The concept of entanglement bears the same features as the definition of a complex system given by a group of physicists working in the field of finance (Stanley et al.). As they defined it, in a complex system everything depends upon everything else. The notion of entanglement is thus a statement acknowledging the interdependence of all the counterparties in financial markets, including financial and non-financial corporations, the government and the central bank. How to identify entanglement empirically? Stanley et al. formulated the process of scientific study in finance as a search for patterns. Such a search, going on under the auspices of “econophysics”, could exemplify a thorough analysis of a complex and unstructured assemblage of actual data, finalized in the discovery and experimental validation of an appropriate pattern. At the other end of the spectrum, some patterns underlying the actual processes might be discovered by synthesizing a vast amount of historical and anecdotal information through appropriate reasoning and logical deliberation. The Austrian School of economic thought, which in its extreme form rejects the application of any formalized systems or modeling of any kind, could be viewed as an example. A logical question follows from this comparison: does there exist any intermediate way of searching for regular patterns in finance and economics?

Importantly, patterns could be discovered by developing rather simple models of money and debt interrelationships. Debt cycles were studied extensively by many schools of economic thought (Akerlof, George A. and Shiller, Robert J., Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism). The modern financial system worked by spreading risk, promoting economic efficiency and providing cheap capital. It had been formed over the years as bull markets in shares and bonds, originating in the early 1990s, were propelled by an abundance of money, falling interest rates and new information technology. Financial markets, by combining debt and derivatives, could originate and distribute huge quantities of risky structured products and sell them to different investors. Meanwhile, financial-sector debt, only a tenth of the size of non-financial-sector debt in 1980, became half as big by the beginning of the credit crunch in 2007. As liquidity grew, banks could buy more assets, borrow more against them, and enjoy their rising value. By 2007 financial services were making 40% of America’s corporate profits while employing only 5% of its private-sector workers. Thanks to cheap money, banks took on more debt and, by designing complex structured products, were able to make their investments more profitable and more risky. Securitization, by facilitating the emergence of the “shadow banking” system, simultaneously fomented bubbles in different segments of the global financial market.

Yet over the past decade this system, or a big part of it, began to lose touch with its ultimate purpose: to reallocate scarce resources in accordance with social priorities. Instead of writing, managing and trading claims on future cashflows for the rest of the economy, finance became increasingly a game of fees and speculation. Due to disastrously lax regulation, investment banks did not lay aside enough capital in case something went wrong, and, as the crisis began in the middle of 2007, credit markets started to freeze up. The strains first surfaced in late 2007, when Northern Rock, a British mortgage lender, experienced a bank run that started in the money markets. Then, after the spectacular Lehman Brothers disaster in September 2008, the laminar flows of financial activity came to an end. Banks began to suffer losses on their holdings of toxic securities and became reluctant to lend to one another, which led to shortages of funding across the system. All of a sudden, liquidity was in short supply, debt was unwound, and investors were forced to sell and write down assets. For several years afterwards, market counterparties no longer trusted each other. As Walter Bagehot, an authority on bank runs, once wrote:

Every banker knows that if he has to prove that he is worthy of credit, however good may be his arguments, in fact his credit is gone.

In an entangled financial system, his axiom extends to the whole market. And that means, precisely, financial meltdown: the crisis. The most fascinating feature of the post-crisis era in financial markets was the continuation of ubiquitous liquidity expansion. To fight the market squeeze, all the major central banks greatly expanded their balance sheets, which rose roughly from about 10 percent to 25–30 percent of GDP for the respective economies. For several years after the credit crunch of 2007–09, central banks bought trillions of dollars of toxic and government debt, thus increasing money issuance without any precedent in modern history. Paradoxically, this enormous credit expansion, though accelerating for several years, has been accompanied by a stagnating and depressed real economy. Yet, until now, central bankers have been worried mainly about downside risks and the threat of price deflation. Otherwise, the hectic financial activity that goes on alongside unbounded credit expansion could be transformed, through herding, into an autocatalytic process that, subject to the accumulation of new debt, might drive the entire system toward total collapse. From a financial point of view, such a systemic collapse appears to be the natural result of unbounded credit expansion ‘supported’ by zero real resources. Since the wealth of investors as a whole becomes nothing but fool’s gold, the financial process becomes singular, and the entire system collapses. In particular, three phases of investors’ behavior – hedge finance, speculative finance, and Ponzi finance – can easily be identified as a sequence of sub-cycles unwinding ultimately into total collapse.
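The three financing postures just mentioned can be illustrated with a minimal cash-flow classification. The sketch below is a toy under assumed, simplified definitions, not a model from the literature: income covering interest and principal marks a hedge unit, covering interest alone a speculative unit, and covering neither a Ponzi unit; debt is assumed to grow through unbounded credit expansion while income stagnates.

```python
# Toy sketch of the three financing postures (hedge, speculative, Ponzi).
# The thresholds and growth numbers are illustrative assumptions.

def classify(income, interest_due, principal_due):
    """Classify a borrower by whether expected income covers debt service."""
    if income >= interest_due + principal_due:
        return "hedge"        # income services both interest and principal
    if income >= interest_due:
        return "speculative"  # income services interest; principal is rolled over
    return "ponzi"            # even interest must be paid with new borrowing

# Stylized cycle: income stagnates while credit expands without bound.
income, debt, rate, credit_growth = 100.0, 500.0, 0.08, 1.3
postures = []
for year in range(5):
    postures.append(classify(income, debt * rate, principal_due=debt * 0.1))
    debt *= credit_growth     # new debt piled on old

print(postures)  # drifts from hedge through speculative to ponzi
```

The drift from hedge to Ponzi finance here is driven purely by compounding debt against flat income, which is the sense in which the sequence of sub-cycles unwinds into collapse.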

Emergentic Philosophy or Defining Complexity


If the potential of emergence is not pregnant with what emerges from it, then emergence becomes mere gobbledygook, a generally unintelligible mass of abstraction and obscurity. What is this differentiation all about? The origin of differentiation is to be located in what has already been actualized. Thus, potential is not only abstract, but relative: abstract, since potential could come to mean a host of things other than what it is meant for, and relative, since it is dependent on the intertwinings within which it could unfold. Potentiality is creative for philosophy through an expansive notion of unity, through assemblages of multiple singularities that help dislodge anthropocentric worldviews insisting on a rationale of the world as a solid and stable structure. A way out is to think in terms of liquid structures, where the power to self-organize, untouched by any static human control, allows for an existence at the edge of creative and flowing chaos. Such a position is tangible in history as a confluence of infinite variations, and rooted in a revived form of materialism. Emergence is a diachronic construction of functional structures in complex systems, attaining a synchronic coherence of systemic behavior while arresting the individual components’ behavior – a construction crucial in its ramifications for burning questions in the philosophy of science, especially those concerning reductionism. Complexity investigates emergent properties, certain regularities of behavior that somehow transcend the ingredients that make them up. Complexity argues against reductionism, against reducing the whole to the parts. And in doing so, it transforms scientific understanding of far-from-equilibrium structures, of irreversible times and of non-Euclidean spaces.

Simulations of Representations: Rational Calculus versus Empirical Weights

While modeling a complex system, it should never be taken for granted that the models somehow simplify the system, for that would only strip the models of the capability to account for the encoding, decoding, and retaining of information that are sine qua non for the environment they plan to model, and the environment these models find themselves embedded in. Now that the traditional problems of representation are fraught with loopholes, there needs to be a way out of this quandary, if the modeling of complex systems is not to be tainted by the traces of these traditional notions of representation. The employment of post-structuralist theories is surely indicative of getting rid of the symptoms, since they score over the analytical tradition, where representation is only an analogue of the thing represented, whereas simulation, with its affinity to French theory, is conducive to a distributed and holistic analogy. Any argument against representation is not to be taken as anti-scientific, since it is merely an argument against a particular scientific methodology and/or strategy that assumes complexity to be reducible, and therefore implementable or representable in a machine. The argument takes force only as an appreciation for the nature of complexity, something that could perhaps be repeated in a machine, should the machine itself be complex enough to cope with the distributed character of complexity. Representation is a state that stands in for some other state, and hence is “essentially” about meaning. The language and thought incorporated in understanding the world we are embedded in are efficacious only if representation relates to the world, and therefore “relationship” is another pillar of representation. Unless a relationship relates the two, one gets only an abstracted version of the so-called identities in themselves, with no explanatory discourse.
In the world of complexity, such identity-based abstractions lose their essence, for modeling takes over the onus of explanation, and it is therefore, without doubt, the establishment of these relations between states of representation that takes high priority. Representation holds a central value in both formal systems and in neural networks or connectionism, where the former is characterized by a rational calculus, and the latter by patterns that operate over the network, lending it a more empirical weight.


Let logic programming be the starting point for deliberations here. The idea behind it is to apply mathematical logic successfully to computer programming. When logic is used in this way, it is used as a declarative representational language; declarative because the logic of computation is expressed without accounting for the flow of control. In other words, within this language, the question is centered around what-ness rather than how-ness. Declarative representation has a counterpart in procedural representation, where the onus is on procedures, functions, routines and methods. Procedural representation is more algorithmic in nature, as it depends upon following steps to carry out computation; here the question is centered around how-ness. But logic programming as commonly understood cannot do without both of them becoming part of the programming language at the same time. Since both are required, propositional logic, which deals primarily with declarative representational languages, would not suffice on its own; what is required is a logic that touches upon predicates as well. This is made possible by first-order predicate logic, which distinguishes itself from propositional logic by its use of quantifiers(1). Predicate logic thus finds its applications suited for the deductive apparatus of formal systems, where axioms and rules of inference are instrumental in deriving the theorems that guide these systems. This setup is too formal in character and thus calls for a connectionist approach, since the latter is simply not keen to have predicate logic operate over the deductive apparatus of a formal system at its party.
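The what-ness/how-ness contrast can be made concrete in a few lines. The sketch below (the names and the family relation are invented for illustration) states an ancestor rule declaratively and saturates it to a fixpoint, then answers the same query procedurally with explicit control flow; Python’s `all` and `any` also play the role of the quantifiers of note (1) over a finite domain of discourse.

```python
# Declarative vs. procedural representation, sketched with an invented
# parent/ancestor example.

# Facts, stated without control flow (the what-ness):
parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dan")}

def ancestors(parent):
    """Saturate the rules
         ancestor(X, Y) :- parent(X, Y).
         ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
       by forward chaining to a fixpoint."""
    anc = set(parent)
    while True:
        new = {(x, z) for (x, y) in parent for (y2, z) in anc if y == y2}
        if new <= anc:
            return anc
        anc |= new

# The same query answered procedurally (the how-ness): explicit steps.
def is_ancestor(parent, x, z):
    frontier = [y for (p, y) in parent if p == x]
    while frontier:
        y = frontier.pop()
        if y == z:
            return True
        frontier.extend(c for (p, c) in parent if p == y)
    return False

# Quantifiers over a finite domain of discourse (note 1):
# for all X occurring as a parent, there exists Y with ancestor(X, Y).
anc = ancestors(parent)
people = {p for pair in parent for p in pair}
every_parent_is_an_ancestor = all(
    any((x, y) in anc for y in people) for (x, _) in parent
)
```

The declarative version never says in what order the rule fires; the procedural version is nothing but order.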

If brain and language (natural language, not computer languages, which are more rule-based and hence stricter) as complex systems could be shown to have circumvented representationism via modeling techniques, the classical issues inherent in representation would be gotten rid of as a problematic. Functionalism, as the prevalent theory in philosophy of mind that parallels the computational model, is the target here. In the words of Putnam,

I may have been the first philosopher to advance the thesis that the computer is the right model for mind. I gave my form of this doctrine the name ‘functionalism’, and under this name, it has become the dominant view – some say the orthodoxy – in contemporary philosophy of mind.

The computer metaphor for mind is clearly visible here: the former has a hardware apparatus that is operated upon by software programs, while the latter shares the same relation between brain (hardware) and mind (software). So far, so good, but there is a hitch. Like the computer side of the metaphor, where the same software can be loaded onto different hardware, provided the hardware possesses enough computational capability, the mind–brain relationship should meet the same criteria as well. If one goes by what Sterelny has hinted for functionalism – a certain physical state of the machine realizing a certain functional state – then a couple of descriptions result, mutually exclusive of one another, viz., a description on the physical level, and a description on the mental level. The consequences of such descriptions are bizarre to the extent that mind as software can also find its implementation on any other hardware, provided the conditions for the hardware’s capability to run the software are met. One could hardly argue against these consequences, which follow logically enough from the premises, but a couple of blocks are not to be ignored at the same time, viz., the adequacy of physical systems to implement the functional states, and what defines the relationship between these two mutually exclusive descriptions in the context of the same physical system. Sterelny comes up with a couple of criteria for adequate physical systems: that they be designed, and that they be teleological. Rather than provide any support for what he means by systems as designed, he comes up with evolutionary tendencies, thus avoiding the need for an external designer. The second criterion gets disturbing if no description is made, and this is precisely what Sterelny never offers. His citation of a bucket of water as not having a telos in the sense that a brain has one only makes matters slide into metaphysics.
Even otherwise, functionalism as an account of the nature of mental states is metaphysical and ontological in import. This claim gets all the more highlighted if one believes, following Brentano, that intentionality is the mark of the mental: then any theory of intentionality can be converted into a theory of the ontological nature of psychological states. Getting back to Sterelny’s second description, functional states attain meaning if they stand for something else; hence functionalism gets representational. And as Paul Cilliers cogently puts it, the grammatical structure of language represents semantic content, and the neurological states of the brain represent certain mental states, thus underlining representation’s responsibility for establishing a link between the states of the system and conceptual meaning. This is again echoed in Sterelny,

There can be no informational sensitivity without representation. There can be no flexible and adaptive response to the world without representation. To learn about the world, and to use what we learn to act in new ways, we must be able to represent the world, our goals and options. Furthermore we must make appropriate inferences from these representations.

As representation is essentially about meaning, two levels are to be related with one another for any meaning to be possible. In formal systems, or the rule-based approach, these relations are provided by creating a nexus between a “symbol” and what it “symbolizes”. This fundamental linkage is offered by Fodor in his 1975 book, The Language of Thought. The main thesis of the book is that cognition and cognitive processes are remotely plausible only when computationally expressed in terms of representational systems. The language, in possession of its own syntactic and semantic structures, and independent of any medium, exhibits a causal effect on mental representations. Such a language is termed by him “mentalese”, which is implemented in the neural structure (a case in point for internal representation(2)), and which, through permutations, allows complex thoughts to be built up from simpler ones. The underlying hypothesis states that such a language applies to thoughts having propositional content, implying that thoughts have syntax. In order for complex thoughts to be generated, simple concepts are attached to the most basic linguistic tokens, which combine following rules of logic (combinatorial rules). The language thus enriched is not only productive, in that sentences can (potentially) grow longer without altering their meaning (concatenation), but also structured, in that rules of grammar allow us to make inferences about linguistic elements previously unrelated. Once this task is accomplished, the representational theory of thought steps in to explicate the essence of tokens and how they behave and relate. The representational theory of thought validates mental representations, each standing in uniquely for a subject of representation and having a specific content of its own, to allow for causally generated complex thought.

This model requires an agent to represent the world as it is and as it might be, and to draw appropriate inferences from that representation. Fodor argues that the agent must have a language-like symbol system, since she can represent indefinitely many and indefinitely complex actual and possible states of her environment. She could not have this capacity without an appropriate means of representation, a language of thought. Mentalese is thus too rationalist in its approach, and hence in opposition to neural networks or connectionism. As there can be no possible cognitive processes without mental representations, the theory has many takers(3). One line of thought that supports this approach is the plausibility of psychological models that treat cognitive processes as representational, thereby inviting computation over them.
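The productivity and systematicity claimed for mentalese can be mimicked in a toy symbol system. The sketch below uses invented tokens and rules, purely to make the combinatorial point: a finite stock of atoms plus recursive combination yields indefinitely many, indefinitely complex “thoughts”, and whoever can token loves(john, mary) can token loves(mary, john).

```python
# Toy combinatorial symbol system (invented tokens); illustrates the
# productivity and systematicity attributed to a language of thought.

ATOMS = {"john", "mary", "loves", "fears"}

def combine(pred, subj, obj):
    """A combinatorial rule: build a complex token from simpler ones."""
    return (pred, subj, obj)

def negate(thought):
    """Another rule; applies to atoms and complex thoughts alike."""
    return ("not", thought)

t1 = combine("loves", "john", "mary")
t2 = negate(negate(t1))               # productivity: unbounded nesting
t3 = combine("loves", "mary", "john") # systematicity: recombined constituents

def depth(thought):
    """Structural depth of a thought: complexity grows without new atoms."""
    return 1 + max((depth(part) for part in thought
                    if isinstance(part, tuple)), default=0)
```

Nothing here settles whether the mind works this way; the point is only that finite means plus combinatorial rules suffice for an unbounded space of structured representations.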

(1) Quantifier is an operator that binds a variable over a domain of discourse. The domain of discourse in turn specifies the range of these relevant variables.

(2) Internal representation helps us visualize our movements in the world and our embeddedness in the world. Internal representation takes it for granted that organisms inherently have such an attribute to have any cognition whatsoever. The plus point as in the work of Fodor is the absence of any other theory that successfully negotiates or challenges the very inherent-ness of internal representation.

(3) Tim Crane is a notable figure here. Crane returns to the question of why we should believe that the vehicle of mental representation is a language. While he agrees with Fodor’s conclusion, his method of reaching it is very different: his argument rests on reason, our ability as humans to reach a rational decision from the information given. Association of ideas leads to other ideas which have a connection only for the thinker. Fodor agrees that free association goes on, but holds that it proceeds in a systematic, rational way that can be shown to work with the Language of Thought theory: looked at in a computational manner, free association follows a certain manner that can be broken down and explained with the Language of Thought.
