Austrian School of Economics: The Praxeological Synthetic. Thought of the Day 135.0


Within Austrian economics, the a priori stance has dominated a tradition running from Carl Menger to Murray Rothbard. The idea is that the basic structures of economy are entrenched in the more basic structures of human action as such. Nowhere is this more evident than in the work of Ludwig von Mises: his so-called ‘praxeology’ rests on the fundamental axiom that individual human beings act, that is, on the primordial fact that individuals engage in conscious actions toward chosen goals, and it is built from the idea that all basic laws of economy can be derived a priori from one premiss: the concept of human action. Of course, this concept is no simple concept, containing within itself purpose, product, time, scarcity of resources, etc. – so it would be fairer to say that economics lies as the implication of the basic schema of human action as such.

Even if the Austrian economists’ conception of the a priori is decidedly objectivist and anti-subjectivist, it is important to remark on their insistence on subjectivity within their ontological domain. The Austrian economic tradition is famous precisely for its emphasis on the role of subjectivity in economy. From Carl Menger onwards, the Austrians protest against the mainstream economic assumption that the economic agent in the market is fully rational, knows his own preferences in detail, has constant preferences over time, has access to all prices for a given commodity at a given moment, etc. Thus, von Mises’ famous criticism of socialist planned economy is built on this idea: the system of ever-changing prices in the market constitutes a dispersed knowledge about the conditions of resource allocation which it is a priori impossible for any single agent – let alone any central planner’s office – to possess. Thus, their conception of the objective a priori laws of the economic domain had the perhaps surprising implication that they warned against any overly objectivist conception of economy that fails to take into account the limits of economic rationality stemming from the general limitations of the capacities of real subjects. Their ensuing liberalism is thus built on a priori conclusions about the relative unpredictability of economics, founded on the role played by subjective intentionality. For the same reason, Hayek ended up with a distinction between simple and complex processes, respectively, cutting across all empirical disciplines, where only the former permit precise, predictive, quantitative calculi based on mathematical modeling, while the latter permit only the recognition of patterns (which may also be mathematically modeled, to be sure, but without quantitative predictability). It is of paramount importance, though, to distinguish this emphasis on the ineradicable role of subjectivity in certain regional domains from Kantian-like ideas about the foundational role of subjectivity in the construction of knowledge as such. The Austrians are as much subjectivists in the former respect as they are objectivists in the latter.

In the history of economics, the Austrians occupy a middle position, being against historicism on the one hand and against positivism on the other. Against the former, they insist that the a priori structures of economy transcend history, which does not possess the power to form institutions at random but only as constrained by a priori structures. And against the latter, they insist that the mere accumulation of empirical data subject to induction will never in itself give rise to the formation of theoretical insights. Structures of intelligible concepts are in all cases necessary for any understanding of empirical regularities – in this respect, the Austrian a priori approach is tantamount to a non-skepticist version of the doctrine of the ‘theory-ladenness’ of observations.

A late descendant of the Austrian tradition after its emigration to the Anglo-Saxon world (von Mises, Hayek, and Schumpeter were such émigrés) was the anarcho-liberal economist Murray Rothbard, and it is the inspiration from him that allows Barry Smith to articulate the principles underlying the Austrians as ‘fallibilistic apriorism’. Rothbard characterizes in a brief paper what he calls ‘Extreme Apriorism’ as follows:

there are two basic differences between the positivists’ model science of physics on the one hand, and sciences dealing with human actions on the other: the former permits experimental verification of consequences of hypotheses, which the latter do not (or, only to a limited degree, we may add); the former admits of no possibility of testing the premisses of hypotheses (like: what is gravity?), while the latter permits a rational investigation of the premisses of hypotheses (like: what is human action?). This state of affairs makes it possible for economics to derive its basic laws with absolute – a priori – certainty: in addition to the fundamental axiom – the existence of human action – only two empirical postulates are needed: ‘(1) the most fundamental variety of resources, both natural and human. From this follows directly the division of labor, the market, etc.; (2) less important, that leisure is a consumer good’. On this basis, it may e.g. be inferred, ‘that every firm aims always at maximizing its psychic profit’.

Rothbard puts forward this example in order to counter traditional economists who would claim that the following proposition could be added as a corollary: ‘that every firm aims always at maximizing its money profit’. This cannot be inferred and is, according to Rothbard, an economic prejudice – the manager may, e.g., prefer for nepotistic reasons to employ his stupid brother even if that decreases the firm’s financial profit possibilities. This is an example of how the Austrians refute the basic premiss of absolute rationality in terms of maximal profit seeking. Given this basis, other immediate implications are:

the means-ends relationship, the time-structure of production, time-preference, the law of diminishing marginal utility, the law of optimum returns, etc.

Rothbard quotes Mises as seeing the fundamental axiom as a ‘Law of Thought’; he himself regards this as a much too Kantian way of expressing it and prefers instead the simple Aristotelian/Thomist idea of a ‘Law of Reality’. Rothbard furthermore insists that this doctrine is not inherently political – in order to attain the Austrians’ average liberalist political orientation, a preference for certain types of ends must be added to the a priori theory (such as the preference for life over death, abundance over poverty, etc.). This also displays the radicality of the Austrian approach: nothing is assumed about the content of human ends – this is why they will never subscribe to theories about Man as an economically rational agent or Man as a necessarily economic egotist. All different ends meet and compete on the market – including the desire for profit at one end and idealist, utopian, or altruist goals at the other. The principal interest in these features of economic theory is the high degree of awareness of the difference between the – extreme – synthetic a priori theory developed, on the one hand, and its incarnation in concrete empirical cases and their limiting conditions, on the other.

 


Metaphysical Continuity in Peirce. Thought of the Day 122.0


Continuity has wide implications in the different parts of Peirce’s architectonics of theories. Time and time again, Peirce refers to his ‘principle of continuity’, which has nothing immediately to do with Poncelet’s famous principle of the same name in geometry but is, rather, a metaphysical implication taken to follow from fallibilism: if all more or less distinct phenomena swim in a vague sea of continuity, then it is no wonder that fallibilism must be accepted. And if the world is basically continuous, we should not expect conceptual borders to be definitive but should rather conceive of terminological distinctions as relative to an underlying, monist continuity. In this system, mathematics is the first science. Thereafter follows philosophy, which is distinguished from purely hypothetical mathematics by having an empirical basis. Philosophy, in turn, has three parts: phenomenology, the normative sciences, and metaphysics. The first investigates solely ‘the Phaneron’, which is all that could be imagined to appear as an object for experience: ‘by the word phaneron I mean the collective total of all that is in any way or in any sense present to the mind, quite regardless whether it corresponds to any real thing or not.’ (Charles Sanders Peirce – Collected Papers of Charles Sanders Peirce) As is evident, this definition of Peirce’s ‘phenomenology’ is parallel to Husserl’s phenomenological reduction in bracketing the issue of the existence of the phenomenon in question. Even if it is thus built on introspection and general experience, it is – analogous to Husserl and other Brentano disciples – at the same time conceived in a completely antipsychological manner: ‘It religiously abstains from all speculation as to any relations between its categories and physiological facts, cerebral or other.’ and ‘I abstain from psychology which has nothing to do with ideoscopy.’ (Letter to Lady Welby).

The normative sciences fall into three: aesthetics, ethics, and logic, in that order (and hence in decreasing generality), among which Peirce does not spend very much time on the former two. Aesthetics is the investigation of which possible goals it is possible to aim at (the Good, Truth, Beauty, etc.), and ethics of how they may be reached. Logic is concerned with the grasping and conservation of Truth and takes up the larger part of Peirce’s interest among the normative sciences. As it deals with how truth can be obtained by means of signs, it is also called semiotics (‘logic is formal semiotics’), which is thus coextensive with the theory of science – logic in this broad sense contains all parts of the philosophy of science, including contexts of discovery as well as contexts of justification. Semiotics has, in turn, three branches: grammatica speculativa (or stekheiotics), critical logic, and methodeutic (inspired by the mediaeval trivium: grammar, logic, and rhetoric). The middle one of these three lies closest to our days’ conception of logic; it is concerned with the formal conditions for truth in symbols – that is, propositions, arguments, their validity and how to calculate them, including Peirce’s many developments of the logic of his time: quantifiers, the logic of relations, abduction, deduction, and induction, logical notation systems, etc. All of these, however, presuppose the existence of simple signs, which are investigated by what is often seen as semiotics proper, the grammatica speculativa; it may also be called formal grammar.
It investigates the formal conditions for symbols having meaning, and it is here we find Peirce’s definition of signs and his trichotomies of different types of sign aspects. Methodeutic, or formal rhetoric, on the other hand, concerns the pragmatic use of the former two branches, that is, the study of how to use logic in a fertile way in research, the formal conditions for the ‘power’ of symbols, that is, their reference to their interpretants; here can be found, e.g., Peirce’s famous definitions of pragmati(ci)sm and his directions for scientific investigation. To phenomenology – again in analogy to Husserl – logic adds the interest in signs and their truth. After logic, metaphysics follows in Peirce’s system, concerning the inventory of existing objects, conceived in general – and strongly influenced by logic, in the Kantian tradition of seeing metaphysics as mirroring logic. Also here, Peirce has several proposals for subtypologies, even if none of them seems stable, and under this heading classical metaphysical issues mix freely with generalizations of scientific results and cosmological speculations.

Peirce himself saw this classification in an almost sociological manner, so that the criteria of distinction do not stem directly from the natural kinds of the objects implied, but from which groups of persons study which objects: ‘the only natural lines of demarcation between nearly related sciences are the divisions between the social groups of devotees of those sciences’. Science collects scientists into bundles because they are defined by their causa finalis, a teleological intention demanding of them that they solve a central problem.

Measured against this definition, one has to say that Peirce himself was not modest: not only does he continuously transgress such boundaries in his production, he frequently does so even within the scope of single papers. There is always, in his writings, only a short distance from mathematics to metaphysics – or between any other two issues in mathematics and philosophy – and this implies, first, that the investigation of continuity and generality in Peirce’s system is more systematic than any actually existing exposition of these issues in Peirce’s texts, and second, that the discussion must constantly rely on cross-references. This has a structural motivation: as soon as you are below the level of mathematics in Peirce’s system (inspired by the Comtean system), each single science receives determinations from three different directions, each science consisting of material and formal aspects alike. First, it receives formal directives ‘from above’, from those more general sciences which stand above it, providing the general frameworks in which it must unfold. Second, it receives material determinations from its own object, requiring it to make certain choices in its use of formal insights from the higher sciences. The cosmological issue of the character of empirical space, for instance, can take from mathematics the different (non-)Euclidean geometries and investigate which of these are fit to describe the spatial aspects of our universe, but cosmology does not, in itself, provide the formal tools. Finally, the single sciences receive in practice determinations ‘from below’, from more specific sciences, when their results, by means of abstraction, prescission, induction, and other procedures, provide insights on their more general, material level. Even if cosmology is, for instance, part of metaphysics, it receives influences from the empirical results of physics (or of biology, from where Peirce takes the generalized principle of evolution). The distinction between formal and material is thus level-specific: what is material on one level is a formal bundle of possibilities for the level below; what is formal on one level is material on the level above.

For these reasons, each single step on the ladder of sciences is only partially independent in Peirce, hence also the tendency of his own investigations to zigzag between the levels. His architecture of theories thus forms a sort of phenomenological theory of aspects: the hierarchy of sciences is an architecture of more and less general aspects of the phenomena, not of completely independent domains. Finally, Peirce’s realism results in a somewhat disturbing style of thinking: many of his central concepts receive many, often highly different, determinations, which has often led interpreters to assume inconsistencies or theoretical developments in Peirce where none necessarily exist. When Peirce, for instance, determines the icon as the sign possessing a similarity to its object, and elsewhere determines it as the sign by the contemplation of which it is possible to learn more about its object, these are not conflicting definitions. Peirce’s determinations of concepts are rarely definitions at all in the sense that they provide necessary and sufficient conditions exhausting the phenomenon in question. His determinations should rather be seen as descriptions from different perspectives of a real (and maybe ideal) object – without these descriptions necessarily conflicting. This style of thinking can, however, be seen as motivated by metaphysical continuity. When continuous grading between concepts is the rule, definitions in terms of necessary and sufficient conditions should not be expected to be exhaustive.

Fundamental Theorem of Asset Pricing: Tautological Meeting of Mathematical Martingale and Financial Arbitrage by the Measure of Probability.


The Fundamental Theorem of Asset Pricing (FTAP hereafter) has two broad tenets, viz.

1. A market admits no arbitrage if and only if the market has a martingale measure.

2. Every contingent claim can be hedged if and only if the martingale measure is unique.
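Stated a little more compactly, in the standard modern form (a sketch only; the measures P and Q and the prices X_0, X_T are introduced in the discussion below, which works with real, deflated prices):

1. No arbitrage  ⇔  there exists a probability measure Q, equivalent to the natural measure P, under which the asset price is a martingale: X_0 = E_Q[X_T].
2. Every contingent claim can be hedged (the market is complete)  ⇔  that martingale measure Q is unique.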

The FTAP is a theorem of mathematics, and the use of the term ‘measure’ in its statement places the FTAP within the theory of probability formulated by Andrei Kolmogorov (Foundations of the Theory of Probability) in 1933. Kolmogorov’s work took place in a context captured by Bertrand Russell, who observed that

It is important to realise the fundamental position of probability in science. . . . As to what is meant by probability, opinions differ.

In the 1920s the idea of randomness, as distinct from a lack of information, was becoming substantive in the physical sciences because of the emergence of the Copenhagen Interpretation of quantum mechanics. In the social sciences, Frank Knight argued that uncertainty was the only source of profit and the concept was pervading John Maynard Keynes’ economics (Robert Skidelsky, Keynes: The Return of the Master).

Two mathematical theories of probability had become ascendant by the late 1920s. Richard von Mises (brother of the Austrian economist Ludwig) attempted to lay down the axioms of classical probability within a framework of Empiricism, the ‘frequentist’ or ‘objective’ approach. To counterbalance von Mises, the Italian actuary Bruno de Finetti presented a more Pragmatic approach, characterised by his claim that “Probability does not exist” because it was only an expression of the observer’s view of the world. This ‘subjectivist’ approach was closely related to the less well-known position taken by the Pragmatist Frank Ramsey, who developed an argument against Keynes’ Realist interpretation of probability presented in the Treatise on Probability.

Kolmogorov addressed the trichotomy of mathematical probability by generalising so that Realist, Empiricist and Pragmatist probabilities were all examples of ‘measures’ satisfying certain axioms. In doing this, a random variable became a function while an expectation was an integral: probability became a branch of Analysis, not Statistics. Von Mises criticised Kolmogorov’s generalised framework as unnecessarily complex. About a decade and a half back, the physicist Edwin Jaynes (Probability Theory: The Logic of Science) championed Leonard Savage’s subjectivist Bayesianism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science”.

The objections of empirical scientists to measure-theoretic probability can be accounted for by its lack of physicality. Frequentist probability is based on the act of counting; subjectivist probability is based on a flow of information, which, following Claude Shannon, is now an observable entity in empirical science. Measure-theoretic probability is based on abstract mathematical objects unrelated to sensible phenomena. However, the generality of Kolmogorov’s approach made it flexible enough to handle problems that emerged in physics and engineering during the Second World War, and his approach became widely accepted after 1950 because it was practically more useful.

In the context of the first statement of the FTAP, a ‘martingale measure’ is a probability measure, usually labelled Q, such that the (real, rather than nominal) price of an asset today, X_0, is the expectation, using the martingale measure, of its (real) price in the future, X_T. Formally,

X_0 = E_Q[X_T]

The abstract probability distribution Q is defined so that this equality holds, not on any empirical information of historical prices or subjective judgement of future prices. The only condition placed on the relationship that the martingale measure has with the ‘natural’, or ‘physical’, probability measure, usually assigned the label P, is that they agree on what is possible.
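A brief gloss on that last condition (a standard fact about equivalent measures, not spelled out in the text): ‘agreeing on what is possible’ means that Q and P share the same impossible events,

Q(A) = 0 if and only if P(A) = 0, for every event A,

so the martingale measure may re-weight probabilities, but it cannot make an impossible event possible or a possible event impossible.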

The term ‘martingale’ in this context derives from doubling strategies in gambling, and it was introduced into mathematics by Jean Ville in a development of von Mises’ work. The idea that asset prices have the martingale property was first proposed by Benoit Mandelbrot in response to an early formulation of Eugene Fama’s Efficient Market Hypothesis (EMH), the two concepts being combined by Fama. For Mandelbrot and Fama the key consequence of prices being martingales was that the current price was independent of the future price and technical analysis would not prove profitable in the long run. In developing the EMH there was no discussion of the nature of the probability under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure. While the FTAP employs modern terminology in the context of value-neutrality, the idea of equating a current price with a future, uncertain, price has ethical ramifications.

The other technical term in the first statement of the FTAP, arbitrage, has long been used in financial mathematics. Fibonacci’s Liber Abaci (Laurence Sigler, Fibonacci’s Liber Abaci) discusses ‘Barter of Merchandise and Similar Things’: 20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. In this case there are three commodities, arms of cloth, rolls of cotton and Pisan pounds, and Fibonacci solves the problem by having Pisan pounds ‘arbitrate’, or ‘mediate’ as Aristotle might say, between the other two commodities.
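The arithmetic of Fibonacci’s example, with Pisan pounds mediating between the two commodities (a worked restatement of the figures just quoted):

50 arms of cloth × (3 pounds / 20 arms) = 7.5 pounds
7.5 pounds × (42 rolls / 5 pounds) = 63 rolls of cotton

So 50 arms of cloth exchange for 63 rolls of cotton, and any quoted cloth-for-cotton rate other than this would allow a riskless profit by routing the trade through the mediating currency.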

Within neo-classical economics, the Law of One Price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gérard Debreu and Lionel McKenzie in the context of general equilibrium, in particular the introduction of the Arrow Security, which, employing the Law of One Price, could be used to price any asset. It was on this principle that Black and Scholes believed the value of the warrants could be deduced by employing a hedging portfolio; in introducing their work with the statement that “it should not be possible to make sure profits”, they were invoking the arbitrage argument, which had an eight-hundred-year history. In the context of the FTAP, ‘an arbitrage’ has developed into the ability to formulate a trading strategy such that the probability, under a natural or martingale measure, of a loss is zero, but the probability of a positive profit is not.

To understand the connection between the financial concept of arbitrage and the mathematical idea of a martingale measure, consider the most basic case of a single asset whose current price, X_0, can take on one of two (real) values, X_T^D < X_T^U, at time T > 0, in the future. In this case an arbitrage would exist if X_0 ≤ X_T^D < X_T^U: buying the asset now, at a price that is less than or equal to the future pay-offs, would lead to a possible profit at the end of the period, with the guarantee of no loss. Similarly, if X_T^D < X_T^U ≤ X_0, short selling the asset now and buying it back later would also lead to an arbitrage. So, for there to be no arbitrage opportunities we require that

X_T^D < X_0 < X_T^U

This implies that there is a number, 0 < q < 1, such that

X_0 = X_T^D + q(X_T^U − X_T^D)
    = qX_T^U + (1 − q)X_T^D
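Solving this for q (elementary algebra, made explicit here because the sign conditions below depend on it):

q = (X_0 − X_T^D) / (X_T^U − X_T^D)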

The price now, X_0, lies between the future prices, X_T^U and X_T^D, in the ratio q : (1 − q) and represents some sort of ‘average’. The first statement of the FTAP can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (real) future price”.

If X_0 < X_T^D ≤ X_T^U we have that q < 0, whereas if X_T^D ≤ X_T^U < X_0 then q > 1; in both cases q does not represent a probability measure, which, by Kolmogorov’s axioms, must lie between 0 and 1. In either of these cases an arbitrage exists and a trader can make a riskless profit: the market involves ‘turpe lucrum’. This account gives an insight as to why James Bernoulli, in his moral approach to probability, considered situations where probabilities did not sum to 1: he was considering problems that were pathological not because they failed the rules of arithmetic but because they were unfair. It follows that if there are no arbitrage opportunities then the quantity q can be seen as representing the ‘probability’ that the X_T^U price will materialise in the future. Formally

X_0 = qX_T^U + (1 − q)X_T^D ≡ E_Q[X_T]
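A minimal numerical sketch of this one-period argument (illustrative prices only; the function names are ours, not drawn from any source):

# One-period, two-state market: the implied martingale 'probability' q.
def implied_q(x0, x_down, x_up):
    """Return q such that x0 = q * x_up + (1 - q) * x_down."""
    return (x0 - x_down) / (x_up - x_down)

def has_arbitrage(x0, x_down, x_up):
    """An arbitrage exists exactly when the implied q falls outside (0, 1)."""
    q = implied_q(x0, x_down, x_up)
    return not (0.0 < q < 1.0)

x0, x_down, x_up = 100.0, 90.0, 120.0
q = implied_q(x0, x_down, x_up)
print(q)                                 # 0.333... : martingale weight of the up state
print(q * x_up + (1 - q) * x_down)       # 100.0    : expectation under Q recovers today's price
print(has_arbitrage(x0, x_down, x_up))   # False    : price lies strictly between the future prices

print(implied_q(85.0, 90.0, 120.0))      # -0.1666...: buying at 85, below both future prices, gives q < 0
print(has_arbitrage(85.0, 90.0, 120.0))  # True      : 'turpe lucrum', a riskless profit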

The connection between the financial concept of arbitrage and the mathematical object of a martingale is essentially a tautology: both statements mean that the price today of an asset must lie between its future minimum and maximum possible value. This first statement of the FTAP was anticipated by Frank Ramsey when he defined ‘probability’ in the Pragmatic sense of ‘a degree of belief’ and argued that ‘degrees of belief’ are measured through betting odds. On this basis he formulates some axioms of probability, including that a probability must lie between 0 and 1. He then goes on to say that

These are the laws of probability, …If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event.

This is a Pragmatic argument that identifies the absence of the martingale measure with the existence of arbitrage, and today this forms the basis of the standard argument as to why arbitrages do not exist: if they did, other market participants would bankrupt the agent who was mis-pricing the asset. This has become known in philosophy as the ‘Dutch Book’ argument and, as a consequence of the fact/value dichotomy, it is often presented as a ‘matter of fact’. However, ignoring the fact/value dichotomy, the Dutch Book argument is an alternative formulation of the ‘Golden Rule’ – “Do to others as you would have them do to you.” – it is infused with the moral concepts of fairness and reciprocity (Jeffrey Wattles, The Golden Rule).

Underlying the FTAP, then, is the ethical concept of Justice, capturing the social norms of reciprocity and fairness. This is significant in the context of Granovetter’s discussion of embeddedness in economics. It is conventional to assume that mainstream economic theory is ‘undersocialised’: agents are rational calculators seeking to maximise an objective function. The argument presented here is that a central theorem in contemporary economics, the FTAP, is deeply embedded in social norms, despite being presented as an undersocialised mathematical object. This embeddedness is a consequence of the origins of mathematical probability being in the ethical analysis of commercial contracts: the feudal shackles are still binding this most modern of economic theories.

Ramsey goes on to make an important point

Having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you.

Ramsey is arguing that an agent needs to employ the same measure in pricing all assets in a market, and this is the key result in contemporary derivative pricing. Having identified the martingale measure on the basis of a ‘primal’ asset, it is then applied across the market, in particular to derivatives on the primal asset, since by the well-known result, if two assets offer different ‘market prices of risk’, an arbitrage exists. This explains why the market price of risk appears in the Radon-Nikodym derivative and the Capital Market Line: it enforces Ramsey’s consistency in pricing.

The second statement of the FTAP is concerned with incomplete markets, which appear in relation to Arrow-Debreu prices. In mathematics, in the special case that there are as many, or more, assets in a market as there are possible future, uncertain, states, a unique pricing vector can be deduced for the market by Cramer’s Rule. If the elements of the pricing vector satisfy the axioms of probability, specifically that each element is positive and they all sum to one, then the market precludes arbitrage opportunities. This is the case covered by the first statement of the FTAP. In the more realistic situation that there are more possible future states than assets, the market can still be arbitrage free, but the pricing vector, the martingale measure, might not be unique. The agent can still be consistent in selecting which particular martingale measure they choose to use, but another agent might choose a different measure, such that the two do not agree on a price. In the context of the Law of One Price, this means that we cannot hedge, replicate or cover a position in the market such that the portfolio is riskless. The significance of the second statement of the FTAP is that it tells us that in the sensible world of imperfect knowledge and transaction costs, a model within the framework of the FTAP cannot give a precise price. When faced with incompleteness in markets, agents need alternative ways to price assets, and behavioural techniques have come to dominate financial theory. This feature was already recognised in The Port Royal Logic when it noted the role of transaction costs in lotteries.
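A small sketch of the completeness point (hypothetical payoff tables; numpy is used only for illustration): with as many linearly independent assets as future states, the pricing vector is pinned down uniquely, while with fewer assets than states a whole family of martingale measures prices the traded assets equally well.

import numpy as np

# Complete case: two assets, two future states.
# Rows are assets (a riskless bond paying 1 in every state, and a stock); columns are future states.
payoffs = np.array([[1.0, 1.0],
                    [120.0, 90.0]])
prices = np.array([1.0, 100.0])                  # current prices; zero interest rate for simplicity
q_unique = np.linalg.solve(payoffs, prices)
print(q_unique)                                  # [1/3, 2/3]: the only state-price ('martingale') vector

# Incomplete case: the same two assets, but three possible future states.
payoffs_inc = np.array([[1.0, 1.0, 1.0],
                        [120.0, 100.0, 90.0]])
prices_inc = np.array([1.0, 100.0])
q0 = np.linalg.lstsq(payoffs_inc, prices_inc, rcond=None)[0]   # one exact (minimum-norm) solution
null_dir = np.linalg.svd(payoffs_inc)[2][-1]                   # a direction in the null space of the payoff matrix

# Every q0 + t * null_dir whose entries lie in (0, 1) reprices both traded assets identically,
# so the market is arbitrage free and yet the martingale measure is not unique.
for t in (0.0, 0.05, -0.05):
    q = q0 + t * null_dir
    print(np.round(q, 3), payoffs_inc @ q)       # different measures, same asset prices [1, 100]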

Simulations of Representations: Rational Calculus versus Empirical Weights

While modeling a complex system, it should never be taken for granted that these models somehow simplify the systems, for that would only strip the models of the capability to account for the encoding, decoding, and retention of information that are sine qua non for the environment they plan to model, and the environment that these models find themselves embedded in. Now that the traditional problems of representation are fraught with loopholes, there needs to be a way to jump out of this quandary, if the modeling of complex systems is not to be impacted by the traces of these very traditional notions of representation. The employment of post-structuralist theories is surely indicative of getting rid of the symptoms, since they score over the analytical tradition, where representation is only an analogue of the thing represented, whereas simulation, with its affinity to French theory, is conducive to a distributed and holistic analogy. Any argument against representation is not to be taken as anti-scientific, since it is merely an argument against a particular scientific methodology and/or strategy that assumes complexity to be reducible, and therefore implementable or representable in a machine. The argument takes force only as an appreciation of the nature of complexity, something that could perhaps be repeated in a machine, should the machine itself be complex enough to cope with the distributed character of complexity. Representation is a state that stands in for some other state, and hence is nothing short of being “essentially” about meaning. The language and thought incorporated in understanding the world we are embedded in are efficacious only if representation relates to the world, and therefore “relationship” is another pillar of representation. Unless a relationship relates the two, one gets only an abstracted version of the so-called identities in themselves, with no explanatory discourse. In the world of complexity, such identity-based abstractions lose their essence, for modeling takes over the onus of explanation, and it is therefore, without doubt, the establishment of these relations bringing together states of representations that takes high priority. Representation holds a central value both in formal systems and in neural networks or connectionism, where the former is characterized by a rational calculus, and the latter by patterns that operate over the network, lending it a more empirical weight.


Let logic programming be the starting point for deliberations here. The idea behind it is to apply mathematical logic successfully to computer programming. When logic is used in this way, it is used as a declarative representational language; declarative because the logic of computation is expressed without accounting for the flow of control. In other words, within this language the question is centered on what-ness rather than how-ness. Declarative representation has a counterpart in procedural representation, where the onus is on procedures, functions, routines and methods. Procedural representation is more algorithmic in nature, as it depends upon following steps to carry out computation. In other words, the question is centered on how-ness. But logic programming as it is commonly understood cannot do without both of them becoming part of the programming language at the same time. Since both are required, propositional logic, which deals primarily with declarative representational languages, would not suffice on its own; what is required is a logic that touches upon predicates as well. This is made possible by first-order predicate logic, which distinguishes itself from propositional logic by its use of quantifiers(1). Predicate logic thus finds its applications suited to the deductive apparatus of formal systems, where axioms and rules of inference are instrumental in deriving theorems that guide these systems. This setup is too formal in character and thus calls for a connectionist approach, since the latter is simply not keen to have predicate logic operate over the deductive apparatus of a formal system.
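A toy illustration of the declarative/procedural contrast, and of what a quantified variable buys over propositional logic (hypothetical facts and function names, sketched in Python rather than in a logic-programming language):

# Declarative part: facts and a rule, stating *what* holds rather than *how* to compute it.
parents = {("tom", "bob"), ("bob", "ann"), ("bob", "pat")}      # parent(X, Y) facts

domain = {name for pair in parents for name in pair}            # the domain of discourse

def grandparent(x, z):
    # Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    # The intermediate variable Y is quantified over the whole domain of discourse.
    return any((x, y) in parents and (y, z) in parents for y in domain)

# Procedural part: an explicit, step-by-step routine computing the same relation.
def grandchildren_of(x):
    result = []
    for (a, b) in parents:            # step 1: find the children of x
        if a == x:
            for (c, d) in parents:    # step 2: find the children of those children
                if c == b:
                    result.append(d)
    return result

print(grandparent("tom", "ann"))        # True
print(sorted(grandchildren_of("tom")))  # ['ann', 'pat']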

If brain and language (natural language, not computer languages, which are more rule-based and hence stricter) as complex systems could be shown to have circumvented representationism via modeling techniques, the classical issues inherent in representation would be gotten rid of as a problematic. Functionalism, the prevalent theory in philosophy of mind that parallels the computational model, is the target here. In the words of Putnam,

I may have been the first philosopher to advance the thesis that the computer is the right model for mind. I gave my form of this doctrine the name ‘functionalism’, and under this name, it has become the dominant view – some say the orthodoxy – in contemporary philosophy of mind.

The computer metaphor for mind is clearly visible here, with the former having a hardware apparatus that is operated upon by software programs, while the latter shares the same relation between brain (hardware) and mind (software). So far, so good, but there is a hitch. As on the computer side of the metaphor, where software can be loaded onto different hardware provided the hardware possesses enough computational capability, the mind-brain relationship should meet the same criteria as well. If one goes by what Sterelny has hinted for functionalism, that a certain physical state of the machine realizes a certain functional state, then a couple of descriptions, mutually exclusive of one another, result, viz. a description on the physical level and a description on the mental level. The consequences of such descriptions are bizarre to the extent that mind as software can also find its implementation on any other hardware, provided the conditions for the hardware’s capability to run the software are met successfully. One could hardly argue against these consequences, which follow logically enough from the premisses, but a couple of blocks are not to be ignored at the same time, viz. the adequacy of the physical systems to implement the functional states, and what defines the relationship between these two mutually exclusive descriptions in the context of the same physical system.

Sterelny comes up with a couple of criteria for adequate physical systems: that they be designed, and teleological. Rather than provide any support for what he means by systems being designed, he comes up with evolutionary tendencies, thus vouching for an external designer. The second criterion gets disturbing if no description is made, and this is precisely what Sterelny never offers. His citation of a bucket of water not having a telos in the sense the brain has one only makes matters slide into metaphysics. Even otherwise, functionalism as an account of the nature of mental states is metaphysical and ontological in import. This claim gets all the more highlighted if one believes, following Brentano, that intentionality is the mark of the mental; then any theory of intentionality can be converted into a theory of the ontological nature of psychological states. Getting back to Sterelny’s second description, functional states attain meaning if they stand for something else; hence functionalism gets representational. And as Paul Cilliers puts it cogently, the grammatical structure of language represents semantic content, and the neurological states of the brain represent certain mental states, thus proving beyond doubt the responsibility of representation for establishing a link between the states of the system and conceptual meaning. This is again echoed in Sterelny:

There can be no informational sensitivity without representation. There can be no flexible and adaptive response to the world without representation. To learn about the world, and to use what we learn to act in new ways, we must be able to represent the world, our goals and options. Furthermore we must make appropriate inferences from these representations.

As representation is essentially about meaning, two levels are to be related with one another for any meaning to be possible. In formal systems, or the rule-based approach, these relations are provided by creating a nexus between the “symbol” and what it “symbolizes”. This fundamental linkage is offered by Fodor in his 1975 book, The Language of Thought. The main thesis of the book is that cognition and cognitive processes are only plausible when computationally expressed in terms of representational systems. This language, in possession of its own syntactic and semantic structures, and also independent of any medium, exhibits a causal effect on mental representations. Such a language is termed by him “mentalese”; it is implemented in the neural structure (a case in point for internal representation(2)), and, following permutations, allows complex thoughts to be built up from simpler ones. The underlying hypothesis states that such a language applies to thoughts having propositional content, implying that thoughts have syntaxes. In order for complex thoughts to be generated, simple concepts are attached to the most basic linguistic tokens, which combine following rules of logic (combinatorial rules). The language thus enriched is not only productive, in that sentences can (potentially) get longer without altering the meaning (concatenation), but also structured, in that rules of grammar allow us to make inferences about linguistic elements previously unrelated. Once this task is accomplished, the representational theory of thought steps in to explicate the essence of tokens and how they behave and relate. The representational theory of thought validates mental representations, which stand in uniquely for a subject of representation having a specific content, to allow for causally generated complex thought. Sterelny echoes this point.


For this model, and theories based on it, require an agent to represent the world as it is and as it might be, and to draw appropriate inferences from that representation. Fodor argues that the agent must have a language-like symbol system, for only with such a system can she represent indefinitely many and indefinitely complex actual and possible states of her environment. She could not have this capacity without an appropriate means of representation, a language of thought. Mentalese is thus too rationalist in its approach, and hence in opposition to neural networks or connectionism. As there can be no cognitive processes without mental representations, the theory has many takers(3). One line of thought that supports this approach is the plausibility of psychological models that treat cognitive processes as representational, thereby inviting computational modeling.
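A schematic sketch of the productivity and systematicity claims (our own illustration, not Fodor’s formalism): atomic tokens plus combinatorial rules generate indefinitely many structured ‘thoughts’, and the same rules that build one thought automatically build its systematic variants.

# Atomic concepts: the basic tokens of a toy 'mentalese'.
ATOMS = {"JOHN", "MARY", "LOVES", "FEARS"}

def combine(relation, agent, patient):
    """Combinatorial rule: build a structured thought out of simpler constituents."""
    return (relation, agent, patient)

def believes(agent, thought):
    """Recursive rule: a thought can embed another thought, so complexity is unbounded."""
    return ("BELIEVES", agent, thought)

t1 = combine("LOVES", "JOHN", "MARY")        # ('LOVES', 'JOHN', 'MARY')
t2 = combine("LOVES", "MARY", "JOHN")        # the same rule yields the systematic variant
t3 = believes("JOHN", believes("MARY", t1))  # a nested, productively generated thought

print(t1)
print(t2)
print(t3)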

(1) A quantifier is an operator that binds a variable over a domain of discourse. The domain of discourse, in turn, specifies the range of the relevant variables.

(2) Internal representation helps us visualize our movements in the world and our embeddedness in the world. Internal representation takes it for granted that organisms inherently have such an attribute to have any cognition whatsoever. The plus point as in the work of Fodor is the absence of any other theory that successfully negotiates or challenges the very inherent-ness of internal representation.

(3) Tim Crane is a notable figure here. Crane explains Fodor’s Mentalese Hypothesis as involving desiring one thing and something else. Crane returns to the question of why we should believe that the vehicle of mental representation is a language. Crane states that while he agrees with Fodor, his method of reaching that conclusion is very different. Crane goes on to say that reason, our ability as humans to reach a rational decision from the information given, is his argument for this question. Association of ideas leads to other ideas which only have a connection for the thinker. Fodor agrees that free association goes on, but he says it happens in a systematic, rational way that can be shown to work with the Language of Thought theory. Fodor states that you must look at it in a computational manner, and that this allows it to be seen in a different light than usual: free association follows a certain manner that can be broken down and explained with the Language of Thought.
