# From Posets to Categories. Part 1

A poset (partially ordered set) is a pair (P, ≤), where

P is a set,

≤ is a binary relation on P satisfying the three axioms of a partial order:

(i) Reflexive: ∀x ∈ P, x ≤ x

(ii) Antisymmetric: ∀x ,y ∈ P, x ≤ y & y ≤ x ⇒ x = y

(iii) Transitive: ∀x, y, z ∈ P, x ≤ y & y ≤ z ⇒ x ≤ z.

And what does this have to do with category theory?

“x ≤ y” ⟺ “x → y” and “x = y” ⟺ “x ↔ y”

Given x, y ∈ P,

we say that u ∈ P is a least upper bound of x, y ∈ P if we have x → u & y → u, and for all z ∈ P satisfying x → z & y → z we must have u → z. It is more convenient to express this definition with a picture. We say that u ∈ P is a least upper bound of x, y if for all z ∈ P the following picture holds:

Dually, we say that l ∈ P is a greatest lower bound of x, y if for all z ∈ P the following picture holds:
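Assuming the two pictures are the standard universal-property diagrams, they might be rendered as follows (tikz-cd syntax; the dashed arrow is the one whose existence the definition asserts — in a poset it either exists or it does not, there is nothing to choose):

```latex
% Left: u is a least upper bound of x, y.  Right: l is a greatest lower bound.
\[
\begin{tikzcd}[column sep=small]
x \arrow[dr] \arrow[drr, bend left] & & \\
& u \arrow[r, dashed] & z \\
y \arrow[ur] \arrow[urr, bend right] & &
\end{tikzcd}
\qquad
\begin{tikzcd}[column sep=small]
& & x \\
z \arrow[r, dashed] \arrow[urr, bend left] \arrow[drr, bend right] & l \arrow[ur] \arrow[dr] & \\
& & y
\end{tikzcd}
\]
```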

Now suppose that u1, u2 ∈ P are two least upper bounds for x, y. Applying the definition in both directions gives

u1 → u2 and u2 → u1,

and then from antisymmetry it follows that u1 = u2, which just means that u1 and u2 are indistinguishable within the structure of P. For this reason we can speak of the least upper bound (or “join”) of x, y. If it exists, we denote it by

x ∨ y

Dually, if it exists, we denote the greatest lower bound (or “meet”) by

x ∧ y

The definitions of meet and join are called “universal properties”. Whenever an object defined by a universal property exists, it is automatically unique in a certain canonical sense. However, since the object might not exist, maybe it is better to refer to a universal property as a “characterization,” or a “prescription,” rather than a “definition.”

Let P be a poset. We say that t ∈ P is a top element if for all z ∈ P the following picture holds:

z → t

Dually, we say that b ∈ P is a bottom element if for all z ∈ P the following picture holds:

b → z

For any subset S ⊆ P of a poset, we say that the element ⋁ S ∈ P is its join if for all z ∈ P the following diagram is satisfied:

Dually, we say that ⋀ S ∈ P is the meet of S if for all z ∈ P the following diagram is satisfied:

If the objects ⋁ S and ⋀ S exist then they are uniquely characterized by their universal properties.

The universal properties in these diagrams will be called the “limit” and “colimit” properties when we move from posets to categories. Note that a limit/colimit diagram looks like a “cone over S”. This is one example of the link between category theory and topology.

Note that all definitions so far are included in this single (pair of) definition(s):

⋁ {x, y} = x ∨ y & ⋀ {x, y} = x ∧ y

⋁ ∅ = 0 (the bottom element) & ⋀ ∅ = 1 (the top element).
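All of this can be checked in a concrete poset. The sketch below takes the divisors of 60 ordered by divisibility and computes joins and meets straight from their universal properties (the choice of 60 is arbitrary; in this poset the join is the lcm, the meet is the gcd, and the bottom and top are 1 and 60):

```python
# The divisors of 60, partially ordered by divisibility: x ≤ y iff x divides y.
P = [d for d in range(1, 61) if 60 % d == 0]

def leq(x, y):
    return y % x == 0

def join(S):
    """Least upper bound of S, computed directly from the universal property."""
    ubs = [u for u in P if all(leq(x, u) for x in S)]   # upper bounds of S
    return next(u for u in ubs if all(leq(u, z) for z in ubs))  # the least one

def meet(S):
    """Greatest lower bound of S, dually."""
    lbs = [l for l in P if all(leq(l, x) for x in S)]   # lower bounds of S
    return next(l for l in lbs if all(leq(z, l) for z in lbs))  # the greatest one

print(join({4, 6}), meet({4, 6}))   # 12 2  (lcm and gcd)
print(join(set()), meet(set()))     # 1 60  (bottom 0 and top 1, in the notation above)
```

Antisymmetry guarantees that `next(...)` can never find two distinct candidates, which is exactly the uniqueness argument for u1 and u2 above.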

# Could Complexity Rehabilitate Mo/PoMo Ethics?

A well-known passage from Marie Fleming could be invoked here to acquit complexity of the charge of relativism. She writes,

Anyone who argues against reason is necessarily caught up in a contradiction: she asserts at the locutionary level that reason does not exist, while demonstrating by way of her performance in argumentative processes that such reason does in fact exist.

Any such absolute statement about complexity would similarly be consumed along its way.

The locutionary in the above quote can be used to distinguish it from the performative, that is, logic from rhetoric. Such a distinction gains credibility if one is able to locate an Archimedean point from which to share discourse(s), which, from the point of view of complexity theory, would be a space outside the autopoietic system, in other words, a meta-theoretical framework. Complexity looks upon such a framework sceptically, and has no qualms acknowledging the performative tensions at work. Such tensions are generative of ethical choices and consequences, since any claim of access to the finality of knowledge is built upon the denial of critical perspective(s), shrouding the entire exercise in a veil of ignorance, in hubristic pride, or in illusion at best.

Morality gains significance because its formulations are often ruptured for want of secure and certain knowledge, neither of which is provided by complexity theory or French theory, according to the accusations levelled against them. Even so, in making choices that are normative in nature, a clear formulation of the ethical is required. Lyotard’s underlying conditions of knowledge are often considered unethical, as he admits that the desire for justice is shrouded in unknown intellectual territory. Lyotard has Habermas in mind here, since the latter’s communication therapy mandates a clearly consensual agreement on the part of the public to seek out metaprescriptions that are universally valid and span all language games. Habermas is targeted for deliberately ignoring the diversity inherent in postmodern society. For Lyotard,

It is the monster formed by the interweaving of various networks of heteromorphous classes of utterances (denotative, prescriptive, performative, technical, evaluative, etc.). There is no reason to think that it would be possible to determine metaprescriptives common to all of these language games or that a revisable consensus like the one in force at a given moment in the scientific community could embrace the totality of metaprescriptions regulating the totality of statements circulating in the social collectivity. As a matter of fact, the contemporary decline of narratives of legitimization – be they traditional or ‘modern’ (the emancipation of humanity, the realization of the Idea) – is tied to the abandonment of this belief.

# What’s an Asset Reconstruction Company, and why does it even matter to NPAs?

As the name suggests, ARCs are required to repackage assets to make them more saleable. But in the context of bad loans, or non-performing assets, such companies often fail to garner enough firepower to root out the surging menace of NPAs. The Indian NPA problem is caused primarily by a systemic rot involving faulty practices of project finance and subsequent difficulties in loan recoveries. ARCs are constituted precisely to address such hurdles. As centralised agencies, they are designed to buy up stressed, distressed and non-performing assets, repackage them, and sell them on to prospective promoters/buyers. ARCs buy NPAs at a discounted price, which in turn helps banks and lenders clean up their sticky balance sheets. ARCs can be public, private or jointly owned, and are also empowered to float bonds to recover dues from borrowers. Even if on paper the concept of an ARC looks robust, with the scalability to bundle a bad asset with a performing one in order to increase its saleability, in reality ARCs are prone to failure for lack of buyers for their packages and are limited by capital concerns. And there are challenges galore for ARCs: debt aggregation remains a far cry, and unless it is achieved, resolution will never be expeditious; fresh financial support must constantly be hunted for; and the discrepancy between banks and ARCs in the pricing of assets will remain a contentious issue unless a commonality is reached.

The whole concept of Asset Reconstruction Companies (ARCs) is closely modelled on the US model of Asset Management Companies (AMCs), which constitute a large industry in themselves as far as the buying and selling of debt is concerned. It looks like a time to strike, with the ducks now lined up and opportunities galore in private equity, as ARCs get a chance to own the entire capital structure and reinstall management echelons, thanks to Budget recommendations welcoming 100% FDI. But that is getting a bit ahead of things, so let us examine the evolution first.

The real trigger for ARCs to flourish came with Raghuram Rajan’s exhortation to banks to clean up their mess. A switch of trajectory happened sometime in 2013, when evergreening bad loans was put on the back burner and ARCs were made to face up. This is where banks were obligated to turn around loans until then considered irredeemable. Adding to that trigger point is another one banking on the Bankruptcy Code, which promises a discount when buying off bad debts or distressed assets, making them a more profitable venture than greenfield projects of similar magnitude.

ARCs are born out of the SARFAESI Act, 2002 (The Securitisation and Reconstruction of Financial Assets and Enforcement of Security Interest Act, 2002), which enables banks to acquire the securities that had been pledged with them. All of this is achievable without any interference from the jurisdiction of the civil courts, lending banks authoritative power to cope with NPAs. But regulatory directives prevented the smooth functioning of transactions involving bad debts, thanks to the decrepit system for enforcing securities. The sale of loans to ARCs remains, however, the last resort banks undertake, as statutory hurdles and deposed promoters act as speed breakers.

Basically, the road to recovery is a step-by-step process:

1) The bank sells a bad loan to the ARC.

2) The ARC pays 15% upfront and issues the remaining 85% as securities in the form of Security Receipts (SRs); the upfront component stood at just 5% until a year ago.

3) The ARC initiates the turnaround, earning a 1.5% management fee in the process.

4) Recovery proceeds thus accrued are shared between the bank and the ARC.

5) If the ARC fails to recover the bad loan within eight years, the investment is written off.

Now, this is a clear shift away from evergreening, or even liberal year-on-year funding by the government, and it aptly signals that internal recapitalisation is indispensable. Budgetary infusion, capital infusion or recapitalisation is the putting in of capital to help ailing public sector banks. The very nature of PSBs implies that such infusions are required from time to time, but the problem lies in their ritualistic character. If, on the contrary, banks were to raise funds internally for recapitalisation, that would indicate a healthy practice. One reason the government does the infusion is to meet Basel III norms, which reliance upon internal recapitalisation alone would not accomplish. But there is a caveat to be kept in mind: written off does not mean that defaults and defaulters are not chased. They are undoubtedly pursued, the only difference being that the banks’ balance sheets are cleaned up for the year in which such defaults happen. Write-offs, alternatively called charge-offs, help banks not only erase the mess from their balance sheets but also save enormous tax liabilities. The real tussle is between charging off loans and becoming industrious in selling them off. The winner takes it all, which in this case means hedging by selling off bad debts, the route promulgated by the RBI and the government for jettisoning this messy baggage.
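The mechanics just described can be put into numbers. In the sketch below the loan size, the purchase haircut and the fee base are purely hypothetical; only the 15:85 split and the 1.5% fee come from the text:

```python
# Illustrative arithmetic for the 15:85 ARC structure (all inputs hypothetical).

loan_book_value = 100.0     # Rs. crore, face value of the bad loan (assumed)
purchase_discount = 0.40    # ARC buys at a 40% haircut (assumed)
purchase_price = loan_book_value * (1 - purchase_discount)

upfront_cash = 0.15 * purchase_price        # 15% paid in cash to the bank
security_receipts = 0.85 * purchase_price   # remaining 85% issued as SRs

annual_mgmt_fee = 0.015 * purchase_price    # 1.5% management fee (base assumed)

print(f"purchase price : {purchase_price:.2f} cr")
print(f"upfront cash   : {upfront_cash:.2f} cr")
print(f"SRs issued     : {security_receipts:.2f} cr")
print(f"yearly fee     : {annual_mgmt_fee:.2f} cr")
```

On these made-up inputs the bank receives only ₹9 crore in cash against a ₹100 crore exposure, which is why the pricing discrepancy between banks and ARCs stays contentious.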

# Hyman Minsky, Karl Polanyi, Deleterious Markets and if there is any Alt-Right to them? Apparently No.

Karl Polanyi often highlighted the predicaments of the market: when markets are left to their own devices, they are enough to cause attrition to social relations and the social fabric. However, the social consequences of financial instability can be understood only by employing a Polanyian perspective on how processes of commodification and market expansion jeopardize social institutions. For someone like Hyman Minsky, equilibrium and stability are elusive conditions in markets with debt contracts. His financial instability hypothesis suggests that capitalist economies lead, through their own dynamics, to “the development over historical time of liability structures that cannot be validated by market-determined cash flows or asset values”. According to Minsky, a stable period generates optimistic expectations. Increased confidence and positive expectations of future income streams cause economic actors to decrease margins of safety in their investment decisions. This feeds a surge in economic activity and profits, which turns into a boom as investments are financed by higher degrees of indebtedness. As the economic boom matures, an increasing number of financial intermediaries and firms switch from hedge finance to speculative and Ponzi finance. Minsky argued that economists, misreading Keynes, downplay the role of financial institutions. In particular, he argued that financial innovation can create economic euphoria for a while before destabilizing the economy and hurling it into crises rivaling the Great Depression. Minsky’s insights are evident in the effects of innovations in mortgages and mortgage securities. Actors using speculative and Ponzi finance are vulnerable to macroeconomic volatility and interest rate fluctuations. A boom ends when movements in short-term and long-term interest rates render the liability structures of speculative and Ponzi finance unsustainable.
The likelihood of a financial crisis (as opposed to a business cycle) depends on the preponderance of speculative and Ponzi finance in the economy under question.
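Minsky’s three financing postures can be stated operationally: a hedge unit’s cash flows cover interest and principal, a speculative unit’s cover only interest (so principal must be rolled over), and a Ponzi unit’s cover neither (debt grows just to service itself). A minimal sketch, with invented numeric cases:

```python
def minsky_class(cash_flow: float, interest_due: float, principal_due: float) -> str:
    """Classify a financing unit in Minsky's taxonomy.

    hedge:       cash flows cover interest and principal;
    speculative: cash flows cover interest only, so debt must be rolled over;
    Ponzi:       cash flows cover neither, so debt grows merely to service itself.
    """
    if cash_flow >= interest_due + principal_due:
        return "hedge"
    if cash_flow >= interest_due:
        return "speculative"
    return "Ponzi"

# Hypothetical units facing 30 of interest and 50 of principal due:
print(minsky_class(120, 30, 50))  # hedge
print(minsky_class(60, 30, 50))   # speculative
print(minsky_class(20, 30, 50))   # Ponzi
```

On this reading, the likelihood of a crisis is a question of how much of the economy’s balance sheet falls into the second and third branches.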

Minsky regularly criticized economists for failing to grasp Keynes’s ideas. In his book Stabilizing an Unstable Economy, Minsky argued that while economists assimilated some of Keynes’s insights into standard economic theory, they failed to grasp the connection between the financial and real sectors. Specifically, he argued that finance, with its focus on capital structure, asset-liability management, agency theory, and contracts, is missing from macroeconomic theory. He wrote:

Keynes’s theory revolves around bankers and businessmen making deals on Wall Street … One of the peculiarities of the neoclassical theory that preceded Keynes and the neoclassical synthesis that now predominates economic theory is that neither allows the activities that take place on Wall Street to have any significant impact upon the coordination or lack of coordination of the economy…

Minsky’s work on financial crises builds on Keynes’s insights, using terms such as “euphoric economy”, and “unrealistic euphoric expectations with respect to costs, markets, and their development over time”. Yet Minsky considered the issues of rational prices and market efficiency as only the tip of an iceberg. His broad framework addresses issues related to the lending practices by financial institutions, central bank policy, fiscal policy, the efficacy of financial market regulation, employment policy, and income distribution. Financial institutions, such as banks, become increasingly innovative in their use of financial products when the business cycle expands, boosting their leverage and funding projects with ever increasing risk. Minsky’s words on financial innovation are striking, as if foretelling the recent crisis.

Over an expansion, new financial instruments and new ways of financing activity develop. Typically, defects of the new ways and the new institutions are revealed when the crunch comes.

Commercial banks sponsored conduits to finance long-term assets through special purpose entities such as structured investment vehicles (SIVs), something similar to the Indian version of Special Purpose Vehicles (SPVs). These were off balance sheet entities, subjecting them to lower regulatory capital requirements. Special purpose entities used commercial paper to raise funds they then used to buy mortgages and mortgage securities. In effect, banks relied on Minsky-type speculative and Ponzi financing, borrowing short-term and using these borrowed funds to buy long-term assets. Wrote Minsky,

The standard analysis of banking has led to a game that is played by central banks, henceforth to be called the authorities, and profit-seeking banks. In this game, the authorities impose interest rates and reserve regulations and operate in money markets to get what they consider to be the right amount of money, and the banks invent and innovate in order to circumvent the authorities. The authorities may constrain the rate of growth of the reserve base, but the banking and financial structure determines the efficacy of reserves…This is an unfair game. The entrepreneurs of the banking community have much more at stake than the bureaucrats of the central banks. In the postwar period, the initiative has been with the banking community, and the authorities have been “surprised” by changes in the way financial markets operate. The profit-seeking bankers almost always win their game with the authorities, but, in winning, the banking community destabilizes the economy; the true losers are those who are hurt by unemployment and inflation.

Combining Hyman Minsky’s insights on financial fragility with a Polanyian focus on commodification offers a distinct perspective on the causes and consequences of the foreclosure crisis. First, following Polanyi, we should expect to find the commodity fiction, applied to arenas of social life previously isolated from markets, at the heart of the recent financial crisis. Second, following Minsky, the transformations caused by novel uses of the commodity fiction should be among the primary causes of financial fragility. Finally, in line with a Polanyian focus on the effects of the supply-demand-price mechanism, the price fluctuations caused by financial fragility should disrupt existing social relations and institutions in a significant manner. So, how does all of this come down to the alt-right? Right-wing libertarianism is basically impossible. The “free” market as we know it today needs the state to be implemented; even without reading Polanyi, you know that without the force of the state you simply cannot have private property or the legal arrangements that underpin property, labour and money. So it would not work anyway. Polanyi’s point is that if we want democracy to survive, we need to beware of financial overlords and their ideological allies peddling free-market utopias. And if democracy so much as smells of stability, then stability is destabilizing, as Minsky would have had it, thus corroborating the cross-purposes between the two thinkers in question, at least as a point of beginning.

# Drake Equation and Astrobiology

The Drake equation is one of those rare mathematical beasts that has leaked into the public consciousness. It estimates the number of extraterrestrial civilisations that we might be able to detect today or in the near future. Frank Drake attempted to quantify the number by asking what fraction of stars have planets, what fraction of these might be habitable, then the fraction of these on which life actually evolves, the fraction of these on which life becomes intelligent, and so on. Many of these numbers are little more than wild guesses. For example, the number of ET civilisations we can detect now is hugely sensitive to the fraction that destroy themselves with their own technology, through nuclear war for example. Obviously we have no way of knowing this figure. Of the many uncertainties in the Drake equation, one term is traditionally thought of as relatively reliable: the probability of life emerging on a planet in a habitable zone. On Earth, life arose about 3.8 billion years ago, geologically soon after the planet had cooled sufficiently to allow it. Astrobiologists naturally argue that because life arose so quickly here, it must be pretty likely to emerge in other places where conditions allow.
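The equation itself, which the paragraph above walks through factor by factor, reads N = R★ · fp · ne · fl · fi · fc · L. A minimal sketch, with avowedly arbitrary inputs (the parameter values below are guesses for illustration, not settled figures):

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilisations in the galaxy.

    R_star: rate of star formation (stars/year)
    f_p:    fraction of stars with planets
    n_e:    habitable planets per planetary system
    f_l:    fraction of those on which life emerges
    f_i:    fraction of those where life becomes intelligent
    f_c:    fraction of those producing detectable signals
    L:      years such a civilisation remains detectable
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# One arbitrary, mildly optimistic set of inputs:
print(drake(R_star=1.5, f_p=1.0, n_e=0.2, f_l=1.0, f_i=0.1, f_c=0.1, L=10_000))
```

The point of writing it out is how the uncertainties multiply: vary f_l or L by a factor of a hundred each way, as the literature routinely does, and N swings across four orders of magnitude.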

# Labour and Delhi-Mumbai Industrial Corridor. Queries.

Stepping briefly outside the nomenclature excesses deliberated upon in the DMIC funding mechanisms, I want to draw attention to labour issues that deserve discussion, albeit conducted at a much lower profile. Considering the gigantic proportions of the project, and of the similar corridors on their way, the very definition, constitution and framework of labour is challenged, be it under the rubric of organized, unorganized, skilled, unskilled, semi-skilled, migrant and/or contractual labour, and within the manufacturing, service or knowledge economy. Laws and Acts apply to labour more because of the lack of this particular enframing of labour, since they are twisted to be more accommodating for the law-makers and law-interpreters. So the preliminary question is: where to fit the multifarious conceptions of labour into equally multifarious lines of projects? How to envisage the Laws and Acts in existence alongside the ones to come?

On a more downbeat and at first seemingly unrelated note, assuming (and I do mean assuming) that such corridors (I prefer calling them verandahs, since that would let me jump off the corridor!) become engines of growth, where labour somehow transforms from brute labour into consumers, transforming the economy from investment/manufacturing-led to services/knowledge-led and from export orientation to import-based consumer orientation, then what rules out labour scarcity becoming a reality? I see three consequences of such a transformation:

1) Attaining a “Lewis Turning Point”, an inevitable developmental phase when wages surge sharply, squeezing industrial profits along the way and consequently resulting in a dip of investment.

2) A workerist struggle demands higher wages and broader social reforms implying a further shortage of labour output with a correspondingly higher cost of maintaining labour in production and meeting its social costs leading to further shrinking of profits.

3) Capital innovates and reterritorializes itself for a better profitable ground, thus deepening the crisis of accumulation in the real economy.
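The “Lewis Turning Point” in 1) can be caricatured in a toy simulation: wages sit at subsistence while surplus rural labour lasts, then surge once industry has absorbed it. Every number here is invented for illustration:

```python
# Toy sketch of a Lewis turning point (all figures hypothetical).

surplus_labour = 100.0    # millions, assumed initial rural surplus
subsistence_wage = 1.0    # normalised subsistence wage
wage = subsistence_wage
path = []

for year in range(15):
    hired = 10.0                      # workers absorbed by industry each year (assumed)
    surplus_labour -= hired
    if surplus_labour > 0:
        wage = subsistence_wage       # flat: labour in "unlimited supply"
    else:
        wage *= 1.10                  # scarcity: wages surge ~10% a year (assumed)
    path.append(round(wage, 3))

print(path)
```

The flat stretch followed by the kink is the whole story: profits are squeezed not gradually but abruptly, which is what makes consequences 2) and 3) plausible.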

Such a position is serious because bringing in foreign technology and (cheap labour + capital) eventually risks running short on cheap labour, whereupon the focus shifts to adding more capital without paying any heed to the simultaneous diminishing returns. The growth engine starts losing steam. The point of this part of the post is to locate the importance of labour (“cheap labour” is used very derogatorily here, and apologies for the same; but unless it is used this way, the gravity would refuse to sink in) in forcing capital to take flight on the one hand, and to become more socially and democratically aware on the other.

# Bataille and Solar Anus Economy/Capitalism: Note Quote

Focusing on Bataille as a pivotal point, his take on the solar economy is a bit weird to begin with, but I do see its relevance to accelerationism. His take on political economy is driven by excess rather than scarcity: a plethora of energy (like that from the sun) that not only facilitates growth but is also vulnerable to expenditure in a purely apathetic manner. That is how he differentiates the general from the restricted economy. I am keen to note how accelerationism, if guarded by a normative (I understand the term carries a baggage of strictures) dromos (dromocracy), would shift the balance of the economic towards the general rather than the restricted. I think that if capitalism were overridden by belief in this economic, a probable fracture within it could be effected. Bataille would then make his presence felt even more crucially for the ultimate eschatology of capitalism.

Bataille provides a premonitory text relating the energy of the sun, the sexual movements and excitements of the cosmos and of terrestrial life, and the anus of an eighteen-year-old girl. Operating at the intersection between his sexually explicit literary works – think here especially of Madame Edwarda, whose eponymous hero demonstrates that her labia are the copula of God: “Madame Edwarda’s old rag and ruin leered at me, hairy and pink, just as full of life as some loathsome squid. […] “You can see for yourself,” she said, “I am GOD.” – and his later development of a theory of a general economy of expenditure in La part maudite, “The Solar Anus” provides a rich but conceptually underdeveloped reading of the cosmic and terrestrial with regard to their potency, fertility, and fundamental antagonism.  Bataille writes,

Disasters, revolutions, and volcanoes do not make love with the stars. The erotic revolutionary and volcanic deflagrations antagonize the heavens. As in the case of violent love, they take place beyond fecundity. In opposition to celestial fertility there are terrestrial disasters, the image of terrestrial love without condition, erection without escape and without rule, scandal, and terror. […] The Sun exclusively loves the Night and directs its luminous violence, its ignoble shaft, toward the earth, but it finds itself incapable of reaching the gaze or the night, even though the nocturnal terrestrial expanses head continuously toward the indecency of the solar ray.

In La part maudite, Bataille goes on to develop his argument against scarcity, which contends that, from the point of view of a general economy, the key problem on the tellurian surface is not the conservation of energy, but its expenditure [depenser].  Bataille offers the following reversal of the political economy of scarcity:

I will begin with a basic fact: The living organism, in a situation determined by the play of energy on the surface of the globe, ordinarily receives more energy than is necessary for maintaining life; the excess energy (wealth) can be used for the growth of a system (e.g., an organism); if the system can no longer grow, or if the excess cannot be lost without profit; it must be spent, willingly or not, gloriously or catastrophically. (The Accursed Share)

The “curse” of the accursed share is disturbingly simple: the earth is bombarded with so much energy from the sun that it simply cannot spend it all without disaster. Over the course of millions of years of solar bombardment, the creatures enslaved to this “celestial fertility” by way of photosynthetic-reliant metabolic systems are forced to become increasingly burdensome forms of life. By the end of the Ediacaran period, we find the emergence of animals with bones, teeth, and claws, and eventually even more flamboyant expenditures like tigers and peacocks, and later still, tall buildings. Or, as Bataille suggests in his short text “Architecture,” for the surrealist Critical Dictionary: “Man would seem to represent merely an intermediate stage within the morphological development between monkey and building.” With this morphology of expenditure in mind, let us now return to the anal image of thought.

What the theory of expenditure calls into question in its most precise philosophical reading is the division between useful and wasteful (flamboyant) practices; this is because in order for any theory of use value to be coherent, it must first restrict the economy, or field of operations, within which it is operating. The restriction of this field of energy exchange is a moral action inasmuch as it sets up the conditions for any action in the field to be read as either productive or wasteful. For Bataille, the general economy permits us to evaluate the terms of restriction as a means to call into question the cultural values and forms of social organization they engender. Because of this, the “anus of her body at eighteen years old” must be intact: as a potential for pure loss, pure expenditure of energy without reserve and without reproduction, Bataille is transfixed by the analogy of glorious or catastrophic expenditure in relation to the energy of the sun and the potential for escaping this curse as much as the curse of the intact anus.

# Simulations of Representations: Rational Calculus versus Empirical Weights

While modeling a complex system, it should never be taken for granted that models somehow simplify the system, for that would strip them of the capability to account for the encoding, decoding, and retention of information that are the sine qua non both for the environment they plan to model and for the environment in which they find themselves embedded. Now that the traditional problems of representation are fraught with loopholes, there needs to be a way out of this quandary if the modeling of complex systems is not to be impacted by the traces of these traditional notions of representation. The employment of post-structuralist theories is indicative of getting rid of the symptoms, since they score over the analytical tradition: there, representation is only an analogue of the thing represented, whereas simulation, with its affinity to French theory, is conducive to a distributed and holistic analogy. An argument against representation is not to be taken as anti-scientific; it is merely an argument against a particular scientific methodology and/or strategy that assumes complexity to be reducible, and therefore implementable or representable in a machine. The argument takes force only as an appreciation of the nature of complexity, something that could perhaps be repeated in a machine, should the machine itself be complex enough to cope with the distributed character of complexity. Representation is a state that stands in for some other state, and hence is nothing short of being “essentially” about meaning. The language and thought incorporated in understanding the world we are embedded in are efficacious only if representation relates to the world; “relationship” is therefore another pillar of representation. Unless a relationship relates the two, one gets only an abstracted version of so-called identities in themselves, with no explanatory discourse.

In the world of complexity, such identity-based abstractions lose their essence, for modeling takes over the onus of explanation; it is therefore, without doubt, the establishment of these relations, bringing together states of representation, that takes high priority. Representation holds a central value both in formal systems and in neural networks or connectionism, where the former is characterized by a rational calculus and the latter by patterns that operate over the network, lending it a more empirical weight.

Let logic programming be the starting point for the deliberations here. The idea is to apply mathematical logic to computer programming. When logic is used in this way, it is used as a declarative representational language; declarative because the logic of computation is expressed without accounting for the flow of control. In other words, within this language the question centres on what-ness rather than how-ness. Declarative representation has a counterpart in procedural representation, where the onus is on procedures, functions, routines and methods. Procedural representation is more algorithmic in nature, as it depends upon following steps to carry out a computation; here the question centres on how-ness. But logic programming as commonly understood cannot do without both becoming part of the programming language at the same time. Since both are required, propositional logic, which deals primarily with declarative representational languages, would not suffice on its own; what is required is a logic that touches upon predicates as well. This is made possible by first-order predicate logic, which distinguishes itself from propositional logic by its use of quantifiers. Predicate logic thus finds its applications suited to the deductive apparatus of formal systems, where axioms and rules of inference are instrumental in deriving the theorems that guide these systems. This setup is too formal in character and thus calls for a connectionist approach, since the latter is simply not keen to have predicate logic operating over the deductive apparatus of a formal system at its party.
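The what-ness versus how-ness contrast can be made concrete with the stock ancestor example. The sketch below is in Python rather than a logic-programming language, so the declarative side is emulated by naive forward chaining over the two ancestor rules; the names and facts are invented:

```python
# Facts: (child, parent) pairs.
parent = {("alice", "bea"), ("bea", "cara"), ("cara", "dana")}

def ancestors_procedural(x):
    """Procedural ("how"): explicitly follow parent links step by step."""
    found = set()
    frontier = {x}
    while frontier:
        nxt = {p for (c, p) in parent if c in frontier} - found
        found |= nxt
        frontier = nxt
    return found

def ancestors_declarative():
    """Declarative ("what"): state the rules, let an engine find the fixed point.

    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
    """
    anc = set(parent)                        # base rule
    while True:
        derived = {(x, z) for (x, y) in parent
                          for (y2, z) in anc if y == y2}   # recursive rule
        if derived <= anc:                   # nothing new: fixed point reached
            return anc
        anc |= derived

print(ancestors_procedural("alice"))         # {'bea', 'cara', 'dana'}
print(("alice", "dana") in ancestors_declarative())  # True
```

The procedural version spells out the flow of control; the declarative version states only the relation and leaves the control to the evaluation strategy, which is exactly the division of labour the paragraph above describes.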

If brain and language (natural language, not computer languages, which are more rule-based and hence strict) could, as complex systems, be shown to have circumvented representationism via modeling techniques, the classical issues inherent in representation would be gotten rid of as a problematic. Functionalism, the prevalent theory in philosophy of mind that parallels the computational model, is the target here. In the words of Putnam,

I may have been the first philosopher to advance the thesis that the computer is the right model for mind. I gave my form of this doctrine the name ‘functionalism’, and under this name, it has become the dominant view – some say the orthodoxy – in contemporary philosophy of mind.

The computer metaphor for mind is clearly visible here: the former has a hardware apparatus operated upon by software programs, while the latter exhibits the same relation between brain (hardware) and mind (software). So far, so good, but there is a hitch. Just as the computer side of the metaphor allows software to be loaded onto different hardware, provided the hardware possesses enough computational capability, the mind-brain relationship should meet the same criteria as well. If one goes by Sterelny's hint that functionalism takes a certain physical state of the machine to realize a certain functional state, then two descriptions result, mutually exclusive of one another: a description on the physical level, and a description on the mental level. The consequences of such descriptions are bizarre to the extent that mind, as software, could also be implemented on any other hardware, provided the conditions for the hardware's capability to run the software are met. One could hardly argue against these consequences, which follow logically enough from the premises, but two obstacles are not to be ignored at the same time: the adequacy of physical systems to implement the functional states, and what defines the relationship between these two mutually exclusive descriptions in the context of the same physical system. Sterelny offers two criteria for adequate physical systems: being designed, and being teleological. Rather than provide any support for what he means by systems being designed, he appeals to evolutionary tendencies, thus vouching for an external designer. The second criterion becomes troubling if no description is given, and this is precisely what Sterelny never offers. His citing a bucket of water as lacking a telos in the sense that the brain has one only makes matters slide into metaphysics.
Even otherwise, functionalism, as an account of the nature of mental states, is metaphysical and ontological in import. This claim is highlighted all the more if one believes, following Brentano, that intentionality is the mark of the mental; then any theory of intentionality can be converted into a theory of the ontological nature of psychological states. Returning to Sterelny's second description: functional states attain meaning if they stand for something else, hence functionalism becomes representational. And as Paul Cilliers puts it cogently, the grammatical structure of language represents semantic content, and the neurological states of the brain represent certain mental states, thus placing on representation the responsibility of establishing a link between the states of the system and conceptual meaning. This is again echoed in Sterelny,

There can be no informational sensitivity without representation. There can be no flexible and adaptive response to the world without representation. To learn about the world, and to use what we learn to act in new ways, we must be able to represent the world, our goals and options. Furthermore we must make appropriate inferences from these representations.

As representation is essentially about meaning, two levels must be related to one another for any meaning to be possible. In formal systems, or the rule-based approach, this relation is provided by creating a nexus between a "symbol" and what it "symbolizes". This fundamental linkage is offered by Fodor in his 1975 book, The Language of Thought. The main thesis of the book is that cognition and cognitive processes are only remotely plausible when expressed computationally, in terms of representational systems. The language, which possesses its own syntactic and semantic structures and is independent of any medium, exerts a causal effect on mental representations. Such a language is termed by him "mentalese"; it is implemented in the neural structure (a case in point for internal representation(2)), and by permutation it allows complex thoughts to be built up from simpler ones. The underlying hypothesis states that such a language applies to thoughts having propositional content, implying that thoughts have syntax. In order for complex thoughts to be generated, simple concepts are attached to basic linguistic tokens that combine according to rules of logic (combinatorial rules). The language thus enriched is not only productive, in that a sentence can (potentially) grow longer through concatenation without losing its meaning, but also structured, in that rules of grammar allow us to make inferences about linguistic elements previously unrelated. Once this task is accomplished, the representational theory of thought steps in to explicate the essence of tokens and how they behave and relate. The representational theory of thought validates mental representations, which stand in uniquely for a subject of representation with a specific content, so as to allow for causally generated complex thought.

Sterelny echoes this: such a model, and action based on it, require an agent to represent the world as it is and as it might be, and to draw appropriate inferences from that representation. Fodor argues that the agent must have a language-like symbol system, for she can represent indefinitely many and indefinitely complex actual and possible states of her environment. She could not have this capacity without an appropriate means of representation, a language of thought. Mentalese is thus too rationalist in its approach, and hence stands in opposition to neural networks, or connectionism. As there can be no cognitive processes without mental representations, the theory has many takers(3). One line of thought that supports this approach is the plausibility of psychological models that treat cognitive processes as representational, thereby inviting computation over those representations.
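Fodor's productivity and structure claims can be given a toy sketch. The tokens and the combination rule below are invented for illustration; they are not Fodor's own formalism, only a minimal picture of combinatorial structure:

```python
# Hypothetical atomic "mentalese" tokens.
concepts = {"JOHN", "MARY", "LOVES", "BELIEVES"}

# One combinatorial rule: applying a binary predicate to two terms
# yields a complex, structured thought (represented as a tuple).
def combine(pred, a, b):
    return (pred, a, b)

# Productivity: a thought can be embedded inside a further thought,
# so the set of expressible thoughts is unbounded.
t1 = combine("LOVES", "JOHN", "MARY")
t2 = combine("BELIEVES", "MARY", t1)  # MARY believes that JOHN loves MARY

# Structure (systematicity): recombining the same tokens under the
# same rule yields a related but distinct thought.
t3 = combine("LOVES", "MARY", "JOHN")

print(t2)  # ('BELIEVES', 'MARY', ('LOVES', 'JOHN', 'MARY'))
```

The point of the sketch is only that a finite stock of tokens plus one rule already generates indefinitely many structured complexes.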

(1) A quantifier is an operator that binds a variable over a domain of discourse; the domain of discourse in turn specifies the range of the relevant variables.
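Over a finite domain of discourse, the two quantifiers can be sketched directly; the domain and predicate here are invented for illustration:

```python
# Domain of discourse: the range of the bound variable x.
domain = range(1, 10)

# A sample predicate over the domain.
def even(x):
    return x % 2 == 0

# ∀x. even(x) — the universal quantifier binds x over the whole domain.
forall_even = all(even(x) for x in domain)   # False: 1 is odd

# ∃x. even(x) — the existential quantifier needs only one witness.
exists_even = any(even(x) for x in domain)   # True: 2 is even

print(forall_even, exists_even)
```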

(2) Internal representation helps us visualize our movements in the world and our embeddedness in it. The notion takes it for granted that organisms inherently possess such an attribute in order to have any cognition whatsoever. A point in favour of Fodor's work is the absence of any rival theory that successfully negotiates or challenges this inherent-ness of internal representation.

(3) Tim Crane is a notable figure here. Crane returns to the question of why we should believe the vehicle of mental representation is a language. While he agrees with Fodor's conclusion, his route to it is very different: his argument rests on reason, our human ability to reach a rational decision from the information given. In free association, ideas lead to other ideas that are connected only for the thinker. Fodor agrees that free association goes on, but holds that it proceeds in a systematic, rational way that can be shown to work within the Language of Thought theory. Viewed computationally, he states, free association appears in a different light than it normally does: it follows a definite manner that can be broken down and explained by the Language of Thought.