The Fallacy of Deviant Analyticity. Thought of the Day 128.0


Carnap’s thesis of pluralism in mathematics is quite radical. We are told that “any postulates and any rules of inference [may] be chosen arbitrarily”; for example, the question of whether the Principle of Selection (that is, the Axiom of Choice (AC)) should be admitted is “purely one of expedience” (Logical Syntax of Language); more generally,

The [logico-mathematical sentences] are, from the point of view of material interpretation, expedients for the purpose of operating with the [descriptive sentences]. Thus, in laying down [a logico-mathematical sentence] as a primitive sentence, only usefulness for this purpose is to be taken into consideration.

So the pluralism is quite broad – it extends to AC and even to Π01-sentences. There are problems in maintaining Π01-pluralism. Since consistency statements are themselves Π01, one cannot, on pain of inconsistency, hold that statements about consistency are not “mere matters of expedience” while holding that Π01-statements generally are. The question of whether a given Π01-sentence holds is not a mere matter of expedience; rather, such questions fall within the province of theoretical reason. One reason is that in adopting a Π01-sentence one could always be struck by a counter-example. Other reasons have to do with the clarity of our conception of the natural numbers and with our experience to date with that structure. On this basis, for no sentence of first-order arithmetic is the question of whether it holds a mere matter of expedience. Certainly this is the default view from which one must be moved.
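
To fix the class of sentences at issue, here is a minimal sketch in standard notation (my gloss, not the text's; coding details suppressed): a Π01-sentence is a universal statement whose instances are each mechanically checkable, and consistency statements have exactly this form.

```latex
% A \Pi^0_1-sentence: a universally quantified, mechanically checkable matrix
\forall n \, \varphi(n), \qquad \varphi \text{ primitive recursive}

% Consistency statements are \Pi^0_1:
\mathrm{Con}(T) \;:\equiv\; \forall n \, \neg\,\mathrm{Prf}_T(n, \ulcorner 0 = 1 \urcorner)

% If a \Pi^0_1-sentence is false, some particular n witnesses its failure --
% in adopting one, a person could always be "struck by a counter-example".
```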

What does Carnap have to say that will sway us from the default view, and lead us to embrace his radical form of pluralism? In approaching this question it is important to bear in mind that there are two general interpretations of Carnap. According to the first interpretation – the substantive – Carnap is really trying to argue for the pluralist conception. According to the second interpretation – the non-substantive – he is merely trying to persuade us of it, that is, to show that of all the options it is most “expedient”.

The most obvious approach to securing pluralism is to appeal to the work on analyticity and content. For if mathematical truths are without content and, moreover, this claim can be maintained with respect to an arbitrary mathematical system, then one could argue that even apparently incompatible systems have null content and hence are really compatible (since there is no contentual-conflict).

Now, in order for this to secure radical pluralism, Carnap would have to first secure his claim that mathematical truths are without content. But he has not done so. Instead, he has merely provided us with a piece of technical machinery that can be used to articulate any one of a number of views concerning mathematical content, and he has adjusted the parameters so as to articulate his particular view. So he has not secured the thesis of radical pluralism. Thus, on the substantive interpretation, Carnap has failed to achieve his end.

This leaves us with the non-substantive interpretation. There are a number of problems that arise for this version of Carnap. To begin with, Carnap’s technical machinery is not even suitable for articulating his thesis of radical pluralism, since (using either the definition of analyticity for Language I or for Language II) there is no metalanguage in which one can say that two apparently incompatible systems S1 and S2 both have null content and hence are really contentually compatible. To fix ideas, consider a paradigm case of an apparent conflict that we should like to dissolve by saying that there is no contentual-conflict: let S1 = PA + φ and S2 = PA + ¬φ, where φ is any arithmetical sentence, and let the metatheory be MT = ZFC. The trouble is that on the approach of Language I, although in MT we can prove that each system is ω-complete (which is a start, since we wish to say that each system has null content), we can also prove that one has null content while the other has total content (that is, in ω-logic, every sentence of arithmetic is a consequence). So we cannot, within MT, articulate the idea that there is no contentual-conflict.

The approach of Language II involves a complementary problem. To see this, note that while a strong logic like ω-logic is something that one can apply to a formal system, a truth definition is something that applies to a language (in our modern sense). Thus, on this approach, in MT the definition of analyticity given for S1 and S2 is the same (since the two systems are couched in the same language). So, although in MT we can say that S1 and S2 do not have a contentual-conflict, this is only because we have given a deviant definition of analyticity, one that is blind to the fact that in a very straightforward sense φ is analytic in S1 while ¬φ is analytic in S2.
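
A schematic rendering of the Language I problem (my notation, not Carnap's, and relying on the standard fact that ω-logic, with the ω-rule, is complete for first-order arithmetic):

```latex
% Let S_1 = PA + \varphi and S_2 = PA + \neg\varphi, with \varphi arithmetical.
% In \omega-logic exactly one of \varphi, \neg\varphi is a consequence of PA.
% Suppose \varphi is the true one. Then:
S_1 \vdash_\omega \psi \iff \mathrm{PA} \vdash_\omega \psi
  \quad \text{(null content: } S_1 \text{ adds nothing new),}
S_2 \vdash_\omega \psi \ \text{ for every arithmetical } \psi
  \quad \text{(total content: } S_2 \text{ is } \omega\text{-inconsistent).}
```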

Now, although Carnap’s machinery is not adequate to articulate the thesis of radical pluralism in a given metatheory, under certain circumstances he can attempt to articulate the thesis by changing the metatheory. For example, let S1 = PA + Con(ZF + AD) and S2 = PA + ¬Con(ZF + AD), and suppose we wish to articulate both the idea that the two systems have null content and the idea that Con(ZF + AD) is analytic in S1 while ¬Con(ZF + AD) is analytic in S2. No single metatheory (on either of Carnap’s approaches) can do this. But it turns out that, because of this kind of assessment sensitivity, there are two metatheories MT1 and MT2 such that in MT1 we can say both that S1 has null content and that Con(ZF + AD) is analytic in S1, while in MT2 we can say both that S2 has null content and that ¬Con(ZF + AD) is analytic in S2. But, of course, this is simply because (any such metatheory) MT1 proves Con(ZF + AD) and (any such metatheory) MT2 proves ¬Con(ZF + AD). So we have done no more than reflect the difference between the systems in the metatheories. Thus, although Carnap does not have a way of articulating his radical pluralism (in a given metalanguage), he certainly has a way of manifesting it (by making corresponding changes in his metatheories).

As a final retreat Carnap might say that he is not trying to persuade us of a thesis that (concerning a collection of systems) can be articulated in a given framework but rather is trying to persuade us to adopt a thorough radical pluralism as a “way of life”. He has certainly shown us how we can make the requisite adjustments in our metatheory so as to consistently manifest radical pluralism. But does this amount to more than an algorithm for begging the question? Has Carnap shown us that there is no question to beg? He has not said anything persuasive in favour of embracing a thorough radical pluralism as the “most expedient” of the options. The trouble with Carnap’s entire approach is that the question of pluralism has been detached from actual developments in mathematics.


Morphed Ideologies. Thought of the Day 105.0

 


The sense of living in a post-fascist world is not shared by Marxists, of course, who ever since the first appearance of Mussolini’s virulently anti-communist squadrismo have instinctively assumed fascism to be endemic to capitalism. No matter how much it may appear to be an autonomous force, it is for them inextricably bound up with the defensive reaction of bourgeois elites or big business to the attempts by revolutionary socialists to bring about the fundamental changes needed to assure social justice through a radical redistribution of wealth and power. Depending on which school or current of Marxism is carrying out the analysis, the precise sector or agency within capitalism that is the protagonist or backer of fascism’s elaborate pseudo-revolutionary pre-emptive strike, its degree of independence from the bourgeois elements who benefit from it, and the amount of genuine support it can win within the working class vary appreciably. But for all concerned, fascism is a copious taxonomic pot into which a wide range of phenomena can be thrown without too much intellectual agonizing over definitional or taxonomic niceties. For them, Brecht’s warning at the end of Arturo Ui has lost none of its topicality: “The womb that produced him is still fertile”.

The fact that two such conflicting perspectives can exist on the same subject can be explained as a consequence of the particular nature of all generic concepts within the human sciences. To go further into this phenomenon means entering a field of studies where philosophy of the social sciences has again proliferated conflicting positions, this time concerning the complex and largely subliminal processes involved in conceptualization and modeling in the pursuit of definite, if not definitive, knowledge. According to Max Weber, terms such as capitalism and socialism are ideal types, heuristic devices created by an act of idealizing abstraction. This cognitive process, which in good social scientific practice is carried out as consciously and scrupulously as possible, extracts a small group of salient features perceived as common to a particular generic phenomenon and assembles them into a definitional minimum which is at bottom a utopia.

The result of idealizing abstraction is a conceptually pure, artificially tidy model which does not correspond exactly to any concrete manifestation of the generic phenomenon being investigated, since in reality these are always inextricably mixed up with features, attributes, and surface details which are not considered definitional or unique to that example of it. The dominant paradigm of the social sciences at any one time, the hegemonic political values and academic traditions prevailing in a particular geography, and the political and moral values of the individual researcher all contribute to determining what common features are regarded as salient or definitional. There is no objective reality or objective definition of any aspect of it, and no simple correspondence between a word and what it means, the signifier and the signified, since it is axiomatic to Weber’s world-view that the human mind attaches significance to an essentially absurd universe and thus literally creates value and meaning, even when attempting to understand the world objectively. The basic question to be asked about any definition of fascism, therefore, is not whether it is true, but whether it is heuristically useful: what can be seen or understood about a concrete human phenomenon when it is applied that could not otherwise be seen, and what is obscured by it.

In his theory of ideological morphology, the British political scientist Michael Freeden has elaborated a nominalist and hence anti-essentialist approach to the definition of generic ideological terms that is deeply compatible with Weberian heuristics. He distinguishes between the ineliminable attributes or properties with which conventional usage endows them and those adjacent and peripheral to them which vary according to specific national, cultural or historical context. To cite the example he gives, liberalism can be argued to contain axiomatically, and hence at its definitional core, the idea of individual, rationally defensible liberty. However, the precise relationship of such liberty to laissez-faire capitalism, nationalism, the sanctuary, or the right of the state to override individual human rights in the defense of collective liberty or the welfare of the majority is infinitely negotiable and contestable. So are the ideal political institutions and policies that a state should adopt in order to guarantee liberty, which explains why democratic politics can never be fully consensual across a range of issues without there being something seriously wrong. It is the fact that each ideology is a cluster of concepts combining ineliminable with eliminable ones that accounts for the way ideologies are able to evolve over time while still remaining recognizably the same, and for why so many variants of the same ideology can arise in different societies and historical contexts. It also explains why every concrete permutation of an ideology is simultaneously unique and a manifestation of the generic “ism”, which may assume radical morphological transformations in its outward appearance without losing its definitional ideological core.

 

Why Do Sovereign Borrowers Seek to Avoid Default? A Case of Self-Compliance With Contractual Terms.


Every form of debt is typically a contractual agreement between a lender and a borrower. The former initially pays a money amount to the latter; the latter promises regular interest payments (ct) for a certain time period (n years) and then the return of the whole nominal value of the contract (C). This practically means that the owner of the contract (the creditor) acquires a right to a future stream of payments, and the contract acquires a present value for the same reason. In the general case, the present value of the contract is given by the following formula (r is the discounting rate):

PV = ∑_{t=1}^{n} c_t / (1 + r)^t + C / (1 + r)^n

Put simply, the equation gives the present value of the liability by discounting all future anticipated payments. Default is by definition any ex post change in the stream of current and future payments on the debt contract. This change makes the contract less valuable to the creditor, reducing its present value through the non-execution of the agreed payments.
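
As a minimal numerical sketch of these two sentences (the figures are hypothetical, not from the text), the formula can be computed directly, and a restructuring of the payment stream shows up as a drop in PV:

```python
def present_value(coupons, nominal, r):
    """PV = sum of discounted coupon payments c_t plus the discounted
    repayment of the nominal value C after n years."""
    n = len(coupons)
    pv_coupons = sum(c / (1 + r) ** t for t, c in enumerate(coupons, start=1))
    return pv_coupons + nominal / (1 + r) ** n

# A 10-year contract paying 5 per year on a nominal of 100, discounted at 5%:
# the contract is worth par (PV = 100).
full = present_value([5.0] * 10, 100.0, 0.05)

# A default in the text's sense -- an ex post change in the payment stream,
# here coupons halved from year 4 onward -- lowers the PV to the creditor.
restructured = present_value([5.0] * 3 + [2.5] * 7, 100.0, 0.05)

print(full, restructured)  # approximately 100.0 vs. 87.5
```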

In the case that the borrower is a private firm (or a household), the law and related third-party enforcers (including but not limited to the courts) guarantee the execution of the contractual terms. If the borrower in the international financial markets is a sovereign state, things are quite different, as third-party enforcement is typically futile. Sovereign borrowers may voluntarily choose to self-comply with the contractual terms; nevertheless, if they do not, there is no typical third-party enforcement on the international level. Even in the case that the debt contracts are subject to foreign law, the enforcement powers of the foreign courts are limited. The case of Argentina is indicative enough. As is now well known and widely discussed, the court judgment of Thomas P. Griesa determined that the Argentine government should pay the holdouts pari passu despite the fact that the great majority of creditors had agreed to a restructuring. The decision had its results and triggered a new mini-default, but it could by no means enforce a policy change on Argentina. In the relevant literature, this is usually called the fundamental asymmetry of the sovereign debt market. In the misleading mainstream analytical context (where states, firms, and households are treated as coherent agents acting on a cost/benefit basis and pursuing the optimum position), the key question is the following: why do sovereign borrowers comply with the contractual terms much more often than expected?

Sovereign borrowers avoid default and self-comply with the contractual terms because the strategic benefits from a default do not exceed the anticipated losses. There is truth in this argument. For instance, a sovereign default would heavily affect the domestic financial system, which is usually not only exposed to domestic sovereign debt but would also face serious impediments in its organic connection to the international markets (in the case of a developed capitalist economy, this implies extra financial costs for the private sector and thus serious macroeconomic consequences for employment and growth). One should also take into consideration the economic and political consequences of a default, since negotiations with the creditors take considerable time. The list of cost/benefit considerations can be quite long, but this train of thought misses the crucial factor: the very nature of contemporary capitalist power.

Cost-benefit analysis takes a concrete form only within the contemporary context of capitalist power. International financial markets do not curtail the range of state sovereignty – they reshape the contour of capitalist power. Contemporary capitalism (the term “neoliberalism” is too restrictive to capture all its aspects) amounts to a recomposition or reshaping of the relations between capitalist states (as uneven links in the context of the global imperialist chain), individual capitals (which are constituted as such only in relation to a particular national social capital), and “liberalized” financial markets. This recomposition presupposes a proper reforming of all components involved, in a way that secures the reproduction of the dominant (neoliberal) capitalist paradigm. From this point of view, contemporary capitalism comprises a historically specific form of organization of capitalist power on a society-wide scale, wherein governmentality through financial markets acquires a crucial role. The new condition of governmentality (reproduction of capitalist rule) thus takes the form of a “state-and-market” type of connection. Regardless of the results of cost-benefit calculus, the organic inclusion of the economy in the international markets is a critical premise for the organization of capitalist rule. On the other hand, it is also clear that a recomposition of the relation to international markets (national self-sufficiency) can easily incite the most regressive and authoritarian forms of state governance, if it is not accompanied by a radical shift in the class relations of power.

Accelerated Capital as an Anathema to the Principles of Communicative Action. A Note Quote on the Reciprocity of Capital and Ethicality of Financial Economics


Markowitz portfolio theory explicitly observes that portfolio managers are not (expected) utility maximisers, since they diversify, and offers the hypothesis that a desire for reward is tempered by a fear of uncertainty. The model concludes that all investors should hold the same portfolio; their individual risk-reward objectives are satisfied by the weighting of this ‘index portfolio’ against riskless cash in the bank, i.e. by choosing a point on the capital market line. The slope of the Capital Market Line is the market price of risk, which is an important parameter in arbitrage arguments.
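
A minimal sketch of the capital market line (the figures are hypothetical, chosen only to illustrate the construction):

```python
# Capital market line: mixing riskless cash with the index portfolio.
r_f = 0.02                    # riskless rate (cash in the bank)
mu_m, sigma_m = 0.08, 0.15    # index-portfolio expected return and volatility

# The slope of the CML is the market price of risk: excess return per unit risk.
market_price_of_risk = (mu_m - r_f) / sigma_m   # 0.4 here

def cml_expected_return(sigma_p):
    """Expected return of a cash/index mix with volatility sigma_p."""
    return r_f + market_price_of_risk * sigma_p

# A half-cash, half-index portfolio sits halfway along the line.
print(cml_expected_return(0.5 * sigma_m))   # 0.05
```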

Merton had initially attempted to provide an alternative to Markowitz based on utility maximisation employing stochastic calculus. He was only able to resolve the problem by employing the hedging arguments of Black and Scholes, and in doing so built a model that was based on the absence of arbitrage, free of turpe-lucrum. The prescriptive statement “it should not be possible to make sure profits” is explicit in the Efficient Markets Hypothesis and in the employment of an Arrow security in the context of the Law of One Price. Based on these observations, we conjecture that the whole paradigm for financial economics is built on the principle of balanced reciprocity. In order to explore this conjecture we shall examine the relationship between commerce and themes in Pragmatic philosophy. Specifically, we highlight Robert Brandom’s (Making It Explicit: Reasoning, Representing, and Discursive Commitment) position that there is a pragmatist conception of norms – a notion of primitive correctnesses of performance implicit in practice that precede and are presupposed by their explicit formulation in rules and principles.

The ‘primitive correctnesses’ of commercial practices were recognised by Aristotle when he investigated the nature of Justice in the context of commerce, and then by Olivi when he looked favourably on merchants. This is exhibited in the doux-commerce thesis; compare Fourcade and Healy’s contemporary description of the thesis – “Commerce teaches ethics mainly through its communicative dimension, that is, by promoting conversations among equals and exchange between strangers” – with Putnam’s description of Habermas’ communicative action as based on “the norm of sincerity, the norm of truth-telling, and the norm of asserting only what is rationally warranted … [and] is contrasted with manipulation” (Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays).

There are practices (that should be) implicit in commerce that make it an exemplar of communicative action. A further expression of markets as centres of communication is manifested in the Asian description of a market, which brings to mind Donald Davidson’s (Subjective, Intersubjective, Objective) argument that knowledge is not the product of bipartite conversations but of a tripartite relationship between two speakers and their shared environment. Replacing the negotiation between market agents with an algorithm that delivers a theoretical price replaces ‘knowledge’, generated through communication, with dogma. The problem with the performativity that Donald MacKenzie (An Engine, Not a Camera: How Financial Models Shape Markets) is concerned with is one of monism. In employing pricing algorithms, the markets cannot perform to something that comes close to ‘true belief’, which can only be identified through communication between sapient humans. This is an almost trivial observation to (successful) market participants, but difficult to appreciate by spectators who seek to attain ‘objective’ knowledge of markets from a distance. The relevance to financial crises is that ‘true belief’ is about establishing coherence through myriad triangulations centred on an asset, rather than relying on a theoretical model.

Shifting gears now: unless the martingale measure is a by-product of a hedging approach, the price given by such martingale measures is not related to the cost of a hedging strategy, and therefore the meaning of such ‘prices’ is not clear. If the hedging argument cannot be employed, as in the markets studied by Cont and Tankov (Financial Modelling with Jump Processes), there is no conceptual framework supporting the prices obtained from the Fundamental Theorem of Asset Pricing. This lack of meaning can be interpreted as a consequence of the strict fact/value dichotomy in contemporary mathematics that came with the eclipse of Poincaré’s Intuitionism by Hilbert’s Formalism and Bourbaki’s Rationalism. The practical problem of supporting the social norms of market exchange has been replaced by a theoretical problem of developing formal models of markets. These models then legitimate the actions of agents in the market without having to make reference to explicitly normative values.
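
A one-period binomial sketch of the relation the first sentence relies on (hypothetical numbers; this is the textbook replication argument, not anything specific to Cont and Tankov): when the martingale measure is the by-product of hedging, the cost of the replicating portfolio and the discounted expectation under that measure coincide.

```python
# One-period binomial market: stock at 100 moves to 120 or 90; zero interest.
s0, s_up, s_down, r = 100.0, 120.0, 90.0, 0.0

def call_payoff(s, strike=100.0):
    return max(s - strike, 0.0)

# Hedging argument: hold `delta` units of stock and `b` in cash so the
# portfolio replicates the option payoff in both states.
delta = (call_payoff(s_up) - call_payoff(s_down)) / (s_up - s_down)
b = (call_payoff(s_down) - delta * s_down) / (1 + r)
hedge_cost = delta * s0 + b

# Martingale measure: the unique q under which the discounted stock price
# is a martingale, i.e. s0 = (q * s_up + (1 - q) * s_down) / (1 + r).
q = ((1 + r) * s0 - s_down) / (s_up - s_down)
martingale_price = (q * call_payoff(s_up) + (1 - q) * call_payoff(s_down)) / (1 + r)

print(hedge_cost, martingale_price)  # both 6.666...: the price inherits its
                                     # meaning from the cost of the hedge
```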

The Efficient Market Hypothesis is based on the axiom that the market price is determined by the balance between supply and demand, and so an increase in trading facilitates the convergence to equilibrium. If this axiom is replaced by the axiom of reciprocity, the justification for speculative activity in support of efficient markets disappears. In fact, the axiom of reciprocity would de-legitimise ‘true’ arbitrage opportunities as being unfair. This would not necessarily make the activities of actual market arbitrageurs illicit, since there are rarely strategies that are without the risk of a loss; however, it would place more emphasis on the risks of speculation and inhibit the hubris that has been associated with the prelude to the recent Crisis. These points raise the question of the legitimacy of speculation in the markets. In an attempt to understand this issue, Gabrielle and Reuven Brenner identify three types of market participant. ‘Investors’ are preoccupied with future scarcity and so defer income. Because uncertainty exposes the investor to the risk of loss, investors wish to minimise uncertainty at the cost of potential profits; this is the basis of classical investment theory. ‘Gamblers’ will bet on an outcome taking odds that have been agreed on by society, such as with a sporting bet or in a casino; this relates to de Moivre’s and Montmort’s ‘taming of chance’. ‘Speculators’ bet on a mis-calculation of the odds quoted by society, and the reason why speculators are regarded as socially questionable is that they have opinions that are explicitly at odds with the consensus: they are practitioners who rebel against a theoretical ‘Truth’. This is captured in Arjun Appadurai’s argument that the leading agents in modern finance “believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control … [they] are not those who wish to ‘tame chance’ but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]”.

In the context of Pragmatism, financial speculators embody pluralism, a concept essential to Pragmatic thinking and an antidote to the problem of radical uncertainty. Appadurai was motivated to study finance by Marcel Mauss’ essay Le Don (The Gift), which explores the moral force behind reciprocity in primitive and archaic societies, and he goes on to say that the contemporary financial speculator is “betting on the obligation of return”, and that this is the fundamental axiom of contemporary finance. David Graeber (Debt: The First 5,000 Years) also recognises the fundamental position reciprocity has in finance, but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores uncertainty in his analysis, which ends with the conclusion that “we don’t ‘all’ have to pay our debts”. In advocating that reciprocity need not be honoured, Graeber is not just challenging contemporary capitalism but also the foundations of the civitas, based on equality and reciprocity. The origins of Graeber’s argument are in the first half of the nineteenth century. In 1836 John Stuart Mill defined political economy as being concerned with “[man] solely as a being who desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end”.

In Principles of Political Economy with Some of Their Applications to Social Philosophy, Mill defended Thomas Malthus’ An Essay on the Principle of Population, which focused on scarcity. Mill was writing at a time when Europe was struck by the cholera pandemic of 1829–1851 and the famines of 1845–1851, and while Lord Tennyson was describing nature as “red in tooth and claw”. At this time, society’s fear of uncertainty seems to have been replaced by a fear of scarcity, and these standards of objectivity dominated economic thought through the twentieth century. Almost a hundred years after Mill, Lionel Robbins defined economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses”. Dichotomies emerge in the aftermath of the Cartesian revolution that aimed to remove doubt from philosophy. Theory and practice, subject and object, facts and values, means and ends are all separated. In this environment, ex cathedra norms, in particular utility (profit) maximisation, encroach on commercial practice.

In order to set boundaries on commercial behaviour motivated by profit maximisation, particularly when market uncertainty returned after the Nixon shock of 1971, society imposes regulations on practice. As a consequence, two competing ethics, a functional Consequentialist ethic guiding market practices and a regulatory Deontological ethic attempting to stabilise the system, vie for supremacy. It is in this debilitating competition between two essentially theoretical ethical frameworks that we offer an explanation for the Financial Crisis of 2007–2009: profit maximisation, not speculation, is destabilising in the presence of radical uncertainty, and regulation cannot keep up with motivated profit maximisers who can justify their actions through abstract mathematical models that bear little resemblance to actual markets. An implication of reorienting financial economics to focus on the markets as centres of ‘communicative action’ is that markets could become self-regulating, in the same way that the legal or medical spheres are self-regulated through professions. This is not a ‘libertarian’ argument based on freeing the Consequentialist ethic from a Deontological brake. Rather, it argues that being a market participant entails restricting norms on the agent, such as sincerity and truth-telling, that support the creation of knowledge of asset prices within a broader objective of social cohesion. This immediately calls into question the legitimacy of algorithmic/high-frequency trading, which seems anathema to the principles of communicative action.

Evental Sites. Thought of the Day 48.0


According to Badiou, the undecidable truth is located beyond the boundaries of authoritative claims of knowledge. At the same time, undecidability indicates that truth has a post-evental character: “the heart of the truth is that the event in which it originates is undecidable” (Being and Event). Badiou explains that, in terms of forcing, undecidability means that the conditions belonging to the generic set force sentences that are not consequences of the axioms of set theory. If in the domains of specific languages (of politics, science, art or love) the effects of the event are not visible, the content of Being and Event is an empty exercise in abstraction.

Badiou distances himself from a narrow interpretation of the function played by axioms. He rather regards them as collections of basic convictions that organize situations, the conceptual or ideological framework of a historical situation. An event, named by an intervention, is at the theoretical site indexed by a proposition A, a new apparatus, demonstrative or axiomatic, such that A is henceforth clearly admissible as a proposition of the situation. Accordingly, the undecidability of a truth would consist in transcending the theoretical framework of a historical situation or even breaking with it, in the sense that the faithful subject accepts beliefs that are impossible to reconcile with the old mode of thinking.

However, if one consequently identifies the effect of an event with the structure of the generic extension, one needs to conclude that such historical situations are by no means the effects of events. This is because a crucial property of every generic extension is that the axioms of set theory remain valid within it. It is the very core of the method of forcing. Without this assumption, Cohen’s original construction would have no raison d’être because it would not establish the undecidability of the cardinality of infinite power sets. Every generic extension satisfies the axioms of set theory. In reference to historical situations, it must be conceded that a procedure of fidelity may modify a situation by forcing undecidable sentences; nonetheless, it never overrules the situation’s organizing principles.

Another notion which cannot be located within the generic theory of truth without extreme consequences is evental site. An evental site – an element “on the edge of the void” – opens up a situation to the possibility of an event. Ontologically, it is defined as “a multiple such that none of its elements are presented in the situation”. In other words, it is a set such that neither itself nor any of its subsets are elements of the state of the situation. As the double meaning of this word indicates, the state in the context of historical situations takes the shape of the State. A paradigmatic example of a historical evental site is the proletariat – entirely dispossessed, and absent from the political stage.

The existence of an evental site in a situation is a necessary requirement for an event to occur. Badiou is very strict about this point: “we shall posit once and for all that there are no natural events, nor are there neutral events” – and it should be clarified that situations are divided into natural ones, neutral ones, and those that contain an evental site. The very matheme of the event – its formal definition is of no importance here – is based on the evental site. The event raises the evental site to the surface, making it represented on the level of the state of the situation. Moreover, a novelty that has the structure of the generic set but does not emerge from the void of an evental site leads to a simulacrum of truth, which is one of the figures of Evil.

However, if one takes the mathematical framework of Badiou’s concept of event seriously, it turns out that there is no place for the evental site there – it is forbidden by the assumption of transitivity of the ground model M. This ingredient plays a fundamental role in forcing, and its removal would ruin the whole construction of the generic extension. As is known, transitivity means that if a set belongs to M, all its elements also belong to M. However, an evental site is a set none of whose elements belongs to M. Therefore, contrary to Badiou’s intentions, there cannot exist evental sites in the ground model. Using Badiou’s terminology, one can say that forcing may only be the theory of the simulacrum of truth.
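
The point can be put in a few lines of standard notation (a sketch; nothing here is specific to Badiou beyond his ontological definition of the site):

```latex
% Transitivity of the ground model M:
\forall x \, (x \in M \rightarrow x \subseteq M)

% An evental site X would have to satisfy
X \in M \ \wedge \ \forall y \, (y \in X \rightarrow y \notin M),

% but transitivity applied to X \in M yields X \subseteq M, i.e.
% \forall y \, (y \in X \rightarrow y \in M) -- a contradiction
% as soon as X is non-empty.
```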

Representation as a Meaningful Philosophical Quandary


The deliberation on representation indeed becomes a meaningful quandary if most of the shortcomings are to be overcome without actually accepting the way they permeate the scientific and philosophical discourse. The problem is more ideological than one could have imagined, since it is only within the space of this quandary that one can assume success in overthrowing the quandary. Unless the classical theory of representation that guides the expert systems is accepted as existing, there is no way to dislodge the relationship of symbols and meanings that builds up such systems, lest the predicament of falling prey to the Scylla of a metaphysically strong notion of meaningful representation as natural, or to the Charybdis of an external designer, should gobble us up. If one somehow escapes these maliciously aporetic entities, representation as a metaphysical monster stands to block our progress. Is it really viable, then, to think of machines that can survive this representational foe, a foe that gets no aid from the clusters of internal mechanisms? The answer is very much in the affirmative, provided that a consideration of any such non-representational system as continuous and homogeneous is done away with, and in its place are put functional units that are no longer representational ones, since they derive their efficiency and legitimacy through autopoiesis. What is required is to consider the bearing of this representational critique of distributed systems on the objectivity of science, since objectivity as a property of science has an intrinsic value of independence from the subject who studies the discipline.

Kuhn had some philosophical problems with this precise way of treating science as an objective discipline. For Kuhn, scientists operate under or within paradigms, which entail hierarchical structures. Such hierarchical structures ensure the position of scientists to voice their authority on matters of dispute, and when there is a crisis within, or for, the paradigm, scientists, to begin with, do not outright reject the paradigm, but try their best at a resolution of it. In cases where resolution becomes a difficult task, an outright rejection of the paradigm follows suit, thus effecting what is commonly called a paradigm shift. If such is the case, the objective tag for science obviously takes a hit, and Kuhn argues in favor of a shift in the social order that science undergoes, signifying the subjective element. Importantly, these paradigm shifts occur to the benefit of scientific progress and, in almost all cases, occur non-linearly. Such a view no doubt slides Kuhn into a position of relativism, and it has been the main point of attack on paradigm shifts. At the forefront of these attacks has been Michael Polanyi and his supporters, whose work on the epistemology of science had much the same ingredients but was eventually deprived of fame; Kuhn was even charged with plagiarism. The commonality of their arguments can be measured by a dissenting voice against objectivity in science. Polanyi thought of it as a false ideal, since for him the epistemological claims that defined science were based more on personal judgments, and therefore susceptible to fallibilism. The objective nature of science that obligates scientists to see things as they really are is thus dislodged by the above principle of subjectivity.
But if science were to be seen as objective, then human subjectivity would indeed create a rupture as far as the purified version of scientific objectivity is sought. The subject or the observer undergoes what is termed the “observer effect”, which refers to the changes that the act of observation makes upon the phenomenon being observed. This effect is as good as ubiquitous across the domains of science and technology, ranging from the Heisenbug(1) in computing, through particle physics and thermodynamics, to quantum mechanics. The quantum-mechanical observer effect is quite perplexing, and is a result of a phenomenon called “superposition”, which signifies existence in all possible states at once. Superposition owes much of its fame to Schrödinger’s cat experiment, which entails a cat that is neither dead nor alive until observed. This has led physicists to take into account the acts of “observation” and “measurement” in order to comprehend the paradox in question, and thereby to resolve it. But there is still a minority of quantum physicists out there who vouch for the supremacy of an observer, despite the quantum entanglement effects that go on to explain the impacts of “observation” and “measurement”.(2) Such a standpoint is indeed reflected in Derrida (9-10) as well, when he says (I quote him in full),

The modern dominance of the principle of reason had to go hand in hand with the interpretation of the essence of beings as objects, an object present as representation (Vorstellung), an object placed and positioned before a subject. This latter, a man who says ‘I’, an ego certain of itself, thus ensures his own technical mastery over the totality of what is. The ‘re-’ of repraesentation also expresses the movement that accounts for – ‘renders reason to’ – a thing whose presence is encountered by rendering it present, by bringing it to the subject of representation, to the knowing self.

If Derridean deconstruction is to work on science and theory, the only way out is to relinquish the boundaries that define or divide the two disciplines. Moreover, if there is any looseness encountered in objectivity, the ramifications are felt straight at the level of scientific activities. Even theory does not remain immune to these consequences. Importantly, as scientific objectivity starts to wane, a corresponding philosophical luxury of avoiding the contingent wanes with it. Such a loss of representation congruent with a certain theory of meaning we live by has serious ethical-political implications.

(1) Heisenbug is a pun on Heisenberg’s uncertainty principle and denotes a bug in computing that is characterized by the disappearance of the bug itself when an attempt is made to study it. One common example is a bug that occurs in a program that was compiled with an optimizing compiler, but not in the same program when compiled without optimization (e.g., for generating a debug-mode version). Another example is a bug caused by a race condition. A heisenbug may also appear in a system that does not conform to the command-query separation design guideline, since a routine called more than once could return different values each time, generating hard-to-reproduce bugs in a race condition scenario. One common reason for heisenbug-like behaviour is that executing a program in debug mode often cleans memory before the program starts and forces variables onto stack locations, instead of keeping them in registers. These differences in execution can alter the effect of bugs involving out-of-bounds member access, incorrect assumptions about the initial contents of memory, or floating-point comparisons (for instance, when a floating-point variable in a 32-bit stack location is compared to one in an 80-bit register). Another reason is that debuggers commonly provide watches or other user interfaces that cause additional code (such as property accessors) to be executed, which can, in turn, change the state of the program. Yet another reason is a fandango on core, the effect of a pointer running out of bounds. In C++, many heisenbugs are caused by uninitialized variables. Another similar pun-intended bug encountered in computing is the Schrödinbug: a bug that manifests only after someone reading the source code or using the program in an unusual way notices that it never should have worked in the first place, at which point the program promptly stops working for everybody until fixed. The Jargon File adds: “Though… this sounds impossible, it happens; some programs have harbored latent schrödinbugs for years.”
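
As a minimal sketch of the race-condition variety described above (illustrative only; the names and figures are my own), the following intermittently loses updates, and instrumenting it to watch the failure tends to change the thread interleaving enough to make the failure vanish:

```python
import threading

counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        counter += 1  # read-modify-write, not atomic: updates can be lost

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000; under a race the total intermittently falls short.
# Inserting prints or running under a debugger alters the timing, and the
# bug tends to disappear -- the heisenbug signature.
print(counter)
```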

(2) There is a related issue in quantum mechanics relating to whether systems have pre-existing – prior to measurement, that is – properties corresponding to all measurements that could possibly be made on them. The assumption that they do is often referred to as “realism” in the literature, although it has been argued that the word “realism” is being used in a more restricted sense than philosophical realism. A recent experiment in the realm of quantum physics has been quoted as meaning that we have to “say goodbye” to realism, although the author of the paper states only that “we would [..] have to give up certain intuitive features of realism”. These experiments demonstrate a puzzling relationship between the act of measurement and the system being measured, although it is clear from experiment that an “observer” consisting of a single electron is sufficient – the observer need not be a conscious observer. Also, note that Bell’s Theorem suggests strongly that the idea that the state of a system exists independently of its observer may be false. Note that the special role given to observation (the claim that it affects the system being observed, regardless of the specific method used for observation) is a defining feature of the Copenhagen Interpretation of quantum mechanics. Other interpretations resolve the apparent paradoxes from experimental results in other ways. For instance, the Many-Worlds Interpretation posits the existence of multiple universes in which an observed system displays all possible states to all possible observers. In this model, observation of a system does not change its behavior – it simply answers the question of which universe(s) the observer(s) is (are) located in: in some universes the observer would observe one result from one state of the system, and in others the observer would observe a different result from a different state of the system.