Austrian School of Economics: The Praxeological Synthetic. Thought of the Day 135.0


Within Austrian economics, the a priori stance has dominated a tradition running from Carl Menger to Murray Rothbard. The idea is that the basic structures of economy are entrenched in the more basic structures of human action as such. Nowhere is this more evident than in the work of Ludwig von Mises – his so-called ‘praxeology’, resting on the fundamental axiom that individual human beings act, that is, that they engage in conscious actions toward chosen goals, is built from the idea that all basic laws of economy can be derived a priori from this one premiss: the concept of human action. Of course, this concept is no simple concept, containing within itself purpose, product, time, scarcity of resources, etc. – so it would be fairer to say that economics lies as the implication of the basic schema of human action as such.

Even if the Austrian economists’ conception of the a priori is decidedly objectivist and anti-subjectivist, it is important to remark on their insistence on subjectivity within their ontological domain. The Austrian tradition is famous precisely for its emphasis on the role of subjectivity in economy. From Carl Menger onwards, the Austrians protest against the mainstream economic assumption that the agent in the market is fully rational, knows his own preferences in detail, has constant preferences over time, has access to all prices for a given commodity at a given moment, etc. Thus, von Mises’ famous criticism of socialist planned economy is built on this idea: the system of ever-changing prices in the market constitutes a dispersed knowledge about the conditions of resource allocation which it is a priori impossible for any single agent – let alone any central planner’s office – to possess. Thus, their conception of the objective a priori laws of the economic domain perhaps surprisingly had the implication that they warned against a too objectivist conception of economy, one not taking into account the limits of economic rationality stemming from the general limitations of the capacities of real subjects. Their ensuing liberalism is thus built on a priori conclusions about the relative unpredictability of economics, founded on the role played by subjective intentionality. For the same reason, Hayek ended up with a distinction between simple and complex processes, respectively, cutting across all empirical disciplines, where only the former permit precise, predictive, quantitative calculi based on mathematical modeling, while the latter permit only the recognition of patterns (which may also be mathematically modeled, to be sure, but without quantitative predictability). It is of paramount importance, though, to distinguish this emphasis on the ineradicable role of subjectivity in certain regional domains from Kantian-like ideas about the foundational role of subjectivity in the construction of knowledge as such. The Austrians are as much subjectivists in the former respect as they are objectivists in the latter. In the history of economics, the Austrians occupy a middle position, against historicism on the one hand and against positivism on the other. Against the former, they insist that the a priori structures of economy transcend history, which does not possess the power to form institutions at random but only as constrained by those a priori structures. And against the latter, they insist that the mere accumulation of empirical data subject to induction will never in itself give rise to the formation of theoretical insights. Structures of intelligible concepts are in all cases necessary for any understanding of empirical regularities – in so far, the Austrian a priori approach amounts to a non-skepticist version of the doctrine of the ‘theory-ladenness’ of observations.

A late descendant of the Austrian tradition after its emigration to the Anglo-Saxon world (von Mises, Hayek, and Schumpeter were among such émigrés) was the anarcho-liberal economist Murray Rothbard, and it is the inspiration from him which allows Barry Smith to articulate the principles underlying the Austrians as ‘fallibilistic apriorism’. In a brief paper, Rothbard characterizes what he calls ‘Extreme Apriorism’ as follows:

there are two basic differences between the positivists’ model science of physics on the one hand, and the sciences dealing with human action on the other: the former permits experimental verification of the consequences of hypotheses, which the latter do not (or only to a limited degree, we may add); the former admits of no possibility of testing the premisses of hypotheses (like: what is gravity?), while the latter permit a rational investigation of the premisses of hypotheses (like: what is human action?). This state of affairs makes it possible for economics to derive its basic laws with absolute – a priori – certainty: in addition to the fundamental axiom – the existence of human action – only two empirical postulates are needed: ‘(1) the most fundamental variety of resources, both natural and human. From this follows directly the division of labor, the market, etc.; (2) less important, that leisure is a consumer good’. On this basis, it may, e.g., be inferred ‘that every firm aims always at maximizing its psychic profit’.

Rothbard adduces this example in order to counter traditional economists who would claim that the following proposition could be added as a corollary: ‘that every firm aims always at maximizing its money profit’. This cannot be inferred and is, according to Rothbard, an economic prejudice – the manager may, e.g., prefer for nepotistic reasons to employ his stupid brother even if that decreases the firm’s possibilities of financial profit. This is an example of how the Austrians refute the basic premiss of absolute rationality in terms of maximal profit-seeking. Given this basis, other immediate implications are:

the means-ends relationship, the time-structure of production, time-preference, the law of diminishing marginal utility, the law of optimum returns, etc.

Rothbard quotes Mises as seeing the fundamental Axiom as a ‘Law of Thought’ – while he himself sees this as a much too Kantian way of expressing it, preferring instead the simple Aristotelian/Thomist idea of a ‘Law of Reality’. Rothbard furthermore insists that this doctrine is not inherently political – in order to arrive at the Austrians’ broadly liberal political orientation, a preference for certain types of ends must be added to the a priori theory (such as the preference for life over death, abundance over poverty, etc.). This also displays the radicality of the Austrian approach: nothing is assumed about the content of human ends – this is why they will never subscribe to theories of Man as an economically rational agent or of Man as necessarily an economic egotist. All different ends meet and compete on the market – including the desire for profit at one end and idealist, utopian, or altruist goals at the other. The principal interest of these features of economic theory lies in the high degree of awareness of the difference between the – extreme – synthetic a priori theory developed, on the one hand, and its incarnation in concrete empirical cases and their limiting conditions, on the other.

 

The Second Trichotomy. Thought of the Day 120.0


The second trichotomy is probably the most well-known piece of Peirce’s semiotics: it distinguishes three possible relations between the sign and its (dynamical) object. This relation may be motivated by similarity, by actual connection, or by general habit – giving rise to the sign classes icon, index, and symbol, respectively.

According to the second trichotomy, a Sign may be termed an Icon, an Index, or a Symbol.

An Icon is a sign which refers to the Object that it denotes merely by virtue of characters of its own, and which it possesses, just the same, whether any such Object actually exists or not. It is true that unless there really is such an Object, the Icon does not act as a sign; but this has nothing to do with its character as a sign. Anything whatever, be it quality, existent individual, or law, is an Icon of anything, in so far as it is like that thing and used as a sign of it.

An Index is a sign which refers to the Object that it denotes by virtue of being really affected by that Object. It cannot, therefore, be a Qualisign, because qualities are whatever they are independently of anything else. In so far as the Index is affected by the Object, it necessarily has some Quality in common with the Object, and it is in respect to these that it refers to the Object. It does, therefore, involve a sort of Icon, although an Icon of a peculiar kind; and it is not the mere resemblance of its Object, even in these respects which makes it a sign, but it is the actual modification of it by the Object. 

A Symbol is a sign which refers to the Object that it denotes by virtue of a law, usually an association of general ideas, which operates to cause the Symbol to be interpreted as referring to that Object. It is thus itself a general type or law, that is, a Legisign. As such it acts through a Replica. Not only is it general in itself, but the Object to which it refers is of general nature. Now that which is general has its being in the instances it will determine. There must, therefore, be existent instances of what the Symbol denotes, although we must here understand by ‘existent’, existent in the possibly imaginary universe to which the Symbol refers. The Symbol will indirectly, through the association or other law, be affected by those instances; and thus the Symbol will involve a sort of Index, although an Index of a peculiar kind. It will not, however, be by any means true that the slight effect upon the Symbol of those instances accounts for the significant character of the Symbol.

The icon refers to its object solely by means of its own properties. This implies that an icon potentially refers to an indefinite class of objects, namely all those objects which have, in some respect, a relation of similarity to it. In recent semiotics it has often been remarked, by Nelson Goodman among others, that any phenomenon can be said to be like any other phenomenon in some respect, if the criterion of similarity is chosen sufficiently generally, just as the establishment of any convention immediately implies a similarity relation. If Nelson Goodman picks out two otherwise very different objects, then they are immediately similar to the extent that they now have the same relation to Nelson Goodman. Goodman and others have for this reason deemed the similarity relation insignificant – and consequently put the whole burden of semiotics on the shoulders of conventional signs only. But the counterargument against this rejection of the relevance of the icon lies close at hand. Given a tertium comparationis, a measuring stick, it is no longer possible to make anything be like anything else. This lies in Peirce’s observation that ‘It is true that unless there really is such an Object, the Icon does not act as a sign’. The icon only functions as a sign to the extent that it is, in fact, used to refer to some object – and when it does that, some criterion of similarity, a measuring stick (or, at least, a delimited bundle of possible measuring sticks), is given in and with the comparison. In the quote just given, it is of course the immediate object Peirce refers to – there is no claim that such an object as the icon refers to must in fact exist. Goodman and others are of course right in claiming that as ‘Anything whatever (…) is an Icon of anything’, the universe is pervaded by a continuum of possible similarity relations back and forth, but as soon as some phenomenon is in fact used as an icon for an object, a specific bundle of similarity relations is picked out: ‘… in so far as it is like that thing.’

Just like the qualisign, the icon is a limit category. ‘A possibility alone is an Icon purely by virtue of its quality; and its object can only be a Firstness.’ (Charles S. Peirce, The Essential Peirce: Selected Philosophical Writings). Strictly speaking, a pure icon may only refer one possible Firstness to another. The pure icon would be an identity relation between possibilities. Consequently, the icon must, as soon as it functions as a sign, be more than iconic. The icon is typically an aspect of a more complicated sign, even if very often a most important aspect, because it provides the predicative aspect of that sign. This Peirce records by his notion of ‘hypoicon’: ‘But a sign may be iconic, that is, may represent its object mainly by its similarity, no matter what its mode of being. If a substantive is wanted, an iconic representamen may be termed a hypoicon’. Hypoicons are signs which to a large extent make use of iconic means as meaning-givers: images, paintings, photos, diagrams, etc. But the iconic meaning realized in hypoicons has an immensely fundamental role in Peirce’s semiotics. As icons are the only signs that look like their objects, they are at the same time the only signs realizing meaning. Thus any higher sign, index and symbol alike, must contain, or by association or inference terminate in, an icon. If a symbol cannot give an iconic interpretant as a result, it is empty. In that respect, Peirce’s doctrine parallels that of Husserl, where merely signitive acts require fulfillment by intuitive (‘anschauliche’) acts. This is actually Peirce’s continuation of Kant’s famous claim that intuitions without concepts are blind, while concepts without intuitions are empty. When Peirce observes that ‘With the exception of knowledge, in the present instant, of the contents of consciousness in that instant (the existence of which knowledge is open to doubt) all our thought and knowledge is by signs’ (Letters to Lady Welby), these signs necessarily involve iconic components. Peirce has often been attacked for a tendency towards pan-semiotism which lets all mental and physical processes take place via signs – but in the quote just given he, analogously to Husserl, claims there must be a basic evidence anterior to the sign; just as with Husserl, this evidence before the sign must be based on a ‘metaphysics of presence’ – the ‘present instant’ provides what is not yet mediated by signs. But icons provide the connection of signs, logic and science to this foundation for Peirce’s phenomenology: the icon is the only sign providing evidence (Charles S. Peirce, The New Elements of Mathematics, Vol. 4). The icon is, through its timeless similarity, apt to communicate aspects of an experience ‘in the present instant’. Thus the typical index contains an icon (more or less elaborated, it is true), and any symbol intends an iconic interpretant. Continuity is at stake in relation to the icon to the extent that the icon, while not in itself general, is the bearer of a potential generality. This infinitesimal generality is decisive for the higher sign types’ possibility of giving rise to thought: the symbol thus contains a bundle of general icons defining its meaning. A special icon providing the condition of possibility for general and rigorous thought is, of course, the diagram.

The index connects the sign directly with its object via connection in space and time; as an actual sign connected to its object, the index is turned towards the past: the action which has left the index as a mark must be located earlier in time than the sign, so that the index presupposes, at least, the continuity of time and space – without which an index might occur spontaneously and without any connection to a preceding action. Maybe surprisingly, in the Peircean doctrine the index falls into two subtypes: designators vs. reagents. Reagents are the simpler – here the sign is caused by its object in one way or another. Designators, on the other hand, are more complex: the index finger pointing to an object or the demonstrative pronoun as the subject of a proposition are prototypical examples. Here, the index presupposes an intention – the will to point out the object for some receiver. Designators, it must be argued, presuppose reagents: it is only possible to designate an object if you have already been in reagent contact (simulated or not) with it (this forming the rational kernel of causal reference theories of meaning). The closer determination of the object of an index, however, invariably involves selection against the background of continuities.

On the level of the symbol, continuity and generality play a main role – as always when approaching issues defined by Thirdness. The symbol is in itself a legisign, that is, a general object which exists only in its actual instantiations. The symbol itself is a real and general recipe for the production of similar instantiations in the future. But apart from thus being a legisign, it is connected to its object thanks to a habit, or regularity. Sometimes this is taken to mean ‘due to a convention’ – in an attempt to distinguish conventional as opposed to motivated sign types. This, however, rests on a misunderstanding of Peirce’s doctrine, in which the trichotomies record aspects of signs, not mutually exclusive, independent classes of signs: symbols and icons do not form opposed, autonomous sign classes; rather, the content of the symbol is constructed from indices and general icons. The habit realized by a symbol connects it, as a legisign, to an object which is also general – an object which, just like the symbol itself, exists in instantiations, be they real or imagined. The symbol is thus a connection between two general objects, each of them actualized through replicas, tokens – a connection between two continua, that is:

Definition 1. Any Blank is a symbol which could not be vaguer than it is (although it may be so connected with a definite symbol as to form with it, a part of another partially definite symbol), yet which has a purpose.

Axiom 1. It is the nature of every symbol to blank in part. […]

Definition 2. Any Sheet would be that element of an entire symbol which is the subject of whatever definiteness it may have, and any such element of an entire symbol would be a Sheet. (‘Sketch of Dichotomic Mathematics’, The New Elements of Mathematics, Vol. 4: Mathematical Philosophy)

The symbol’s generality can be described as its always having blanks which are indefinite parts of its continuous sheet. Thus, the continuity of its blank parts is what grants its generality. The symbol determines its object according to some rule, granted the object satisfies that rule – but it leaves the object indeterminate in all other respects. It is tempting to take the typical symbol to be a word, but it should rather be taken as the argument – the predicate and the proposition being degenerate versions of arguments with further continuous blanks inserted by erasure, so to speak – forming the third trichotomy of term, proposition, argument.

Desire of the Pervert. Thought of the Day 102.0


The subject’s lack is the cynosure of the analytic process. The psychoanalytic discourse places the object a, the marker of lack, in the dominant position. The analyst embroiders the transferential relationship with the analysand by centralizing the constitutive lack of the object as a precondition for desire, which brings the subject to the locus of the Other. Alongside lack, the other focal point of analysis is the specular image that covers it and marks its boundaries, that is, the ego. The image has its borders; this is the frame of the mirror. Around the limits of the image is where anxiety will make its appearance, as what signals the momentary disruption of all points of identification. The limits of the mirror are symbolized by Lacan’s “little diamond” (<>), the sign which indicates the relation between the subject and the object in the matheme of fantasy ($<>a). This relation is mediated by desire. The role of the specular image, functioning as a sort of filter, is to protect the subject from anxiety by covering lack, but also by marking it. The reflection in the mirror functions like a window frame that demarcates the illusory world of recognition (the imaginary) from what Lacan calls the “stage” (symbolic reality). On this stage we find the desire of the masochist and the sadist. The extra-ordinary and the ordinary subject stage their desire in the same arena, playing the same part, with diametrically different techniques.

The scenarios of “perverse” desire do not just linger in a fantasmatic frame (as happens with neurosis); the extra-ordinary subject crosses the window, taking fantasy on stage, that is, acting it out in the symbolic. The vacillation between desire and jouissance is absent in the extra-ordinary, because he is certain about what he wants. Contrary to the neurotic, whose desire always remains in doubt (this being the desire of the Other), the pervert does not have the doubt, but the knowledge of what he desires. The enduring question of “what the Other wants from me” is absent; the “pervert” takes the game into his own hands, he knows and applies the rules. The desire of the “pervert” is to be passively enjoyed by the Other, as is best manifested in masochism. Lacan notes that the masochist is supposed to know how to enjoy the Other. The masochist is the one who gives the orders, the commands, the knowledge, to the Other, who has to tackle its limits. The masochist is aiming at the jouissance of the Other . . . the final term he is aiming at is the anxiety of the Other.

Constructivism. Note Quote.


Constructivism, as portrayed by its adherents, “is the idea that we construct our own world rather than it being determined by an outside reality”. Indeed, a common ground among constructivists of different persuasions lies in a commitment to the idea that knowledge is actively built up by the cognizing subject. But whereas individualistic constructivism (most clearly enunciated by radical constructivism) focuses on the biological/psychological mechanisms that lead to knowledge construction, sociological constructivism focuses on the social factors that influence learning.

Let us briefly consider certain fundamental assumptions of individualistic constructivism. The first issue a constructivist theory of cognition ought to elucidate concerns, of course, the raw materials from which knowledge is constructed. On this issue von Glasersfeld, an eminent representative of radical constructivism, gives a categorical answer: “from the constructivist point of view, the subject cannot transcend the limits of individual experience” (Michael R. Matthews, Constructivism in Science Education: A Philosophical Examination). This statement presents the keystone of constructivist epistemology, which conclusively asserts that “the only tools available to a ‘knower’ are the senses … [through which] the individual builds a picture of the world”. What is more, the mental pictures so formed do not depict a world ‘external’ to the subject, but the distinct personal reality of each individual. And this entails, in its turn, that the responsibility for the gained knowledge lies with the constructor; it cannot be shifted to a pre-existing world. As Ranulph Glanville confesses, “reality is what I sense, as I sense it, when I’m being honest about it”.

In this way, individualistic constructivism estranges the cognizing subject from the external world. Cognition is not considered as aiming at the discovery and investigation of an ‘independent’ world; it is viewed as a ‘tool’ that exclusively serves the adaptation of the subject to the world as it is experienced. From this perspective, ‘knowledge’ acquires an entirely new meaning. In the words of von Glasersfeld,

the word ‘knowledge’ refers to conceptual structures that epistemic agents, given the range of present experience, within their tradition of thought and language, consider viable….[Furthermore] concepts have to be individually built up by reflective abstraction; and reflective abstraction is not a matter of looking closer but of operating mentally in a way that happens to be compatible with the perceptual material at hand.

To put it briefly, ‘knowledge’ signifies nothing more than an adequate organization of the experiential world, one which makes the cognizing subject capable of effectively manipulating its perceptual experience.

It is evident that such insights, precluding any external point of reference, have impacts on knowledge evaluation. Indeed, the ascertainment that “for constructivists there are no structures other than those which the knower forms by its own activity” (Michael R. Matthews, Constructivism in Science Education: A Philosophical Examination) unavoidably yields the conclusion drawn by Gerard De Zeeuw that “there is no mind-independent yardstick against which to measure the quality of any solution”. Hence, knowledge claims should not be evaluated by reference to a supposed ‘external’ world, but only by reference to their internal consistency and personal utility. This is precisely the reason that leads von Glasersfeld to suggest the substitution of the notion of “truth” with the notions of “viability” or “functional fit”: knowledge claims are appraised as “true” if they “functionally fit” into the subject’s experiential world; and to find a “fit” simply means not to notice any discrepancies. This functional adaptation of ‘knowledge’ to experience is what finally secures the intended “viability”.

In accordance with the constructivist view, the notion of ‘object’, far from indicating any kind of ‘existence’, explicitly refers to a strictly personal construction of the cognizing subject. Specifically, “any item of the furniture of someone’s experiential world can be called an ‘object’” (von Glasersfeld). From this point of view, the supposition that “the objects one has isolated in his experience are identical with those others have formed … is an illusion”. This of course deprives language of any rigorous criterion of objectivity; its physical-object statements, being dependent upon elements derived from personal experience, cannot be considered to reveal attributes of the objects as they factually are. Incorporating concepts whose meaning is highly associated with the individual experience of the cognizing subject, these statements form in the end a person-specific description of the world. In conclusion, for constructivists the term ‘objectivity’ “shows no more than a relative compatibility of concepts” in situations where individuals have had occasion to compare their “individual uses of the particular words”.

From the viewpoint of radical constructivism, science, being a human enterprise, is amenable by its very nature to human limitations. It is then naturally inferred on constructivist grounds that “science cannot transcend [just as individuals cannot] the domain of experience” (von Glasersfeld). This statement, indicating that there is no essential differentiation between personal and scientific knowledge, permits, for instance, John Staver to assert that “for constructivists, observations, objects, events, data, laws and theory do not exist independent of observers. The lawful and certain nature of natural phenomena is a property of us, those who describe, not of nature, what is described”. Accordingly, by virtue of the preceding premise, one may argue that “scientific theories are derived from human experience and formulated in terms of human concepts” (von Glasersfeld).

Turning now to the framework of social constructivism: if one accepts that the term ‘knowledge’ means no more than “what is collectively endorsed” (David Bloor, Knowledge and Social Imagery), one will probably come to the conclusion that “the natural world has a small or non-existent role in the construction of scientific knowledge” (Collins). Or, in a weaker form, one can postulate that “scientific knowledge is symbolic in nature and socially negotiated. The objects of science are not the phenomena of nature but constructs advanced by the scientific community to interpret nature” (Rosalind Driver et al.). It is worth remarking that both views of constructivism eliminate, or at least downplay, the role of the natural world in the construction of scientific knowledge.

It is evident that the foregoing considerations lead most versions of constructivism ultimately to conclude that the very word ‘existence’ has no meaning in itself. It acquires meaning only by referring to individuals or human communities. The acknowledgement of this fact subsequently renders the notion of an ‘external’ physical reality useless and therefore redundant. As Riegler puts it, within the constructivist framework “an external reality is neither rejected nor confirmed, it must be irrelevant”.

The Mystery of Modality. Thought of the Day 78.0


The ‘metaphysical’ notion of what would have been no matter what (the necessary) was conflated with the epistemological notion of what independently of sense-experience can be known to be (the a priori), which in turn was identified with the semantical notion of what is true by virtue of meaning (the analytic), which in turn was reduced to a mere product of human convention. And what motivated these reductions?

The mystery of modality, for early modern philosophers, was how we can have any knowledge of it. Here is how the question arises. We think that when things are some way, in some cases they could have been otherwise, and in other cases they couldn’t. That is the modal distinction between the contingent and the necessary.

How do we know that the examples are examples of that of which they are supposed to be examples? And why should this question be considered a difficult problem, a kind of mystery? Well, that is because, on the one hand, when we ask about most other items of purported knowledge how it is we can know them, sense-experience seems to be the source, or anyhow the chief source of our knowledge, but, on the other hand, sense-experience seems able only to provide knowledge about what is or isn’t, not what could have been or couldn’t have been. How do we bridge the gap between ‘is’ and ‘could’? The classic statement of the problem was given by Immanuel Kant, in the introduction to the second or B edition of his first critique, The Critique of Pure Reason: ‘Experience teaches us that a thing is so, but not that it cannot be otherwise.’

Note that this formulation allows that experience can teach us that a necessary truth is true; what it is not supposed to be able to teach is that it is necessary. The problem becomes more vivid if one adopts the language that was once used by Leibniz, and much later re-popularized by Saul Kripke in his famous work on model theory for formal modal systems, the usage according to which the necessary is that which is ‘true in all possible worlds’. In these terms the problem is that the senses only show us this world, the world we live in, the actual world as it is called, whereas when we claim to know about what could or couldn’t have been, we are claiming knowledge of what is going on in some or all other worlds. For that kind of knowledge, it seems, we would need a kind of sixth sense, or extrasensory perception, or nonperceptual mode of apprehension, to see beyond the world in which we live to these various other worlds.
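Kripke’s possible-worlds reading can be made concrete in a toy model: necessity at a world is simply truth at every world accessible from it. Here is a minimal sketch – the worlds, accessibility relation, and valuation are invented purely for illustration:

```python
# Toy Kripke model: the necessary as that which is true in all accessible worlds.
worlds = {"w0", "w1", "w2"}                     # w0 plays the role of the actual world
access = {("w0", "w0"), ("w0", "w1"), ("w0", "w2"),
          ("w1", "w1"), ("w2", "w2")}           # accessibility relation R
valuation = {"p": {"w0", "w1", "w2"},           # p holds in every world
             "q": {"w0", "w1"}}                 # q fails in w2

def holds(atom: str, w: str) -> bool:
    return w in valuation[atom]

def necessarily(atom: str, w: str) -> bool:
    """Box: true at w iff the atom holds in every world accessible from w."""
    return all(holds(atom, v) for (u, v) in access if u == w)

def possibly(atom: str, w: str) -> bool:
    """Diamond: true at w iff the atom holds in some world accessible from w."""
    return any(holds(atom, v) for (u, v) in access if u == w)

print(necessarily("p", "w0"))   # True  - p is necessary at the actual world
print(necessarily("q", "w0"))   # False - q is merely contingent
print(possibly("q", "w0"))      # True
```

The formalism only sharpens the epistemological worry of the passage: the senses acquaint us, at best, with the valuation at w0, whereas evaluating the box requires surveying worlds we never perceive.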

Kant concludes that our knowledge of necessity must be what he calls a priori knowledge, or knowledge that is ‘prior to’ or before or independent of experience, rather than what he calls a posteriori knowledge, or knowledge that is ‘posterior to’ or after or dependent on experience. And so the problem of the origin of our knowledge of necessity becomes for Kant the problem of the origin of our a priori knowledge.

Well, that is not quite the right way to describe Kant’s position, since there is one special class of cases where Kant thinks it isn’t really so hard to understand how we can have a priori knowledge. He doesn’t think all of our a priori knowledge is mysterious, but only most of it. He distinguishes what he calls analytic from what he calls synthetic judgments, and holds that a priori knowledge of the former is unproblematic, since it is not really knowledge of external objects, but only knowledge of the content of our own concepts, a form of self-knowledge.

We can generate any number of examples of analytic truths by the following three-step process. First, take a simple logical truth of the form ‘Anything that is both an A and a B is a B’, for instance, ‘Anyone who is both a man and unmarried is unmarried’. Second, find a synonym C for the phrase ‘thing that is both an A and a B’, for instance, ‘bachelor’ for ‘one who is both a man and unmarried’. Third, substitute the shorter synonym for the longer phrase in the original logical truth to get the truth ‘Any C is a B’, or in our example, the truth ‘Any bachelor is unmarried’. Our knowledge of such a truth seems unproblematic because it seems to reduce to our knowledge of the meanings of our own words.
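The three-step recipe is mechanical enough to be run as string rewriting. Here is a toy sketch, assuming a small synonym table – which, as the passage itself suggests, is where all the semantic work happens:

```python
# Generating an analytic truth by substituting a synonym into a logical truth
# of the form 'anything that is both an A and a B is a B'.
SYNONYMS = {"both a man and unmarried": "a bachelor"}   # step 2: the synonym table

def analytic_truth(a: str, b: str) -> str:
    # Step 1: instantiate the logical schema.
    sentence = f"anyone who is both {a} and {b} is {b}"
    # Step 3: substitute the shorter synonym for the longer phrase.
    long_phrase = f"both {a} and {b}"
    if long_phrase in SYNONYMS:
        sentence = sentence.replace(f"is {long_phrase}", f"is {SYNONYMS[long_phrase]}")
    return sentence

print(analytic_truth("a man", "unmarried"))
# -> anyone who is a bachelor is unmarried
```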

So the problem for Kant is not exactly how knowledge a priori is possible, but more precisely how synthetic knowledge a priori is possible. Kant thought we do have examples of such knowledge. Arithmetic, according to Kant, was supposed to be synthetic a priori, and geometry, too – all of pure mathematics. In his Prolegomena to Any Future Metaphysics, Kant listed ‘How is pure mathematics possible?’ as the first question for metaphysics, for the branch of philosophy concerned with space, time, substance, cause, and other grand general concepts – including modality.

Kant offered an elaborate explanation of how synthetic a priori knowledge is supposed to be possible, an explanation reducing it to a form of self-knowledge, but later philosophers questioned whether there really were any examples of the synthetic a priori. Geometry, so far as it is about the physical space in which we live and move – and that was the original conception, and the one still prevailing in Kant’s day – came to be seen as, not synthetic a priori, but rather a posteriori. The mathematician Carl Friedrich Gauß had already come to suspect that geometry is a posteriori, like the rest of physics. Since the time of Einstein in the early twentieth century the a posteriori character of physical geometry has been the received view (whence the need for border-crossing from mathematics into physics if one is to pursue the original aim of geometry).

As for arithmetic, the logician Gottlob Frege in the late nineteenth century claimed that it was not synthetic a priori, but analytic – of the same status as ‘Any bachelor is unmarried’, except that to obtain something like ‘29 is a prime number’ one needs to substitute synonyms in a logical truth of a form much more complicated than ‘Anything that is both an A and a B is a B’. This view was subsequently adopted by many philosophers in the analytic tradition of which Frege was a forerunner, whether or not they immersed themselves in the details of Frege’s program for the reduction of arithmetic to logic.

Once Kant’s synthetic a priori has been rejected, the question of how we have knowledge of necessity reduces to the question of how we have knowledge of analyticity, which in turn resolves into a pair of questions: on the one hand, how do we have knowledge of synonymy, which is to say, how do we have knowledge of meaning? On the other hand, how do we have knowledge of logical truths? As to the first question, presumably we acquire knowledge, explicit or implicit, conscious or unconscious, of meaning as we learn to speak; by the time we are able to ask whether this is a synonym of that, we have the answer. But what about knowledge of logic? That question didn’t loom large in Kant’s day, when only a very rudimentary logic existed, but after Frege vastly expanded the realm of logic – only by doing so could he find any prospect of reducing arithmetic to logic – the question loomed larger.

Many philosophers, however, convinced themselves that knowledge of logic also reduces to knowledge of meaning, namely of the meanings of logical particles, words like ‘not’ and ‘and’ and ‘or’ and ‘all’ and ‘some’. To be sure, there are infinitely many logical truths in Frege’s expanded logic. But they all follow from or are generated by a finite list of logical rules, and philosophers were tempted to identify knowledge of the meanings of logical particles with knowledge of rules for using them: knowing the meaning of ‘or’, for instance, would be knowing that ‘A or B’ follows from A and follows from B, and that anything that follows both from A and from B follows from ‘A or B’. So in the end, knowledge of necessity reduces to conscious or unconscious knowledge of explicit or implicit semantical rules or linguistic conventions or whatever.
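The three rules for ‘or’ just cited are precisely the introduction and elimination rules of natural deduction, and in a proof assistant they exhaust what a reasoner needs to know about the connective. A sketch in Lean 4 (any proof assistant would serve):

```lean
-- 'A or B' follows from A, and follows from B (introduction):
example (A B : Prop) (a : A) : A ∨ B := Or.inl a
example (A B : Prop) (b : B) : A ∨ B := Or.inr b

-- Anything that follows both from A and from B follows from 'A or B' (elimination):
example (A B C : Prop) (h : A ∨ B) (f : A → C) (g : B → C) : C :=
  Or.elim h f g
```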

Such is the sort of picture that had become the received wisdom in philosophy departments in the English-speaking world by the middle decades of the last century. For instance, A. J. Ayer, the notorious logical positivist, and P. F. Strawson, the notorious ordinary-language philosopher, disagreed with each other across a whole range of issues, and for many mid-century analytic philosophers such disagreements were considered the main issues in philosophy (though some observers would speak of the ‘narcissism of small differences’ here). And people like Ayer and Strawson, from the 1920s through the 1960s, would sometimes go on to speak as if linguistic convention were the source not only of our knowledge of modality, but of modality itself, and go on further to speak of the source of language lying in ourselves. Individually, as children growing up in a linguistic community, or foreigners seeking to enter one, we must consciously or unconsciously learn the explicit or implicit rules of the communal language as something with a source outside us to which we must conform. But by contrast, collectively, as a speech community, we do not so much learn as create the language with its rules. And so if the origin of modality, of necessity and its distinction from contingency, lies in language, it therefore lies in a creation of ours, and so in us. ‘We, the makers and users of language’ are the ground and source and origin of necessity. Well, this is not a literal quotation from any one philosophical writer of the last century, but a pastiche of paraphrases of several.

Production of the Schizoid, End of Capitalism and Laruelle’s Radical Immanence. Note Quote Didactics.


These are eclectics of production, of repetition, of difference, where the fecundity of novelty would either spring forth or be weeded out. There is ‘schizoproduction’ prevalent in the world. This axiomatic schizoproduction is not a speech act but discursive, in the sense that it constrains how meaning is distilled from relations without the need for signifying, linguistic acts. Schizoproduction performs the relation. The bare minimum of schizoproduction is the gesture of transcending thought: namely, what François Laruelle calls a ‘decision’. Decision is differential, but it does not have to signify. It is the capacity to produce distinction and separation in the most minimal, axiomatic form. Schizoproduction is capitalism turned into immanent capitalism through a gesture of thought – sufficient thought. It is where capitalism has become a philosophy of life, in that it holds a firm belief in sufficient thought, whatever it comes in contact with. It is an expression of the real, of radical immanence as a transcending arrangement. It is a collective articulation bound up with intricate relations and the management of carnal, affective, and discursive matter. The present form of capitalism is based on relationships, collaborations, and processuality, and is in this altogether different from the industrial period of modernism with respect to subjectivity, production, governance, biopolitics and so on. In both cases, the life of a subject is valuable, since it is a substratum of potentiality and capacity, of creativity and innovation; and in both cases, a subject is produced with physical, mental, cognitive and affective capacities compatible with each arrangement. Artistic practice is aligned with a shift from modern liberalism to the neoliberal dynamic position of the free agent.

Such attributes have thus become so obvious that the concepts of ‘competence’, ‘trust’ or ‘interest’ are taken as given facts, instead of being perceived as functions within an arrangement. It is not that neoliberal management has levered the world off its joints; rather, it is capitalism as philosophy which has produced this world, with neoliberalism just one part of that philosophy. Therefore, the thought of the end of capitalism will always be speculative, since we may regard the world without capitalism in the same way as we may regard the world-not-for-humans, which may be a speculative one also. From its inception, capitalism paved a one-way path to annihilation, predicated as it was on unmitigated growth, the extraction of finite resources, the exaltation of individualism over communal ties, and the maximization of profit at the expense of the environment and society. The capitalist world was, as Thurston Clarke described so bleakly, “dominated by the concerns of trade and Realpolitik rather than by human rights and spreading democracy”; it was a “civilization influenced by the impersonal, bottom-line values of the corporations.” Capitalist industrial civilization was built on burning the organic remains of ancient organisms, but at the cost of destroying the stable climatic conditions which supported its very construction. The thirst for fossil fuels of our globalized, high-energy economy spurred increased technological development to extract the more difficult-to-reach reserves, but this frantic grasp for what was left only served to hasten the malignant transformation of Earth into an alien world. The ruling class tried to hold things together for as long as they could by printing money, propping up markets, militarizing domestic law enforcement, and orchestrating thinly veiled resource wars in the name of fighting terrorism, but the crisis of capitalism was intertwined with the ecological crisis and could never be solved by those whose jobs and social standing depended on protecting the status quo. All the corporate PR, greenwashing, political promises, cultural myths, and anthropocentrism could not hide the harsh Malthusian reality of ecological overshoot. As crime sky-rocketed and social unrest boiled over into rioting and looting, the elite retreated behind walled fortresses secured by armed guards, but the great unwinding of industrial civilization was already well underway. This evil genie was never going back in the bottle. And that is speculative too – or not really, a nuance to be fought hard over.

The immanence of capitalism is a transcending immanence: a system which produces a world as an arrangement, through a capitalist form of thought – the philosophy of capitalism – which is a philosophy of sufficient reason in which economy is the determination in the last instance, and not the real. We need specifically to register that this world is not real. The world is a process, a “geopolitical fiction”. Aside from this reason, there is an unthinkable world that is not for humans. It is not the world in itself, noumena, nor is it nature, bios; rather, it is the world indifferent to and foreclosed from human thought, a foreclosed and radical immanence – the real – which is not open, nor will it ever open itself, to human thought. It will forever remain void and unilaterally indifferent. The radical immanence of the real is not an exception – analogous to the miracle in theology – but rather the advent of the unprecedented unknown, where the lonely hour of the last instance never comes. This radical immanence does not confer with ‘the new’ or with ‘the same’ and does not transcend through thought. It is matter in absolute movement, into which philosophy or oikonomia incorporates conditions, concepts, and operations. Now a shift in thought is possible, where the determination in the last instance would no longer be economy but rather the radical immanence of the real, as the philosopher François Laruelle has argued. What is given, what is radically immanent in and as philosophy, is the mode of transcendental knowledge in which it operates. To know this mode of knowledge, to know it without entering into its circle, is to practice a science of the transcendental, the “transcendental science” of non-philosophy. This science is of the transcendental, but according to Laruelle it must also itself be transcendental – it must be a global theory of the given-ness of the real. A non-philosophical transcendental is required if philosophy as a whole, including its transcendental structure, is to be received and known as it is. Laruelle radicalises the Marxist notion of determination-in-the-last-instance as reworked by Louis Althusser, for whom the last instance as a dominating force was the economy. For Laruelle, the determination-in-the-last-instance is the Real, and “everything philosophy claims to master is in-the-last-instance thinkable from the One-Real”. For Althusser, referring to Engels, the economy is the ‘determination in the last instance’ in the long run, but only in relation to the other determinations by the superstructures, such as traditions. Following this, the “lonely hour of the ‘last instance’ never comes”.

Accelerated Capital as an Anathema to the Principles of Communicative Action. A Note Quote on the Reciprocity of Capital and Ethicality of Financial Economics


Markowitz portfolio theory explicitly observes that portfolio managers are not (expected) utility maximisers, since they diversify, and offers the hypothesis that a desire for reward is tempered by a fear of uncertainty. The model concludes that all investors should hold the same portfolio, their individual risk-reward objectives being satisfied by the weighting of this ‘index portfolio’ relative to riskless cash in the bank – a point on the Capital Market Line. The slope of the Capital Market Line is the market price of risk, which is an important parameter in arbitrage arguments.
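On the standard mean-variance reading, the shared ‘index portfolio’ is the tangency portfolio, and the slope of the Capital Market Line is its Sharpe ratio. A minimal sketch for two risky assets – the expected returns, covariances and riskless rate below are invented for illustration:

```python
import numpy as np

mu = np.array([0.08, 0.12])            # expected returns of two risky assets
sigma = np.array([[0.04, 0.006],
                  [0.006, 0.09]])      # covariance matrix of returns
rf = 0.02                              # riskless rate (cash in the bank)

# Tangency ('index') portfolio: weights proportional to Sigma^{-1} (mu - rf).
w = np.linalg.solve(sigma, mu - rf)
w = w / w.sum()                        # normalise to a fully invested portfolio

port_ret = w @ mu                      # expected return of the index portfolio
port_vol = np.sqrt(w @ sigma @ w)      # its standard deviation (risk)

# Slope of the Capital Market Line = the market price of risk.
market_price_of_risk = (port_ret - rf) / port_vol
print(w, port_ret, port_vol, market_price_of_risk)
```

Every investor holds this same w; individual risk appetite only moves the point along the line by mixing the index portfolio with riskless cash.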

Merton had initially attempted to provide an alternative to Markowitz based on utility maximisation employing stochastic calculus. He was only able to resolve the problem by employing the hedging arguments of Black and Scholes, and in doing so built a model that was based on the absence of arbitrage, free of turpe-lucrum. The prescriptive statement “it should not be possible to make sure profits” is explicit in the Efficient Markets Hypothesis and in the employment of an Arrow security in the context of the Law of One Price. Based on these observations, we conjecture that the whole paradigm of financial economics is built on the principle of balanced reciprocity. In order to explore this conjecture we shall examine the relationship between commerce and themes in Pragmatic philosophy. Specifically, we highlight Robert Brandom’s (Making It Explicit: Reasoning, Representing, and Discursive Commitment) position that there is a pragmatist conception of norms – a notion of primitive correctnesses of performance implicit in practice that precede and are presupposed by their explicit formulation in rules and principles.

The ‘primitive correctnesses’ of commercial practices were recognised by Aristotle when he investigated the nature of Justice in the context of commerce, and then by Olivi when he looked favourably on merchants. It is exhibited in the doux-commerce thesis; compare Fourcade and Healey’s contemporary description of the thesis – commerce teaches ethics mainly through its communicative dimension, that is, by promoting conversations among equals and exchange between strangers – with Putnam’s description of Habermas’ communicative action as based on the norm of sincerity, the norm of truth-telling, and the norm of asserting only what is rationally warranted … [and] contrasted with manipulation (Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays).

There are practices (that should be) implicit in commerce that make it an exemplar of communicative action. A further expression of markets as centres of communication is manifested in the Asian description of a market, which brings to mind Donald Davidson’s (Subjective, Intersubjective, Objective) argument that knowledge is not the product of bipartite conversations but of a tripartite relationship between two speakers and their shared environment. Replacing the negotiation between market agents with an algorithm that delivers a theoretical price replaces ‘knowledge’, generated through communication, with dogma. The problem with the performativity that Donald MacKenzie (An Engine, Not a Camera: How Financial Models Shape Markets) is concerned with is one of monism. In employing pricing algorithms, the markets cannot perform to something that comes close to ‘true belief’, which can only be identified through communication between sapient humans. This is an almost trivial observation to (successful) market participants, but difficult to appreciate for spectators who seek to attain ‘objective’ knowledge of markets from a distance. Hence the relevance to financial crises of the position that ‘true belief’ is about establishing coherence through myriad triangulations centred on an asset, rather than relying on a theoretical model.

Shifting gears now: unless the martingale measure is a by-product of a hedging approach, the price given by such martingale measures is not related to the cost of a hedging strategy, and therefore the meaning of such ‘prices’ is not clear. If the hedging argument cannot be employed, as in the markets studied by Cont and Tankov (Financial Modelling with Jump Processes), there is no conceptual framework supporting the prices obtained from the Fundamental Theorem of Asset Pricing. This lack of meaning can be interpreted as a consequence of the strict fact/value dichotomy in contemporary mathematics that came with the eclipse of Poincaré’s Intuitionism by Hilbert’s Formalism and Bourbaki’s Rationalism. The practical problem of supporting the social norms of market exchange has been replaced by a theoretical problem of developing formal models of markets. These models then legitimate the actions of agents in the market without having to make reference to explicitly normative values.

The Efficient Market Hypothesis is based on the axiom that the market price is determined by the balance between supply and demand, so that an increase in trading facilitates the convergence to equilibrium. If this axiom is replaced by the axiom of reciprocity, the justification for speculative activity in support of efficient markets disappears. In fact, the axiom of reciprocity would de-legitimise ‘true’ arbitrage opportunities as being unfair. This would not necessarily make the activities of actual market arbitrageurs illicit, since there are rarely strategies without the risk of a loss; it would, however, place more emphasis on the risks of speculation and inhibit the hubris that has been associated with the prelude to the recent Crisis. These points raise the question of the legitimacy of speculation in the markets. In an attempt to understand this issue, Gabrielle and Reuven Brenner identify three types of market participant. ‘Investors’ are preoccupied with future scarcity and so defer income. Because uncertainty exposes the investor to the risk of loss, investors wish to minimise uncertainty at the cost of potential profits; this is the basis of classical investment theory. ‘Gamblers’ will bet on an outcome taking odds that have been agreed on by society, such as with a sporting bet or in a casino; this relates to de Moivre’s and Montmort’s ‘taming of chance’. ‘Speculators’ bet on a mis-calculation of the odds quoted by society, and the reason speculators are regarded as socially questionable is that they have opinions explicitly at odds with the consensus: they are practitioners who rebel against a theoretical ‘Truth’. This is captured in Arjun Appadurai’s argument that the leading agents in modern finance believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control . . . [they] “are not those who wish to ‘tame chance’ but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]”.

In the context of Pragmatism, financial speculators embody pluralism, a concept essential to Pragmatic thinking and an antidote to the problem of radical uncertainty. Appadurai was motivated to study finance by Marcel Mauss’ essay Le Don (The Gift), which explores the moral force behind reciprocity in primitive and archaic societies; he goes on to say that the contemporary financial speculator is “betting on the obligation of return”, and that this is the fundamental axiom of contemporary finance. David Graeber (Debt: The First 5,000 Years) also recognises the fundamental position reciprocity has in finance, but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores uncertainty in an analysis that ends with the conclusion that “we don’t ‘all’ have to pay our debts”. In advocating that reciprocity need not be honoured, Graeber is challenging not just contemporary capitalism but also the foundations of the civitas, based on equality and reciprocity. The origins of Graeber’s argument lie in the first half of the nineteenth century. In 1836 John Stuart Mill defined political economy as being concerned with “[man] solely as a being who desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end”.

In Principles of Political Economy with Some of Their Applications to Social Philosophy, Mill defended Thomas Malthus’ An Essay on the Principle of Population, which focused on scarcity. Mill was writing at a time when Europe was struck by the Cholera pandemic of 1829–1851 and the famines of 1845–1851, and when Lord Tennyson was describing nature as “red in tooth and claw”. At this time, society’s fear of uncertainty seems to have been replaced by a fear of scarcity, and these standards of objectivity dominated economic thought through the twentieth century. Almost a hundred years after Mill, Lionel Robbins defined economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses”. Dichotomies emerge in the aftermath of the Cartesian revolution that aims to remove doubt from philosophy: theory and practice, subject and object, facts and values, means and ends are all separated. In this environment ex cathedra norms, in particular utility (profit) maximisation, encroach on commercial practice.

In order to set boundaries on commercial behaviour motivated by profit maximisation, particularly when market uncertainty returned after the Nixon shock of 1971, society imposed regulations on practice. As a consequence, two competing ethics – a functional, Consequentialist ethic guiding market practices and a regulatory, Deontological ethic attempting to stabilise the system – vie for supremacy. It is in this debilitating competition between two essentially theoretical ethical frameworks that we offer an explanation for the Financial Crisis of 2007–2009: profit maximisation, not speculation, is destabilising in the presence of radical uncertainty, and regulation cannot keep up with motivated profit maximisers who can justify their actions through abstract mathematical models that bear little resemblance to actual markets. An implication of reorienting financial economics to focus on markets as centres of ‘communicative action’ is that markets could become self-regulating, in the same way that the legal or medical spheres are self-regulated through professions. This is not a ‘libertarian’ argument based on freeing the Consequentialist ethic from a Deontological brake. Rather, it argues that being a market participant entails norms restricting the agent – such as sincerity and truth-telling – that support the creation of knowledge, of asset prices, within a broader objective of social cohesion. This immediately calls into question the legitimacy of algorithmic/high-frequency trading, which seems an anathema with regard to the principles of communicative action.

Fundamental Theorem of Asset Pricing: Tautological Meeting of Mathematical Martingale and Financial Arbitrage by the Measure of Probability.


The Fundamental Theorem of Asset Pricing (FTAP hereafter) has two broad tenets, viz.

1. A market admits no arbitrage if, and only if, the market has a martingale measure.

2. Every contingent claim can be hedged if, and only if, the martingale measure is unique.
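Stated schematically – a sketch under the usual textbook assumptions (discounted prices on a filtered probability space), not the precise modern Delbaen–Schachermayer formulation – and in the notation introduced below, the two tenets read:

no arbitrage ⟺ there exists a measure Q ~ P with X_0 = E_Q[X_T]

every contingent claim can be hedged ⟺ that measure Q is unique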

The FTAP is a theorem of mathematics, and the use of the term ‘measure’ in its statement places the FTAP within the theory of probability formulated by Andrei Kolmogorov (Foundations of the Theory of Probability) in 1933. Kolmogorov’s work took place in a context captured by Bertrand Russell, who observed that

It is important to realise the fundamental position of probability in science. . . . As to what is meant by probability, opinions differ.

In the 1920s the idea of randomness, as distinct from a lack of information, was becoming substantive in the physical sciences because of the emergence of the Copenhagen Interpretation of quantum mechanics. In the social sciences, Frank Knight argued that uncertainty was the only source of profit, and the concept was pervading John Maynard Keynes’ economics (Robert Skidelsky, Keynes: The Return of the Master).

Two mathematical theories of probability had become ascendant by the late 1920s. Richard von Mises (brother of the Austrian economist Ludwig) attempted to lay down the axioms of classical probability within a framework of Empiricism, the ‘frequentist’ or ‘objective’ approach. To counterbalance von Mises, the Italian actuary Bruno de Finetti presented a more Pragmatic approach, characterised by his claim that “Probability does not exist” because it was only an expression of the observer’s view of the world. This ‘subjectivist’ approach was closely related to the less well-known position taken by the Pragmatist Frank Ramsey, who developed an argument against Keynes’ Realist interpretation of probability presented in the Treatise on Probability.

Kolmogorov addressed the trichotomy of mathematical probability by generalising, so that Realist, Empiricist and Pragmatist probabilities were all examples of ‘measures’ satisfying certain axioms. In doing this, a random variable became a function while an expectation became an integral: probability became a branch of Analysis, not Statistics. Von Mises criticised Kolmogorov’s generalised framework as unnecessarily complex. About a decade and a half ago, the physicist Edwin Jaynes (Probability Theory: The Logic of Science) championed Leonard Savage’s subjectivist Bayesianism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science”.

Empirical scientists’ objections to measure-theoretic probability can be accounted for by its lack of physicality. Frequentist probability is based on the act of counting; subjectivist probability is based on a flow of information, which, following Claude Shannon, is now an observable entity in Empirical science. Measure-theoretic probability is based on abstract mathematical objects unrelated to sensible phenomena. However, the generality of Kolmogorov’s approach made it flexible enough to handle problems that emerged in physics and engineering during the Second World War, and his approach became widely accepted after 1950 because it was practically more useful.

In the context of the first statement of the FTAP, a ‘martingale measure’ is a probability measure, usually labelled Q, such that the (real, rather than nominal) price of an asset today, X_0, is the expectation, using the martingale measure, of its (real) price in the future, X_T. Formally,

X_0 = E_Q[X_T]

The abstract probability distribution Q is defined so that this equality holds; it is not based on any empirical information about historical prices or on subjective judgement of future prices. The only condition placed on the relationship that the martingale measure has with the ‘natural’, or ‘physical’, probability measure, usually assigned the label P, is that they agree on what is possible.
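In measure-theoretic terms – a standard gloss rather than anything stated explicitly above – ‘agreeing on what is possible’ means that P and Q are equivalent measures, sharing their impossible (null) events:

Q ~ P ⟺ (for every event A, P(A) = 0 if and only if Q(A) = 0)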

The term ‘martingale’ in this context derives from doubling strategies in gambling, and it was introduced into mathematics by Jean Ville in a development of von Mises’ work. The idea that asset prices have the martingale property was first proposed by Benoit Mandelbrot in response to an early formulation of Eugene Fama’s Efficient Market Hypothesis (EMH), the two concepts being combined by Fama. For Mandelbrot and Fama the key consequence of prices being martingales was that future price movements could not be predicted from past prices, and so technical analysis would not prove profitable in the long run. In developing the EMH there was no discussion of the nature of the probability under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure. While the FTAP employs modern terminology in the context of value-neutrality, the idea of equating a current price with a future, uncertain, price has ethical ramifications.

The other technical term in the first statement of the FTAP, arbitrage, has long been used in financial mathematics. Fibonacci’s Liber Abaci (Laurence Sigler, Fibonacci’s Liber Abaci) discusses ‘Barter of Merchandise and Similar Things’: 20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. In this case there are three commodities – arms of cloth, rolls of cotton and Pisan pounds – and Fibonacci solves the problem by having Pisan pounds ‘arbitrate’, or ‘mediate’ as Aristotle might say, between the other two commodities.
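As an illustration, Fibonacci’s calculation can be carried out in a few lines of Python – the figures are those of the problem above, and the variable names are of course ours:

```python
# A sketch of Fibonacci's barter problem, with Pisan pounds 'arbitrating'
# between the two commodities (the figures are those given in the text).

cloth_price = 3 / 20   # Pisan pounds per arm of cloth (20 arms = 3 pounds)
cotton_price = 5 / 42  # Pisan pounds per roll of cotton (42 rolls = 5 pounds)

arms_of_cloth = 50
pounds = arms_of_cloth * cloth_price      # 50 arms -> 7.5 Pisan pounds
rolls_of_cotton = pounds / cotton_price   # 7.5 pounds -> 63 rolls

print(rolls_of_cotton)  # 63.0: any other quoted ratio between cloth and
                        # cotton would let a trader profit by going via pounds
```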

Within neo-classical economics, the Law of One Price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gérard Debreu and Lionel McKenzie in the context of general equilibrium, in particular through the introduction of the Arrow Security, which, employing the Law of One Price, could be used to price any asset. It was on this principle that Black and Scholes believed the value of warrants could be deduced by employing a hedging portfolio; in introducing their work with the statement that “it should not be possible to make sure profits”, they were invoking the arbitrage argument, which had an eight-hundred-year history. In the context of the FTAP, ‘an arbitrage’ has developed into the ability to formulate a trading strategy such that the probability, under a natural or martingale measure, of a loss is zero, but the probability of a positive profit is not.
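Schematically, writing V_t for the value at time t of a self-financing trading strategy – notation assumed here for illustration, not taken from the texts cited – ‘an arbitrage’ is a strategy with

V_0 = 0, P(V_T ≥ 0) = 1 and P(V_T > 0) > 0

costing nothing today, incapable of losing, and capable of gaining.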

To understand the connection between the financial concept of arbitrage and the mathematical idea of a martingale measure, consider the most basic case of a single asset whose current price is X_0 and whose price at a future time T > 0 can take one of two values, X_T^D < X_T^U. In this case an arbitrage would exist if X_0 ≤ X_T^D < X_T^U: buying the asset now, at a price no greater than either future pay-off, would lead to a possible profit at the end of the period, with the guarantee of no loss. Similarly, if X_T^D < X_T^U ≤ X_0, short selling the asset now and buying it back at time T would also lead to an arbitrage. So, for there to be no arbitrage opportunities we require that

X_T^D < X_0 < X_T^U

This implies that there is a number, 0 < q < 1, such that

X_0 = X_T^D + q(X_T^U − X_T^D)

= qX_T^U + (1 − q)X_T^D

The price now, X_0, lies between the future prices, X_T^U and X_T^D, in the ratio q : (1 − q) and represents some sort of ‘average’. The first statement of the FTAP can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (real) future price”.

If X_0 < X_T^D ≤ X_T^U we have that q < 0, whereas if X_T^D ≤ X_T^U < X_0 then q > 1, and in both cases q does not represent a probability measure, which, by Kolmogorov’s axioms, must lie between 0 and 1. In either of these cases an arbitrage exists and a trader can make a riskless profit: the market involves ‘turpe lucrum’. This account gives an insight as to why James Bernoulli, in his moral approach to probability, considered situations where probabilities did not sum to 1: he was considering problems that were pathological not because they failed the rules of arithmetic but because they were unfair. It follows that if there are no arbitrage opportunities then the quantity q can be seen as representing the ‘probability’ that the X_T^U price will materialise in the future. Formally

X_0 = qX_T^U + (1 − q)X_T^D ≡ E_Q[X_T]
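The algebra above is easily checked numerically. The following Python sketch, with hypothetical prices, recovers q from the three prices and shows how the arbitrage cases correspond to q falling outside the interval [0, 1]:

```python
# A minimal numerical sketch of the one-period argument above
# (hypothetical prices; X_T^D = 80, X_T^U = 120).

def martingale_q(x0, x_down, x_up):
    """Solve x0 = q*x_up + (1 - q)*x_down for q."""
    return (x0 - x_down) / (x_up - x_down)

# X_T^D < X_0 < X_T^U: q is a genuine probability, no arbitrage.
q = martingale_q(x0=100.0, x_down=80.0, x_up=120.0)
print(q)  # 0.5

# X_0 below both pay-offs: q < 0, buying now is an arbitrage.
print(martingale_q(x0=70.0, x_down=80.0, x_up=120.0))   # -0.25

# X_0 above both pay-offs: q > 1, short selling is an arbitrage.
print(martingale_q(x0=130.0, x_down=80.0, x_up=120.0))  # 1.25

# The no-arbitrage case satisfies the martingale identity X_0 = E_Q[X_T].
assert abs(q * 120.0 + (1 - q) * 80.0 - 100.0) < 1e-12
```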

The connection between the financial concept of arbitrage and the mathematical object of a martingale is essentially a tautology: both statements mean that the price today of an asset must lie between its future minimum and maximum possible values. This first statement of the FTAP was anticipated by Frank Ramsey when he defined ‘probability’ in the Pragmatic sense of ‘a degree of belief’ and argued that degrees of belief are measured through betting odds. On this basis he formulated some axioms of probability, including that a probability must lie between 0 and 1. He then goes on to say that

These are the laws of probability, …If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event.

This is a Pragmatic argument that identifies the absence of the martingale measure with the existence of arbitrage, and today it forms the basis of the standard argument as to why arbitrages do not exist: if they did, other market participants would bankrupt the agent who was mis-pricing the asset. This has become known in philosophy as the ‘Dutch Book’ argument, and as a consequence of the fact/value dichotomy it is often presented as a ‘matter of fact’. However, setting the fact/value dichotomy aside, the Dutch Book argument is an alternative formulation of the ‘Golden Rule’ – “Do to others as you would have them do to you.” – it is infused with the moral concepts of fairness and reciprocity (Jeffrey Wattles, The Golden Rule).
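The Dutch Book can be made concrete with a small sketch. In the Python fragment below the odds are hypothetical decimal odds, chosen so that the agent’s implied ‘probabilities’ sum to less than one; the counterparty then locks in a profit whichever outcome occurs:

```python
# A sketch of the 'Dutch Book' argument with hypothetical decimal odds:
# if an agent quotes odds whose implied 'probabilities' do not sum to 1,
# a counterparty can stake so as to profit whatever happens.

def dutch_book(odds, budget=100.0):
    """odds: dict mapping each outcome to the decimal odds quoted for it."""
    implied = {o: 1.0 / d for o, d in odds.items()}   # implied probabilities
    total = sum(implied.values())                     # equals 1 if coherent
    stakes = {o: budget * p / total for o, p in implied.items()}
    payouts = {o: stakes[o] * odds[o] for o in odds}  # payout if o occurs
    return total, stakes, payouts

# Odds of 2.5 on 'up' and 2.5 on 'down' imply 0.4 + 0.4 = 0.8 < 1.
total, stakes, payouts = dutch_book({"up": 2.5, "down": 2.5})
print(total)    # 0.8: the quoted beliefs violate the probability axioms
print(payouts)  # each outcome pays 125.0 on a 100.0 outlay: a sure profit
```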

Embedded in the FTAP, then, is the ethical concept of Justice, capturing the social norms of reciprocity and fairness. This is significant in the context of Granovetter’s discussion of embeddedness in economics. It is conventional to assume that mainstream economic theory is ‘undersocialised’: agents are rational calculators seeking to maximise an objective function. The argument presented here is that a central theorem in contemporary economics, the FTAP, is deeply embedded in social norms, despite being presented as an undersocialised mathematical object. This embeddedness is a consequence of the origins of mathematical probability lying in the ethical analysis of commercial contracts: the feudal shackles are still binding this most modern of economic theories.

Ramsey goes on to make an important point

Having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you.

Ramsey is arguing that an agent needs to employ the same measure in pricing all assets in a market, and this is the key result in contemporary derivative pricing. Having identified the martingale measure on the basis of a ‘primal’ asset, it is then applied across the market, in particular to derivatives on the primal asset; a well-known result is that if two assets offer different ‘market prices of risk’, an arbitrage exists. This explains why the market price of risk appears in the Radon-Nikodym derivative and the Capital Market Line: it enforces Ramsey’s consistency in pricing.

The second statement of the FTAP is concerned with incomplete markets, which appear in relation to Arrow-Debreu prices. In mathematics, in the special case that there are as many, or more, assets in a market as there are possible future, uncertain, states, a unique pricing vector can be deduced for the market by Cramer’s Rule. If the elements of the pricing vector satisfy the axioms of probability – specifically, each element is positive and they all sum to one – then the market precludes arbitrage opportunities. This is the case covered by the first statement of the FTAP. In the more realistic situation that there are more possible future states than assets, the market can still be arbitrage free, but the pricing vector, the martingale measure, might not be unique. An agent can still be consistent in selecting which particular martingale measure they choose to use, but another agent might choose a different measure, such that the two do not agree on a price. In the context of the Law of One Price, this means that we cannot hedge, replicate or cover a position in the market such that the portfolio is riskless. The significance of the second statement of the FTAP is that it tells us that in the sensible world of imperfect knowledge and transaction costs, a model within the framework of the FTAP cannot give a precise price. When faced with incompleteness in markets, agents need alternative ways to price assets, and behavioural techniques have come to dominate financial theory. This feature was realised in The Port Royal Logic when it recognised the role of transaction costs in lotteries.
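A small numerical sketch may help fix the two cases, complete and incomplete. The payoffs and prices below are hypothetical; the computation simply solves, or fails to uniquely solve, the linear system relating state prices to asset prices:

```python
# A sketch of the pricing-vector argument with hypothetical payoffs.
# Complete market: as many (independent) assets as states, so the
# state-price vector is the unique solution of a square linear system.

import numpy as np

D = np.array([[1.0, 1.0],        # bond: pays 1 in either future state
              [120.0, 80.0]])    # stock: pays 120 (up) or 80 (down)
p = np.array([0.9, 95.0])        # current prices of bond and stock

psi = np.linalg.solve(D, p)      # state prices: [0.575, 0.325]
q = psi / psi.sum()              # martingale measure: positive, sums to 1
print(psi, q)                    # all positive => no arbitrage, unique measure

# Incomplete market: the same two assets but three future states.
# D3 @ psi = p is now underdetermined; two different positive state-price
# vectors price the traded assets identically yet would disagree on a
# claim paying off only in the middle state.
D3 = np.array([[1.0, 1.0, 1.0],
               [120.0, 100.0, 80.0]])
for psi3 in (np.array([0.50, 0.15, 0.25]), np.array([0.45, 0.25, 0.20])):
    print(D3 @ psi3)             # both print [0.9, 95.0]
```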

Evental Sites. Thought of the Day 48.0


According to Badiou, the undecidable truth is located beyond the boundaries of authoritative claims of knowledge. At the same time, undecidability indicates that truth has a post-evental character: “the heart of the truth is that the event in which it originates is undecidable” (Being and Event). Badiou explains that, in terms of forcing, undecidability means that the conditions belonging to the generic set force sentences that are not consequences of the axioms of set theory. If the effects of the event were not visible in the domains of specific languages (of politics, science, art or love), the content of Being and Event would be an empty exercise in abstraction.

Badiou distances himself from a narrow interpretation of the function played by axioms. He rather regards them as collections of basic convictions that organize situations – the conceptual or ideological framework of a historical situation. An event, named by an intervention, is at the theoretical site indexed by a proposition A: a new apparatus, demonstrative or axiomatic, such that A is henceforth clearly admissible as a proposition of the situation. Accordingly, the undecidability of a truth would consist in transcending the theoretical framework of a historical situation, or even breaking with it, in the sense that the faithful subject accepts beliefs that are impossible to reconcile with the old mode of thinking.

However, if one consistently identifies the effect of the event with the structure of the generic extension, one needs to conclude that historical situations are by no means the effects of events. This is because a crucial property of every generic extension is that the axioms of set theory remain valid within it – this is the very core of the method of forcing. Without this assumption, Cohen’s original construction would have no raison d’être, because it would not establish the undecidability of the cardinality of infinite power sets. Every generic extension satisfies the axioms of set theory. In reference to historical situations, it must be conceded that while a procedure of fidelity may modify a situation by forcing undecidable sentences, it never overrules its organizing principles.

Another notion which cannot be located within the generic theory of truth without extreme consequences is the evental site. An evental site – an element “on the edge of the void” – opens up a situation to the possibility of an event. Ontologically, it is defined as “a multiple such that none of its elements are presented in the situation”. In other words, it is a set such that neither itself nor any of its subsets are elements of the state of the situation. As the double meaning of this word indicates, the state in the context of historical situations takes the shape of the State. A paradigmatic example of a historical evental site is the proletariat – entirely dispossessed, and absent from the political stage.

The existence of an evental site in a situation is a necessary requirement for an event to occur. Badiou is very strict about this point: “we shall posit once and for all that there are no natural events, nor are there neutral events” – situations being divided into natural, neutral, and historical, where only the last contain an evental site. The very matheme of the event – its formal definition is of no importance here – is based on the evental site. The event raises the evental site to the surface, making it represented on the level of the state of the situation. Moreover, a novelty that has the structure of the generic set but does not emerge from the void of an evental site leads to a simulacrum of truth, which is one of the figures of Evil.

However, if one takes the mathematical framework of Badiou’s concept of the event seriously, it turns out that there is no place for the evental site there – it is forbidden by the assumption of transitivity of the ground model M. This ingredient plays a fundamental role in forcing, and its removal would ruin the whole construction of the generic extension. As is known, transitivity means that if a set belongs to M, all its elements also belong to M. However, an evental site is a set none of whose elements belongs to M. Therefore, contrary to Badiou’s intentions, there cannot exist evental sites in the ground model. Using Badiou’s terminology, one can say that forcing may only be the theory of the simulacrum of truth.
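The clash can be stated in one line. Reading the situation as the transitive ground model M – a simplifying identification made for this sketch, not Badiou’s own notation – transitivity and the definition of an evental site demand, respectively,

x ∈ M ⇒ x ⊆ M and X ∈ M with ∀y ∈ X : y ∉ M

Transitivity forces every element of such an X back into M, so the two conditions are jointly satisfiable only by X = ∅; no genuine evental site survives in a transitive ground model.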

Non-self Self

Philosophy is the survey of all the sciences with the special object of their harmony and of their completion. It brings to this task not only the evidence of the separate sciences but also its special appeal to the concrete experience – Whitehead


Vidya and Avidya, the Self and the not-Self, as well as sambhūti and asambhūti, Brahman and the world, are basically one, not two. Avidya affirms the world as a self-sufficient reality. Vidya affirms God as the Other, as a far-away reality. When true knowledge arises, say the Upanishads, this opposition is overcome.

True knowledge involves comprehension of the total Reality, of the truth of both Being and Becoming. Philosophic knowledge or vision cannot be complete if it ignores or neglects any aspect of knowledge or experience. Philosophy is the synthesis of all knowledge and experience, according to the Upanishads and according also to modern thought. Brahmavidya, philosophy, is sarvavidyapratishthā, the basis and support of all knowledge, says the Mundaka Upanishad. All knowledge, according to that Upanishad, can be divided into two distinct categories – the apara, the lower, and the para, the higher. It boldly relegates all sciences, arts, theologies, and holy scriptures of religions, including the Vedas, to the apara category. And that is para, it says, yayā tadaksharam adhigamyate – ‘that by which the imperishable Reality is realized’.

The vision of the Totality therefore must include the vision of both the para and the apara aspects of Reality. If brahmavidya, philosophy, is the pratishthā, support, of sarvavidyā, the totality of knowledge, it must be a synthesis of both the aparā and the parā forms of knowledge.

This is endorsed by the Gita in its statement that jnana, philosophy, is the synthesis of the knowledge of the not-Self and the Self:

क्षेत्रक्षेत्रज्ञयोर्ज्ञानं यत्तज्ज्ञानं मतं मम ।

kṣetrakṣetrajñayorjñānaṃ yattajjñānaṃ mataṃ mama |

The synthesis of the knowledge of the not-Self, avidya, which is positive science, with that of the Self, vidya, which is the science of religion, will give us true philosophy, which is knowledge flowering into vision and maturing into wisdom.

This is purnajñāna, fullness of knowledge, as termed by Ramakrishna. The Gita speaks of this as jñānam vijñāna sahitam – jñāna coupled with vijñāna – and proclaims this as the summit of spiritual achievement:

बहूनां जन्मनामन्ते ज्ञानवान्मां प्रपद्यते ।
वासुदेवः सर्वमिति स महात्मा सुदुर्लभः ॥

bahūnāṃ janmanāmante jñānavānmāṃ prapadyate |
vāsudevaḥ sarvamiti sa mahātmā sudurlabhaḥ ||

‘At the end of many births, the wise man attains Me with the realization that all this (universe) is Vasudeva (the indwelling Self); such a great-souled one is rare to come across.’