Valencies of Predicates. Thought of the Day 125.0

Naturalizing semiotics - The triadic sign of Charles Sanders Peirce

Since icons are the means of representing qualities, they generally constitute the predicative side of more complicated signs:

The only way of directly communicating an idea is by means of an icon; and every indirect method of communicating an idea must depend for its establishment upon the use of an icon. Hence, every assertion must contain an icon or set of icons, or else must contain signs whose meaning is only explicable by icons. The idea which the set of icons (or the equivalent of a set of icons) contained in an assertion signifies may be termed the predicate of the assertion. (Collected Papers of Charles Sanders Peirce)

Thus, the predicate in logic as well as in ordinary language is essentially iconic. It is important to remember here Peirce’s generalization of the predicate beyond the traditional subject-copula-predicate structure. Predicates exist with more than one subject slot; this is the basis for Peirce’s logic of relatives and permits at the same time enlarging the scope of logic considerably and bringing it closer to ordinary language, where several-slot predicates prevail, for instance in all verbs with a valency larger than one. In his definition of these predicates by means of valency, that is, the number of empty slots in which subjects or, more generally, indices may be inserted, Peirce is actually the founder of valency grammar in the tradition of Tesnière. So, for instance, the structure ‘_ gives _ to _’, where the underlinings refer to slots, is a trivalent predicate. Thus, the word classes associated with predicates are not only adjectives, but verbs and common nouns; in short, all descriptive features in language are predicative.
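
Peirce’s valency idea lends itself to a simple formal rendering: a predicate is something awaiting a fixed number of subject slots, and saturating some of the slots lowers its valency. The following is a minimal sketch in Haskell; the names (Entity, gives, annGivesToBen) and the toy extensions are purely illustrative and are not drawn from Peirce.

```haskell
-- Hedged sketch: predicates as functions awaiting their subject slots.
type Entity = String

-- A monovalent predicate: one empty slot.
sleeps :: Entity -> Bool
sleeps x = x `elem` ["Socrates"]                             -- toy extension

-- A trivalent predicate: '_ gives _ to _' has three empty slots.
gives :: Entity -> Entity -> Entity -> Bool
gives giver gift recipient =
  (giver, gift, recipient) `elem` [("Ann", "book", "Ben")]   -- toy extension

-- Partially saturating slots yields a predicate of lower valency:
-- 'Ann gives _ to Ben' is again a one-slot predicate.
annGivesToBen :: Entity -> Bool
annGivesToBen gift = gives "Ann" gift "Ben"
```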

This entails the fact that the similarity charted in icons covers more complicated cases than does the ordinary use of the word. Thus,

where ordinary logic considers only a single, special kind of relation, that of similarity, – a relation, too, of a particularly featureless and insignificant kind, the logic of relatives imagines a relation in general to be placed. Consequently, in place of the class, which is composed of a number of individual objects or facts brought together by means of their relation of similarity, the logic of relatives considers the system, which is composed of objects brought together by any kind of relations whatsoever. (The New Elements of Mathematics)

This allows for abstract similarity because one phenomenon may be similar to another in so far as both of them partake in the same relation, or more generally, in the same system – relations and systems being complicated predicates.

But not only more abstract features may thus act as the qualities invoked in an icon; these qualities may be of widely varying generality:

But instead of a single icon, or sign by resemblance of a familiar image or ‘dream’, evocable at will, there may be a complexus of such icons, forming a composite image of which the whole is not familiar. But though the whole is not familiar, yet not only are the parts familiar images, but there will also be a familiar image in its mode of composition. (…) The sort of idea which an icon embodies, if it be such that it can convey any positive information, being applicable to some things but not to others, is called a first intention. The idea embodied by an icon, which cannot of itself convey any information, being applicable to everything or nothing, but which may, nevertheless, be useful in modifying other icons, is called a second intention.

What Peirce distinguishes in these scholastic standard notions, borrowed from Aquinas via Scotus, is in fact the difference between Husserlian formal and material ontology. Formal qualities like genus, species, dependencies, quantities, spatial and temporal extension, and so on are of course attributable to any phenomenon and do not as such, in themselves, convey any information, in so far as they are always instantiated in – and thus, like other Second Intentions, dependent in the Husserlian manner upon – First Intentions; but they are nevertheless indispensable in the composition of first-intentional descriptions. The fact that a certain phenomenon is composed of parts, has a form, belongs to a species, has an extension, has been mentioned in a sentence, etc. does not convey the slightest information about it until it is specified, by means of first-intentional icons, which parts in which composition, which species, which form, and so on. Thus, Peirce here establishes a hierarchy of icons which we could call material and formal, respectively, in which the latter are dependent on the former. One may note in passing that the distinctions in Peirce’s semiotics are themselves built upon such Second Intentions; thus it is no wonder that every sign must possess some iconic element. Furthermore, the very anatomy of the proposition becomes, just as in Husserlian rational grammar, a question of formal, synthetic a priori regularities.

Among Peirce’s forms of inference, similarity plays a certain role within abduction, his notion for a ‘qualified guess’ in which a particular fact gives rise to the formation of a hypothesis which would have the fact in question as a consequence. Many such different hypotheses are of course possible for a given fact, and this inference is not necessary, but merely possible, suggestive. Precisely for this reason, similarity plays a seminal role here: an

originary Argument, or Abduction, is an argument which presents facts in its Premiss which present a similarity to the fact stated in the conclusion, but which could perfectly well be true without the latter being so.

The hypothesis proposed is abducted by some sort of iconic relation to the fact to be explained. Thus, similarity is the very source of new ideas – which must subsequently be controlled deductively and inductively, to be sure. But iconicity does not only play this role in the contents of abductive inference, it plays an even more important role in the very form of logical inference in general:

Given a conventional or other general sign of an object, to deduce any other truth than that which it explicitly signifies, it is necessary, in all cases, to replace that sign by an icon. This capacity of revealing unexpected truth is precisely that wherein the utility of algebraic formulae consists, so that the iconic character is the prevailing one.

The very form of inferences depends on it being an icon; thus for Peirce the syllogistic schema inherent in reasoning has an iconic character:

‘Whenever one thing suggests another, both are together in the mind for an instant. […] every proposition like the premiss, that is, having an icon like it, would involve […] a proposition related to it as the conclusion […]’. Thus, first and foremost, deduction is an icon: ‘I suppose it would be the general opinion of logicians, as it certainly was long mine, that the Syllogism is a Symbol, because of its Generality.’ […] The truth, however, appears to be that all deductive reasoning, even simple syllogism, involves an element of observation; namely, deduction consists in constructing an icon or diagram the relation of whose parts shall present a complete analogy with those of the parts of the objects of reasoning, of experimenting upon this image in the imagination, and of observing the result so as to discover unnoticed and hidden relations among the parts.

It is then no wonder that synthetic a priori truths exist – even if Peirce prefers notions like ‘observable, universal truths’ – since the result of a deduction may contain more than what is immediately present in the premises, due to the iconic quality of the inference.


Metaphysical Continuity in Peirce. Thought of the Day 122.0


Continuity has wide implications in the different parts of Peirce’s architectonics of theories. Time and time again, Peirce refers to his ‘principle of continuity’, which has nothing immediately to do with Poncelet’s famous principle of the same name in geometry but is, rather, a metaphysical implication taken to follow from fallibilism: if all more or less distinct phenomena swim in a vague sea of continuity, then it is no wonder that fallibilism must be accepted. And if the world is basically continuous, we should not expect conceptual borders to be definitive but should rather conceive of terminological distinctions as relative to an underlying, monist continuity. In this system, mathematics is the first science. Thereafter follows philosophy, which is distinguished from purely hypothetical mathematics by having an empirical basis. Philosophy, in turn, has three parts: phenomenology, the normative sciences, and metaphysics. The first investigates solely ‘the Phaneron’, which is all that could be imagined to appear as an object for experience: ‘by the word phaneron I mean the collective total of all that is in any way or in any sense present to the mind, quite regardless of whether it corresponds to any real thing or not.’ (Collected Papers of Charles Sanders Peirce) As is evident, this definition of Peirce’s ‘phenomenology’ is parallel to Husserl’s phenomenological reduction in bracketing the issue of the existence of the phenomenon in question. Even if it is thus built on introspection and general experience, it is – analogously to Husserl and other Brentano disciples of the time – conceived in a completely antipsychological manner: ‘It religiously abstains from all speculation as to any relations between its categories and physiological facts, cerebral or other.’ and ‘I abstain from psychology which has nothing to do with ideoscopy.’ (Letter to Lady Welby). The normative sciences fall into three: aesthetics, ethics, and logic, in that order (and hence of decreasing generality), among which Peirce does not spend very much time on the former two. Aesthetics investigates which goals it is possible to aim at (Good, Truth, Beauty, etc.), and ethics how they may be reached. Logic is concerned with the grasping and conservation of Truth and takes up the larger part of Peirce’s interest among the normative sciences. As it deals with how truth can be obtained by means of signs, it is also called semiotics (‘logic is formal semiotics’), which is thus coextensive with the theory of science – logic in this broad sense contains all parts of philosophy of science, including contexts of discovery as well as contexts of justification. Semiotics has, in turn, three branches: grammatica speculativa (or stekheiotics), critical logic, and methodeutic (inspired by the mediaeval trivium: grammar, logic, and rhetoric). The middle one of these three lies closest to our present-day conception of logic; it is concerned with the formal conditions for truth in symbols – that is, propositions, arguments, their validity and how to calculate them, including Peirce’s many developments of the logic of his time: quantifiers, the logic of relations, ab-, de-, and induction, logical notation systems, etc. All of these, however, presuppose the existence of simple signs, which are investigated by what is often seen as semiotics proper, the grammatica speculativa; it may also be called formal grammar.
It investigates the formal conditions for symbols having meaning, and it is here that we find Peirce’s definition of signs and his trichotomies of different types of sign aspects. Methodeutic, or formal rhetoric, on the other hand, concerns the pragmatic use of the former two branches, that is, the study of how to use logic in a fertile way in research, the formal conditions for the ‘power’ of symbols, that is, their reference to their interpretants; here can be found, e.g., Peirce’s famous definitions of pragmati(ci)sm and his directions for scientific investigation. To phenomenology – again in analogy to Husserl – logic adds the interest in signs and their truth. After logic, metaphysics follows in Peirce’s system, concerning the inventory of existing objects, conceived in general – and strongly influenced by logic, in the Kantian tradition of seeing metaphysics as mirroring logic. Here too Peirce has several proposals for subtypologies, even if none of them seems stable, and under this heading classical metaphysical issues mix freely with generalizations of scientific results and cosmological speculations.

Peirce himself saw this classification in an almost sociological manner, so that the criteria of distinction do not stem directly from the natural kinds of the objects involved but from which groups of persons study which objects: ‘the only natural lines of demarcation between nearly related sciences are the divisions between the social groups of devotees of those sciences’. Science collects scientists into bundles because they are defined by their causa finalis, a teleological intention demanding of them that they solve a central problem.

Measured against this definition, one has to say that Peirce himself was not modest: not only does he continuously transgress such boundaries in his production, he frequently does so even within the scope of single papers. In his writings there is always only a brief distance from mathematics to metaphysics – or between any other two issues in mathematics and philosophy – and this implies, first, that the investigation of continuity and generality in Peirce’s system is more systematic than any actually existing exposition of these issues in Peirce’s texts, and second, that the discussion must constantly rely on cross-references. This has the structural motivation that, as soon as one is below the level of mathematics in Peirce’s system (inspired by the Comtean system), each single science receives determinations from three different directions, each science consisting of material and formal aspects alike. First, it receives formal directives ‘from above’, from those more general sciences which stand above it, providing the general frameworks in which it must unfold. Second, it receives material determinations from its own object, requiring it to make certain choices in its use of formal insights from the higher sciences. The cosmological issue of the character of empirical space, for instance, can take from mathematics the different (non-)Euclidean geometries and investigate which of these are fit to describe spatial aspects of our universe, but it does not, in itself, provide the formal tools. Finally, the single sciences receive in practice determinations ‘from below’, from more specific sciences, when the results of the latter, by means of abstraction, prescission, induction, and other procedures, provide insights on the more general, material level. Even if cosmology, for instance, is part of metaphysics, it receives influences from the empirical results of physics (or of biology, from which Peirce takes the generalized principle of evolution). The distinction between formal and material is thus level-specific: what is material on one level is a formal bundle of possibilities for the level below; what is formal on one level is material on the level above.

For these reasons, each single step on the ladder of sciences is only partially independent in Peirce; hence also the tendency of his own investigations to zigzag between the levels. His architecture of theories thus forms a sort of phenomenological theory of aspects: the hierarchy of sciences is an architecture of more and less general aspects of the phenomena, not of completely independent domains. Finally, Peirce’s realism has as a result a somewhat disturbing style of thinking: many of his central concepts receive many, often highly different, determinations, a fact which has often led interpreters to assume inconsistencies or theoretical developments in Peirce where none necessarily exist. When Peirce, for instance, determines the icon as the sign possessing a similarity to its object, and elsewhere determines it as the sign by the contemplation of which it is possible to learn more about its object, these are not conflicting definitions. Peirce’s determinations of concepts are rarely definitions at all in the sense that they provide necessary and sufficient conditions exhausting the phenomenon in question. His determinations should rather be seen as descriptions from different perspectives of a real (and maybe ideal) object – without these descriptions necessarily conflicting. This style of thinking can, however, be seen as motivated by metaphysical continuity. When continuous grading between concepts is the rule, definitions in terms of necessary and sufficient conditions should not be expected to be exhaustive.

The Third Trichotomy. Thought of the Day 121.0


The decisive logical role is played by continuity in the third trichotomy, which is Peirce’s generalization of the old distinction between term, proposition, and argument in logic. In his terminology, the technical notions are rheme, dicent, and argument, and all of them may be represented by symbols. A crucial step in Peirce’s logic of relations (parallel to Frege) is the extension of the predicate from having only one possible subject in a proposition to the possibility of a predicate taking potentially infinitely many subjects. Predicates so complicated may be reduced, however, to combinations of (at most) three-subject predicates, according to Peirce’s reduction hypothesis. Let us consider the definitions from ‘Syllabus’ (The Essential Peirce: Selected Philosophical Writings, Volume 2) in continuation of the earlier trichotomies:

According to the third trichotomy, a Sign may be termed a Rheme, a Dicisign or Dicent Sign (that is, a proposition or quasi-proposition), or an Argument.

A Rheme is a Sign which, for its Interpretant, is a Sign of qualitative possibility, that is, is understood as representing such and such a kind of possible Object. Any Rheme, perhaps, will afford some information; but it is not interpreted as doing so.

A Dicent Sign is a Sign, which, for its Interpretant, is a Sign of actual existence. It cannot, therefore, be an Icon, which affords no ground for an interpretation of it as referring to actual existence. A Dicisign necessarily involves, as a part of it, a Rheme, to describe the fact which it is interpreted as indicating. But this is a peculiar kind of Rheme; and while it is essential to the Dicisign, it by no means constitutes it.

An Argument is a Sign which, for its Interpretant, is a Sign of a law. Or we may say that a Rheme is a sign which is understood to represent its object in its characters merely; that a Dicisign is a sign which is understood to represent its object in respect to actual existence; and that an Argument is a Sign which is understood to represent its Object in its character as Sign. (…) The proposition need not be asserted or judged. It may be contemplated as a sign capable of being asserted or denied. This sign itself retains its full meaning whether it be actually asserted or not. (…) The proposition professes to be really affected by the actual existent or real law to which it refers. The argument makes the same pretension, but that is not the principal pretension of the argument. The rheme makes no such pretension.

The interpretant of the Argument represents it as an instance of a general class of Arguments, which class on the whole will always tend to the truth. It is this law, in some shape, which the argument urges; and this ‘urging’ is the mode of representation proper to Arguments.

Predicates being general is of course a standard logical notion; in Peirce’s version this generality is further emphasized by the fact that the simple predicate is seen as relational and as containing up to three subject slots to be filled in; each of them may be occupied by a continuum of possible subjects. The predicate itself refers to a possible property, a possible relation between subjects; the empty – or partly saturated – predicate does not in itself constitute any claim that this relation does in fact hold. The information it contains is potential, because no single or general indication has yet been chosen to indicate which subjects among the continuum of possible subjects it refers to. The proposition, on the contrary, the dicisign, is a predicate where some of the empty slots have been filled in with indices (proper names, demonstrative pronouns, deixis, gesture, etc.), and is, in fact, asserted. It thus consists of an indexical part and an iconical part, corresponding to the usual distinction between subject and predicate, with its indexical part connecting it to some level of reference reality. This reality need not, of course, be actual reality; the subject slots may be filled in with general subjects, thus importing pieces of continuity into it – but the reality status of such subjects may vary, so they may equally be filled in with fictitious references of all sorts. Even if the dicisign, the proposition, is not an icon, it contains, via its rhematic core, iconical properties. Elsewhere, Peirce simply defines the dicisign as a sign making explicit its reference. Thus a portrait equipped with a sign indicating the portraitee will be a dicisign, just as a caricature drawing with a pointing gesture towards the person it depicts will be a dicisign. Even such dicisigns may be general; the pointing gesture could single out a group or a representative for a whole class of objects. While the dicisign specifies its object, the argument is a sign specifying its interpretant – which is what is normally called the conclusion. The argument thus consists of two dicisigns, a premiss (which may be, in turn, composed of several dicisigns and is traditionally seen as consisting of two dicisigns) and a conclusion – a dicisign represented as ensuing from the premiss due to the power of some law. The argument is thus – just like the other thirdness signs in the trichotomies – in itself general. It is a legisign and a symbol – but it adds to them the explicit specification of a general, lawlike interpretant. In the full-blown sign, the argument, the more primitive, degenerate sign types are orchestrated together in a threefold generality where no fewer than three continua are evoked: first, the argument itself is a legisign with a halo of possible instantiations of itself as a sign; second, it is a symbol referring to a general object, in turn with a halo of possible instantiations around it; third, the argument implies a general law which is represented by one instantiation (the premiss and the rule of inference) but which has a halo of other, related inferences as possible instantiations. As Peirce says, the argument persuades us that this lawlike connection holds for all other cases of the same type.
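
The trichotomy traced in this passage – an unsaturated predicate, a predicate indexically saturated and asserted, and a conclusion urged on the strength of a law – can be given a toy data-type rendering. This is a hedged sketch; the field names and the example are invented for illustration and do not reproduce any formalism of Peirce’s own.

```haskell
type Index = String               -- proper name, demonstrative, gesture, ...

-- A rheme: an unsaturated predicate with a certain valency.
data Rheme = Rheme
  { predicateName :: String
  , valencyOf     :: Int }

-- A dicisign: a rhematic (iconic) core plus the indices filling its slots.
data Dicisign = Dicisign
  { rhematicCore :: Rheme
  , subjects     :: [Index] }

-- An argument: premisses and a conclusion presented as following by a law.
data Argument = Argument
  { premisses  :: [Dicisign]
  , law        :: String
  , conclusion :: Dicisign }

-- 'Ann gives a book to Ben' as a dicisign (illustrative only):
example :: Dicisign
example = Dicisign (Rheme "_ gives _ to _" 3) ["Ann", "book", "Ben"]
```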

Husserl’s Flip-Flop on Arithmetic Axiomatics. Thought of the Day 118.0


Husserl’s position in his Philosophy of Arithmetic (Psychological and Logical Investigations with Supplementary Texts) was resolutely anti-axiomatic. He attacked those who fell into remote, artificial constructions which, with the intent of building the elementary arithmetic concepts out of their ultimate definitional properties, interpret and change their meaning so much that totally strange, practically and scientifically useless conceptual formations finally result. Especially targeted was Frege’s ideal of the

founding of arithmetic on a sequence of formal definitions, out of which all the theorems of that science could be deduced purely syllogistically.

As soon as one comes to the ultimate, elemental concepts, Husserl reasoned, all defining has to come to an end. All one can then do is to point to the concrete phenomena from or through which the concepts are abstracted and show the nature of the abstractive process. A verbal explanation should place us in the proper state of mind for picking out, in inner or outer intuition, the abstract moments intended and for reproducing in ourselves the mental processes required for the formation of the concept. He said that his analyses had shown with incontestable clarity that the concepts of multiplicity and unity rest directly upon ultimate, elemental psychical data, and so belong among the indefinable concepts. Since the concept of number was so closely joined to them, one could scarcely speak of defining it either. All these points are made on the only pages of Philosophy of Arithmetic that Husserl ever explicitly retracted.

In On the Concept of Number, Husserl had set out to anchor arithmetical concepts in direct experience by analyzing the actual psychological processes to which he thought the concept of number owed its genesis. To obtain the concept of number of a concrete set of objects, say A, A, and A, he explained, one abstracts from the particular characteristics of the individual contents collected, only considering and retaining each one insofar as it is a something or a one. Regarding their collective combination, one thus obtains the general form of the set belonging to the set in question: one and one, etc., and … and one, to which a number name is assigned.
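
The abstraction Husserl describes – retaining each collected content only insofar as it is ‘a something or a one’ – can be pictured as mapping every element of a collection to a featureless unit, leaving only the form ‘one and one and … and one’ to which a number name is then attached. A hedged illustration in Haskell, not Husserl’s own notation:

```haskell
-- Each collected content is retained only as 'a one'.
data One = One deriving Show

generalForm :: [a] -> [One]
generalForm = map (const One)

-- generalForm ["A", "A", "A"]  ==>  [One, One, One]

-- The number name is assigned to this general form of the set:
numberName :: [a] -> Int
numberName = length . generalForm
```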

The enthusiastic espousal of psychologism of On the Concept of Number is not found in Philosophy of Arithmetic. Husserl later confessed that doubts about basic differences between the concept of number and the concept of collecting, which was all that could be obtained from reflection on acts, had troubled and tormented him from the very beginning and had eventually extended to all categorial concepts and to concepts of objectivities of any sort whatsoever, ultimately to include modern analysis and the theory of manifolds, and simultaneously to mathematical logic and the entire field of logic in general. He did not see how one could reconcile the objectivity of mathematics with psychological foundations for logic.

In sharp contrast to Brouwer, who denounced logic as a source of truth, Husserl from the mid-1890s on defended the view, which he attributed to Frege’s teacher Hermann Lotze, that pure arithmetic was basically no more than a branch of logic that had undergone independent development. He bade students not to be “scared” by that thought and to grow used to Lotze’s initially strange idea that arithmetic was only a particularly highly developed piece of logic.

Years later, Husserl would explain in Formal and Transcendental Logic that his

war against logical psychologism was meant to serve no other end than the supremely important one of making the specific province of analytic logic visible in its purity and ideal particularity, freeing it from the psychologizing confusions and misinterpretations in which it had remained enmeshed from the beginning.

He had come to see arithmetic truths as being analytic, as grounded in meanings independently of matters of fact. He had come to believe that the entire overthrowing of psychologism through phenomenology showed that his analyses in On the Concept of Number and Philosophy of Arithmetic had to be considered a pure a priori analysis of essence. For him, pure arithmetic, pure mathematics, and pure logic were a priori disciplines entirely grounded in conceptual essentialities, where truth was nothing other than the analysis of essences or concepts. Pure mathematics as pure arithmetic investigated what is grounded in the essence of number. Pure mathematical laws were laws of essence.

He is said to have told his students that it was to be stressed repeatedly and emphatically that the ideal entities so unpleasant for empiricistic logic, and so consistently disregarded by it, had not been artificially devised either by himself, or by Bolzano, but were given beforehand by the meaning of the universal talk of propositions and truths indispensable in all the sciences. This, he said, was an indubitable fact that had to be the starting point of all logic. All purely mathematical propositions, he taught, express something about the essence of what is mathematical. Their denial is consequently an absurdity. Denying a proposition of the natural sciences, a proposition about real matters of fact, never means an absurdity, a contradiction in terms. In denying the law of gravity, I cast experience to the wind. I violate the evident, extremely valuable probability that experience has established for the laws. But, I do not say anything “unthinkable,” absurd, something that nullifies the meaning of the word as I do when I say that 2 × 2 is not 4, but 5.

Husserl taught that every judgment either is a truth or cannot be a truth, that every presentation either accorded with a possible experience adequately redeeming it, or was in conflict with the experience, and that grounded in the essence of agreement was the fact that it was incompatible with the conflict, and grounded in the essence of conflict that it was incompatible with agreement. For him, that meant that truth ruled out falsehood and falsehood ruled out truth. And, likewise, existence and non-existence, correctness and incorrectness cancelled one another out in every sense. He believed that that became immediately apparent as soon as one had clarified the essence of existence and truth, of correctness and incorrectness, of Evidenz as consciousness of givenness, of being and not-being in fully redeeming intuition.

At the same time, Husserl contended, one grasps the “ultimate meaning” of the basic logical law of contradiction and of the excluded middle. When we state the law of validity that of any two contradictory propositions one holds and the other does not hold, when we say that for every proposition there is a contradictory one, Husserl explained, then we are continually speaking of the proposition in its ideal unity and not at all about mental experiences of individuals, not even in the most general way. With talk of truth it is always a matter of propositions in their ideal unity, of the meaning of statements, a matter of something identical and atemporal. What lies in the identically-ideal meaning of one’s words, what one cannot deny without invalidating the fixed meaning of one’s words has nothing at all to do with experience and induction. It has only to do with concepts. In sharp contrast to this, Brouwer saw intuitionistic mathematics as deviating from classical mathematics because the latter uses logic to generate theorems and in particular applies the principle of the excluded middle. He believed that Intuitionism had proven that no mathematical reality corresponds to the affirmation of the principle of the excluded middle and to conclusions derived by means of it. He reasoned that “since logic is based on mathematics – and not vice versa – the use of the Principle of the Excluded Middle is not permissible as part of a mathematical proof.”
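
Brouwer’s complaint about the excluded middle can be made concrete under the Curry-Howard reading, where a proposition is a type and a proof is a term: no term of type `Either p (Not p)` can be written for arbitrary p, although its double negation can be. The following is only a sketch of that constructive point in Haskell, not a rendering of Brouwer’s own argument; the names are illustrative.

```haskell
import Data.Void (Void)

-- Curry-Howard sketch: a proposition is a type, a proof is a term.
type Not p = p -> Void

-- The excluded middle would be a term of this type for every p.
-- No such general term can be written: we would have to conjure either
-- a proof of p or a refutation of p out of nothing.
-- lem :: Either p (Not p)
-- lem = ...            -- not constructively definable

-- Its double negation, however, is intuitionistically provable:
nnLem :: Not (Not (Either p (Not p)))
nnLem k = k (Right (\p -> k (Left p)))
```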

Žižek’s Dialectical Coincidentia Oppositorum. Thought of the Day 98.0


Without doubt, the cogent interlacing of Lacanian theorization with Hegelianism manifests Žižek’s prowess in articulating a highly pertinent critique of ideology for our epoch, but whether this comes from a position of Marxist orthodoxy or a position of a Lacanian doctrinaire who monitors Marxist politics is an open question.

Through this Lacanian prism, Žižek sees subjectivity as fragmented and decentred, considering its subordinate status to the unsurpassable realm of the signifiers. The acquisition of a consummate identity dwells in impossibility, in as much as it is bound to desire, provoked by a lacuna which is impossible to fill up. Thus, for Žižek, socio-political relations evolve from states of lack, linguistic fluidity, and contingency. What temporarily arrests this fluid state of the subject’s slithering in the realm of the signifiers, giving rise to her self-identity, is what Lacan calls point de capiton. The term refers to certain fundamental “anchoring” points in the signifying chain where the signifier is tied to the signified, providing an illusionary stability in signification. Laclau and Mouffe (Hegemony and Socialist Strategy Towards a Radical Democratic Politics) were the first to make use of the idea of the point de capiton in relation to hegemony and the formation of identities. In this context, ideology is conceptualized as a terrain of firm meanings, determined and comprised by numerous points de capiton (Zizek The Sublime Object of Ideology).

The real is the central Lacanian concept that Žižek implements in his rhetoric. He associates the real with antagonism (e.g., class conflict) as the unsymbolizable and irreducible gap that lies at the heart of the socio-symbolic order and around which society is formed. As Žižek argues, “class struggle designates the very antagonism that prevents the objective (social) reality from constituting itself as a self-enclosed whole” (Renata Salecl, Slavoj Zizek – Gaze and Voice As Love Objects). This logic is indebted to Laclau and Mouffe, who were the first to postulate that social antagonism is what impedes the closure of society, thus marking its impossibility. Žižek expanded this view and associated antagonism with the notion of the real.

Functioning as a hegemonic fantasmatic veil, ideology covers the lacuna of the symbolic, in the form of a fantasy, so that it protracts desire and hence subjectivity. On the imaginary level, ideology functions as the “mirror” that reflects antagonisms, that is to say, the real unrepresentable kernel that undermines the political. Around this emptiness of representation, the fictional narrative of ideology, its meaning, is to unfurl. The role of socio-ideological fantasy is to provide consistency to the symbolic order by veiling its void, and to foster the illusion of a coherent social unity.

Nevertheless, fantasy has both unifying and disjunctive features, as its role is to fill the void of the symbolic, but also to circumscribe this void. According to Žižek, “the notion of fantasy offers an exemplary case of the dialectical coincidentia oppositorum”. On the one side, it provides a “hallucinatory realisation of desire”, and on the other side, it evokes disturbing images about the Other’s jouissance to which the subject has no (symbolic or imaginary) access. In so reasoning, ideology promises unity and, at the same time, creates another fantasy, to which the failure to acquire the anticipated ideological unity is ascribed.

Pertaining to Jacques Derrida’s work Specters of Marx (Specters of Marx The State of the Debt, The Work of Mourning; the New International), where the typical ontological conception of the living is seen to be incomplete and inseparable from the spectre, namely, a ghostly embodiment that haunts the living present (Derrida introduces the notion of hauntology to refer to this pseudo-material incarnation of the spirit that haunts and challenges ontological present), Žižek elaborates the spectral apparitions of the real in the politico–ideological domain. He makes a distinction between this “spectre” and “symbolic fiction”, that is, reality per se. Both have a common fantasmatic hypostasis, yet they perform antithetical functions. Symbolic fiction forecloses the real antagonism at the crux of reality, only to return as a spectre, as another fantasy.

The Mystery of Modality. Thought of the Day 78.0


The ‘metaphysical’ notion of what would have been no matter what (the necessary) was conflated with the epistemological notion of what independently of sense-experience can be known to be (the a priori), which in turn was identified with the semantical notion of what is true by virtue of meaning (the analytic), which in turn was reduced to a mere product of human convention. And what motivated these reductions?

The mystery of modality, for early modern philosophers, was how we can have any knowledge of it. Here is how the question arises. We think that when things are some way, in some cases they could have been otherwise, and in other cases they couldn’t. That is the modal distinction between the contingent and the necessary.

How do we know that the examples are examples of that of which they are supposed to be examples? And why should this question be considered a difficult problem, a kind of mystery? Well, that is because, on the one hand, when we ask about most other items of purported knowledge how it is we can know them, sense-experience seems to be the source, or anyhow the chief source of our knowledge, but, on the other hand, sense-experience seems able only to provide knowledge about what is or isn’t, not what could have been or couldn’t have been. How do we bridge the gap between ‘is’ and ‘could’? The classic statement of the problem was given by Immanuel Kant, in the introduction to the second or B edition of his first critique, The Critique of Pure Reason: ‘Experience teaches us that a thing is so, but not that it cannot be otherwise.’

Note that this formulation allows that experience can teach us that a necessary truth is true; what it is not supposed to be able to teach is that it is necessary. The problem becomes more vivid if one adopts the language that was once used by Leibniz, and much later re-popularized by Saul Kripke in his famous work on model theory for formal modal systems, the usage according to which the necessary is that which is ‘true in all possible worlds’. In these terms the problem is that the senses only show us this world, the world we live in, the actual world as it is called, whereas when we claim to know about what could or couldn’t have been, we are claiming knowledge of what is going on in some or all other worlds. For that kind of knowledge, it seems, we would need a kind of sixth sense, or extrasensory perception, or nonperceptual mode of apprehension, to see beyond the world in which we live to these various other worlds.
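
The ‘true in all possible worlds’ picture can be made concrete with a toy model: finitely many worlds, a valuation saying which propositions hold where, and necessity as truth at every world. The worlds and the valuation below are made up solely for illustration; this is a sketch of the Leibniz-Kripke reading, not of any particular modal system.

```haskell
-- A toy model of necessity as truth in all possible worlds.
data World = W1 | W2 | W3 deriving (Eq, Show, Enum, Bounded)

worlds :: [World]
worlds = [minBound .. maxBound]

-- Which atomic propositions hold at which world (invented valuation).
holds :: String -> World -> Bool
holds "2 + 2 = 4"     _ = True        -- holds however things had gone
holds "it is raining" w = w == W1     -- holds only in the actual world, say
holds _               _ = False

-- Necessary: true at every world; possible: true at some world.
necessary, possible :: String -> Bool
necessary p = all (holds p) worlds
possible  p = any (holds p) worlds

-- necessary "2 + 2 = 4"      == True
-- necessary "it is raining"  == False, yet possible "it is raining" == True
```

The sketch also makes the epistemic worry visible: evaluating `necessary` requires surveying every world, whereas sense-experience only ever supplies the valuation at the actual one.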

Kant concludes that our knowledge of necessity must be what he calls a priori knowledge, or knowledge that is ‘prior to’ or before or independent of experience, rather than what he calls a posteriori knowledge, or knowledge that is ‘posterior to’ or after or dependent on experience. And so the problem of the origin of our knowledge of necessity becomes for Kant the problem of the origin of our a priori knowledge.

Well, that is not quite the right way to describe Kant’s position, since there is one special class of cases where Kant thinks it isn’t really so hard to understand how we can have a priori knowledge. He doesn’t think all of our a priori knowledge is mysterious, but only most of it. He distinguishes what he calls analytic from what he calls synthetic judgments, and holds that a priori knowledge of the former is unproblematic, since it is not really knowledge of external objects, but only knowledge of the content of our own concepts, a form of self-knowledge.

We can generate any number of examples of analytic truths by the following three-step process. First, take a simple logical truth of the form ‘Anything that is both an A and a B is a B’, for instance, ‘Anyone who is both a man and unmarried is unmarried’. Second, find a synonym C for the phrase ‘thing that is both an A and a B’, for instance, ‘bachelor’ for ‘one who is both a man and unmarried’. Third, substitute the shorter synonym for the longer phrase in the original logical truth to get the truth ‘Any C is a B’, or in our example, the truth ‘Any bachelor is unmarried’. Our knowledge of such a truth seems unproblematic because it seems to reduce to our knowledge of the meanings of our own words.

So the problem for Kant is not exactly how knowledge a priori is possible, but more precisely how synthetic knowledge a priori is possible. Kant thought we do have examples of such knowledge. Arithmetic, according to Kant, was supposed to be synthetic a priori, and geometry, too – all of pure mathematics. In his Prolegomena to Any Future Metaphysics, Kant listed ‘How is pure mathematics possible?’ as the first question for metaphysics, for the branch of philosophy concerned with space, time, substance, cause, and other grand general concepts – including modality.

Kant offered an elaborate explanation of how synthetic a priori knowledge is supposed to be possible, an explanation reducing it to a form of self-knowledge, but later philosophers questioned whether there really were any examples of the synthetic a priori. Geometry, so far as it is about the physical space in which we live and move – and that was the original conception, and the one still prevailing in Kant’s day – came to be seen as, not synthetic a priori, but rather a posteriori. The mathematician Carl Friedrich Gauß had already come to suspect that geometry is a posteriori, like the rest of physics. Since the time of Einstein in the early twentieth century the a posteriori character of physical geometry has been the received view (whence the need for border-crossing from mathematics into physics if one is to pursue the original aim of geometry).

As for arithmetic, the logician Gottlob Frege in the late nineteenth century claimed that it was not synthetic a priori, but analytic – of the same status as ‘Any bachelor is unmarried’, except that to obtain something like ‘29 is a prime number’ one needs to substitute synonyms in a logical truth of a form much more complicated than ‘Anything that is both an A and a B is a B’. This view was subsequently adopted by many philosophers in the analytic tradition of which Frege was a forerunner, whether or not they immersed themselves in the details of Frege’s program for the reduction of arithmetic to logic.

Once Kant’s synthetic a priori has been rejected, the question of how we have knowledge of necessity reduces to the question of how we have knowledge of analyticity, which in turn resolves into a pair of questions: On the one hand, how do we have knowledge of synonymy, which is to say, how do we have knowledge of meaning? On the other hand, how do we have knowledge of logical truths? As to the first question, presumably we acquire knowledge, explicit or implicit, conscious or unconscious, of meaning as we learn to speak; by the time we are able to ask the question whether this is a synonym of that, we have the answer. But what about knowledge of logic? That question didn’t loom large in Kant’s day, when only a very rudimentary logic existed, but after Frege vastly expanded the realm of logic – only by doing so could he find any prospect of reducing arithmetic to logic – the question loomed larger.

Many philosophers, however, convinced themselves that knowledge of logic also reduces to knowledge of meaning, namely, of the meanings of logical particles, words like ‘not’ and ‘and’ and ‘or’ and ‘all’ and ‘some’. To be sure, there are infinitely many logical truths in Frege’s expanded logic. But they all follow from or are generated by a finite list of logical rules, and philosophers were tempted to identify knowledge of the meanings of logical particles with knowledge of rules for using them: knowing the meaning of ‘or’, for instance, would be knowing that ‘A or B’ follows from A and follows from B, and that anything that follows both from A and from B follows from ‘A or B’. So in the end, knowledge of necessity reduces to conscious or unconscious knowledge of explicit or implicit semantical rules or linguistic conventions or whatever.
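
The rules just cited for ‘or’ are precisely the introduction and elimination rules that the Curry-Howard correspondence attaches to the sum type; a minimal sketch in standard Haskell, using nothing beyond the Prelude:

```haskell
-- Introduction rules: 'A or B' follows from A, and follows from B.
orIntroLeft :: a -> Either a b
orIntroLeft = Left

orIntroRight :: b -> Either a b
orIntroRight = Right

-- Elimination rule: whatever follows both from A and from B
-- follows from 'A or B'.
orElim :: (a -> c) -> (b -> c) -> Either a b -> c
orElim = either
```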

Such is the sort of picture that had become the received wisdom in philosophy departments in the English speaking world by the middle decades of the last century. For instance, A. J. Ayer, the notorious logical positivist, and P. F. Strawson, the notorious ordinary-language philosopher, disagreed with each other across a whole range of issues, and for many mid-century analytic philosophers such disagreements were considered the main issues in philosophy (though some observers would speak of the ‘narcissism of small differences’ here). And people like Ayer and Strawson in the 1920s through 1960s would sometimes go on to speak as if linguistic convention were the source not only of our knowledge of modality, but of modality itself, and go on further to speak of the source of language lying in ourselves. Individually, as children growing up in a linguistic community, or foreigners seeking to enter one, we must consciously or unconsciously learn the explicit or implicit rules of the communal language as something with a source outside us to which we must conform. But by contrast, collectively, as a speech community, we do not so much learn as create the language with its rules. And so if the origin of modality, of necessity and its distinction from contingency, lies in language, it therefore lies in a creation of ours, and so in us. ‘We, the makers and users of language’ are the ground and source and origin of necessity. Well, this is not a literal quotation from any one philosophical writer of the last century, but a pastiche of paraphrases of several.

Reductionism of Numerical Complexity: A Wittgensteinian Excursion


Wittgenstein’s criticism of Russell’s logicist foundation of mathematics contained in (Remarks on the Foundations of Mathematics) consists in saying that it is not the formalized version of mathematical deduction which vouches for the validity of the intuitive version, but conversely.

If someone tries to shew that mathematics is not logic, what is he trying to shew? He is surely trying to say something like: If tables, chairs, cupboards, etc. are swathed in enough paper, certainly they will look spherical in the end.

He is not trying to shew that it is impossible that, for every mathematical proof, a Russellian proof can be constructed which (somehow) ‘corresponds’ to it, but rather that the acceptance of such a correspondence does not lean on logic.

Taking up Wittgenstein’s criticism, Hao Wang (Computation, Logic, Philosophy) discusses the view that mathematics “is” axiomatic set theory as one of several possible answers to the question “What is mathematics?”. Wang points out that this view is epistemologically worthless, at least as far as the task of understanding what guides cognition is concerned:

Mathematics is axiomatic set theory. In a definite sense, all mathematics can be derived from axiomatic set theory. [ . . . ] There are several objections to this identification. [ . . . ] This view leaves unexplained why, of all the possible consequences of set theory, we select only those which happen to be our mathematics today, and why certain mathematical concepts are more interesting than others. It does not help to give us an intuitive grasp of mathematics such as that possessed by a powerful mathematician. By burying, e.g., the individuality of natural numbers, it seeks to explain the more basic and the clearer by the more obscure. It is a little analogous to asserting that all physical objects, such as tables, chairs, etc., are spherical if we swathe them with enough stuff.

Reductionism is an age-old project; a close forerunner of its incarnation in set theory was the arithmetization program of the 19th century. It is interesting that one of its prominent representatives, Richard Dedekind (Essays on the Theory of Numbers), exhibited a rather distanced attitude towards a thoroughgoing carrying out of the program:

It appears as something self-evident and not new that every theorem of algebra and higher analysis, no matter how remote, can be expressed as a theorem about natural numbers [ . . . ] But I see nothing meritorious [ . . . ] in actually performing this wearisome circumlocution and insisting on the use and recognition of no other than rational numbers.

Perec wrote a detective novel without using the letter ‘e’ (La Disparition, in English A Void), thus proving not only that such an enormous enterprise is indeed possible but also that formal constraints sometimes have great aesthetic appeal. The translation of mathematical propositions into a poorer linguistic framework can easily be compared with such painful lipogrammatical exercises. In principle all logical connectives can be simulated in a framework exclusively using Sheffer’s stroke, and all cuts (in Gentzen’s sense) can be eliminated; one can do entirely without ordinary language in mathematics and formalize everything, and so on: in principle, one could leave out a whole lot of things. However, in doing so one would depart from the true way of thinking employed by the mathematician (who really uses “and” and “not” and cuts, and who does not reduce many things to formal systems). Obviously, it is the proof theorist as a working mathematician who is interested in things like the reduction to Sheffer’s stroke, since they allow for more concise proofs by induction in the analysis of a logical calculus. Hence this proof theorist has much the same motives as a mathematician working on other problems who avoids a completely formalized treatment of these problems since he is not interested in the proof-theoretical aspect.
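
The first of the two reductions mentioned – everything via Sheffer’s stroke – is mechanical and can even be checked exhaustively in a few lines. A sketch with illustrative function names, which also previews the point made below about numerical versus cognitive simplicity:

```haskell
-- Sheffer's stroke: p | q means 'not both p and q'.
nand :: Bool -> Bool -> Bool
nand p q = not (p && q)

-- The usual connectives simulated with nand alone.
not' :: Bool -> Bool
not' p = nand p p

and' :: Bool -> Bool -> Bool
and' p q = nand (nand p q) (nand p q)

or' :: Bool -> Bool -> Bool
or' p q = nand (nand p p) (nand q q)

-- Exhaustive check over all truth-value assignments.
allAgree :: Bool
allAgree = and [ not' p   == not p
              && and' p q == (p && q)
              && or'  p q == (p || q)
               | p <- [False, True], q <- [False, True] ]
-- allAgree == True, though the nand forms are already longer and harder
-- to read than the connectives they replace.
```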

There might be quite similar reasons for the interest of some set theorists in expressing usual mathematical constructions exclusively with the expressive means of ZF (i.e., in terms of ∈). But beyond this, is there any philosophical interpretation of such a reduction? In the last analysis, mathematicians always transform (and that means: change) their objects of study in order to make them accessible to certain mathematical treatments. If one considers a mathematical concept as a tool, one does not only use it in a way different from the one in which it would be used if it were considered as an object; moreover, in the semiotic representation of it, it is given a different form in the two cases. In this sense, the proof theorist has to “change” the mathematical proof (which is his or her object of study, to be treated with mathematical tools). When stating that something is used as object or as tool, we always have to ask: in which situation, or by whom.

A second observation is that the translation of propositional formulæ in terms of Sheffer’s stroke in general yields quite complicated new formulæ. What is “simple” here is the particularly small number of symbols needed; but neither does the semantics become clearer (p|q means “not both p and q”; cognitively, this looks more complex than “p and q”, and so on), nor are the formulæ you get “short”. What is looked for in this case, hence, is a reduction of numerical complexity, while the primitive basis attained by the reduction cognitively looks less “natural” than the original situation (or, as Peirce expressed it, “the consciousness in the determined cognition is more lively than in the cognition which determines it”); similarly in the case of cut elimination. In contrast to this, many philosophers are convinced that the primitive basis of operating with sets really constitutes a “natural” basis of mathematical thinking, i.e., such operations are seen as the “standard bricks” of which this thinking is actually made – while no one will reasonably claim that expressions of the type p|q play a similar role for propositional logic. And yet, reduction to set theory does not really have the task of “explanation”. It is true that one thus reduces propositions about “complex” objects to propositions about “simple” objects; the propositions themselves, however, thus become in general more complex. Couched in Fregean terms, one can perhaps more easily grasp their denotation (since the denotation of a proposition is its truth value) but not their meaning. A more involved conceptual framework, however, might lead to simpler propositions (and in most cases has actually just been introduced in order to do so). A parallel argument concerns deductions: in its totality, a deduction becomes more complex (and less intelligible) by a decomposition into elementary steps.

Now, it will be subject to discussion whether in the case of some set operations it is admissible at all to claim that they are basic for thinking (which is certainly true in the case of the connectives of propositional logic). It is perfectly possible that the common sense which organizes the acceptance of certain operations as a natural basis relies on something different, not having the character of some eternal laws of thought: it relies on training.

Is it possible to observe that a surface is coloured red and blue; and not to observe that it is red? Imagine a kind of colour adjective were used for things that are half red and half blue: they are said to be ‘bu’. Now might not someone be trained to observe whether something is bu; and not to observe whether it is also red? Such a man would then only know how to report: “bu” or “not bu”. And from the first report we could draw the conclusion that the thing was partly red.

Task of the Philosopher. Thought of the Day 75.0


Poincaré in Science and Method discusses how “reasonable” axioms (theories) are chosen. In a section which is intended to cool down the expectations put in the “logistic” project, he points out the problem as follows:

Even admitting that it has been established that all theorems can be deduced by purely analytical processes, by simple logical combinations of a finite number of axioms, and that these axioms are nothing but conventions, the philosopher would still retain the right to seek the origin of these conventions, and to ask why they were judged preferable to the contrary conventions.

[…] A selection must be made out of all the constructions that can be combined with the materials furnished by logic. The true geometrician makes this decision judiciously, because he is guided by a sure instinct, or by some vague consciousness of I know not what profounder and more hidden geometry, which alone gives a value to the constructed edifice.

Hence, Poincaré sees the task of the philosopher as being the explanation of how conventions came to be. At the end of the quotation, Poincaré tries to give such an explanation, namely by referring to an “instinct” (in the sequel, he mentions briefly that one can obviously ask where such an instinct comes from, but he gives no answer to this question). The pragmatist position to be developed will lead to an essentially similar, but more complete and clear, point of view.

According to Poincaré’s definition, the task of the philosopher starts where that of the mathematician ends: for a mathematician, a result is right if he or she has a proof, that is, if the result can be logically deduced from the axioms; that one has to adopt some axioms is seen as a necessary evil, and one perhaps puts some energy into the project of minimizing the number of axioms (this might have been how set theory came to be thought of as a foundation of mathematics). A philosopher, however, wants to understand why exactly these axioms and no others were chosen. In particular, the philosopher is concerned with the question whether the chosen axioms actually grasp the intended model. This question is justified since formal definitions are not automatically sufficient to grasp the intention of a concept; at the same time, the question is methodologically very hard, since ultimately a concept is available in mathematical proof only by a formal explication. At any rate, it becomes clear that the task of the philosopher is related to a criterion problem.

Georg Kreisel thinks that we do indeed have the capacity to decide whether a given model was intended or not:

many formal independence proofs consist in the construction of models which we recognize to be different from the intended notion. It is a fact of experience that one can be honest about such matters! When we are shown a ‘non-standard’ model we can honestly say that it was not intended. [ . . . ] If it so happens that the intended notion is not formally definable this may be a useful thing to know about the notion, but it does not cast doubt on its objectivity.

Poincaré could not yet know (but he was experienced enough a mathematician to “feel”) that axiom systems quite often fail to grasp the intended model. It was seldom the work of professional philosophers and often the byproduct of the actual mathematical work to point out such discrepancies.

Following Kant, one defines the task of epistemology thus: to determine the conditions of the possibility of the cognition of objects. Now, what is meant by “cognition of objects”? It is meant that we have an insight into (the truth of) propositions about the objects (we can then speak about the propositions as facts); and epistemology asks what are the conditions for the possibility of such an insight. Hence, epistemology is not concerned with what objects are (ontology), but with what (and how) we can know about them (ways of access). This notwithstanding, both things are intimately related, especially, in the Peircean stream of pragmatist philosophy. The 19th century (in particular Helmholtz) stressed against Kant the importance of physiological conditions for this access to objects. Nevertheless, epistemology is concerned with logic and not with the brain. Pragmatism puts the accent on the means of cognition – to which also the brain belongs.

Kant in his epistemology stressed that the object depends on the subject, or, more precisely, that the cognition of an object depends on the means of cognition used by the subject. For him, the decisive means of cognition was reason; thus, his epistemology was to a large degree a critique of reason. Other philosophers disagreed about this special role of reason but shared the view that the task of philosophy is to criticise the means of cognition. For all of them, philosophy has to point out what we can legitimately speak about. Such a critical approach is implicitly contained in Poincaré’s description of the task of the philosopher.

Reichenbach decomposes the task of epistemology into different parts: the guiding, justification and limitation of cognition. While justification is usually considered the most important of the three aspects, the “task of the philosopher” as specified above, following Poincaré, is not limited to it. Indeed, the question why just these axioms and no others were chosen is obviously a question concerning the guiding principles of cognition: which criteria are at work? Mathematics presents itself at its various historical stages as the result of a series of decisions on questions of the kind “Which objects should we consider? Which definitions should we make? Which theorems should we try to prove?” and so on – in short, instances of the “criterion problem”. Epistemology thus has the task of making these criteria explicit – criteria used, but not made explicit, by the researchers themselves. For after all, these criteria cannot be without effect on the conditions for the possibility of cognition of the objects which one has decided to consider. (In turn, the conditions for this possibility in general determine the range of objects from which one has to choose.) However, such an epistemology does not have the task of resolving the criterion problem normatively (that is, of prescribing to the scientist which choices to make).

Metaphysics of the Semantics of HoTT. Thought of the Day 73.0


Types and tokens are interpreted as concepts (rather than spaces, as in the homotopy interpretation). In particular, a type is interpreted as a general mathematical concept, while a token of a given type is interpreted as a more specific mathematical concept qua instance of the general concept. This accords with the fact that each token belongs to exactly one type. Since ‘concept’ is a pre-mathematical notion, this interpretation is admissible as part of an autonomous foundation for mathematics.

Expressions in the language are the names of types and tokens. Those naming types correspond to propositions. A proposition is ‘true’ just if the corresponding type is inhabited (i.e. there is a token of that type, which we call a ‘certificate’ to the proposition). There is no way in the language of HoTT to express the absence or non-existence of a token. The negation of a proposition P is represented by the type P → 0, where P is the type corresponding to proposition P and 0 is a type that by definition has no token constructors (corresponding to a contradiction). The logic of HoTT is not bivalent, since the inability to construct a token of P does not guarantee that a token of P → 0 can be constructed, and vice versa.
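
To make this reading concrete, the following is a minimal sketch in Lean 4 (the names `Neg`, `notFalse`, and `contradiction` are illustrative, not drawn from the text): negation is read as the function type into the empty type, and a certificate to a negation is itself a token.

```lean
-- `Empty` plays the role of the type 0: it has no token constructors.

-- Negation of P, read as "P implies 0".
abbrev Neg (P : Type) : Type := P → Empty

-- A certificate to the negation of 0 itself: given a (non-existent)
-- token of Empty, return it.
def notFalse : Neg Empty := fun e => e

-- From a certificate of P together with a certificate of Neg P we can
-- construct a token of Empty, i.e. a contradiction.
def contradiction {P : Type} (p : P) (np : Neg P) : Empty := np p

-- Non-bivalence: for an arbitrary type P there is, in general, no way
-- to construct either a token of P or a token of Neg P.
```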

The rules governing the formation of types are understood as ways of composing concepts to form more complex concepts, or as ways of combining propositions to form more complex propositions. They follow from the Curry-Howard correspondence between logical operations and operations on types. However, we depart slightly from the standard presentation of the Curry-Howard correspondence, in that the tokens of types are not to be thought of as ‘proofs’ of the corresponding propositions but rather as certificates to their truth. A proof of a proposition is the construction of a certificate to that proposition by a sequence of applications of the token construction rules. Two different such processes can result in construction of the same token, and so proofs and tokens are not in one-to-one correspondence.
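
A hedged sketch of these two points, again in Lean 4 and with purely illustrative names: product and function types compose concepts in the Curry-Howard style, and two different construction processes can result in the same token.

```lean
-- Product types play the role of conjunction, function types of implication.

-- A certificate to "A and B implies B and A", built by the pairing and
-- projection rules.
def andSwap {A B : Type} : A × B → B × A :=
  fun p => (p.2, p.1)

-- Two different construction processes, one token: the proofs differ,
-- but both expressions name the same certificate.
def cert₁ : Nat := 2 + 2
def cert₂ : Nat := (fun n : Nat => n + 1) 3

example : cert₁ = cert₂ := rfl   -- both reduce to the token 4
```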

When we work formally in HoTT we construct expressions in the language according to the formal rules. These expressions are taken to be the names of tokens and types of the theory. The rules are chosen such that if a construction process begins with non-contradictory expressions that all name tokens (i.e. none of the expressions are ‘empty names’) then the result will also name a token (i.e. the rules preserve non-emptiness of names).
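
As a small illustration of the last point (in the same illustrative Lean 4 setting): starting from expressions that already name tokens, the construction rules yield an expression that again names a token.

```lean
def a : Nat := 0                     -- an expression naming a token of Nat
def b : Bool := true                 -- an expression naming a token of Bool
def pairAB : Nat × Bool := (a, b)    -- the pairing rule: the result again names a token
```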

Since we interpret tokens and types as concepts, the only metaphysical commitment required is to the existence of concepts. That human thought involves concepts is an uncontroversial position, and our interpretation does not require that concepts have any greater metaphysical status than is commonly attributed to them. Just as the existence of a concept such as ‘unicorn’ does not require the existence of actual unicorns, likewise our interpretation of tokens and types as mathematical concepts does not require the existence of mathematical objects. However, it is compatible with such beliefs. Thus a Platonist can take the concept, say, ‘equilateral triangle’ to be the concept corresponding to the abstract equilateral triangle (after filling in some account of how we come to know about these abstract objects in a way that lets us form the corresponding concepts). Even without invoking mathematical objects to be the ‘targets’ of mathematical concepts, one could still maintain that concepts have a mind-independent status, i.e. that the concept ‘triangle’ continues to exist even while no-one is thinking about triangles, and that the concept ‘elliptic curve’ did not come into existence at the moment someone first gave the definition. However, this is not a necessary part of the interpretation, and we could instead take concepts to be mind-dependent, with corresponding implications for the status of mathematics itself.

Momentum of Accelerated Capital. Note Quote.


Distinct types of high frequency trading firms include independent proprietary firms, which trade with private funds and specific strategies that remain secretive, and which may act as market makers generating automatic buy and sell orders continuously throughout the day. Broker-dealer proprietary desks are part of traditional broker-dealer firms and are operated by the largest investment banks, but are not related to their client business. Thirdly, hedge funds focus on complex statistical arbitrage, taking advantage of pricing inefficiencies between asset classes and securities.

Today, strategies using algorithmic trading and High Frequency Trading (HFT) play a central role on financial exchanges, in alternative markets, and in banks’ internalized (over-the-counter) dealings:

High frequency traders typically act in a proprietary capacity, making use of a number of strategies and generating a very large number of trades every single day. They leverage technology and algorithms from end-to-end of the investment chain – from market data analysis and the operation of a specific trading strategy to the generation, routing, and execution of orders and trades. What differentiates HFT from algorithmic trading is the high frequency turnover of positions as well as its implicit reliance on ultra-low latency connection and speed of the system.

The use of algorithms in computerised exchange trading has experienced a long evolution with the increasing digitalisation of exchanges:

Over time, algorithms have continuously evolved: while initial first-generation algorithms – fairly simple in their goals and logic – were pure trade execution algos, second-generation algorithms – strategy implementation algos – have become much more sophisticated and are typically used to produce own trading signals which are then executed by trade execution algos. Third-generation algorithms include intelligent logic that learns from market activity and adjusts the trading strategy of the order based on what the algorithm perceives is happening in the market. HFT is not a strategy per se, but rather a technologically more advanced method of implementing particular trading strategies. The objective of HFT strategies is to seek to benefit from market liquidity imbalances or other short-term pricing inefficiencies.

While algorithms are employed by most traders in contemporary markets, the intense focus on speed and the momentary holding periods are practices unique to high frequency traders. The defence of high frequency trading is built around the claims that it increases liquidity, narrows spreads, and improves market efficiency: the high number of trades made by HFT firms adds liquidity to the market; algorithmic trading has resulted in the prices of securities being updated more quickly, with more competitive bid-ask prices and narrowing spreads; and HFT enables prices to reflect information more quickly and accurately, ensuring accurate pricing at smaller time intervals. But there are critical differences between high frequency traders and traditional market makers:

  1. HFT firms have no affirmative market-making obligation: they are not obliged to provide liquidity by constantly displaying two-sided quotes, which may translate into a lack of liquidity during volatile conditions.
  2. HFT firms contribute little market depth due to the marginal size of their quotes, which may force larger orders to transact against many small orders and may thereby raise overall transaction costs.
  3. HFT quotes are barely accessible, because the liquidity they offer lasts only for the extremely short intervals before the orders are cancelled, often within milliseconds.

Besides the shallowness of the HFT contribution to liquidity, there are real fears about how HFT can compound and magnify risk through the rapidity of its actions:

There is evidence that high-frequency algorithmic trading also has some positive benefits for investors by narrowing spreads – the difference between the price at which a buyer is willing to purchase a financial instrument and the price at which a seller is willing to sell it – and by increasing liquidity at each decimal point. However, a major issue for regulators and policymakers is the extent to which high-frequency trading, unfiltered sponsored access, and co-location amplify risks, including systemic risk, by increasing the speed at which trading errors or fraudulent trades can occur.

Although there have always been occasional trading errors and episodic volatility spikes in markets, the speed, automation and interconnectedness of today’s markets create a different scale of risk. These risks demand that exchanges and market participants employ effective quality management systems and sophisticated risk mitigation controls, adapted to these new dynamics, to protect against potential threats to market stability arising from technology malfunctions or episodic illiquidity. However, there are more deliberate aspects of HFT strategies which may present serious problems for market structure and functioning, and where conduct may be illegal. Order anticipation, for example, seeks to ascertain the existence of large buyers or sellers in the marketplace and then to trade ahead of them in anticipation that their large orders will move market prices. A momentum strategy involves initiating a series of orders and trades in an attempt to ignite a rapid price move. HFT strategies can resemble traditional forms of market manipulation that violate the Exchange Act:

  1. Spoofing and layering occur when traders create a false appearance of market activity by entering multiple non-bona fide orders on one side of the market at increasing or decreasing prices, in order to induce others to buy or sell the stock at a price altered by the bogus orders.
  2. Painting the tape involves placing successive small buy orders at increasing prices in order to stimulate increased demand.
  3. Quote stuffing and price fade are further dubious HFT practices: quote stuffing floods the market with huge numbers of orders and cancellations in rapid succession, which may generate buying or selling interest or compromise the trading positions of other market participants; order or price fade involves the rapid cancellation of orders in response to other trades.

The World Federation of Exchanges insists: “Exchanges are committed to protecting market stability and promoting orderly markets, and understand that a robust and resilient risk control framework adapted to today’s high speed markets is a cornerstone of enhancing investor confidence.” However, this ‘robust and resilient risk control framework’ seems lacking, including in the dark pools now established for trading, which were initially proposed as safer than the open market.