Complete Manifolds’ Pure Logical Necessity as the Totality of Possible Formations. Thought of the Day 124.0


In Logical Investigations, Husserl called his theory of complete manifolds the key to the only possible solution to the problem of how, in the realm of numbers, impossible, non-existent, meaningless concepts might be dealt with as real ones. In Ideas, he wrote that his chief purpose in developing his theory of manifolds had been to find a theoretical solution to the problem of imaginary quantities (Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy).

Husserl saw how questions regarding imaginary numbers come up in mathematical contexts in which formalization yields constructions that, arithmetically speaking, are nonsense but can nonetheless be used in calculations. When formal reasoning is carried out mechanically as if these symbols had meaning, and the ordinary rules are observed, and the results contain no imaginary components, these symbols might be legitimately used. And this could be empirically verified (Philosophy of Arithmetic: Psychological and Logical Investigations with Supplementary Texts).

In a letter to Carl Stumpf in the early 1890s, Husserl explained how, in trying to understand how operating with contradictory concepts could lead to correct theorems, he had found that for imaginary numbers in his broad sense, like √2 and √-1, it was not a matter of the possibility or impossibility of concepts. Through the calculation itself and its rules, as defined for those fictive numbers, the impossible fell away, and a genuine equation remained. One could calculate again with the same signs, but referring to valid concepts, and the result was again correct. Even if one mistakenly imagined that what was contradictory existed, or held the most absurd theories about the content of the corresponding concepts of number, the calculation remained correct if it followed the rules. He concluded that this must be a result of the signs and their rules (Early Writings in the Philosophy of Logic and Mathematics). The fact that one can generalize and produce variations of formal arithmetic that lead outside the quantitative domain, without essentially altering formal arithmetic’s theoretical nature and calculational methods, brought Husserl to realize that there was more to the mathematical or formal sciences, and to the mathematical method of calculation, than could be captured in purely quantitative analyses.

Understanding the nature of theory forms shows how reference to impossible objects can be justified. According to his theory of manifolds, one could operate freely within a manifold with imaginary concepts and be sure that what one deduced was correct when the axiomatic system completely and unequivocally determined the body of all the configurations possible in a domain by a purely analytical procedure. It was the completeness of the axiomatic system that gave one the right to operate in that free way. A domain was complete when each grammatically constructed proposition exclusively using the language of the domain was determined from the outset to be true or false in virtue of the axioms, i.e., either necessarily followed from the axioms or did not. In that case, calculating with expressions without reference could never lead to contradictions. Complete manifolds have the distinctive feature that a finite number of concepts and propositions – to be drawn as occasion requires from the essential nature of the domain under consideration – determines completely and unambiguously on the lines of pure logical necessity the totality of all possible formations in the domain, so that in principle, therefore, nothing further remains open within it.

In such complete manifolds, he stressed, “the concepts true and formal implication of the axioms are equivalent” (Ideas).

Husserl pointed out that there may be two valid discipline forms that stand in relation to one another in such a way that the axiom system of one may be a formal limitation of that of the other. It is then clear that everything deducible in the narrower axiom system is included in what is deducible in the expanded system, he explained. In the arithmetic of cardinal numbers, Husserl explained, there are no negative numbers, for the meaning of the axioms is so restrictive as to make subtracting 4 from 3 nonsense. Fractions are meaningless there. So are irrational numbers, √–1, and so on. Yet in practice, all the calculations of the arithmetic of cardinal numbers can be carried out as if the rules governing the operations are unrestrictedly valid and meaningful. One can disregard the limitations imposed in a narrower domain of deduction and act as if the axiom system were a more extended one. We cannot arbitrarily expand the concept of cardinal number, Husserl reasoned. But we can abandon it and define a new, pure formal concept of positive whole number with the formal system of definitions and operations valid for cardinal numbers. And, as set out in our definition, this formal concept of positive numbers can be expanded by new definitions while remaining free of contradiction. Fractions do not acquire any genuine meaning through our holding onto the concept of cardinal number and assuming that units are divisible, he theorized, but rather through our abandonment of the concept of cardinal number and our reliance on a new concept, that of divisible quantities. That leads to a system that partially coincides with that of cardinal numbers, but part of which is larger, meaning that it includes additional basic elements and axioms. And so in this way, with each new quantity, one also changes arithmetics. The different arithmetics do not have parts in common. They have totally different domains, but an analogous structure. They have forms of operation that are in part alike, but different concepts of operation.

For Husserl, formal constraints banning meaningless expressions, meaningless imaginary concepts, and reference to non-existent and impossible objects restrict us in our theoretical, deductive work; but resorting to the infinity of pure forms and transformations of forms frees us from such conditions and explains why the use of imaginaries, of what is meaningless, must lead not to meaningless but to true results.


Metaphysical Would-Be(s). Drunken Risibility.


If one looks at Quine’s commitment to similarity, natural kinds, dispositions, causal statements, etc., it is evident that it takes him close to Peirce’s conception of Thirdness – even if Quine, in a utopian vision, imagines that all such concepts will in a remote future dissolve and vanish in favor of purely microstructural descriptions.

A crucial difference remains, however, which becomes evident when one looks at Quine’s brief formula for ontological commitment, the famous idea that ‘to be is to be the value of a bound variable’. For even if this motto is stated precisely to avoid commitment to several different types of being, it immediately prompts the question: what status does the equation in which the variable is presumably bound itself have? Is governing the behavior of existing variable values not, in some sense, being real?

This is Peirce’s realist idea – that regularities, tendencies, dispositions, patterns may possess real existence, independent of any observer. In Peirce, this description of Thirdness is concentrated in the expression ‘real possibility’, and even if it may sound exceedingly metaphysical at first glance, it amounts, on closer inspection, to the claim that regularities charted by science are not mere shorthands for collections of single events but do possess reality status. In Peirce, the idea of real possibilities thus springs from his philosophy of science – he observes that science, contrary to philosophy, is spontaneously realist, and is right in being so. Real possibilities are thus counterposed to merely subjective possibilities due to lack of knowledge on the part of the subject speaking: the possibility of ‘not known not to be true’.

In a famous piece of self-critique from his late, realist period, Peirce attacks his earlier arguments (from ‘How to Make Our Ideas Clear’, which by the late 1890s he himself considered the birth certificate of pragmatism, after James’s reference to Peirce as pragmatism’s inventor). There, he had written:

let us ask what we mean by calling a thing hard. Evidently that it will not be scratched by many other substances. The whole conception of this quality, as of every other, lies in its conceived effects. There is absolutely no difference between a hard thing and a soft thing so long as they are not brought to the test. Suppose, then, that a diamond could be crystallized in the midst of a cushion of soft cotton, and should remain there until it was finally burned up. Would it be false to say that that diamond was soft? […] Reflection will show that the reply is this: there would be no falsity in such modes of speech.

More than twenty-five years later, however, he attacks this argument as bearing witness to the nominalism of his youth. Now instead he supports the

scholastic doctrine of realism. This is usually defined as the opinion that there are real objects that are general, among the number being the modes of determination of existent singulars, if, indeed, these be not the only such objects. But the belief in this can hardly escape being accompanied by the acknowledgment that there are, besides, real vagues, and especially real possibilities. For possibility being the denial of a necessity, which is a kind of generality, is vague like any other contradiction of a general. Indeed, it is the reality of some possibilities that pragmaticism is most concerned to insist upon. The article of January 1878 endeavored to gloze over this point as unsuited to the exoteric public addressed; or perhaps the writer wavered in his own mind. He said that if a diamond were to be formed in a bed of cotton-wool, and were to be consumed there without ever having been pressed upon by any hard edge or point, it would be merely a question of nomenclature whether that diamond should be said to have been hard or not. No doubt this is true, except for the abominable falsehood in the word MERELY, implying that symbols are unreal. Nomenclature involves classification; and classification is true or false, and the generals to which it refers are either reals in the one case, or figments in the other. For if the reader will turn to the original maxim of pragmaticism at the beginning of this article, he will see that the question is, not what did happen, but whether it would have been well to engage in any line of conduct whose successful issue depended upon whether that diamond would resist an attempt to scratch it, or whether all other logical means of determining how it ought to be classed would lead to the conclusion which, to quote the very words of that article, would be ‘the belief which alone could be the result of investigation carried sufficiently far.’ Pragmaticism makes the ultimate intellectual purport of what you please to consist in conceived conditional resolutions, or their substance; and therefore, the conditional propositions, with their hypothetical antecedents, in which such resolutions consist, being of the ultimate nature of meaning, must be capable of being true, that is, of expressing whatever there be which is such as the proposition expresses, independently of being thought to be so in any judgment, or being represented to be so in any other symbol of any man or men. But that amounts to saying that possibility is sometimes of a real kind. (The Essential Peirce Selected Philosophical Writings, Volume 2)

In the same year, he states, in a letter to the Italian pragmatist Signor Calderoni:

I myself went too far in the direction of nominalism when I said that it was a mere question of the convenience of speech whether we say that a diamond is hard when it is not pressed upon, or whether we say that it is soft until it is pressed upon. I now say that experiment will prove that the diamond is hard, as a positive fact. That is, it is a real fact that it would resist pressure, which amounts to extreme scholastic realism. I deny that pragmaticism as originally defined by me made the intellectual purport of symbols to consist in our conduct. On the contrary, I was most careful to say that it consists in our concept of what our conduct would be upon conceivable occasions. For I had long before declared that absolute individuals were entia rationis, and not realities. A concept determinate in all respects is as fictitious as a concept definite in all respects. I do not think we can ever have a logical right to infer, even as probable, the existence of anything entirely contrary in its nature to all that we can experience or imagine. 

Here lies the core of Peirce’s metaphysical insistence on the reality of ‘would-be’s. Real possibilities, or would-bes, are vague to the extent that they describe certain tendential, conditional behaviors only, while they do not prescribe any other aspect of the single objects they subsume. They are, furthermore, represented in rationally interrelated clusters of concepts: the fact that the diamond is hard, whether or not it ever scratches anything, lies in the fact that the diamond’s carbon structure displays a certain spatial arrangement – so it is an aspect of the very concept of diamond. And this is why the old pragmatic maxim may not work without real possibilities: it is they that the very maxim rests upon, because it is they that provide us with the ‘conceived consequences’ of accepting a concept. The maxim remains a test to weed out empty concepts with no conceived consequences – that is, empty a priori reasoning and superfluous metaphysical assumptions. But what remains after the maxim has been put to use is real possibilities. Real possibilities thus connect epistemology, expressed in the pragmatic maxim, to ontology: real possibilities are what science may grasp in conditional hypotheses.

The question is whether Peirce’s revision of his old ‘nominalist’ beliefs forms part of a more general development in Peirce from nominalism to realism. The locus classicus of this idea is Max Fisch (Peirce, Semeiotic and Pragmatism) where Fisch outlines a development from an initial nominalism (albeit of a strange kind, refusing, as always in Peirce, the existence of individuals determinate in all respects) via a series of steps towards realism, culminating after the turn of the century. Fisch’s first step is then Peirce’s theory of the real as that which reasoning would finally have as its result; the second step his Berkeley review with its anti-nominalism and the idea that the real is what is unaffected by what we may think of it; the third step is his pragmatist idea that beliefs are conceived habits of action, even if he here clings to the idea that the conditionals in which habits are expressed are material implications only – like the definition of ‘hard’; the fourth step his reading of Abbott’s realist Scientific Theism (which later influenced his conception of scientific universals) and his introduction of the index in his theory of signs; the fifth step his acceptance of the reality of continuity; the sixth the introduction of real possibilities, accompanied by the development of existential graphs, topology and Peirce’s changing view of Hegelianism; the seventh, the identification of pragmatism with realism; the eighth ‘his last stronghold, that of Philonian or material implication’. A further realist development exchanging Peirce’s early frequentist idea of probability for a dispositional theory of probability was, according to Fisch, never finished.

The issue of implication concerns the old discussion, reported by Cicero, between the Hellenistic logicians Philo and Diodorus. The former formulated what we know today as material implication, while the latter objected on common-sense grounds that material implication does not capture implication in everyday language and thought, and that another implication type should be sought. As is well known, material implication says that p ⇒ q is equivalent to the claim that either p is false or q is true – so that p ⇒ q is false only when p is true and q is false. The problems arise when p is false, for any false p makes the implication true, and this leads to strange cases of true implications in which the two parts have no connection with each other at all, contrary to the spontaneous idea of implication in everyday thought. It is true that Peirce as a logician generally supports material (‘Philonian’) implication – but it is also true that he expresses some second thoughts at around the same time as the afterthoughts on the diamond example.
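As a minimal illustration of the Philonian reading (a Python sketch with illustrative names, not anything from Peirce’s own notation), the truth table below shows that p ⇒ q fails only in the single row where p is true and q is false, and comes out vacuously true whenever p is false, precisely the feature Diodorus objected to.

```python
from itertools import product

def material_implication(p: bool, q: bool) -> bool:
    """Philonian (material) implication: p => q is false only when p is true and q is false."""
    return (not p) or q

for p, q in product([True, False], repeat=2):
    print(f"p={p!s:<5} q={q!s:<5}  p => q: {material_implication(p, q)}")

# The two rows with p False come out True regardless of q -- the 'paradoxical'
# cases in which antecedent and consequent need have no connection at all.
```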

Peirce is a forerunner of the attempts to construct alternatives such as strict implication, and the reason is, of course, that real possibilities are not adequately depicted by material implication. Peirce is in need of an implication which may somehow picture the causal dependency of q on p. The basic reason for the mature Peirce’s problems with the representation of real possibilities is not primarily logical, however; it is scientific. Peirce realizes that the scientific charting of anything but singular, actual events necessitates the real existence of tendencies and relations connecting singular events. Now, of what kinds are those tendencies and relations? The hard-diamond example seems to emphasize causality, but this probably depends on the point of view chosen. The ‘conceived consequences’ of the pragmatic maxim may be causal indeed: if we accept gravity as a real concept, then masses will attract one another. But they may all the same be structural: if we accept horse riders as a real concept, then we should expect horses, persons, the taming of horses, etc. to exist. Or they may be teleological. In any case, the interpretation of the pragmatic maxim in terms of real possibilities paves the way for a distinction between empty a priori suppositions and real a priori structures.

Fibrations of Elliptic Curves in F-Theory.


F-theory compactifications are by definition compactifications of the type IIB string with non-zero, and in general non-constant, string coupling – they are thus intrinsically non-perturbative. F-theory may also be seen as a construction that geometrizes (and thereby makes manifest) certain features pertaining to the S-duality of the type IIB string.

Let us first recapitulate the most important massless bosonic fields of the type IIB string. From the NS-NS sector, we have the graviton gμν, the antisymmetric 2-form field B(2), as well as the dilaton φ; the latter, when exponentiated, serves as the coupling constant of the theory. Moreover, from the R-R sector we have the p-form tensor fields C(p) with p = 0, 2, 4. It is also convenient to include the magnetic duals of these fields, B(6), C(6) and C(8) (C(4) has self-dual field strength). It is useful to combine the dilaton with the axion C(0) into one complex field:

τIIB ≡ C(0) + ie^(−φ) —– (1)

The S-duality then acts via projective SL(2, Z) transformations in the canonical manner:

τIIB → (aτIIB + b)/(cτIIB + d) with a, b, c, d ∈ Z and ad – bc = 1

Furthermore, it acts via simple matrix multiplication on the other fields if these are grouped into doublets (B(2), C(2)) and (B(6), C(6)), while C(4) stays invariant.
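A small numerical sketch may make this action concrete; it assumes nothing beyond the fractional linear transformation quoted above and the statement that the doublets transform by matrix multiplication (the function names and sample values are illustrative, not part of any standard package).

```python
import numpy as np

def act_on_tau(M, tau):
    """Fractional linear action of M = [[a, b], [c, d]] in SL(2, Z) on tau."""
    (a, b), (c, d) = M
    assert a * d - b * c == 1, "M must have determinant 1"
    return (a * tau + b) / (c * tau + d)

# The two standard generators of SL(2, Z): T shifts the axion C(0) by one unit,
# S inverts the complexified coupling.
T = [[1, 1], [0, 1]]
S = [[0, -1], [1, 0]]

tau = 0.3 + 1.2j                      # a sample value of C(0) + i e^(-phi)
print(act_on_tau(T, tau))             # tau + 1
print(act_on_tau(S, tau))             # -1/tau

# The 2-form doublet (B(2), C(2)) transforms by plain matrix multiplication.
doublet = np.array([0.7, -0.2])
print(np.array(S) @ doublet)
```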

The simplest F-theory compactifications are the highest-dimensional ones, and the simplest of all is the compactification of the type IIB string on the 2-sphere, P1. However, as the first Chern class does not vanish, c1(P1) = −2, this by itself cannot be a good, supersymmetry-preserving background. The remedy is to add extra 7-branes to the theory, which sit at arbitrary points z_i on the P1 and otherwise fill the 7+1 non-compact space-time dimensions. If this is done in the right way, c1(P1) is cancelled, thereby providing a consistent background.


Encircling the location of a 7-brane in the z-plane leads to a jump of the perceived type IIB string coupling, τIIB → τIIB + 1.

To explain how this works, consider first a single D7-brane located at an arbitrary given point z0 on the P1. A D7-brane carries by definition one unit of D7-brane charge, since it is a unit source of C(8). This means that it is magnetically charged with respect to the dual field C(0), which enters the complexified type IIB coupling in (1). As a consequence, encircling the brane location z0 will induce a non-trivial monodromy, that is, a jump of the coupling. But this then implies that in the neighborhood of the D7-brane we must have a non-constant string coupling of the form τIIB(z) = (1/2πi) ln[z – z0]; we thus indeed have a truly non-perturbative situation.
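The claimed jump can be checked numerically: if the branch of the logarithm is tracked continuously around a small loop enclosing z0, τIIB shifts by exactly +1. The sketch below does this by unwrapping the phase along the loop (all values are illustrative).

```python
import numpy as np

# Monodromy check for tau_IIB(z) = (1/(2*pi*i)) ln(z - z0): encircling the
# brane location z0 once should shift tau by +1.
z0 = 0.4 + 0.1j
theta = np.linspace(0.0, 2.0 * np.pi, 2001)
loop = z0 + 0.01 * np.exp(1j * theta)          # small circle around z0

# Track the branch of the logarithm continuously by unwrapping the phase.
phase = np.unwrap(np.angle(loop - z0))
tau = (np.log(np.abs(loop - z0)) + 1j * phase) / (2j * np.pi)

print(tau[-1] - tau[0])    # approximately 1 + 0j: the jump tau -> tau + 1
```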

In view of the SL(2, Z) action on the string coupling (1), it is natural to interpret τIIB as the modular parameter of a two-torus, T2, and this is what then gives a geometrical meaning to the S-duality group. Since this modular parameter τIIB = τIIB(z) is not constant over the P1 compactification manifold, the shape of the T2 will accordingly vary along P1. The relevant geometrical object will therefore not be the direct product manifold T2 × P1, but rather a fibration of T2 over P1.


Fibration of an elliptic curve over P1, which in total makes a K3 surface.

The logarithmic behavior of τIIB(z) in the vicinity of a 7-brane means that the T2 fiber is singular at the brane location. It is known from mathematics that each such singular fiber contributes 1/12 to the first Chern class. Therefore we need to put in 24 of them in order to have a consistent type IIB background with c1 = 0. The mathematical data “T2 fibered over P1 with 24 singular fibers” is now exactly what characterizes the K3 surface; indeed it is the only complex two-dimensional manifold with vanishing first Chern class (apart from T4).

The K3 manifold that arises in this context is so far just a formal construct, introduced to encode the behavior of the string coupling in the presence of 7-branes in an elegant and useful way. One may speculate about a possible more concrete physical significance, such as being the compactification manifold of a yet unknown 12-dimensional “F-theory”. The existence of such a theory is still unclear, but all we need the K3 for is to use its intriguing geometric properties for computing physical quantities (ultimately, the quartic gauge threshold couplings).

In order to do explicit computations, we first of all need a concrete representation of the K3 surface. Since the families of K3’s in question are elliptically fibered, the natural starting point is the two-torus T2. It can be represented in the well-known Weierstraß form:

W_{T^2} = y^2 + x^3 + xf + g = 0 —– (2)

which in turn is invariantly characterized by the J-function:

J = 4(24f)^3/(4f^3 + 27g^2) —– (3)

An elliptically fibered K3 surface can be made out of (2) by letting f → f_8(z) and g → g_12(z) become polynomials of the indicated orders in the P1 coordinate z. The locations z_i of the 7-branes, which correspond to the locations of the singular fibers where J(τIIB(z_i)) → ∞, are then precisely where the discriminant

∆(z) ≡ 4f_8^3(z) + 27g_12^2(z) =: ∏_{i=1}^{24} (z − z_i)

vanishes.
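For generic coefficients ∆(z) is a polynomial of degree 24, so it has 24 zeros z_i, matching the 24 singular fibers required for c1 = 0. The sketch below makes the counting concrete with random stand-in coefficients for f_8 and g_12 (not actual compactification data).

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in polynomials f_8(z) of degree 8 and g_12(z) of degree 12 with
# random coefficients; real compactification data would replace these.
f8 = np.polynomial.Polynomial(rng.normal(size=9))
g12 = np.polynomial.Polynomial(rng.normal(size=13))

# Discriminant Delta(z) = 4 f_8(z)^3 + 27 g_12(z)^2 has degree 24, so for
# generic coefficients it vanishes at 24 points z_i: the 7-brane locations.
delta = 4 * f8**3 + 27 * g12**2
roots = delta.roots()

print(len(roots))                    # 24
print(np.abs(delta(roots)).max())    # residuals at the computed roots are small
```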

The Coming Swarm: DDoS Actions, Hacktivism, and Civil Disobedience on the Internet


On November 28, 2010, Wikileaks, along with the New York Times, Der Spiegel, El Pais, Le Monde, and The Guardian, began releasing documents from a leaked cache of 251,287 unclassified and classified US diplomatic cables, copied from the closed Department of Defense network SIPRNet. The US government was furious. In the days that followed, different organizations and corporations began distancing themselves from Wikileaks. Amazon Web Services declined to continue hosting Wikileaks’ website, and on December 1, removed its content from its servers. The next day, the public could no longer reach the Wikileaks website at wikileaks.org; Wikileaks’ Domain Name System (DNS) provider, EveryDNS, had dropped the site from its entries on December 2, temporarily making the site inaccessible through its URL. Shortly thereafter, what would become known as the “Banking Blockade” began, with PayPal, PostFinance, Mastercard, Visa, and Bank of America refusing to process online donations to Wikileaks, essentially halting the flow of monetary donations to the organization.

Wikileaks’ troubles attracted the attention of Anonymous, a loose group of internet denizens, and in particular a small subgroup known as AnonOps, who had been engaged in a retaliatory distributed denial of service (DDoS) campaign called Operation Payback, targeting the Motion Picture Association of America and other pro-copyright, antipiracy groups since September 2010. A DDoS action is, simply, when a large number of computers attempt to access one website over and over again in a short amount of time, in the hope of overwhelming the server, rendering it incapable of responding to legitimate requests. Anons, as members of the Anonymous subculture are known, were happy to extend Operation Payback’s range of targets to include the forces arrayed against Wikileaks and its public face, Julian Assange. On December 6, they launched their first DDoS action against the website of the Swiss banking service PostFinance. Over the course of the next four days, Anonymous and AnonOps would launch DDoS actions against the websites of the Swedish Prosecution Authority, EveryDNS, Senator Joseph Lieberman, Mastercard, two Swedish politicians, Visa, PayPal, amazon.com, and others, forcing many of the sites to experience at least some amount of downtime.

For many in the media and public at large, Anonymous’ December 2010 DDoS campaign was their first exposure to the use of this tactic by activists, and the exact nature of the action was unclear. Was it an activist action, a legitimate act of protest, an act of terrorism, or a criminal act? These DDoS actions – concerted efforts by many individuals to bring down websites by making repeated requests of the websites’ servers in a short amount of time – were covered extensively by the media. This coverage was inconsistent in its characterization but was open to the idea that these actions could be legitimately political in nature. In the eyes of the media and public, Operation Payback opened the door to the potential for civil disobedience and disruptive activism on the internet. But Operation Payback was far from the first use of DDoS as a tool of activism. Rather, DDoS actions have been in use for over two decades, in support of activist campaigns ranging from pro-Zapatista actions to protests against German immigration policy and trademark enforcement disputes….

The Coming Swarm: DDoS Actions, Hacktivism, and Civil Disobedience on the Internet

Knowledge Limited for Dummies….Didactics.


Bertrand Russell, with Alfred North Whitehead, aimed in the Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion, when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (and therefore fled to the Institute for Advanced Study at Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, consider a statement like “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its converse is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that imbed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, that the combined accuracy of a measurement of an electron’s location and its momentum is bounded: the product of the two uncertainties can never fall below a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength, the greater the amount of energy that hits the electron, and the greater the energy hitting the electron, the greater the impact on its velocity.

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, promulgated a slight error in the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
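The effect is easy to reproduce with the three-variable system Lorenz published in that paper. The sketch below uses conventional parameter values and a standard Runge-Kutta step (choices of mine, not Lorenz’s original vacuum-tube setup): it restarts the integration from a state rounded to three decimals and watches the two runs drift apart.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system from 'Deterministic Nonperiodic Flow'."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=3000):
    """Fourth-order Runge-Kutta integration; returns the whole trajectory."""
    traj = [state]
    for _ in range(steps):
        k1 = lorenz(state)
        k2 = lorenz(state + 0.5 * dt * k1)
        k3 = lorenz(state + 0.5 * dt * k2)
        k4 = lorenz(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(state)
    return np.array(traj)

# Run to an intermediate state, then restart twice: once from the full-precision
# state and once from the same state rounded to three decimals, mimicking the
# truncated printout Lorenz typed back in.
warmup = integrate(np.array([1.0, 1.0, 1.0]), steps=1000)[-1]
full = integrate(warmup, steps=3000)
rounded = integrate(np.round(warmup, 3), steps=3000)

# The separation grows roughly exponentially until the two runs are unrelated.
for step in (0, 500, 1000, 2000, 3000):
    print(step, np.linalg.norm(full[step] - rounded[step]))
```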

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
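A toy version of such an artificial market is easy to write down. The sketch below is not the Santa Fe Institute’s actual model; it is an assumed minimal mix of trend followers, value traders, and liquidity-driven noise traders whose combined excess demand moves a single price through a simple impact rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(steps=2000, value=100.0):
    """Toy artificial market: three simple trader types submit net demand,
    and the price moves in proportion to the excess demand."""
    prices = [value, value * 1.001]                 # two points to seed the trend signal
    for _ in range(steps):
        p = prices[-1]
        trend = np.sign(p - prices[-2])             # trend followers chase recent momentum
        fundamental = 0.05 * (value - p)            # value traders lean against mispricing
        liquidity = rng.normal(0.0, 1.0)            # random liquidity-driven orders
        excess_demand = 0.8 * trend + fundamental + liquidity
        prices.append(p * np.exp(0.01 * excess_demand))   # price-impact rule
    return np.array(prices)

prices = simulate()
returns = np.diff(np.log(prices))
print("return volatility:", returns.std())
print("excess kurtosis:", ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3.0)
```

How closely any particular parameter choice mimics real markets is, of course, an open question; the point is only that a handful of simple rules already produces non-trivial price paths.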

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever the set of equations might be that could be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Statistical Arbitrage. Thought of the Day 123.0


In the perfect market paradigm, assets can be bought and sold instantaneously with no transaction costs. For many financial markets, such as listed stocks and futures contracts, the reality of the market comes close to this ideal – at least most of the time. The commission for most stock transactions by an institutional trader is just a few cents a share, and the bid/offer spread is between one and five cents. Also implicit in the perfect market paradigm is a level of liquidity where the act of buying or selling does not affect the price. The market is composed of participants who are so small relative to the market that they can execute their trades, extracting liquidity from the market as they demand, without moving the price.

That’s where the perfect market vision starts to break down. Not only does the demand for liquidity move prices, but it also is the primary driver of the day-by-day movement in prices – and the primary driver of crashes and price bubbles as well. The relationship between liquidity and the prices of related stocks also became the primary driver of one of the most powerful trading models in the past 20 years – statistical arbitrage.

If you spend any time at all on a trading floor, it becomes obvious that something more than information moves prices. Throughout the day, the 10-year bond trader gets orders from the derivatives desk to hedge a swap position, from the mortgage desk to hedge mortgage exposure, from insurance clients who need to sell bonds to meet liabilities, and from bond mutual funds that need to invest the proceeds of new accounts. None of these orders has anything to do with information; each one has everything to do with a need for liquidity. The resulting price changes give the market no signal concerning information; the price changes are only the result of the need for liquidity. And the party on the other side of the trade who provides this liquidity will on average make money for doing so. For the liquidity demander, time is more important than price; he is willing to make a price concession to get his need fulfilled.

Liquidity needs will be manifest in the bond traders’ own activities. If their inventory grows too large and they feel overexposed, they will aggressively hedge or liquidate a portion of the position. And they will do so in a way that respects the liquidity constraints of the market. A trader who needs to sell 2,000 bond futures to reduce exposure does not say, “The market is efficient and competitive, and my actions are not based on any information about prices, so I will just put those contracts in the market and everybody will pay the fair price for them.” If the trader dumps 2,000 contracts into the market, that offer obviously will affect the price even though the trader does not have any new information. Indeed, the trade would affect the market price even if the market knew the selling was not based on an informational edge.

So the principal reason for intraday price movement is the demand for liquidity. This view of the market – a liquidity view rather than an informational view – replaces the conventional academic perspective of the role of the market, in which the market is efficient and exists solely for conveying information. Why the change in roles? For one thing, it’s harder to get an information advantage, what with the globalization of markets and the widespread dissemination of real-time information. At the same time, the growth in the number of market participants means there are more incidents of liquidity demand. They want it, and they want it now.

Investors or traders who are uncomfortable with their level of exposure will be willing to pay up to get someone to take the position. The more uncomfortable the traders are, the more they will pay. And well they should, because someone else is getting saddled with the risk of the position, someone who most likely did not want to take on that position at the existing market price. Thus the demand for liquidity not only is the source of most price movement; it is at the root of most trading strategies. It is this liquidity-oriented, tectonic market shift that has made statistical arbitrage so powerful.

Statistical arbitrage originated in the 1980s from the hedging demand of Morgan Stanley’s equity block-trading desk, which at the time was the center of risk taking on the equity trading floor. Like other broker-dealers, Morgan Stanley continually faced the problem of how to execute large block trades efficiently without suffering a price penalty. Often, major institutions discover they can clear a large block trade only at a large discount to the posted price. The reason is simple: Other traders will not know if there is more stock to follow, and the large size will leave them uncertain about the reason for the trade. It could be that someone knows something they don’t and they will end up on the wrong side of the trade once the news hits the street. The institution can break the block into a number of smaller trades and put them into the market one at a time. Though that’s a step in the right direction, after a while it will become clear that there is persistent demand on one side of the market, and other traders, uncertain who it is and how long it will continue, will hesitate.

The solution to this problem is to execute the trade through a broker-dealer’s block-trading desk. The block-trading desk gives the institution a price for the entire trade, and then acts as an intermediary in executing the trade on the exchange floor. Because the block traders know the client, they have a pretty good idea if the trade is a stand-alone trade or the first trickle of a larger flow. For example, if the institution is a pension fund, it is likely it does not have any special information, but it simply needs to sell the stock to meet some liability or to buy stock to invest a new inflow of funds. The desk adjusts the spread it demands to execute the block accordingly. The block desk has many transactions from many clients, so it is in a good position to mask the trade within its normal business flow. And it also might have clients who would be interested in taking the other side of the transaction.

The block desk could end up having to sit on the stock because there is simply no demand and because throwing the entire position onto the floor will cause prices to run against it. Or some news could suddenly break, causing the market to move against the position held by the desk. Or, in yet a third scenario, another big position could hit the exchange floor that moves prices away from the desk’s position and completely fills existing demand. A strategy evolved at some block desks to reduce this risk by hedging the block with a position in another stock. For example, if the desk received an order to buy 100,000 shares of General Motors, it might immediately go out and buy 10,000 or 20,000 shares of Ford Motor Company against that position. If news moved the stock price prior to the GM block being acquired, Ford would also likely be similarly affected. So if GM rose, making it more expensive to fill the customer’s order, a position in Ford would also likely rise, partially offsetting this increase in cost.

This was the case at Morgan Stanley, where a list was maintained of pairs of stocks that were closely related, especially in the short term, in order to have at the ready a solution for partially hedging positions. By reducing risk, the pairs trade also gave the desk more time to work out of the trade. This helped to lessen the liquidity-related movement of a stock price during a big block trade. As a result, this strategy increased the profit for the desk.

The pairs increased profits. Somehow that lightbulb didn’t go on in the world of equity trading, which was largely devoid of principal transactions and systematic risk taking. Instead, the block traders epitomized the image of cigar-chewing gamblers, playing market poker with millions of dollars of capital at a clip while working the phones from one deal to the next, riding in a cloud of trading mayhem. They were too busy to exploit the fact, or it never occurred to them, that the pairs hedging they routinely used held the secret to a revolutionary trading strategy that would dwarf their desk’s operations and make a fortune for a generation of less flamboyant, more analytical traders. Used on a different scale and applied for profit making rather than hedging, their pairwise hedges became the genesis of statistical arbitrage trading. The pairwise stock trades that form the elements of statistical arbitrage trading in the equity market are just one more flavor of spread trades. On an individual basis, they’re not very good spread trades. It is the diversification that comes from holding many pairs that makes this strategy a success. But even then, although its name suggests otherwise, statistical arbitrage is a spread trade, not a true arbitrage trade.
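A bare-bones sketch of the mechanics: estimate a hedge ratio between the two legs, form the spread, and trade its z-score. The price series below are simulated stand-ins, and the regression hedge ratio and entry thresholds are illustrative choices rather than any desk’s actual recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in prices for a closely related pair of stocks;
# real data would replace this block.
n = 500
common = np.cumsum(rng.normal(0.0, 0.5, n))                   # shared driver of both stocks
stock_a = 50.0 + common + np.cumsum(rng.normal(0.0, 0.2, n))
stock_b = 45.0 + 0.9 * common + np.cumsum(rng.normal(0.0, 0.2, n))

# Hedge ratio from a simple regression of A on B (one of many possible choices).
beta = np.polyfit(stock_b, stock_a, 1)[0]
spread = stock_a - beta * stock_b

# Trade the z-score of the spread: short the spread when it is rich,
# long when it is cheap, flat otherwise. Thresholds are illustrative.
z = (spread - spread.mean()) / spread.std()
position = np.where(z > 1.5, -1, np.where(z < -1.5, 1, 0))
pnl = position[:-1] * np.diff(spread)

print("days in a position:", int(np.count_nonzero(position)))
print("cumulative spread P&L (simulated):", pnl.sum())
```

On a single pair this is just a noisy spread trade; the statistical-arbitrage effect described above comes from the diversification of holding many such pairs at once.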

Metaphysical Continuity in Peirce. Thought of the Day 122.0


Continuity has wide implications in the different parts of Peirce’s architectonics of theories. Time and again, Peirce refers to his ‘principle of continuity’, which has nothing immediately to do with Poncelet’s famous principle of the same name in geometry but is, rather, a metaphysical implication taken to follow from fallibilism: if all more or less distinct phenomena swim in a vague sea of continuity, then it is no wonder that fallibilism must be accepted. And if the world is basically continuous, we should not expect conceptual borders to be definitive but should rather conceive of terminological distinctions as relative to an underlying, monist continuity. In this system, mathematics is the first science. Thereafter follows philosophy, which is distinguished from purely hypothetical mathematics by having an empirical basis. Philosophy, in turn, has three parts: phenomenology, the normative sciences, and metaphysics. The first investigates solely ‘the Phaneron’, which is all that could be imagined to appear as an object of experience: ‘by the word phaneron I mean the collective total of all that is in any way or in any sense present to the mind, quite regardless whether it corresponds to any real thing or not.’ (Charles Sanders Peirce – Collected Papers of Charles Sanders Peirce) As is evident, this definition of Peirce’s ‘phenomenology’ parallels Husserl’s phenomenological reduction in bracketing the issue of the existence of the phenomenon in question. Even if it is thus built on introspection and general experience, it is – analogously to Husserl and other Brentano disciples of the same period – conceived in a completely antipsychological manner: ‘It religiously abstains from all speculation as to any relations between its categories and physiological facts, cerebral or other’, and ‘I abstain from psychology which has nothing to do with ideoscopy’ (Letter to Lady Welby).

The normative sciences fall into three: aesthetics, ethics, and logic, in that order of decreasing generality; Peirce does not spend very much time on the first two. Aesthetics investigates which goals it is possible to aim at (the Good, Truth, Beauty, etc.), and ethics how they may be reached. Logic is concerned with the grasping and conservation of Truth and takes up the larger part of Peirce’s interest among the normative sciences. Because it deals with how truth can be obtained by means of signs, it is also called semiotics (‘logic is formal semiotics’), which is thus coextensive with the theory of science – logic in this broad sense contains all parts of the philosophy of science, contexts of discovery as well as contexts of justification. Semiotics has, in turn, three branches: grammatica speculativa (or stekheiotics), critical logic, and methodeutic (inspired by the mediaeval trivium: grammar, logic, and rhetoric). The middle of these three lies closest to our present-day conception of logic; it is concerned with the formal conditions for truth in symbols – that is, propositions, arguments, their validity and how to calculate them – including Peirce’s many developments of the logic of his time: quantifiers, the logic of relations, abduction, deduction, and induction, logical notation systems, etc. All of these, however, presuppose the existence of simple signs, which are investigated by what is often seen as semiotics proper, the grammatica speculativa; it may also be called formal grammar. It investigates the formal conditions for symbols having meaning, and it is here that we find Peirce’s definition of signs and his trichotomies of different types of sign aspects. Methodeutic, or formal rhetoric, concerns the pragmatic use of the former two branches: the study of how to use logic fruitfully in research, the formal conditions for the ‘power’ of symbols, that is, their reference to their interpretants; here can be found, for example, Peirce’s famous definitions of pragmati(ci)sm and his directions for scientific investigation.

To phenomenology – again in analogy to Husserl – logic adds the interest in signs and their truth. After logic, metaphysics follows in Peirce’s system, concerning the inventory of existing objects, conceived in general terms – and strongly influenced by logic, in the Kantian tradition of seeing metaphysics as mirroring logic. Here, too, Peirce has several proposals for subtypologies, even if none of them seems stable, and under this heading classical metaphysical issues mix freely with generalizations of scientific results and cosmological speculations.

Peirce himself saw this classification in an almost sociological manner: the criteria of distinction do not stem directly from the natural kinds of the objects involved, but from which groups of persons study which objects: ‘the only natural lines of demarcation between nearly related sciences are the divisions between the social groups of devotees of those sciences’. A science collects scientists into a bundle because they are defined by their causa finalis, a teleological intention demanding that they solve a central problem.

Measured against this definition, Peirce himself was not modest: not only does he continuously transgress such boundaries in his work, he frequently does so even within the scope of single papers. In his writings there is always only a brief distance from mathematics to metaphysics – or between any other two issues in mathematics and philosophy. This implies, first, that an investigation of continuity and generality in Peirce’s system is more systematic than any actually existing exposition of these issues in Peirce’s texts and, second, that the discussion must constantly rely on cross-references. The structural motivation is that, as soon as one is below the level of mathematics in Peirce’s system (itself inspired by the Comtean system), each single science receives determinations from three different directions, every science consisting of material and formal aspects alike. First, it receives formal directives ‘from above’, from the more general sciences that stand above it, which provide the general frameworks in which it must unfold. Second, it receives material determinations from its own object, which require it to make certain choices in its use of formal insights from the higher sciences. The cosmological issue of the character of empirical space, for instance, can take from mathematics the different (non-)Euclidean geometries and investigate which of these are fit to describe the spatial aspects of our universe, but it does not itself provide the formal tools. Finally, the single sciences in practice receive determinations ‘from below’, from more specific sciences, when the latter’s results, by means of abstraction, prescission, induction, and other procedures, provide insights on the more general, material level. Even if cosmology is, for instance, part of metaphysics, it receives influences from the empirical results of physics (or of biology, from which Peirce takes the generalized principle of evolution). The distinction between formal and material is thus level-specific: what is material on one level is a formal bundle of possibilities for the level below; what is formal on one level is material on the level above.

For these reasons, each single step on the ladder of sciences is only partially independent in Peirce; hence also the tendency of his own investigations to zigzag between the levels. His architecture of theories thus forms a sort of phenomenological theory of aspects: the hierarchy of sciences is an architecture of more and less general aspects of the phenomena, not of completely independent domains. Finally, Peirce’s realism results in a somewhat disturbing style of thinking: many of his central concepts receive numerous, often highly different determinations, which has frequently led interpreters to assume inconsistencies or theoretical developments in Peirce where none necessarily exist. When Peirce, for instance, determines the icon as the sign possessing a similarity to its object, and elsewhere determines it as the sign by the contemplation of which it is possible to learn more about its object, these are not conflicting definitions. Peirce’s determinations of concepts are rarely definitions at all in the sense of providing necessary and sufficient conditions that exhaust the phenomenon in question. His determinations should rather be seen as descriptions, from different perspectives, of a real (and maybe ideal) object – without these descriptions necessarily conflicting. This style of thinking can, however, be seen as motivated by metaphysical continuity: when continuous grading between concepts is the rule, definitions in terms of necessary and sufficient conditions should not be expected to be exhaustive.