Complete Manifolds’ Pure Logical Necessity as the Totality of Possible Formations. Thought of the Day 124.0


In the Logical Investigations, Husserl called his theory of complete manifolds the key to the only possible solution of the question as to how, in the realm of numbers, impossible, non-existent, meaningless concepts might be dealt with as real ones. In Ideas, he wrote that his chief purpose in developing his theory of manifolds had been to find a theoretical solution to the problem of imaginary quantities (Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy).

Husserl saw how questions regarding imaginary numbers come up in mathematical contexts in which formalization yields constructions which, arithmetically speaking, are nonsense but can nevertheless be used in calculations. When formal reasoning is carried out mechanically, as if these symbols had meaning, then as long as the ordinary rules are observed and the results do not contain any imaginary components, these symbols may be legitimately used. And this could be empirically verified (Philosophy of Arithmetic: Psychological and Logical Investigations with Supplementary Texts).

In a letter to Carl Stumpf in the early 1890s, Husserl explained how, in trying to understand how operating with contradictory concepts could lead to correct theorems, he had found that for imaginary numbers like √2 and √-1, it was not a matter of the possibility or impossibility of concepts. Through the calculation itself and its rules, as defined for those fictive numbers, the impossible fell away, and a genuine equation remained. One could calculate again with the same signs, but referring to valid concepts, and the result was again correct. Even if one mistakenly imagined that what was contradictory existed, or held the most absurd theories about the content of the corresponding concepts of number, the calculation remained correct if it followed the rules. He concluded that this must be a result of the signs and their rules (Early Writings in the Philosophy of Logic and Mathematics). The fact that one can generalize and produce variations of formal arithmetic that lead outside the quantitative domain without essentially altering formal arithmetic’s theoretical nature and calculational methods brought Husserl to realize that there was more to the mathematical or formal sciences, and to the mathematical method of calculation, than could be captured in purely quantitative analyses.

Understanding the nature of theory forms shows how reference to impossible objects can be justified. According to his theory of manifolds, one could operate freely within a manifold with imaginary concepts and be sure that what one deduced was correct when the axiomatic system completely and unequivocally determined the body of all the configurations possible in a domain by a purely analytical procedure. It was the completeness of the axiomatic system that gave one the right to operate in that free way. A domain was complete when each grammatically constructed proposition exclusively using the language of the domain was determined from the outset to be true or false in virtue of the axioms, i.e., it either necessarily followed from the axioms or it did not. In that case, calculating with expressions without reference could never lead to contradictions. Complete manifolds have the distinctive feature that a finite number of concepts and propositions – to be drawn as occasion requires from the essential nature of the domain under consideration – determines completely and unambiguously on the lines of pure logical necessity the totality of all possible formations in the domain, so that in principle, therefore, nothing further remains open within it.

In such complete manifolds, he stressed, “the concepts true and formal implication of the axioms are equivalent” (Ideas).
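A rough modern paraphrase of this notion of definiteness (my gloss, not Husserl’s own formulation): an axiom system A over a language L is complete in the relevant sense when

$$\text{for every sentence } \varphi \text{ of } L:\quad A \vdash \varphi \ \ \text{or}\ \ A \vdash \neg\varphi,$$

so that “true in the domain” and “formally derivable from the axioms” coincide, and no well-formed question about the domain remains undecided.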

Husserl pointed out that there may be two valid discipline forms that stand in relation to one another in such a way that the axiom system of one may be a formal limitation of that of the other. It is then clear that everything deducible in the narrower axiom system is included in what is deducible in the expanded system, he explained. In the arithmetic of cardinal numbers, Husserl explained, there are no negative numbers, for the meaning of the axioms is so restrictive as to make subtracting 4 from 3 nonsense. Fractions are meaningless there. So are irrational numbers, √–1, and so on. Yet in practice, all the calculations of the arithmetic of cardinal numbers can be carried out as if the rules governing the operations are unrestrictedly valid and meaningful. One can disregard the limitations imposed in a narrower domain of deduction and act as if the axiom system were a more extended one. We cannot arbitrarily expand the concept of cardinal number, Husserl reasoned. But we can abandon it and define a new, pure formal concept of positive whole number with the formal system of definitions and operations valid for cardinal numbers. And, as set out in our definition, this formal concept of positive numbers can be expanded by new definitions while remaining free of contradiction. Fractions do not acquire any genuine meaning through our holding onto the concept of cardinal number and assuming that units are divisible, he theorized, but rather through our abandonment of the concept of cardinal number and our reliance on a new concept, that of divisible quantities. That leads to a system that partially coincides with that of cardinal numbers, but part of which is larger, meaning that it includes additional basic elements and axioms. And so in this way, with each new quantity, one also changes arithmetics. The different arithmetics do not have parts in common. They have totally different domains, but an analogous structure. They have forms of operation that are in part alike, but different concepts of operation.
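A minimal sketch of the layering Husserl describes (the chain of modern number domains is my illustration, not his notation): an expression that is nonsense under one axiom system becomes well defined under a formally new one with an analogous structure,

$$3-4 \ \text{(undefined for cardinal numbers)} \;\rightsquigarrow\; -1 \in \mathbb{Z}, \qquad \tfrac{3}{4} \in \mathbb{Q}, \qquad \sqrt{2} \in \mathbb{R}, \qquad \sqrt{-1} = i \in \mathbb{C},$$

where, on Husserl’s account, each step is not an expansion of the old concept but its replacement by a new formal concept whose system is structurally analogous to, and partially coincident with, the narrower one.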

For Husserl, formal constraints banning meaningless expressions, meaningless imaginary concepts, and reference to non-existent and impossible objects restrict us in our theoretical, deductive work, but resorting to the infinity of pure forms and transformations of forms frees us from such conditions and explains why the use of imaginaries, of what is meaningless, must lead not to meaningless, but to true results.


Metaphysical Would-Be(s). Drunken Risibility.


Looking at Quine’s commitment to similarity, natural kinds, dispositions, causal statements, etc., it is evident that it takes him close to Peirce’s conception of Thirdness – even if Quine, in a utopian vision, imagines that all such concepts will in a remote future dissolve and vanish in favor of purely microstructural descriptions.

A crucial difference remains, however, which becomes evident when one looks at Quine’s brief formula for ontological commitment, the famous idea that ‘to be is to be the value of a bound variable’. For even if this motto is framed precisely so as to avoid commitment to several different types of being, it immediately prompts the question: what status does the equation in which the variable is presumably bound itself have? Is governing the behavior of existing variable values not, in some sense, being real?
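A standard illustration of the slogan (the formalization is a textbook gloss, not a quotation from Quine): asserting ‘some dogs are white’ as

$$\exists x\,(\mathrm{Dog}(x) \wedge \mathrm{White}(x))$$

commits us to the existence of at least one thing that is a dog and is white – a value over which the bound variable x ranges – but, on Quine’s account, to no further entity such as ‘doghood’ or ‘whiteness’. The question raised above is what ontological status the quantified formula itself, and the regularity it expresses, then enjoys.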

This is Peirce’s realist idea – that regularities, tendencies, dispositions, patterns may possess real existence, independent of any observer. In Peirce, this description of Thirdness is concentrated in the expression ‘real possibility’, and even if it may sound exceedingly metaphysical at first glance, it amounts, at a closer look, to the claim that regularities charted by science are not mere shorthands for collections of single events but do possess reality status. In Peirce, the idea of real possibilities thus springs from his philosophy of science – he observes that science, contrary to philosophy, is spontaneously realist, and is right in being so. Real possibilities are thus counterposed to merely subjective possibilities due to lack of knowledge on the part of the subject speaking: the possibility of ‘not known not to be true’.

In a famous piece of self-critique from his late, realist period, Peirce attacks his earlier arguments (from ‘How to Make Our Ideas Clear’, which he himself, in the late 1890s, came to consider the birth certificate of pragmatism after James’s reference to Peirce as pragmatism’s inventor). There, he had written:

let us ask what we mean by calling a thing hard. Evidently that it will not be scratched by many other substances. The whole conception of this quality, as of every other, lies in its conceived effects. There is absolutely no difference between a hard thing and a soft thing so long as they are not brought to the test. Suppose, then, that a diamond could be crystallized in the midst of a cushion of soft cotton, and should remain there until it was finally burned up. Would it be false to say that that diamond was soft? […] Reflection will show that the reply is this: there would be no falsity in such modes of speech.

More than twenty-five years later, however, he attacks this argument as bearing witness to the nominalism of his youth. Now instead he supports the

scholastic doctrine of realism. This is usually defined as the opinion that there are real objects that are general, among the number being the modes of determination of existent singulars, if, indeed, these be not the only such objects. But the belief in this can hardly escape being accompanied by the acknowledgment that there are, besides, real vagues, and especially real possibilities. For possibility being the denial of a necessity, which is a kind of generality, is vague like any other contradiction of a general. Indeed, it is the reality of some possibilities that pragmaticism is most concerned to insist upon. The article of January 1878 endeavored to gloze over this point as unsuited to the exoteric public addressed; or perhaps the writer wavered in his own mind. He said that if a diamond were to be formed in a bed of cotton-wool, and were to be consumed there without ever having been pressed upon by any hard edge or point, it would be merely a question of nomenclature whether that diamond should be said to have been hard or not. No doubt this is true, except for the abominable falsehood in the word MERELY, implying that symbols are unreal. Nomenclature involves classification; and classification is true or false, and the generals to which it refers are either reals in the one case, or figments in the other. For if the reader will turn to the original maxim of pragmaticism at the beginning of this article, he will see that the question is, not what did happen, but whether it would have been well to engage in any line of conduct whose successful issue depended upon whether that diamond would resist an attempt to scratch it, or whether all other logical means of determining how it ought to be classed would lead to the conclusion which, to quote the very words of that article, would be ‘the belief which alone could be the result of investigation carried sufficiently far.’ Pragmaticism makes the ultimate intellectual purport of what you please to consist in conceived conditional resolutions, or their substance; and therefore, the conditional propositions, with their hypothetical antecedents, in which such resolutions consist, being of the ultimate nature of meaning, must be capable of being true, that is, of expressing whatever there be which is such as the proposition expresses, independently of being thought to be so in any judgment, or being represented to be so in any other symbol of any man or men. But that amounts to saying that possibility is sometimes of a real kind. (The Essential Peirce Selected Philosophical Writings, Volume 2)

In the same year, he states, in a letter to the Italian pragmatist Signor Calderoni:

I myself went too far in the direction of nominalism when I said that it was a mere question of the convenience of speech whether we say that a diamond is hard when it is not pressed upon, or whether we say that it is soft until it is pressed upon. I now say that experiment will prove that the diamond is hard, as a positive fact. That is, it is a real fact that it would resist pressure, which amounts to extreme scholastic realism. I deny that pragmaticism as originally defined by me made the intellectual purport of symbols to consist in our conduct. On the contrary, I was most careful to say that it consists in our concept of what our conduct would be upon conceivable occasions. For I had long before declared that absolute individuals were entia rationis, and not realities. A concept determinate in all respects is as fictitious as a concept definite in all respects. I do not think we can ever have a logical right to infer, even as probable, the existence of anything entirely contrary in its nature to all that we can experience or imagine. 

Here lies the core of Peirce’s metaphysical insistence on the reality of ‘would-be’s. Real possibilities, or would-bes, are vague to the extent that they describe certain tendential, conditional behaviors only, while they do not prescribe any other aspect of the single objects they subsume. They are, furthermore, represented in rationally interrelated clusters of concepts: the fact that the diamond is in fact hard, whether it scratches anything or not, lies in the fact that the diamond’s carbon structure displays a certain spatial arrangement – so it is an aspect of the very concept of diamond. And this is why the old pragmatic maxim may not work without real possibilities: it is they that the very maxim rests upon, because it is they that provide us with the ‘conceived consequences’ of accepting a concept. The maxim remains a test to weed out empty concepts with no conceived consequences – that is, empty a priori reasoning and superfluous metaphysical assumptions. But what remains after the maxim has been put to use is real possibilities. Real possibilities thus connect epistemology, expressed in the pragmatic maxim, to ontology: real possibilities are what science may grasp in conditional hypotheses.

The question is whether Peirce’s revision of his old ‘nominalist’ beliefs forms part of a more general development in Peirce from nominalism to realism. The locus classicus of this idea is Max Fisch (Peirce, Semeiotic and Pragmatism), where Fisch outlines a development from an initial nominalism (albeit of a strange kind, refusing, as always in Peirce, the existence of individuals determinate in all respects) via a series of steps towards realism, culminating after the turn of the century. Fisch’s first step is then Peirce’s theory of the real as that which reasoning would finally have as its result; the second step his Berkeley review with its anti-nominalism and the idea that the real is what is unaffected by what we may think of it; the third step is his pragmatist idea that beliefs are conceived habits of action, even if he here clings to the idea that the conditionals in which habits are expressed are material implications only – like the definition of ‘hard’; the fourth step his reading of Abbott’s realist Scientific Theism (which later influenced his conception of scientific universals) and his introduction of the index in his theory of signs; the fifth step his acceptance of the reality of continuity; the sixth the introduction of real possibilities, accompanied by the development of existential graphs, topology and Peirce’s changing view of Hegelianism; the seventh, the identification of pragmatism with realism; the eighth ‘his last stronghold, that of Philonian or material implication’. A further realist development exchanging Peirce’s early frequentist idea of probability for a dispositional theory of probability was, according to Fisch, never finished.

The issue of implication concerns the old discussion, reported by Cicero, between the Hellenistic logicians Philo and Diodorus. The former formulated what we know today as material implication, while the latter objected on common-sense grounds that material implication does not capture implication in everyday language and thought, and that another implication type should be sought. As is well known, material implication says that p ⇒ q is equivalent to the claim that either p is false or q is true – so that p ⇒ q is false only when p is true and q is false. The problems arise when p is false, for any false p makes the implication true, and this leads to strange possibilities of true inferences. The two parts of the implication need have no connection with each other at all, contrary to the spontaneous idea in everyday thought. It is true that Peirce as a logician generally supports material (‘Philonian’) implication – but it is also true that he does express some second thoughts at around the same time as the afterthoughts on the diamond example.
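For reference, the full truth table of material implication, which makes the ‘paradoxes’ of a false antecedent visible at a glance:

$$
\begin{array}{cc|c}
p & q & p \Rightarrow q \\ \hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T \\
\end{array}
$$

The last two rows are the ones Diodorus (and the mature Peirce, with respect to would-bes) found objectionable: a false antecedent makes the implication true regardless of any real connection between p and q.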

Peirce is a forerunner of the attempts to construct alternatives such as strict implication, and the reason why is, of course, that real possibilities are not adequately depicted by material implication. Peirce is in need of an implication which may somehow picture the causal dependency of q on p. The basic reason for the mature Peirce’s problems with the representation of real possibilities is not primarily logical, however. It is scientific. Peirce realizes that the scientific charting of anything but singular, actual events necessitates the real existence of tendencies and relations connecting singular events. Now, what kinds are those tendencies and relations? The hard diamond example seems to emphasize causality, but this probably depends on the point of view chosen. The ‘conceived consequences’ of the pragmatic maxim may be causal indeed: if we accept gravity as a real concept, then masses will attract one another – but they may all the same be structural: if we accept horse riders as a real concept, then we should expect horses, persons, the taming of horses, etc. to exist, or they may be teleological. In any case, the interpretation of the pragmatic maxim in terms of real possibilities paves the way for a distinction between empty a priori suppositions and real a priori structures.

Knowledge Limited for Dummies….Didactics.


Bertrand Russell, with Alfred North Whitehead, aimed in the Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (therefore fleeing to the Institute for Advanced Study at Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, consider a statement like “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its negation is true – it is provable – which again is a contradiction.
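In modern notation (a standard textbook gloss, added here for orientation), Gödel’s actual sentence G is constructed so that the formal system itself proves the equivalence

$$G \;\leftrightarrow\; \neg \mathrm{Prov}(\ulcorner G \urcorner),$$

i.e., G ‘says’ of its own code number that it is unprovable; if the system is consistent, it can prove neither G nor its negation.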

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that imbed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, the combined precision of a measurement of an electron’s location and of its momentum is bounded: the product of the two uncertainties cannot fall below a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength the greater the amount of energy that hits the electron, and the greater the energy hitting the electron the greater the impact on its velocity.
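In standard notation (the textbook form of the principle, given here as a gloss), the bound is on the product of the standard deviations of position and momentum:

$$\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2},$$

where ħ is the reduced Planck constant; improving the precision of one measurement necessarily degrades the attainable precision of the other.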

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, introduced a slight error into the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
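A minimal sketch of the effect (my own illustration, not Lorenz’s original code or his twelve-equation weather model; the three-variable Lorenz-63 system and a crude Euler integrator stand in for it):

```python
# Sensitivity to rounded initial conditions in the Lorenz-63 system.
# One run starts from a 'six decimal place' state, the other from the
# same state rounded to three decimals, mimicking Lorenz's printout.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step with a simple Euler update."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

full = (1.001203, 3.114982, 20.586421)        # 'full precision' restart state
rounded = tuple(round(v, 3) for v in full)    # three-decimal printout values

for step in range(1, 8001):                   # 8000 steps of dt = 0.005
    full = lorenz_step(full)
    rounded = lorenz_step(rounded)
    if step % 1000 == 0:
        gap = max(abs(a - b) for a, b in zip(full, rounded))
        print(f"t = {step * 0.005:5.1f}   max divergence = {gap:.6f}")
```

The printed gap starts at the order of the rounding error and grows roughly exponentially until the two trajectories are as far apart as two arbitrary points on the attractor – the computational analogue of Lorenz’s two weather runs parting ways.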

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
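A minimal sketch of such an artificial market (my own toy construction in the spirit of the Santa Fe experiments, not the institute’s actual model or parameters): three kinds of traders – trend followers, value traders, and random liquidity traders – submit net demands, and the price moves with the excess demand.

```python
# Toy artificial market: trend followers chase recent price moves,
# value traders lean against deviations from a perceived fundamental,
# liquidity traders act at random; price moves with aggregate demand.
import random

random.seed(7)
fundamental = 100.0
prices = [100.0, 100.0]            # two prices needed to compute a trend

for t in range(500):
    trend = prices[-1] - prices[-2]
    trend_demand = 0.8 * trend                        # momentum chasing
    value_demand = 0.05 * (fundamental - prices[-1])  # mean reversion
    noise_demand = random.gauss(0.0, 0.5)             # liquidity needs
    excess = trend_demand + value_demand + noise_demand
    prices.append(prices[-1] + 0.5 * excess)          # simple price impact

returns = [b - a for a, b in zip(prices, prices[1:])]
print(f"final price: {prices[-1]:.2f}")
print(f"largest one-step move: {max(abs(r) for r in returns):.2f}")
```

Even this handful of rules produces runs, reversals, and occasional large swings, and which regime dominates depends delicately on the coefficients – echoing the point that surface realism is easy to obtain while prediction is not.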

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever set of equations might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of even deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Metaphysical Continuity in Peirce. Thought of the Day 122.0


Continuity has wide implications in the different parts of Peirce’s architectonics of theories. Time and time again, Peirce refers to his ‘principle of continuity’, which has not immediately anything to do with Poncelet’s famous such principle in geometry, but is rather a metaphysical implication taken to follow from fallibilism: if all more or less distinct phenomena swim in a vague sea of continuity, then it is no wonder that fallibilism must be accepted. And if the world is basically continuous, we should not expect conceptual borders to be definitive but rather conceive of terminological distinctions as relative to an underlying, monist continuity. In this system, mathematics is the first science. Thereafter follows philosophy, which is distinguished from purely hypothetical mathematics by having an empirical basis. Philosophy, in turn, has three parts: phenomenology, the normative sciences, and metaphysics. The first investigates solely ‘the Phaneron’, which is all that could be imagined to appear as an object for experience: ‘by the word phaneron I mean the collective total of all that is in any way or in any sense present to the mind, quite regardless whether it corresponds to any real thing or not’ (Charles Sanders Peirce – Collected Papers of Charles Sanders Peirce). As is evident, this definition of Peirce’s ‘phenomenology’ is parallel to Husserl’s phenomenological reduction in bracketing the issue of the existence of the phenomenon in question. Even if it is thus built on introspection and general experience, it is – analogous to Husserl and other Brentano disciples of the same period – conceived in a completely antipsychological manner: ‘It religiously abstains from all speculation as to any relations between its categories and physiological facts, cerebral or other’, and ‘I abstain from psychology which has nothing to do with ideoscopy’ (Letter to Lady Welby). The normative sciences fall in three: aesthetics, ethics, logic, in that order (and hence of decreasing generality), among which Peirce does not spend very much time on the former two. Aesthetics is the investigation of which possible goals it is possible to aim at (Good, Truth, Beauty, etc.), and ethics of how they may be reached. Logic is concerned with the grasping and conservation of Truth and takes up the larger part of Peirce’s interest among the normative sciences. As it deals with how truth can be obtained by means of signs, it is also called semiotics (‘logic is formal semiotics’), which is thus coextensive with the theory of science – logic in this broad sense contains all parts of philosophy of science, including contexts of discovery as well as contexts of justification. Semiotics has, in turn, three branches: grammatica speculativa (or stekheiotics), critical logic, and methodeutic (inspired by the mediaeval trivium: grammar, logic, and rhetoric). The middle one of these three lies closest to our days’ conception of logic; it is concerned with the formal conditions for truth in symbols – that is, propositions, arguments, their validity and how to calculate them, including Peirce’s many developments of the logic of his time: quantifiers, the logic of relations, abduction, deduction, and induction, logic notation systems, etc. All of these, however, presuppose the existence of simple signs which are investigated by what is often seen as semiotics proper, the grammatica speculativa; it may also be called formal grammar.
It investigates the formal conditions for symbols having meaning, and it is here we find Peirce’s definition of signs and his trichotomies of different types of sign aspects. Methodeutic, or formal rhetoric, on the other hand, concerns the pragmatical use of the former two branches, that is, the study of how to use logic in a fertile way in research, the formal conditions for the ‘power’ of symbols, that is, their reference to their interpretants; here can be found, e.g., Peirce’s famous definitions of pragmati(ci)sm and his directions for scientific investigation. To phenomenology – again in analogy to Husserl – logic adds the interest in signs and their truth. After logic, metaphysics follows in Peirce’s system, concerning the inventory of existing objects, conceived in general – and strongly influenced by logic in the Kantian tradition of seeing metaphysics as mirroring logic. Also here, Peirce has several proposals for subtypologies, even if none of them seems stable, and under this heading classical metaphysical issues mix freely with generalizations of scientific results and cosmological speculations.

Peirce himself saw this classification in an almost sociological manner, so that the criteria of distinction do not stem directly from the implied objects’ natural kinds, but from which groups of persons study which objects: ‘the only natural lines of demarcation between nearly related sciences are the divisions between the social groups of devotees of those sciences’. Science collects scientists into bundles because they are defined by their causa finalis, a teleological intention demanding of them that they solve a central problem.

Measured against this definition, one has to say that Peirce himself was not modest: not only does he continuously transgress such boundaries in his production, he frequently does so even within the scope of single papers. There is always, in his writings, only a brief distance from mathematics to metaphysics – or between any other two issues in mathematics and philosophy – and this implies, first, that the investigation of continuity and generality in Peirce’s system is more systematic than any actually existing exposition of these issues in Peirce’s texts, and second, that the discussion must constantly rely on cross-references. This has the structural motivation that as soon as you are below the level of mathematics in Peirce’s system, inspired by the Comtean system, the single science receives determinations from three different directions, each science consisting of material and formal aspects alike. First, it receives formal directives ‘from above’, from those more general sciences which stand above it, providing the general frameworks in which it must unfold. Second, it receives material determinations from its own object, requiring it to make certain choices in its use of formal insights from the higher sciences. The cosmological issue of the character of empirical space, for instance, can take from mathematics the different (non-)Euclidean geometries and investigate which of these are fit to describe spatial aspects of our universe, but cosmology does not, in itself, provide the formal tools. Finally, the single sciences receive in practice determinations ‘from below’, from more specific sciences, when their results, by means of abstraction, prescission, induction, and other procedures, provide insights on its more general, material level. Even if cosmology is, for instance, part of metaphysics, it receives influences from the empirical results of physics (or from biology, from which Peirce takes the generalized principle of evolution). The distinction between formal and material is thus level specific: what is material on one level is a formal bundle of possibilities for the level below; what is formal on one level is material on the level above.

For these reasons, the single step on the ladder of sciences is only partially independent in Peirce, hence also the tendency of his own investigations to zigzag between the levels. His architecture of theories thus forms a sort of phenomenological theory of aspects: the hierarchy of sciences is an architecture of more and less general aspects of the phenomena, not of completely independent domains. Finally, Peirce’s realism has as a result a somewhat disturbing style of thinking: many of his central concepts receive many, often highly different determinations, which has often led interpreters to assume inconsistencies or theoretical developments in Peirce where none necessarily exist. When Peirce, for instance, determines the icon as the sign possessing a similarity to its object, and elsewhere determines it as the sign by the contemplation of which it is possible to learn more about its object, these are not conflicting definitions. Peirce’s determinations of concepts are rarely definitions at all in the sense that they provide necessary and sufficient conditions exhausting the phenomenon in question. His determinations should rather be seen as descriptions from different perspectives of a real (and maybe ideal) object – without these descriptions necessarily conflicting. This style of thinking can, however, be seen as motivated by metaphysical continuity. When continuous grading between concepts is the rule, definitions in terms of necessary and sufficient conditions should not be expected to be exhaustive.

The Third Trichotomy. Thought of the Day 121.0


The decisive logical role is played by continuity in the third trichotomy, which is Peirce’s generalization of the old distinction between term, proposition and argument in logic. In his terminology, the technical notions are rhema, dicent and argument, and all of them may be represented by symbols. A crucial step in Peirce’s logic of relations (parallel to Frege) is the extension of the predicate from having only one possible subject in a proposition to the possibility for a predicate to take potentially infinitely many subjects. Predicates so complicated may be reduced, however, to combinations of (at most) three-subject predicates, according to Peirce’s reduction hypothesis. Let us consider the definitions from the ‘Syllabus’ (The Essential Peirce: Selected Philosophical Writings, Volume 2) in continuation of the earlier trichotomies:

According to the third trichotomy, a Sign may be termed a Rheme, a Dicisign or Dicent Sign (that is, a proposition or quasi-proposition), or an Argument.

A Rheme is a Sign which, for its Interpretant, is a Sign of qualitative possibility, that is, is understood as representing such and such a kind of possible Object. Any Rheme, perhaps, will afford some information; but it is not interpreted as doing so.

A Dicent Sign is a Sign, which, for its Interpretant, is a Sign of actual existence. It cannot, therefore, be an Icon, which affords no ground for an interpretation of it as referring to actual existence. A Dicisign necessarily involves, as a part of it, a Rheme, to describe the fact which it is interpreted as indicating. But this is a peculiar kind of Rheme; and while it is essential to the Dicisign, it by no means constitutes it.

An Argument is a Sign which, for its Interpretant, is a Sign of a law. Or we may say that a Rheme is a sign which is understood to represent its object in its characters merely; that a Dicisign is a sign which is understood to represent its object in respect to actual existence; and that an Argument is a Sign which is understood to represent its Object in its character as Sign. (…) The proposition need not be asserted or judged. It may be contemplated as a sign capable of being asserted or denied. This sign itself retains its full meaning whether it be actually asserted or not. (…) The proposition professes to be really affected by the actual existent or real law to which it refers. The argument makes the same pretension, but that is not the principal pretension of the argument. The rheme makes no such pretension.

The interpretant of the Argument represents it as an instance of a general class of Arguments, which class on the whole will always tend to the truth. It is this law, in some shape, which the argument urges; and this ‘urging’ is the mode of representation proper to Arguments.

Predicates being general is of course a standard logical notion; in Peirce’s version this generality is further emphasized by the fact that the simple predicate is seen as relational and containing up to three subject slots to be filled in; each of them may be occupied by a continuum of possible subjects. The predicate itself refers to a possible property, a possible relation between subjects; the empty – or partly satiated – predicate does not in itself constitute any claim that this relation does in fact hold. The information it contains is potential, because no single or general indication has yet been chosen to indicate which subjects among the continuum of possible subjects it refers to. The proposition, on the contrary, the dicisign, is a predicate where some of the empty slots have been filled in with indices (proper names, demonstrative pronomina, deixis, gesture, etc.), and is, in fact, asserted. It thus consists of an indexical part and an iconical part, corresponding to the usual distinction between subject and predicate, with its indexical part connecting it to some level of reference reality. This reality need not, of course, be actual reality; the subject slots may be filled in with general subjects, thus importing pieces of continuity into it – but the reality status of such subjects may vary, so it may equally be filled in with fictitious references of all sorts. Even if the dicisign, the proposition, is not an icon, it contains, via its rhematic core, iconical properties. Elsewhere, Peirce simply defines the dicisign as a sign making explicit its reference. Thus a portrait equipped with a sign indicating the portraitee will be a dicisign, just like a caricature draft with a pointing gesture towards the person it depicts will be a dicisign. Even such dicisigns may be general; the pointing gesture could single out a group or a representative for a whole class of objects. While the dicisign specifies its object, the argument is a sign specifying its interpretant – which is what is normally called the conclusion. The argument thus consists of two dicisigns, a premiss (which may be, in turn, composed of several dicisigns and is traditionally seen as consisting of two dicisigns) and a conclusion – a dicisign represented as ensuing from the premiss due to the power of some law. The argument is thus – just like the other thirdness signs in the trichotomies – in itself general. It is a legisign and a symbol – but adds to them the explicit specification of a general, lawlike interpretant. In the full-blown sign, the argument, the more primitive degenerate sign types are orchestrated together in a threefold generality where no less than three continua are evoked: first, the argument itself is a legisign with a halo of possible instantiations of itself as a sign; second, it is a symbol referring to a general object, in turn with a halo of possible instantiations around it; third, the argument implies a general law which is represented by one instantiation (the premiss and the rule of inference) but which has a halo of other, related inferences as possible instantiations. As Peirce says, the argument persuades us that this lawlike connection holds for all other cases being of the same type.
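A minimal illustration, using Peirce’s favourite triadic relation of giving (the notation and the particular names are mine): the unsaturated rheme

$$\underline{\phantom{xx}}\ \text{gives}\ \underline{\phantom{xx}}\ \text{to}\ \underline{\phantom{xx}}, \qquad G(x, y, z),$$

carries only a possible relation among a continuum of possible subjects; filling its three slots with indices, as in G(John, this book, Mary), yields a dicisign that asserts something about actual existence; and a dicisign represented as following from another by force of a law – say, from ‘John gives this book to Mary’ to ‘Mary receives this book from John’ under a general rule connecting giving and receiving – is an argument.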

The Second Trichotomy. Thought of the Day 120.0


The second trichotomy (here is the first) is probably the most well-known piece of Peirce’s semiotics: it distinguishes three possible relations between the sign and its (dynamical) object. This relation may be motivated by similarity, by actual connection, or by general habit – giving rise to the sign classes icon, index, and symbol, respectively.

According to the second trichotomy, a Sign may be termed an Icon, an Index, or a Symbol.

An Icon is a sign which refers to the Object that it denotes merely by virtue of characters of its own, and which it possesses, just the same, whether any such Object actually exists or not. It is true that unless there really is such an Object, the Icon does not act as a sign; but this has nothing to do with its character as a sign. Anything whatever, be it quality, existent individual, or law, is an Icon of anything, in so far as it is like that thing and used as a sign of it.

An Index is a sign which refers to the Object that it denotes by virtue of being really affected by that Object. It cannot, therefore, be a Qualisign, because qualities are whatever they are independently of anything else. In so far as the Index is affected by the Object, it necessarily has some Quality in common with the Object, and it is in respect to these that it refers to the Object. It does, therefore, involve a sort of Icon, although an Icon of a peculiar kind; and it is not the mere resemblance of its Object, even in these respects which makes it a sign, but it is the actual modification of it by the Object. 

A Symbol is a sign which refers to the Object that it denotes by virtue of a law, usually an association of general ideas, which operates to cause the Symbol to be interpreted as referring to that Object. It is thus itself a general type or law, that is, a Legisign. As such it acts through a Replica. Not only is it general in itself, but the Object to which it refers is of general nature. Now that which is general has its being in the instances it will determine. There must, therefore, be existent instances of what the Symbol denotes, although we must here understand by ‘existent’, existent in the possibly imaginary universe to which the Symbol refers. The Symbol will indirectly, through the association or other law, be affected by those instances; and thus the Symbol will involve a sort of Index, although an Index of a peculiar kind. It will not, however, be by any means true that the slight effect upon the Symbol of those instances accounts for the significant character of the Symbol.

The icon refers to its object solely by means of its own properties. This implies that an icon potentially refers to an indefinite class of objects, namely all those objects which have, in some respect, a relation of similarity to it. In recent semiotics, it has often been remarked by someone like Nelson Goodman that any phenomenon can be said to be like any other phenomenon in some respect, if the criterion of similarity is chosen sufficiently general, just like the establishment of any convention immediately implies a similarity relation. If Nelson Goodman picks out two otherwise very different objects, then they are immediately similar to the extent that they now have the same relation to Nelson Goodman. Goodman and others have for this reason deemed the similarity relation insignificant – and consequently put the whole burden of semiotics on the shoulders of conventional signs only. But the counterargument against this rejection of the relevance of the icon lies close at hand. Given a tertium comparationis, a measuring stick, it is no longer possible to make anything be like anything else. This lies in Peirce’s observation that ‘It is true that unless there really is such an Object, the Icon does not act as a sign …’. The icon only functions as a sign to the extent that it is, in fact, used to refer to some object – and when it does that, some criterion for similarity, a measuring stick (or, at least, a delimited bundle of possible measuring sticks) is given in and with the comparison. In the quote just given, it is of course the immediate object Peirce refers to – it is no claim that there should in fact exist such an object as the icon refers to. Goodman and others are of course right in claiming that as ‘Anything whatever (…) is an Icon of anything …’, then the universe is pervaded by a continuum of possible similarity relations back and forth, but as soon as some phenomenon is in fact used as an icon for an object, then a specific bundle of similarity relations is picked out: ‘… in so far as it is like that thing’.

Just like the qualisign, the icon is a limit category. ‘A possibility alone is an Icon purely by virtue of its quality; and its object can only be a Firstness.’ (Charles S. Peirce, The Essential Peirce: Selected Philosophical Writings). Strictly speaking, a pure icon may only refer one possible Firstness to another. The pure icon would be an identity relation between possibilities. Consequently, the icon must, as soon as it functions as a sign, be more than iconic. The icon is typically an aspect of a more complicated sign, even if very often a most important aspect, because it provides the predicative aspect of that sign. Peirce records this with his notion of ‘hypoicon’: ‘But a sign may be iconic, that is, may represent its object mainly by its similarity, no matter what its mode of being. If a substantive is wanted, an iconic representamen may be termed a hypoicon’. Hypoicons are signs which to a large extent make use of iconic means as meaning-givers: images, paintings, photos, diagrams, etc. But the iconic meaning realized in hypoicons has an immensely fundamental role in Peirce’s semiotics. As icons are the only signs that resemble what they stand for, they are at the same time the only signs realizing meaning. Thus any higher sign, index and symbol alike, must contain, or, by association or inference, terminate in, an icon. If a symbol cannot yield an iconic interpretant as a result, it is empty. In that respect, Peirce’s doctrine parallels Husserl’s, in which merely signitive acts require fulfilment by intuitive (‘anschauliche’) acts. This is, in effect, Peirce’s continuation of Kant’s famous claim that intuitions without concepts are blind, while concepts without intuitions are empty. When Peirce observes that ‘With the exception of knowledge, in the present instant, of the contents of consciousness in that instant (the existence of which knowledge is open to doubt) all our thought and knowledge is by signs’ (Letters to Lady Welby), these signs necessarily involve iconic components. Peirce has often been attacked for his tendency towards a pan-semiotism which lets all mental and physical processes take place via signs – yet in the quote just given he claims, analogously to Husserl, that there must be a basic evidence anterior to the sign; just as in Husserl, this evidence before the sign must be based on a ‘metaphysics of presence’ – the ‘present instant’ provides what is not yet mediated by signs. But icons provide the connection of signs, logic and science to this foundation for Peirce’s phenomenology: the icon is the only sign providing evidence (Charles S. Peirce, The New Elements of Mathematics, Vol. 4). The icon is, through its timeless similarity, apt to communicate aspects of an experience ‘in the present instant’. Thus the typical index contains an icon (more or less elaborated, it is true), and any symbol intends an iconic interpretant. Continuity is at stake in relation to the icon to the extent that the icon, while not in itself general, is the bearer of a potential generality. This infinitesimal generality is decisive for the higher sign types’ possibility of giving rise to thought: the symbol thus contains a bundle of general icons defining its meaning. A special icon providing the condition of possibility for general and rigorous thought is, of course, the diagram.

The index connects the sign directly with its object via connection in space and time; as an actual sign connected to its object, the index is turned towards the past: the action which has left the index as a mark must be located earlier in time than the sign, so that the index presupposes, at least, the continuity of time and space – without which an index might occur spontaneously and without any connection to a preceding action. Perhaps surprisingly, in the Peircean doctrine the index falls into two subtypes: designators vs. reagents. Reagents are the simpler – here the sign is caused by its object in one way or another. Designators, on the other hand, are more complex: the index finger pointing to an object, or the demonstrative pronoun as the subject of a proposition, are prototypical examples. Here the index presupposes an intention – the will to point out the object for some receiver. Designators, it must be argued, presuppose reagents: it is only possible to designate an object if you have already been in reagent contact (simulated or not) with it (this forms the rational kernel of causal reference theories of meaning). The closer determination of the object of an index, however, invariably involves selection against the background of continuities.

On the level of the symbol, continuity and generality play a central role – as always when approaching issues defined by Thirdness. The symbol is, in itself, a legisign, that is, a general object which exists only through its actual instantiations. The symbol itself is a real and general recipe for the production of similar instantiations in the future. But apart from thus being a legisign, it is connected to its object thanks to a habit, or regularity. Sometimes this is taken to mean ‘due to a convention’ – in an attempt to distinguish conventional from motivated sign types. This, however, rests on a misunderstanding of Peirce’s doctrine, in which the trichotomies record aspects of signs, not mutually exclusive, independent classes of signs: symbols and icons do not form opposed, autonomous sign classes; rather, the content of the symbol is constructed from indices and general icons. The habit realized by a symbol connects it, as a legisign, to an object which is also general – an object which, just like the symbol itself, exists in instantiations, be they real or imagined. The symbol is thus a connection between two general objects, each of them actualized through replicas, tokens – a connection between two continua, that is:

Definition 1. Any Blank is a symbol which could not be vaguer than it is (although it may be so connected with a definite symbol as to form with it, a part of another partially definite symbol), yet which has a purpose.

Axiom 1. It is the nature of every symbol to blank in part. […]

Definition 2. Any Sheet would be that element of an entire symbol which is the subject of whatever definiteness it may have, and any such element of an entire symbol would be a Sheet. (‘Sketch of Dichotomic Mathematics’, The New Elements of Mathematics, Vol. 4: Mathematical Philosophy)

The symbol’s generality can be described as its always having blanks, which have the character of being indefinite parts of its continuous sheet. Thus the continuity of its blank parts is what grants its generality. The symbol determines its object according to some rule – granting that the object satisfies that rule, but leaving the object indeterminate in all other respects. It is tempting to take the typical symbol to be a word, but it should rather be taken to be the argument – the predicate and the proposition being degenerate versions of arguments with further continuous blanks inserted by erasure, so to speak, which yields the third trichotomy of term, proposition, argument.

The First Trichotomy. Thought of the Day 119.0

sign_aspects

As the sign consists of three components, it hardly comes as a surprise that it may be analyzed in nine aspects – each of the sign’s three components may be viewed under each of the three fundamental phenomenological categories. The least discussed of these so-called trichotomies is probably the first, which concerns the property in the sign that in fact makes it function as a sign. It gives rise to the trichotomy qualisign, sinsign, legisign, or, in slightly sexier terminology, tone, token, type.

The most often quoted definition is from ‘Syllabus’ (Charles S. Peirce, The Essential Peirce: Selected Philosophical Writings, Volume 2):

According to the first division, a Sign may be termed a Qualisign, a Sinsign, or a Legisign.

A Qualisign is a quality which is a Sign. It cannot actually act as a sign until it is embodied; but the embodiment has nothing to do with its character as a sign.

A Sinsign (where the syllable sin is taken as meaning ‘being only once’, as in single, simple, Latin semel, etc.) is an actual existent thing or event which is a sign. It can only be so through its qualities; so that it involves a qualisign, or rather, several qualisigns. But these qualisigns are of a peculiar kind and only form a sign through being actually embodied.

A Legisign is a law that is a Sign. This law is usually [sic] established by men. Every conventional sign is a legisign. It is not a single object, but a general type which, it has been agreed, shall be significant. Every legisign signifies through an instance of its application, which may be termed a Replica of it. Thus, the word ‘the’ will usually occur from fifteen to twenty-five times on a page. It is in all these occurrences one and the same word, the same legisign. Each single instance of it is a Replica. The Replica is a Sinsign. Thus, every Legisign requires Sinsigns. But these are not ordinary Sinsigns, such as are peculiar occurrences that are regarded as significant. Nor would the Replica be significant if it were not for the law which renders it so.

In some sense it is strange that this first and basic trichotomy has not been widely discussed in relation to the concept of continuity in Peirce, because continuity is crucial to it. This becomes evident in the second notable locus where the trichotomy is discussed, the letters to Lady Welby – here Peirce continues (after an introduction which adds little new):

The difference between a legisign and a qualisign, neither of which is an individual thing, is that a legisign has a definite identity, though usually admitting a great variety of appearances. Thus, &, and, and the sound are all one word. The qualisign, on the other hand, has no identity. It is the mere quality of an appearance and is not exactly the same throughout a second. Instead of identity, it has great similarity, and cannot differ much without being called quite another qualisign.

The legisign or type is distinguished by being general, which is, in turn, defined by continuity: the type has a ‘great variety of appearances’ – indeed, a continuous variation of appearances, in many cases even several continua of appearances (as &, and, and the spoken sound of ‘and’). Each continuity of appearances is gathered into one identity thanks to the type, making possible the repetition of identical signs. Reference is not yet discussed (it concerns the sign’s relation to its object), nor is meaning (which concerns its relation to its interpretant) – what is at stake is merely the possibility for a type to incarnate a continuum of possible actualizations, however this be possible, and so repeatedly appear as one and the same sign despite other differences. Thus the reality of the type is the very foundation for Peirce’s ‘extreme realism’, and this for two reasons. First, seen from the side of the sign, the type provides the possibility of stable, repeatable signs: the type may – as opposed to qualisigns and to those sinsigns which are not replicas of a type – be repeated as a self-identical occurrence, and this is what in the first place provides the stability which renders repeated sign use possible. Second, seen from the side of reality: because types, legisigns, are realized without reference to human subjectivity, the existence of types is the condition of possibility for a sign, in turn, to stably refer to stably occurring entities and objects. Here, the importance for semiotics of irreducible continuity in the philosophy of mathematics appears: it is that which grants the possibility of collecting a continuum in one identity, the special characteristic of the type concept. The opposite of the type is the qualisign or tone, which lacks the stability of the type – qualisigns are not self-identical even through a second, as Peirce says – they have, of course, the character of being infinitesimal entities, about which the principle of contradiction does not hold. The transformation from tone to type is thus the transformation from unstable pre-logic to stable logic – it covers, to phrase it in a Husserlian way, the phenomenology of logic. The legisign thus exerts its law over specific qualisigns and sinsigns – as in all Peirce’s trichotomies, the higher sign types contain and govern specific instances of the lower types. The legisign is incarnated in singular, actual sinsigns representing the type – they are tokens of the type – and what they have in common are certain sets of qualities or qualisigns – tones – selected from continua delimited by the legisign. The multitude of possible sinsigns, tokens, is summed up by a type, a stable and self-identical sign. The nominalists Peirce despised would to some degree agree here: the universal as a type is indeed a ‘mere word’ – but the strong counterargument which Peirce’s position makes possible says that if ‘mere words’ may possess universality, then the world must contain it as well, because words are worldly phenomena like everything else. Here, nominalists will typically exclude words from the world and make them privileges of the subject, but for Peirce’s welding of idealism and naturalism nothing can be truly separated from the world – everything that is in the mind must also exist in the world. Thus the synthetical continuum, which may, in some respects, be treated as one entity, becomes the very condition of possibility for the existence of types.

Whether particular types or legisigns refer to existing general objects or not is not a matter for the first trichotomy to decide; legisigns may be part of any number of false or nonsensical propositions, and not all legisigns are symbols, just as arguments, in turn, are only a subset of symbols – but all of them are legisigns, because they must in themselves be general in order to provide the condition of possibility of identical repetition, of reference to general objects, and of the signification of general interpretants.
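The classificatory claims running through these passages – a qualisign can relate to its object only iconically, every symbol is a legisign, and arguments are only a subset of symbols – amount to a small combinatorial structure over the nine sign aspects. As a minimal sketch, assuming nothing beyond the constraints quoted above (the class names, enum values, and the enumeration itself are illustrative, not Peirce’s own apparatus), the admissible combinations can be written out as follows:

```python
from enum import Enum
from itertools import product

# First trichotomy: the sign in itself (tone, token, type).
class SignItself(Enum):
    QUALISIGN = 1
    SINSIGN = 2
    LEGISIGN = 3

# Second trichotomy: the sign's relation to its object.
class ObjectRelation(Enum):
    ICON = 1
    INDEX = 2
    SYMBOL = 3

# Third trichotomy: the sign's relation to its interpretant.
class InterpretantRelation(Enum):
    TERM = 1         # rheme
    PROPOSITION = 2  # dicent
    ARGUMENT = 3

def admissible(s: SignItself, o: ObjectRelation, i: InterpretantRelation) -> bool:
    """Encode only the constraints stated in the text:
    - a qualisign relates to its object only iconically (an index 'cannot be a Qualisign');
    - every symbol is a legisign;
    - arguments are a subset of symbols."""
    if s is SignItself.QUALISIGN and o is not ObjectRelation.ICON:
        return False
    if o is ObjectRelation.SYMBOL and s is not SignItself.LEGISIGN:
        return False
    if i is InterpretantRelation.ARGUMENT and o is not ObjectRelation.SYMBOL:
        return False
    return True

if __name__ == "__main__":
    # List every combination of the nine aspects that the quoted constraints allow.
    for s, o, i in product(SignItself, ObjectRelation, InterpretantRelation):
        if admissible(s, o, i):
            print(s.name, o.name, i.name)
```

Peirce’s own classification imposes further restrictions that are not quoted in the passages above, so this enumeration is deliberately coarser than his final list of sign classes; the sketch only makes visible how the three trichotomies combine and how the higher types contain the lower ones.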