The Biological Kant. Note Quote.


The biological treatise takes as its object the realm of physics left out of Kant’s critical demarcations of scientific, that is, mathematical and mechanistic, physics. Here, the main idea was that scientifically understandable Nature was defined by lawfulness. In his Metaphysical Foundations of Natural Science, this idea was taken further in the following claim:

I claim, however, that there is only as much proper science to be found in any special doctrine on nature as there is mathematics therein, and further ‘a pure doctrine on nature about certain things in nature (doctrine on bodies and doctrine on minds) is only possible by means of mathematics’.

The basic idea is thus to identify Nature’s lawfulness with its ability to be studied by means of mathematical schemata uniting understanding and intuition. The central schema, to Kant, was numbers, so apt to be used in the understanding of mechanically caused movement. But already here, Kant is very well aware that a whole series of aspects of spontaneously experienced Nature is left out of sight by the concentration on matter in movement, and he calls for these further realms of Nature to be studied by a continuation of the Copernican turn, by the mind’s further study of the utmost limits of itself. Why do we spontaneously see natural purposes in Nature? Purposiveness is wholly different from necessity, which is crucial to Kant’s definition of Nature. There is no reason in the general concept of Nature (as lawful) to assume that nature’s objects may serve each other as purposes. Nevertheless, we do not stop assuming just that. But what we do when we ascribe purposes to Nature is to use the faculties of mind in another way than in science, much closer to the way we use them in the appreciation of beauty and art, the object of the first part of the book immediately before the treatment of teleological judgment. This judgment is characterized by a central distinction, already widely argued in that first part of the book: the difference between determinative and reflective judgments. While determinative judgment is used scientifically to decide whether a specific case follows a certain rule, explaining it by means of a derivation from a principle and thus constituting the objectivity of the object in question, reflective judgment lacks all these features. It does not proceed by means of explanation, but by mere analogy; it is not constitutive, but merely regulative; it does not prove anything but merely judges, and it has no principle of reason to rest its head upon but the very act of judging itself. These ideas are elaborated throughout the critique of teleological judgment.


In the section Analytik der teleologischen Urteilskraft, Kant gradually approaches the question: first he treats merely formal purposiveness. We may ascribe purposes to geometry in so far as it is useful to us, just as rivers carrying fertile soil for trees to grow in may be ascribed purposes; these are, however, merely contingent purposes, dependent on an external telos. The crucial point is the existence of objects which are only possible as such in so far as they are defined by purposes:

That its form is not possible after mere natural laws, that is, such things which may not be known by us through understanding applied to objects of the senses; on the contrary that even the empirical knowledge about them, regarding their cause and effect, presupposes concepts of reason.

The idea here is that in order to conceive of objects which may not be explained with reference to understanding and its (in this case, mechanical) concepts only, these must be grasped by the non-empirical ideas of reason itself. If causes are perceived as being interlinked in chains, then such contingencies are to be thought of only as small causal circles on the chain, that is, as things being their own cause. Hence Kant’s definition of the Idea of a natural purpose:

an object exists as natural purpose, when it is cause and effect of itself.

This can be thought as an Idea without contradiction, Kant maintains, but not conceived. This circularity (the small causal circles) is a very important feature in Kant’s tentative schematization of purposiveness. Another way of coining this Idea is that things as natural purposes are organized beings. This entails that naturally purposeful objects must possess a certain spatio-temporal construction: the parts of such a thing must be possible only through their relation to the whole – and, conversely, the parts must actively connect themselves to this whole. Thus, the corresponding idea can be summed up as the Idea of the Whole which is necessary to pass judgment on any empirical organism, and it is very interesting to note that Kant sums up the determination of any part of a Whole by all other parts in the phrase that a natural purpose is possible only as an organized and self-organizing being. This is probably the very birth certificate of the metaphysics of self-organization. It is important to keep in mind that Kant does not feel any vitalist temptation to suppose any organizing power or any autonomy on the part of the whole, which may come into being only by this process of self-organization between its parts. When Kant talks about the forming power in the formation of the Whole, it is thus nothing outside of this self-organization of its parts.

This leads to Kant’s final definition: an organized being is that in which everything is alternately ends and means. This idea is extremely important as a formalization of the idea of teleology: natural purposes do not imply that there exist given, stable ends for nature to pursue; on the contrary, they are locally defined by causal cycles, in which every part interchangeably assumes the role of end and means. Thus, there is no absolute end in this construal of nature’s teleology; it analyzes teleology formally at the same time as it relativizes it with respect to substance. Kant takes care to note that this maxim need not be restricted to the beings – animals – which we spontaneously tend to judge as purposeful. The idea of natural purposes thus entails that there might exist a plan in nature rendering purposeful for us even processes which we have every reason to detest. In this vision, teleology might embrace causality – and even aesthetics:

Also natural beauty, that is, its harmony with the free play of our epistemological faculties in the experience and judgment of its appearance can be seen in the way of objective purposivity of nature in its totality as system, in which man is a member.

An important consequence of Kant’s doctrine is that teleology is, so to speak, secularized in two ways: (1) it is formal, and (2) it is local. It is formal because self-organization does not ascribe any special, substantial goal for organisms to pursue – other than the sustainment of self-organization itself. Teleology is thus merely a formal property of certain types of systems. This is also why teleology is local – it is to be found in certain systems where the causal chains form loops, as Kant metaphorically describes the cycles involved in self-organization – and it is no overarching goal governing organisms from the outside. Teleology is a local, bottom-up process only.

Kant does not in any way doubt the existence of organized beings; what is at stake is the possibility of dealing with them scientifically, in terms of mechanics. Even if they exist as given things in experience, natural purposes cannot receive any concept. This implies that biology is evident in so far as the existence of organisms cannot be doubted, but biology will never rise to the heights of science: its attempts at doing so are delimited in advance, all scientific explanations of organisms being bound to be mechanical. Followed through, this line of argument corresponds very well to present-day reductionism in biology, which tries to trace all problems of phenotypical characters, organization, morphogenesis, behavior, ecology, etc. back to the biochemistry of genetics. But the other side of the argument is that no matter how successful this reduction may prove, it will never be able to reduce or replace the teleological point of view necessary in order to understand the organism as such in the first place.

Evidently, there is something deeply unsatisfactory in this conclusion, which is why most biologists have hesitated to adopt it and cling instead either to full-blown reductionism or to some brand of vitalism, subjecting themselves to the dangers of ‘transcendental illusion’ and allowing for some Goethe-like intuitive idea without any schematization. Kant tries to soften the question by philosophical means by establishing a crossing over from metaphysics to physics, or, from the metaphysical constraints on mechanical physics to physics in its empirical totality, including the organized beings of biology. Pure mechanics leaves physics as a whole unorganized, and this organization is sought to be established by means of ‘mediating concepts’. Among them is the formative power, which is not conceived of in a vitalist, substantialist manner, but is rather a notion referring to the means by which matter manages to self-organize. It thus comprehends not only biological organization, but the physics of macroscopic solid matter as well. Here, he adds an important argument to the critique of judgment:

Because man is conscious of himself as a self-moving machine, without being able to further understand such a possibility, he can, and is entitled to, introduce a priori organic-moving forces of bodies into the classification of bodies in general and thus to distinguish mere mechanical bodies from self-propelled organic bodies.

Metaphysical Would-Be(s). Drunken Risibility.


If one were to look at Quine’s commitment to similarity, natural kinds, dispositions, causal statements, etc., it is evident that it takes him close to Peirce’s conception of Thirdness – even if Quine, in a utopian vision, imagines that all such concepts will in a remote future dissolve and vanish in favor of purely microstructural descriptions.

A crucial difference remains, however, which becomes evident when one looks at Quine’s brief formula for ontological commitment, the famous idea that ‘to be is to be the value of a bound variable’. For even if this motto is stated exactly to avoid commitment to several different types of being, it immediately prompts the question: what status does the equation, in which the variable is presumably bound, itself have? Is governing the behavior of existing variable values not, in some sense, being real?

This will be Peirce’s realist idea – that regularities, tendencies, dispositions, patterns may possess real existence, independent of any observer. In Peirce, this description of Thirdness is concentrated in the expression ‘real possibility’, and even if it may sound exceedingly metaphysical at first glance, it amounts, at a closer look, to the claim that the regularities charted by science are not mere shorthands for collections of single events but possess reality status. In Peirce, the idea of real possibilities thus springs from his philosophy of science – he observes that science, contrary to philosophy, is spontaneously realist, and is right in being so. Real possibilities are thus counterposed to merely subjective possibilities due to lack of knowledge on the part of the subject speaking: the possibility of ‘not known not to be true’.

In a famous piece of self-critique from his late, realist period, Peirce attacks his earlier arguments (from ‘How to Make Our Ideas Clear’, which by the late 1890s he considered the birth certificate of pragmatism, after James’s reference to Peirce as pragmatism’s inventor). There, he had written:

let us ask what we mean by calling a thing hard. Evidently that it will not be scratched by many other substances. The whole conception of this quality, as of every other, lies in its conceived effects. There is absolutely no difference between a hard thing and a soft thing so long as they are not brought to the test. Suppose, then, that a diamond could be crystallized in the midst of a cushion of soft cotton, and should remain there until it was finally burned up. Would it be false to say that that diamond was soft? […] Reflection will show that the reply is this: there would be no falsity in such modes of speech.

More than twenty-five years later, however, he attacks this argument as bearing witness to the nominalism of his youth. Now instead he supports the

scholastic doctrine of realism. This is usually defined as the opinion that there are real objects that are general, among the number being the modes of determination of existent singulars, if, indeed, these be not the only such objects. But the belief in this can hardly escape being accompanied by the acknowledgment that there are, besides, real vagues, and especially real possibilities. For possibility being the denial of a necessity, which is a kind of generality, is vague like any other contradiction of a general. Indeed, it is the reality of some possibilities that pragmaticism is most concerned to insist upon. The article of January 1878 endeavored to gloze over this point as unsuited to the exoteric public addressed; or perhaps the writer wavered in his own mind. He said that if a diamond were to be formed in a bed of cotton-wool, and were to be consumed there without ever having been pressed upon by any hard edge or point, it would be merely a question of nomenclature whether that diamond should be said to have been hard or not. No doubt this is true, except for the abominable falsehood in the word MERELY, implying that symbols are unreal. Nomenclature involves classification; and classification is true or false, and the generals to which it refers are either reals in the one case, or figments in the other. For if the reader will turn to the original maxim of pragmaticism at the beginning of this article, he will see that the question is, not what did happen, but whether it would have been well to engage in any line of conduct whose successful issue depended upon whether that diamond would resist an attempt to scratch it, or whether all other logical means of determining how it ought to be classed would lead to the conclusion which, to quote the very words of that article, would be ‘the belief which alone could be the result of investigation carried sufficiently far.’ Pragmaticism makes the ultimate intellectual purport of what you please to consist in conceived conditional resolutions, or their substance; and therefore, the conditional propositions, with their hypothetical antecedents, in which such resolutions consist, being of the ultimate nature of meaning, must be capable of being true, that is, of expressing whatever there be which is such as the proposition expresses, independently of being thought to be so in any judgment, or being represented to be so in any other symbol of any man or men. But that amounts to saying that possibility is sometimes of a real kind. (The Essential Peirce Selected Philosophical Writings, Volume 2)

In the same year, he states, in a letter to the Italian pragmatist Signor Calderoni:

I myself went too far in the direction of nominalism when I said that it was a mere question of the convenience of speech whether we say that a diamond is hard when it is not pressed upon, or whether we say that it is soft until it is pressed upon. I now say that experiment will prove that the diamond is hard, as a positive fact. That is, it is a real fact that it would resist pressure, which amounts to extreme scholastic realism. I deny that pragmaticism as originally defined by me made the intellectual purport of symbols to consist in our conduct. On the contrary, I was most careful to say that it consists in our concept of what our conduct would be upon conceivable occasions. For I had long before declared that absolute individuals were entia rationis, and not realities. A concept determinate in all respects is as fictitious as a concept definite in all respects. I do not think we can ever have a logical right to infer, even as probable, the existence of anything entirely contrary in its nature to all that we can experience or imagine. 

Here lies the core of Peirce’s metaphysical insistence on the reality of ‘would-be’s. Real possibilities, or would-bes, are vague to the extent that they describe certain tendential, conditional behaviors only, while they do not prescribe any other aspect of the single objects they subsume. They are, furthermore, represented in rationally interrelated clusters of concepts: the fact that the diamond is in fact hard, whether it ever scratches anything or not, lies in the fact that the diamond’s carbon structure displays a certain spatial arrangement – so it is an aspect of the very concept of diamond. And this is why the old pragmatic maxim may not work without real possibilities: it is they that the very maxim rests upon, because it is they that provide us with the ‘conceived consequences’ of accepting a concept. The maxim remains a test to weed out empty concepts with no conceived consequences – that is, empty a priori reasoning and superfluous metaphysical assumptions. But what remains after the maxim has been put to use is real possibilities. Real possibilities thus connect epistemology, expressed in the pragmatic maxim, to ontology: real possibilities are what science may grasp in conditional hypotheses.

The question is whether Peirce’s revision of his old ‘nominalist’ beliefs forms part of a more general development in Peirce from nominalism to realism. The locus classicus of this idea is Max Fisch (Peirce, Semeiotic and Pragmatism), where Fisch outlines a development from an initial nominalism (albeit of a strange kind, refusing, as always in Peirce, the existence of individuals determinate in all respects) via a series of steps towards realism, culminating after the turn of the century. Fisch’s first step is Peirce’s theory of the real as that which reasoning would finally have as its result; the second step his Berkeley review, with its anti-nominalism and the idea that the real is what is unaffected by what we may think of it; the third step his pragmatist idea that beliefs are conceived habits of action, even if he here clings to the idea that the conditionals in which habits are expressed are material implications only – like the definition of ‘hard’; the fourth step his reading of Abbot’s realist Scientific Theism (which later influenced his conception of scientific universals) and his introduction of the index into his theory of signs; the fifth step his acceptance of the reality of continuity; the sixth the introduction of real possibilities, accompanied by the development of the existential graphs, topology, and Peirce’s changing view of Hegelianism; the seventh the identification of pragmatism with realism; the eighth the surrender of ‘his last stronghold, that of Philonian or material implication’. A further realist development, exchanging Peirce’s early frequentist idea of probability for a dispositional theory of probability, was, according to Fisch, never finished.

The issue of implication concerns the old discussion, reported by Cicero, between the Hellenistic logicians Philo and Diodorus. The former formulated what we know today as material implication, while the latter objected on common-sense grounds that material implication does not capture implication in everyday language and thought, and that another implication type should be sought. As is well known, material implication says that p ⇒ q is equivalent to the claim that either p is false or q is true – so that p ⇒ q is false only when p is true and q is false. The problems arise when p is false, for any false p makes the implication true, and this leads to strange possibilities of true inferences: the two parts of the implication need have no connection with each other at all, contrary to the spontaneous idea in everyday thought. It is true that Peirce as a logician generally supports material (‘Philonian’) implication – but it is also true that he expresses some second thoughts at around the same time as the afterthoughts on the diamond example.
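For reference (an addition to the passage above, not part of the original argument), the Philonian reading just described can be set out as a truth table; nothing here goes beyond the truth-functional definition stated above:

\[
(p \Rightarrow q) \;\equiv\; (\neg p \lor q)
\qquad
\begin{array}{cc|c}
p & q & p \Rightarrow q \\ \hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
\]

The last two rows exhibit the vacuous truth of any implication with a false antecedent – precisely the feature Diodorus objected to, and the feature that makes material implication a poor vehicle for Peirce’s real possibilities.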

Peirce is a forerunner of the attempts to construct alternatives such as strict implication, and the reason why is, of course, that real possibilities are not adequately depicted by material implication. Peirce is in need of an implication which may somehow picture the causal dependency of q on p. The basic reason for the mature Peirce’s problems with the representation of real possibilities is not primarily logical, however. It is scientific. Peirce realizes that the scientific charting of anything but singular, actual events necessitates the real existence of tendencies and relations connecting singular events. Now, what kinds are those tendencies and relations? The hard diamond example seems to emphasize causality, but this probably depends on the point of view chosen. The ‘conceived consequences’ of the pragmatic maxim may be causal indeed: if we accept gravity as a real concept, then masses will attract one another – but they may all the same be structural: if we accept horse riders as a real concept, then we should expect horses, persons, the taming of horses, etc. to exist, or they may be teleological. In any case, the interpretation of the pragmatic maxim in terms of real possibilities paves the way for a distinction between empty a priori suppositions and real a priori structures.

Knowledge Limited for Dummies….Didactics.


Bertrand Russell, with Alfred North Whitehead, aimed in Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion, when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (and therefore fled to the Institute for Advanced Study at Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, consider a statement like “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its negation is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that embed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but he was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, the combined accuracy of a measurement of an electron’s location and of its momentum is bounded: the product of the two uncertainties can never fall below a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength, the greater the amount of energy that hits the electron, and the greater the energy hitting the electron, the greater the impact on its velocity.
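In the standard modern notation (supplied here for reference, not part of the original passage), the principle bounds the product of the position uncertainty Δx and the momentum uncertainty Δp from below, with ħ the reduced Planck constant:

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\]

Shrinking Δx by using shorter-wavelength light, as described above, necessarily inflates Δp, and vice versa.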

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Some three decades after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, introduced a slight error into the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
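To make the mechanism concrete, here is a minimal Python sketch – not Lorenz’s original model or code – that integrates the standard Lorenz-63 equations from two initial states differing only by rounding to three decimal places. The parameter values, step size, and starting point are illustrative assumptions; the qualitative behavior, a gap that starts microscopic and grows until the two runs are unrelated, is the point.

```python
# Minimal sketch: sensitive dependence on initial conditions in the Lorenz-63
# system. All numerical choices here (sigma, rho, beta, dt, starting state)
# are illustrative assumptions, not values taken from Lorenz's actual runs.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one step with a simple Euler update."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def run(state, steps):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

full = (1.000127, 3.000491, 15.000832)       # carried to six decimal places
rounded = tuple(round(v, 3) for v in full)   # reentered from a 3-decimal printout

for steps in (100, 1000, 5000, 10000):
    a, b = run(full, steps), run(rounded, steps)
    gap = max(abs(p - q) for p, q in zip(a, b))
    print(f"after {steps:5d} steps, largest coordinate gap = {gap:.6f}")
```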

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
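In the same spirit, here is a toy Python sketch of such an artificial market – emphatically not the Santa Fe Institute’s actual model. The three rule-following trader types (trend followers, value traders, random liquidity traders), the price-impact rule, and every coefficient are illustrative assumptions; the only claim is the one made above, that a handful of simple rules already produces an irregular, market-looking price path.

```python
# Toy artificial market: three kinds of rule-following traders move the price.
# All trader rules and coefficients are illustrative assumptions.
import random

random.seed(7)

def simulate(steps=500, fundamental=100.0):
    prices = [fundamental]
    for t in range(1, steps):
        p = prices[-1]
        # Trend followers buy recent strength and sell recent weakness.
        trend_demand = 0.4 * (p - prices[-6]) if t >= 6 else 0.0
        # Value traders push the price back toward perceived fundamental value.
        value_demand = 0.1 * (fundamental - p)
        # Liquidity traders submit orders at random times, unrelated to price.
        noise_demand = random.gauss(0.0, 1.0)
        # Price impact: net excess demand moves the price.
        prices.append(p + 0.5 * (trend_demand + value_demand + noise_demand))
    return prices

path = simulate()
print(f"start {path[0]:.2f}, end {path[-1]:.2f}, "
      f"high {max(path):.2f}, low {min(path):.2f}")
```

Plotting the resulting path with any plotting library gives an irregular, drifting trajectory of the general kind described above.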

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever the set of equations might be that could be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Ideological Morphology. Thought of the Day 105.1


When applied to generic fascism, the combined concepts of ideal type and ideological morphology have profound implications for both the traditional liberal and Marxist definitions of fascism. For one thing, it means that fascism is no longer defined in terms of style – e.g. spectacular politics, uniformed paramilitary forces, the pervasive use of symbols like the fasces and the swastika – or organizational structure, but in terms of ideology. Moreover, the ideology is not seen as essentially nihilistic or negative (anti-liberalism, anti-Marxism, resistance to transcendence etc.), or as the mystification and aestheticization of capitalist power. Instead, it is constructed in the positive, but not apologetic or revisionist, terms of the fascists’ own diagnosis of society’s structural crisis and the remedies they propose to solve it, paying particular attention to the need to separate out the ineliminable, definitional conceptions from time- or place-specific adjacent or peripheral ones. However, for decades the state of fascist studies would have made Michael Freeden’s analysis well-nigh impossible to apply to generic fascism, because precisely what was lacking was any conventional wisdom embedded in common-sense usage of the term about what constituted the ineliminable cluster of concepts at its non-essentialist core. Despite a handful of attempts to establish its definitional constituents that combined deep comparative historiographical knowledge of the subject with a high degree of conceptual sophistication, there was a conspicuous lack of scholarly consensus over what constituted the fascist minimum. Whether there even was such an entity as generic fascism was one question to think through. Whether Nazism’s eugenic racism and the euthanasia campaign it led to, combined with a policy of physically eliminating racial enemies that issued in systematic persecution and mass murder, was simply unique, too exceptional to be located within the generic category, was another. Both these positions suggest a naivety about the epistemological and ontological status of generic concepts most regrettable among professional intellectuals, since every generic entity is a utopian heuristic construct, not a real thing, and every historical singularity is by definition unique no matter how many generic terms can be applied to it. Other common positions that implied considerable naivety were the ones that dismissed fascism’s ideology as too irrational or nihilistic to be part of the fascist minimum, or that generalized about its generic traits by blending fascism and Nazism.

Constructivism. Note Quote.


Constructivism, as portrayed by its adherents, “is the idea that we construct our own world rather than it being determined by an outside reality”. Indeed, a common ground among constructivists of different persuasions lies in a commitment to the idea that knowledge is actively built up by the cognizing subject. But, whereas individualistic constructivism (which is most clearly enunciated by radical constructivism) focuses on the biological/psychological mechanisms that lead to knowledge construction, sociological constructivism focuses on the social factors that influence learning.

Let us briefly consider certain fundamental assumptions of individualistic constructivism. The first issue a constructivist theory of cognition ought to elucidate concerns, of course, the raw materials from which knowledge is constructed. On this issue, von Glaserfeld, an eminent representative of radical constructivism, gives a categorical answer: “from the constructivist point of view, the subject cannot transcend the limits of individual experience” (Michael R. Matthews, Constructivism in Science Education: A Philosophical Examination). This statement presents the keystone of constructivist epistemology, which conclusively asserts that “the only tools available to a ‘knower’ are the senses … [through which] the individual builds a picture of the world”. What is more, the mental pictures so formed do not depict a world ‘external’ to the subject, but the distinct personal reality of each individual. And this of course entails, in its turn, that the responsibility for the gained knowledge lies with the constructor; it cannot be shifted to a pre-existing world. As Ranulph Glanville confesses, “reality is what I sense, as I sense it, when I’m being honest about it”.

In this way, individualistic constructivism estranges the cognizing subject from the external world. Cognition is not considered as aiming at the discovery and investigation of an ‘independent’ world; it is viewed as a ‘tool’ that exclusively serves the adaptation of the subject to the world as it is experienced. From this perspective, ‘knowledge’ acquires an entirely new meaning. In the expression of von Glaserfeld,

the word ‘knowledge’ refers to conceptual structures that epistemic agents, given the range of present experience, within their tradition of thought and language, consider viable….[Furthermore] concepts have to be individually built up by reflective abstraction; and reflective abstraction is not a matter of looking closer but at operating mentally in a way that happens to be compatible with the perceptual material at hand.

To put it briefly, ‘knowledge’ signifies nothing more than an adequate organization of the experiential world, which makes the cognizing subject capable of effectively manipulating its perceptual experience.

It is evident that such insights, precluding any external point of reference, have impacts on knowledge evaluation. Indeed, the ascertainment that “for constructivists there are no structures other than those which the knower forms by its own activity” (Michael R. Matthews, Constructivism in Science Education: A Philosophical Examination) yields unavoidably the conclusion drawn by Gerard De Zeeuw that “there is no mind-independent yardstick against which to measure the quality of any solution”. Hence, knowledge claims should not be evaluated by reference to a supposed ‘external’ world, but only by reference to their internal consistency and personal utility. This is precisely the reason that leads von Glaserfeld to suggest the substitution of the notion of “truth” by the notion of “viability” or “functional fit”: knowledge claims are appraised as “true” if they “functionally fit” into the subject’s experiential world; and to find a “fit” simply means not to notice any discrepancies. This functional adaptation of ‘knowledge’ to experience is what finally secures the intended “viability”.

In accordance with the constructivist view, the notion of ‘object’, far from indicating any kind of ‘existence’, explicitly refers to a strictly personal construction of the cognizing subject. Specifically, “any item of the furniture of someone’s experiential world can be called an ‘object’” (von Glaserfeld). From this point of view, the supposition that “the objects one has isolated in his experience are identical with those others have formed … is an illusion”. This of course deprives language of any rigorous criterion of objectivity; its physical-object statements, being dependent upon elements that are derived from personal experience, cannot be considered to reveal attributes of the objects as they factually are. Incorporating concepts whose meaning is highly associated with the individual experience of the cognizing subject, these statements form in the end a personal-specific description of the world. Conclusively, for constructivists the term ‘objectivity’ “shows no more than a relative compatibility of concepts” in situations where individuals have had occasion to compare their “individual uses of the particular words”.

From the viewpoint of radical constructivism, science, being a human enterprise, is subject, by its very nature, to human limitations. It is then naturally inferred on constructivist grounds that “science cannot transcend [just as individuals cannot] the domain of experience” (von Glaserfeld). This statement, indicating that there is no essential differentiation between personal and scientific knowledge, permits, for instance, John Staver to assert that “for constructivists, observations, objects, events, data, laws and theory do not exist independent of observers. The lawful and certain nature of natural phenomena is a property of us, those who describe, not of nature, what is described”. Accordingly, by virtue of the preceding premise, one may argue that “scientific theories are derived from human experience and formulated in terms of human concepts” (von Glaserfeld).

In the framework now of social constructivism, if one accepts that the term ‘knowledge’ means no more than “what is collectively endorsed” (David Bloor Knowledge and Social Imagery), he will probably come to the conclusion that “the natural world has a small or non-existent role in the construction of scientific knowledge” (Collins). Or, in a weaker form, one can postulate that “scientific knowledge is symbolic in nature and socially negotiated. The objects of science are not the phenomena of nature but constructs advanced by the scientific community to interpret nature” (Rosalind Driver et al.). It is worth remarking that both views of constructivism eliminate, or at least downplay, the role of the natural world in the construction of scientific knowledge.

It is evident that the foregoing considerations lead most versions of constructivism to ultimately conclude that the very word ‘existence’ has no meaning in itself. It does acquire meaning only by referring to individuals or human communities. The acknowledgement of this fact renders subsequently the notion of ‘external’ physical reality useless and therefore redundant. As Riegler puts it, within the constructivist framework, “an external reality is neither rejected nor confirmed, it must be irrelevant”.

Transcendentally Realist Modality. Thought of the Day 78.1


Let us start at the beginning! Though the fact is not mentioned in Genesis, the first thing God said on the first day of creation was ‘Let there be necessity’. And there was necessity. And God saw necessity, that it was good. And God divided necessity from contingency. And only then did He say ‘Let there be light’. Several days later, Adam and Eve were introducing names for the animals into their language, and during a break between the fish and the birds, introduced also into their language modal auxiliary verbs, or devices that would be translated into English using modal auxiliary verbs, and rules for their use, rules according to which it can be said of some things that they ‘could’ have been otherwise, and of other things that they ‘could not’. In so doing they were merely putting labels on a distinction that was no more their creation than were the fishes of the sea or the beasts of the field or the birds of the air.

And here is the rival view. The failure of Genesis to mention any command ‘Let there be necessity’ is to be explained simply by the fact that no such command was issued. We have no reason to suppose that the language in which God speaks to the angels contains modal auxiliary verbs or any equivalent device. Sometime after the Tower of Babel some tribes found that their purposes would be better served by introducing into their language certain modal auxiliary verbs, and fixing certain rules for their use. When we say that this is necessary while that is contingent, we are applying such rules, rules that are products of human, not divine intelligence.

This theological language would have been the natural way for seventeenth- or eighteenth-century philosophers, who nearly all were or professed to be theists or deists, to discuss the matter. For many today, such language cannot be literally accepted, and if it is only taken metaphorically, then it is at least better than the language of those who speak figuratively and frame the question as that of whether the ‘origin’ of necessity lies outside us or within us. So let us drop the theological language, and try again.

Well, here is the first view: Ultimately reality as it is in itself, independently of our attempts to conceptualize and comprehend it, contains both facts about what is, and superfacts about what not only is but had to have been. Our modal usages, for instance, the distinction between the simple indicative ‘is’ and the construction ‘had to have been’, simply reflect this fundamental distinction in the world, a distinction that is and from the beginning always was there, independently of us and our concerns.

And here is the second view: We have reasons, connected with our various purposes in life, to use certain words, including ‘would’ and ‘might’, in certain ways, and thereby to make certain distinctions. The distinction between those things in the world that would have been no matter what and those that might have failed to be if only circumstances had been different is a projection of the distinctions made in our language. Our saying there were necessities there before us is a retroactive application to the pre-human world of a way of speaking invented and created by human beings in order to solve human problems.

Well, that’s the second try. With it, even if one has gotten rid of theology, one has unfortunately not gotten rid of all metaphors. The key remaining metaphor is the optical one: reflection vs projection. Perhaps the attempt should be to get rid of all metaphors, and to admit that the two views are not so much philosophical theses or doctrines as ‘metaphilosophical’ attitudes or orientations: a stance that finds the ‘reflection’ metaphor congenial, and a stance that finds the ‘projection’ metaphor congenial. So, let’s try a third time to describe the distinction between the two outlooks in literal terms, avoiding optics as well as theology.

To begin with, both sides grant that there is a correspondence or parallelism between two items. On the one hand, there are facts about the contrast between what is necessary and what is contingent. On the other hand, there are facts about our usage of modal auxiliary verbs such as ‘would’ and ‘might’, and these include, for instance, the fact that we have no use for questions of the form ‘Would 29 still have been a prime number if such-and-such?’ but may have use for questions of the form ‘Would 29 still have been the number of years it takes for Saturn to orbit the sun if such-and-such?’ The difference between the two sides concerns the order of explanation of the relation between the two parallel ranges of facts.

And what is meant by that? Well, both sides grant that ‘29 is necessarily prime’, for instance, is a proper thing to say, but they differ in the explanation why it is a proper thing to say. Asked why, the first side will say that ultimately it is simply because 29 is necessarily prime. That makes the proposition that 29 is necessarily prime true, and since the sentence ‘29 is necessarily prime’ expresses that proposition, it is true also, and a proper thing to say. The second side will say instead that ‘29 is necessarily prime’ is a proper thing to say because there is a rule of our language according to which it is a proper thing to say. This formulation of the difference between the two sides gets rid of metaphor, though it does put an awful lot of weight on the perhaps fragile ‘why’ and ‘because’.

Note that the adherents of the second view need not deny that 29 is necessarily prime. On the contrary, having said that the sentence ‘29 is necessarily prime’ is, by the rules of our language, a proper thing to say, they will go on to say it. Nor need the adherents of the first view deny that recognition of the propriety of saying ‘29 is necessarily prime’ is enshrined in a rule of our language. The adherents of the first view need not even deny that proximately, as individuals, we learn that ‘29 is necessarily prime’ is a proper thing to say by picking up the pertinent rule in the course of learning our language. But the adherents of the first view will maintain that the rule itself is only proper because collectively, as the creators of the language, we or our remote ancestors have, in setting up the rule, managed to achieve correspondence with a pre-existing fact, or rather a pre-existing superfact, the superfact that 29 is necessarily prime. The difference between the two views lies in the order of explanation.

As regards labels for the two sides, or ‘metaphilosophical’ stances, rather than inventing new ones we may simply take two of the most overworked terms in the philosophical lexicon and give them one more job to do, calling the reflection view ‘realism’ about modality, and the projection view ‘pragmatism’. That at least will be easy to remember, since ‘realism’ and ‘reflection’ begin with the same first two letters, as do ‘pragmatism’ and ‘projection’. The realist/pragmatist distinction has bearing across a range of issues and problems, and above all it has bearing on the meta-issue of which issues are significant. For the two sides will, or ought to, recognize quite different questions as the central unsolved problems in the theory of modality.

For those on the realist side, the old problem of the ultimate source of our knowledge of modality remains, even if it is granted that the proximate source lies in knowledge of linguistic conventions. For knowledge of linguistic conventions constitutes knowledge of a reality independent of us only insofar as our linguistic conventions reflect, at least to some degree, such an ultimate reality. So for the realist the problem remains of explaining how such degree of correspondence as there is between distinctions in language and distinctions in the world comes about. If the distinction in the world is something primary and independent, and not a mere projection of the distinction in language, then how the distinction in language comes to be even imperfectly aligned with the distinction in the world remains to be explained. For it cannot be said that we have faculties responsive to modal facts independent of us – not in any sense of ‘responsive’ implying that if the facts had been different, then our language would have been different, since modal facts couldn’t have been different. What then is the explanation? This is the problem of the epistemology of modality as it confronts the realist, and addressing it is or ought to be at the top of the realist agenda.

As for the pragmatist side, a chief argument of thinkers from Kant to Ayer and Strawson and beyond for their anti-realist stance has been precisely that if the distinction we perceive in reality is taken to be merely a projection of a distinction created by ourselves, then the epistemological problem dissolves. That seems more like a reason for hoping the Kantian or Ayerite or Strawsonian view is the right one than for believing that it is; but in any case, even supposing the pragmatist view is the right one, and the problems of the epistemology of modality are dissolved, still the pragmatist side has an important unanswered question of its own to address. The pragmatist account begins by saying that we have certain reasons, connected with our various purposes in life, to use certain words, including ‘would’ and ‘might’, in certain ways, and thereby to make certain distinctions. What the pragmatist owes us is an account of what these purposes are, and how the rules of our language help us to achieve them. Addressing that issue is or ought to be at the top of the pragmatists’ to-do list.

While the positivist Ayer dismisses all metaphysics, the ordinary-language philosopher Strawson distinguishes good metaphysics, which he calls ‘descriptive’, from bad metaphysics, which he calls ‘revisionary’, but which might better be called ‘transcendental’ (without intending any specifically Kantian connotations). Descriptive metaphysics aims to provide an explicit account of our ‘conceptual scheme’, of the most general categories of commonsense thought, as embodied in ordinary language. Transcendental metaphysics aims to get beyond or behind all merely human conceptual schemes and representations to ultimate reality as it is in itself, an aim that Ayer and Strawson agree is infeasible and probably unintelligible. The descriptive/transcendental divide in metaphysics is a paradigmatically ‘metaphilosophical’ issue, one about what philosophy is about. Realists about modality are paradigmatic transcendental metaphysicians. Pragmatists must in the first instance be descriptive metaphysicians, since we must to begin with understand much better than we currently do how our modal distinctions work and what work they do for us, before proposing any revisions or reforms. And so the difference between realists and pragmatists goes beyond the question of which issue should come first on the philosopher’s agenda, being as it is itself an issue about what philosophical agendas are about.

The Mystery of Modality. Thought of the Day 78.0

sixdimensionquantificationalmodallogic.01

The ‘metaphysical’ notion of what would have been no matter what (the necessary) was conflated with the epistemological notion of what independently of sense-experience can be known to be (the a priori), which in turn was identified with the semantical notion of what is true by virtue of meaning (the analytic), which in turn was reduced to a mere product of human convention. And what motivated these reductions?

The mystery of modality, for early modern philosophers, was how we can have any knowledge of it. Here is how the question arises. We think that when things are some way, in some cases they could have been otherwise, and in other cases they couldn’t. We think, for instance, that 29 could not have failed to be prime, but that it might have failed to be the number of years it takes Saturn to orbit the sun. That is the modal distinction between the contingent and the necessary.

How do we know that the examples are examples of that of which they are supposed to be examples? And why should this question be considered a difficult problem, a kind of mystery? Well, that is because, on the one hand, when we ask about most other items of purported knowledge how it is we can know them, sense-experience seems to be the source, or anyhow the chief source of our knowledge, but, on the other hand, sense-experience seems able only to provide knowledge about what is or isn’t, not what could have been or couldn’t have been. How do we bridge the gap between ‘is’ and ‘could’? The classic statement of the problem was given by Immanuel Kant, in the introduction to the second or B edition of his first critique, The Critique of Pure Reason: ‘Experience teaches us that a thing is so, but not that it cannot be otherwise.’

Note that this formulation allows that experience can teach us that a necessary truth is true; what it is not supposed to be able to teach is that it is necessary. The problem becomes more vivid if one adopts the language that was once used by Leibniz, and much later re-popularized by Saul Kripke in his famous work on model theory for formal modal systems, the usage according to which the necessary is that which is ‘true in all possible worlds’. In these terms the problem is that the senses only show us this world, the world we live in, the actual world as it is called, whereas when we claim to know about what could or couldn’t have been, we are claiming knowledge of what is going on in some or all other worlds. For that kind of knowledge, it seems, we would need a kind of sixth sense, or extrasensory perception, or nonperceptual mode of apprehension, to see beyond the world in which we live to these various other worlds.
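In the possible-worlds idiom the worry can be stated in one line. On the standard Kripke-style truth definition (a textbook formulation, supplied here for illustration rather than quoted from the passage), a sentence is necessary at a world just in case it holds at every world accessible from it:

\[ w \Vdash \Box\varphi \quad\text{iff}\quad w' \Vdash \varphi \ \text{for every world } w' \text{ with } w\,R\,w', \]

whereas the senses, on the worry being described, acquaint us only with the single world $w$ we inhabit, never with the other worlds $w'$ quantified over on the right-hand side.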

Kant concludes that our knowledge of necessity must be what he calls a priori knowledge, knowledge that is ‘prior to’, or before, or independent of experience, rather than what he calls a posteriori knowledge, knowledge that is ‘posterior to’, or after, or dependent on experience. And so the problem of the origin of our knowledge of necessity becomes for Kant the problem of the origin of our a priori knowledge.

Well, that is not quite the right way to describe Kant’s position, since there is one special class of cases where Kant thinks it isn’t really so hard to understand how we can have a priori knowledge. He doesn’t think all of our a priori knowledge is mysterious, but only most of it. He distinguishes what he calls analytic from what he calls synthetic judgments, and holds that a priori knowledge of the former is unproblematic, since it is not really knowledge of external objects, but only knowledge of the content of our own concepts, a form of self-knowledge.

We can generate any number of examples of analytic truths by the following three-step process. First, take a simple logical truth of the form ‘Anything that is both an A and a B is a B’, for instance, ‘Anyone who is both a man and unmarried is unmarried’. Second, find a synonym C for the phrase ‘thing that is both an A and a B’, for instance, ‘bachelor’ for ‘one who is both a man and unmarried’. Third, substitute the shorter synonym for the longer phrase in the original logical truth to get the truth ‘Any C is a B’, or in our example, the truth ‘Any bachelor is unmarried’. Our knowledge of such a truth seems unproblematic because it seems to reduce to our knowledge of the meanings of our own words.
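Written out in first-order notation (the predicate letters are merely illustrative), the three steps look like this:

\[ \forall x\,\big((M(x) \land U(x)) \to U(x)\big) \qquad \text{(the logical truth)} \]
\[ B(x) \;:\equiv\; M(x) \land U(x) \qquad \text{(the synonymy: ‘bachelor’)} \]
\[ \forall x\,\big(B(x) \to U(x)\big) \qquad \text{(‘Any bachelor is unmarried’)} \]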

So the problem for Kant is not exactly how knowledge a priori is possible, but more precisely how synthetic knowledge a priori is possible. Kant thought we do have examples of such knowledge. Arithmetic, according to Kant, was supposed to be synthetic a priori, and geometry, too – all of pure mathematics. In his Prolegomena to Any Future Metaphysics, Kant listed ‘How is pure mathematics possible?’ as the first question for metaphysics, for the branch of philosophy concerned with space, time, substance, cause, and other grand general concepts – including modality.

Kant offered an elaborate explanation of how synthetic a priori knowledge is supposed to be possible, an explanation reducing it to a form of self-knowledge, but later philosophers questioned whether there really were any examples of the synthetic a priori. Geometry, so far as it is about the physical space in which we live and move – and that was the original conception, and the one still prevailing in Kant’s day – came to be seen as, not synthetic a priori, but rather a posteriori. The mathematician Carl Friedrich Gauß had already come to suspect that geometry is a posteriori, like the rest of physics. Since the time of Einstein in the early twentieth century the a posteriori character of physical geometry has been the received view (whence the need for border-crossing from mathematics into physics if one is to pursue the original aim of geometry).

As for arithmetic, the logician Gottlob Frege in the late nineteenth century claimed that it was not synthetic a priori, but analytic – of the same status as ‘Any bachelor is unmarried’, except that to obtain something like ‘29 is a prime number’ one needs to substitute synonyms in a logical truth of a form much more complicated than ‘Anything that is both an A and a B is a B’. This view was subsequently adopted by many philosophers in the analytic tradition of which Frege was a forerunner, whether or not they immersed themselves in the details of Frege’s program for the reduction of arithmetic to logic.
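To give at least a rough sense of the ‘more complicated’ logical form involved, primality can be written as a defined predicate (this is only a gesture, not Frege’s actual construction, which goes much further and defines the numbers and the arithmetical operations themselves in purely logical terms):

\[ \mathrm{Prime}(n) \;:\equiv\; n > 1 \;\land\; \forall a\,\forall b\,\big(a \cdot b = n \to a = 1 \lor b = 1\big), \]

so that ‘29 is a prime number’ abbreviates a quantified statement rather than a simple subject-predicate substitution of the bachelor sort.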

Once Kant’s synthetic a priori has been rejected, the question of how we have knowledge of necessity reduces to the question of how we have knowledge of analyticity, which in turn resolves into a pair of questions: on the one hand, how do we have knowledge of synonymy, which is to say, how do we have knowledge of meaning? On the other hand, how do we have knowledge of logical truths? As to the first question, presumably we acquire knowledge of meaning, explicit or implicit, conscious or unconscious, as we learn to speak; by the time we are able to ask whether this is a synonym of that, we have the answer. But what about knowledge of logic? That question didn’t loom large in Kant’s day, when only a very rudimentary logic existed, but after Frege vastly expanded the realm of logic – only by doing so could he find any prospect of reducing arithmetic to logic – the question loomed larger.

Many philosophers, however, convinced themselves that knowledge of logic also reduces to knowledge of meaning, namely, of the meanings of logical particles, words like ‘not’ and ‘and’ and ‘or’ and ‘all’ and ‘some’. To be sure, there are infinitely many logical truths in Frege’s expanded logic. But they all follow from or are generated by a finite list of logical rules, and philosophers were tempted to identify knowledge of the meanings of logical particles with knowledge of rules for using them: knowing the meaning of ‘or’, for instance, would be knowing that ‘A or B’ follows from A and follows from B, and that anything that follows both from A and from B follows from ‘A or B’. So in the end, knowledge of necessity reduces to conscious or unconscious knowledge of explicit or implicit semantical rules or linguistic conventions or whatever.
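Set out as inference rules, in a standard natural-deduction or sequent style (a conventional rendering, not a quotation from any of the philosophers discussed), what is claimed to be known in knowing the meaning of ‘or’ is:

\[ A \vdash A \lor B, \qquad B \vdash A \lor B, \qquad \text{and: if } A \vdash C \text{ and } B \vdash C, \text{ then } A \lor B \vdash C. \]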

Such is the sort of picture that had become the received wisdom in philosophy departments in the English-speaking world by the middle decades of the last century. For instance, A. J. Ayer, the notorious logical positivist, and P. F. Strawson, the notorious ordinary-language philosopher, disagreed with each other across a whole range of issues, and for many mid-century analytic philosophers such disagreements were considered the main issues in philosophy (though some observers would speak of the ‘narcissism of small differences’ here). And people like Ayer and Strawson, from the 1920s through the 1960s, would sometimes go on to speak as if linguistic convention were the source not only of our knowledge of modality, but of modality itself, and go on further to speak of the source of language as lying in ourselves. Individually, as children growing up in a linguistic community, or as foreigners seeking to enter one, we must consciously or unconsciously learn the explicit or implicit rules of the communal language as something with a source outside us to which we must conform. But by contrast, collectively, as a speech community, we do not so much learn as create the language with its rules. And so if the origin of modality, of necessity and its distinction from contingency, lies in language, it therefore lies in a creation of ours, and so in us. ‘We, the makers and users of language’ are the ground and source and origin of necessity. Well, this is not a literal quotation from any one philosophical writer of the last century, but a pastiche of paraphrases of several.

Intuition

intuition-psychology

During his attempt to axiomatize the category of all categories, Lawvere says

Our intuition tells us that whenever two categories exist in our world, then so does the corresponding category of all natural transformations between the functors from the first category to the second (The Category of Categories as a Foundation).

However, if one tries to reduce categorial constructions to set theory, one faces serious problems in the case of a category of functors. Lawvere (who, given his aim of axiomatization, is not concerned with such a reduction) relies here on “intuition” to stress that those who work with categorial concepts despite these problems have the feeling that the envisaged construction is clear, meaningful and legitimate. It is not reducibility to set theory, but an “intuition” yet to be specified, that accounts for the clarity, meaningfulness and legitimacy of a construction emerging in a mathematical working situation. In particular, Lawvere relies on a collective intuition, a common sense – for he explicitly says “our intuition”. Further, one obviously has to deal here with common sense on a technical level, for the “we” can only extend to a community used to working with the concepts concerned.
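The construction at issue can be stated compactly in standard notation (the notation, not the philosophical point, is what is being supplied here): for categories $\mathbf{C}$ and $\mathbf{D}$, the functor category $\mathbf{D}^{\mathbf{C}}$ has

\[ \mathrm{Ob}\big(\mathbf{D}^{\mathbf{C}}\big) = \{\, F : \mathbf{C} \to \mathbf{D} \ \text{a functor} \,\}, \qquad \mathrm{Hom}_{\mathbf{D}^{\mathbf{C}}}(F,G) = \mathrm{Nat}(F,G), \]

with composition of natural transformations taken componentwise. The set-theoretic difficulty is one of size: when $\mathbf{C}$ is not small, the collection $\mathrm{Nat}(F,G)$ need not be a set, so $\mathbf{D}^{\mathbf{C}}$ resists construal as a set-based structure.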

In the tradition of philosophy, “intuition” means immediate, i.e. not conceptually mediated, cognition. The use of the term in the context of validity (immediate insight into the truth of a proposition) is to be thoroughly distinguished from its use in the sensual context (the German Anschauung). Now, language, too, is a manner of representation, but contrary to language, in the context of images the concept of validity is meaningless.

Obviously, the cognition-guiding aspect of intuition is touched on here. Sensual intuition, especially, can take on this guiding (or heuristic) function. There have been many working situations in the history of mathematics in which making the objects of investigation accessible to sensual intuition (by providing a Veranschaulichung) yielded considerable progress in the development of knowledge concerning these objects. As an example, take the following account by Emil Artin of Emmy Noether’s contribution to the theory of algebras:

Emmy Noether introduced the concept of representation space – a vector space upon which the elements of the algebra operate as linear transformations, the composition of the linear transformation reflecting the multiplication in the algebra. By doing so she enables us to use our geometric intuition.

Similarly, Fréchet takes himself to have really “powered” research in the theory of functions and functionals by introducing a “geometrical” terminology:

One can [ …] consider the numbers of the sequence [of coefficients of a Taylor series] as coordinates of a point in a space [ …] of infinitely many dimensions. There are several advantages to proceeding thus, for instance the advantage which is always present when geometrical language is employed, since this language is so appropriate to intuition due to the analogies it gives birth to.

Mathematical terminology often stems from a current language usage whose (intuitive, sensual) connotation is welcomed and serves to give the user an “intuition” of what is intended. While Category Theory is often classified as a highly abstract matter quite remote from intuition, in reality it yields, together with its applications, a multitude of examples for the role of current language in mathematical conceptualization.

This notwithstanding, there is naturally also a tendency in contemporary mathematics to eliminate as much as possible commitments to (sensual) intuition in the erection of a theory. It seems that algebraic geometry fulfills only in the language of schemes that essential requirement of all contemporary mathematics: to state its definitions and theorems in their natural abstract and formal setting in which they can be considered independent of geometric intuition (Mumford D., Fogarty J. Geometric Invariant Theory).

In the pragmatist approach, intuition is seen as a relation. This means: one uses a piece of language in an intuitive manner (or not); intuitive use depends on the situation of utterance, and it can be learned and transformed. The reason for this relational point of view consists in the pragmatist conviction that each cognition of an object depends on the means of cognition employed – this means that for pragmatism there is no intuitive (in the sense of “immediate”) cognition; the term “intuitive” has to be given a new meaning.

What does it mean to use something intuitively? Heinzmann makes the following proposal: one uses language intuitively if one does not even have the idea of questioning its validity. Hence the term intuition, in the Heinzmannian reading of pragmatism, takes on a different meaning; it no longer signifies an immediate grasp. However, it is yet to be explained what it means for objects in general (and not only for propositions) to “question the validity of a use”. One uses an object intuitively if one is not concerned with how the rules of constitution of the object have been arrived at, if one does not focus on the materialization of these rules but only on the benefits of an application of the object in the present context. “In principle”, the cognition of an object is determined by another cognition, and this determination finds its expression in the “rules of constitution”; one uses the object intuitively (one does not bother about how its cognition is determined) if one does not question the rules of constitution (does not focus on the cognition which determines it). This is precisely what one does when using an object as a tool, because in doing so one does not (yet) ask which cognition determines the object. When something is used as a tool, this constitutes an intuitive use, whereas the use of something as an object does not (this defines tool and object). Here, each concept can in principle play both roles; of two concepts, one may happen to be used intuitively before, and the other after, a progress of insight. Note that with respect to a given cognition, Peirce, when saying “the cognition which determines it”, always thinks of a previous cognition, because he thinks of a determination of a cognition in our thought by previous thoughts. In the conceptual history of mathematics, however, one most often introduced an object first as a tool, and only after having done so did it come to one’s mind to ask for “the cognition which determines the cognition of this object” (that is, to ask how the use of this object can be legitimized).

The idea that it could depend on the situation whether validity is questioned or not has formerly been overlooked, perhaps because one always looked for a reductionist epistemology in which the capacity called intuition is used exclusively at the last level of regression; in a pragmatist epistemology, by contrast, intuition is used at every level, in the form of the not-yet-thematized tools. In classical systems, intuition was not simply conceived as a capacity; it was actually conceived as a capacity common to all human beings. “But the power of intuitively distinguishing intuitions from other cognitions has not prevented men from disputing very warmly as to which cognitions are intuitive”. Moreover, Peirce strongly criticises Cartesian individualism (which has it that the individual has the capacity to find the truth). We could sum up this philosophy thus: we cannot reach definitive truth, only provisional truth; significant progress is made not individually but only collectively; one cannot pretend that the history of thought did not take place and start from scratch; rather, every cognition is determined by a previous cognition (perhaps that of other individuals); one cannot uncover the ultimate foundation of our cognitions; rather, the fact that we sometimes reach a new level of insight, “deeper” than those thought of as fundamental before, merely indicates that there is no “deepest” level. The feeling that something is “intuitive” indicates a prejudice which can be philosophically criticised (even if this does not occur to us at the beginning).

In our approach, intuitive use is collectively determined: it depends on the particular usage of the community of users whether validity criteria are or are not questioned in a given situation of language use. However, it is acknowledged that, for example, scientific communities develop usages that make them communities of language users in their own right. Hence, situations of language use are not only partitioned into those in which it comes to the users’ mind to question validity criteria and those in which it does not; moreover, this partition is specific to a particular community (indeed, a community of language users is established partly through a particular partition; this amounts to a definition of the term “community of language users”). The existence of different communities with different common senses can lead to the following situation: something is used intuitively by one group and not intuitively by another. In this case, discussions inside the discipline occur; one has to cope with competing common senses (which are therefore not really “common”). This constitutes a task for the historian.

Mathematical Reductionism: A Case via C. S. Peirce’s Hypothetical Realism.

mathematical-beauty

During the 20th century, the following epistemology of mathematics was predominant: a sufficient condition for the possibility of the cognition of objects is that these objects can be reduced to set theory. The conditions for the possibility of the cognition of the objects of set theory (the sets), in turn, can be given in various manners; in any event, the objects reduced to sets do not need an additional epistemological discussion – they “are” sets. Hence, such an epistemology relies ultimately on ontology. Frege conceived the axioms as descriptions of how we actually manipulate extensions of concepts in our thinking (and in this sense as inevitable and intuitive “laws of thought”). Hilbert admitted the use of intuition exclusively in metamathematics where the consistency proof is to be done (by which the appropriateness of the axioms would be established); Bourbaki takes the axioms as mere hypotheses. Hence, Bourbaki’s concept of justification is the weakest of the three: “it works as long as we encounter no contradiction”; nevertheless, it is still epistemology, because from this hypothetico-deductive point of view, one insists that at least a proof of relative consistency (i.e., a proof that the hypotheses are consistent with the frequently tested and approved framework of set theory) should be available.
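Schematically (and with ZFC standing in for whatever precise axiomatization of set theory is taken as the approved framework; the schema is supplied here only for illustration), a relative consistency proof for a body of hypotheses $H$ establishes an implication rather than an outright consistency claim:

\[ \mathrm{Con}(\mathrm{ZFC}) \;\Rightarrow\; \mathrm{Con}(\mathrm{ZFC} + H). \]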

Doing mathematics, one tries to give proofs for propositions, i.e., to deduce the propositions logically from other propositions (premisses). Now, in the reductionist perspective, a proof of a mathematical proposition yields an insight into the truth of the proposition, if the premisses are already established (if one has already an insight into their truth); this can be done by giving in turn proofs for them (in which new premisses will occur which ask again for an insight into their truth), or by agreeing to put them at the beginning (to consider them as axioms or postulates). The philosopher tries to understand how the decision about what propositions to take as axioms is arrived at, because he or she is dissatisfied with the reductionist claim that it is on these axioms that the insight into the truth of the deduced propositions rests. Actually, this epistemology might contain a shortcoming, since Poincaré (and Wittgenstein) stressed that to have a proof of a proposition is by no means the same as to have an insight into its truth.

Attempts to disclose the ontology of mathematical objects reveal the following tendency in the epistemology of mathematics: mathematics is seen as suffering from a lack of ontological “determinateness”, namely that this science (contrary to many others) does not concern material data, so that the concept of material truth is not available (especially in the case of the infinite). This tendency is embarrassing, since on the other hand mathematical cognition is very often presented as cognition of the “greatest possible certainty” precisely because it seems not to be bound to material evidence, let alone experimental check.

The technical apparatus developed by the reductionist and set-theoretical approach nowadays serves other purposes, partly because tacit beliefs about sets have been challenged; the explanations it provides of the science are considered irrelevant by the practitioners of that science. There is doubt whether the above-mentioned sufficient condition is also necessary; it is not even accepted throughout as a sufficient one. But what happens if some objects, as in the case of category theory, do not fulfill the condition? It seems that the reductionist approach has, so to speak, been undocked from the historical development of the discipline in several respects; an alternative is required.

Anterior to Peirce, epistemology was dominated by the idea of a grasp of objects; since Descartes, intuition was considered throughout as a particular, innate capacity of cognition (even if idealists thought that it concerns the general, and empiricists that it concerns the particular). The task of this particular capacity was the foundation of epistemology; already with Aristotle’s first premisses of the syllogism, the aim was to go back to something first. In this traditional approach, it is by the ontology of the objects that one hopes to answer the fundamental question concerning the conditions for the possibility of the cognition of these objects. One hopes that there are simple “basic objects” to which the more complex objects can be reduced and whose cognition is possible by common sense – be this an innate or otherwise distinguished capacity of cognition common to all human beings. Here, epistemology is “wrapped up” in (or rests on) ontology; to do epistemology, one has to do ontology first.

Peirce shares Kant’s opinion according to which the object depends on the subject; however, he does not agree that reason is the crucial means of cognition to be criticised. In his paper “Questions concerning certain faculties claimed for man”, he points out the basic assumption of pragmatist philosophy: every cognition is semiotically mediated. He says that there is no immediate cognition (a cognition which “refers immediately to its object”), but that every cognition “has been determined by a previous cognition” of the same object. Correspondingly, Peirce replaces critique of reason by critique of signs. He thinks that Kant’s distinction between the world of things per se (Dinge an sich) and the world of apparition (Erscheinungswelt) is not fruitful; he rather distinguishes the world of the subject and the world of the object, connected by signs; his position consequently is a “hypothetical realism” in which all cognitions are valid only with reservations. This position neither negates nor asserts that the object per se (with the semiotical mediation stripped off) exists, since such assertions of “pure” existence are seen as necessarily hypothetical (that is, as not withstanding philosophical criticism).

By his basic assumption, Peirce was led to reveal a problem concerning the subject matter of epistemology, since this assumption means in particular that there is no intuitive cognition in the classical sense (which is synonymous with “immediate”). Hence, one could no longer consider cognitions as objects; there is no intuitive cognition of an intuitive cognition. Intuition can be no more than a relation. “All the cognitive faculties we know of are relative, and consequently their products are relations”. According to this new point of view, intuition can no longer serve to found epistemology, in departure from the former reductionist attitude. A central argument of Peirce against reductionism, or, as he puts it,

the reply to the argument that there must be a first is as follows: In retracing our way from our conclusions to premisses, or from determined cognitions to those which determine them, we finally reach, in all cases, a point beyond which the consciousness in the determined cognition is more lively than in the cognition which determines it.

Peirce gives some examples derived from physiological observations about perception, like the fact that the third dimension of space is inferred, and the blind spot of the retina. In this situation, the process of reduction loses its legitimacy since it no longer fulfills the function of justifying cognition. At such a place, something happens which I would like to call an “exchange of levels”: the process of reduction is interrupted in that the things exchange the roles performed in the determination of a cognition: what was originally considered as determining is now determined by what was originally considered as asking for determination.

The idea that contents of cognition are necessarily provisional has an effect on the very concept of conditions for the possibility of cognition. It seems one can infer from Peirce’s words that what vouches for a cognition is not necessarily the cognition which determines it but the liveliness of our consciousness in the cognition. Here, “to vouch for a cognition” no longer means what it meant before (which was much the same as “to determine a cognition”), but it still means that the cognition is (provisionally) reliable. This conception of the liveliness of our consciousness might roughly be seen as a substitute for the capacity of intuition in Peirce’s epistemology – but only roughly, since it has a different coverage.

Task of the Philosopher. Thought of the Day 75.0

4578-004-B2A539B2

Poincaré in Science and Method discusses how “reasonable” axioms (theories) are chosen. In a section which is intended to cool down the expectations put in the “logistic” project, he points out the problem as follows:

Even admitting that it has been established that all theorems can be deduced by purely analytical processes, by simple logical combinations of a finite number of axioms, and that these axioms are nothing but conventions, the philosopher would still retain the right to seek the origin of these conventions, and to ask why they were judged preferable to the contrary conventions.

[ …] A selection must be made out of all the constructions that can be combined with the materials furnished by logic. The true geometrician makes this decision judiciously, because he is guided by a sure instinct, or by some vague consciousness of I know not what profounder and more hidden geometry, which alone gives a value to the constructed edifice.

Hence, Poincaré sees the task of the philosopher as the explanation of how conventions come to be. At the end of the quotation, Poincaré tries to give such an explanation, namely by referring to an “instinct” (in the sequel he mentions briefly that one can obviously ask where such an instinct comes from, but he gives no answer to this question). The pragmatist position to be developed will lead to an essentially similar, but more complete and clear, point of view.

According to Poincaré’s definition, the task of the philosopher starts where that of the mathematician ends: for a mathematician, a result is right if he or she has a proof, that is, if the result can be logically deduced from the axioms; that one has to adopt some axioms is seen as a necessary evil, and one perhaps puts some energy into the project of minimizing the number of axioms (this might have been how set theory came to be thought of as a foundation of mathematics). A philosopher, however, wants to understand why exactly these axioms and no others were chosen. In particular, the philosopher is concerned with the question whether the chosen axioms actually grasp the intended model. This question is justified, since formal definitions are not automatically sufficient to grasp the intention of a concept; at the same time, the question is methodologically very hard, since ultimately a concept is available in mathematical proof only through a formal explication. At any rate, it becomes clear that the task of the philosopher is related to a criterion problem.

Georg Kreisel thinks that we do indeed have the capacity to decide whether a given model was intended or not:

many formal independence proofs consist in the construction of models which we recognize to be different from the intended notion. It is a fact of experience that one can be honest about such matters! When we are shown a ‘non-standard’ model we can honestly say that it was not intended. [ …] If it so happens that the intended notion is not formally definable this may be a useful thing to know about the notion, but it does not cast doubt on its objectivity.

Poincaré could not yet know (but he was experienced enough a mathematician to “feel”) that axiom systems quite often fail to grasp the intended model. It was seldom the work of professional philosophers and often the byproduct of the actual mathematical work to point out such discrepancies.

Following Kant, one defines the task of epistemology thus: to determine the conditions of the possibility of the cognition of objects. Now, what is meant by “cognition of objects”? What is meant is that we have an insight into (the truth of) propositions about the objects (we can then speak of the propositions as facts); and epistemology asks what the conditions are for the possibility of such an insight. Hence, epistemology is not concerned with what objects are (ontology), but with what (and how) we can know about them (ways of access). This notwithstanding, the two things are intimately related, especially in the Peircean stream of pragmatist philosophy. The 19th century (in particular Helmholtz) stressed against Kant the importance of physiological conditions for this access to objects. Nevertheless, epistemology is concerned with logic and not with the brain. Pragmatism puts the accent on the means of cognition – to which the brain also belongs.

Kant in his epistemology stressed that the object depends on the subject, or, more precisely, that the cognition of an object depends on the means of cognition used by the subject. For him, the decisive means of cognition was reason; thus, his epistemology was to a large degree a critique of reason. Other philosophers disagreed about this special role of reason but shared the view that the task of philosophy is to criticise the means of cognition. For all of them, philosophy has to point out what we can “legitimately” speak about. Such a critical approach is implicitly contained in Poincaré’s description of the task of the philosopher.

Reichenbach decomposes the task of epistemology into different parts: the guiding, the justification, and the limitation of cognition. While justification is usually considered the most important of the three aspects, the “task of the philosopher” as specified above, following Poincaré, is not limited to it. Indeed, the question why just these axioms and no others were chosen is obviously a question concerning the guiding principles of cognition: which criteria are at work? Mathematics presents itself at its various historical stages as the result of a series of decisions on questions of the kind “Which objects should we consider? Which definitions should we make? Which theorems should we try to prove?” and so on – in short, instances of the “criterion problem”. Epistemology thus has, among its tasks, that of evoking these criteria – used but not evoked by the researchers themselves. For after all, these criteria cannot be without effect on the conditions for the possibility of cognition of the objects which one has decided to consider. (In turn, the conditions for this possibility in general determine the range of objects from which one has to choose.) However, such an epistemology does not have the task of resolving the criterion problem normatively (that is, of prescribing to the scientist which choices to make).