Adjacency of the Possible: Teleology of Autocatalysis. Thought of the Day 140.0

Given a network of catalyzed chemical reactions, a (sub)set R of such reactions is called:

  1. Reflexively autocatalytic (RA) if every reaction in R is catalyzed by at least one molecule involved in any of the reactions in R;
  2. F-generated (F) if every reactant in R can be constructed from a small “food set” F by successive applications of reactions from R;
  3. Reflexively autocatalytic and F-generated (RAF) if it is both RA and F.

The food set F contains molecules that are assumed to be freely available in the environment. Thus, an RAF set formally captures the notion of “catalytic closure”, i.e., a self-sustaining set supported by a steady supply of (simple) molecules from some food set.
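The RA and F conditions lend themselves to a simple fixed-point computation. The sketch below is a minimal Python rendering in the style of Hordijk and Steel’s maxRAF reduction; the tuple format for reactions and the toy network are illustrative assumptions, not taken from the text above. It repeatedly discards reactions whose reactants are not reachable from F or which no reachable molecule catalyzes:

```python
def closure(food, reactions):
    """All molecules constructible from the food set by repeatedly applying
    reactions whose reactants are already available (catalysis ignored here,
    matching the F-generated definition above)."""
    avail = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= avail and not set(products) <= avail:
                avail |= set(products)
                changed = True
    return avail

def max_raf(food, reactions):
    """Iteratively discard reactions that are not F-generated or not catalyzed
    by any reachable molecule; the fixed point is the maximal RAF (maybe empty)."""
    R = list(reactions)
    while True:
        avail = closure(food, R)
        kept = [r for r in R
                if set(r[0]) <= avail                # reactants reachable (F-generated)
                and any(c in avail for c in r[2])]   # some catalyst reachable (RA)
        if len(kept) == len(R):
            return kept
        R = kept

# Toy network: each reaction is (reactants, products, catalysts).
rxns = [
    (("a", "b"), ("ab",), ("cd",)),  # a + b -> ab, catalyzed by cd
    (("c", "d"), ("cd",), ("ab",)),  # c + d -> cd, catalyzed by ab
    (("a", "c"), ("ac",), ("zz",)),  # catalyst zz is never produced: dropped
]
print([r[1][0] for r in max_raf({"a", "b", "c", "d"}, rxns)])  # the mutually catalytic pair survives
```

Removing a reaction can shrink the closure and strip another reaction of its only catalyst, which is why the reduction must iterate to a fixed point rather than make a single pass.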

Stuart Kauffman begins with the Darwinian idea of the origin of life in a biological ‘primordial soup’ of organic chemicals and investigates the possibility that one chemical substance catalyzes the reaction of two others, forming new reagents in the soup. Such catalyses may, of course, form chains, so that one reagent catalyzes the formation of another catalyzing another, etc., and self-sustaining loops of reaction chains are an evident possibility in the appropriate chemical environment. A statistical analysis reveals that such catalytic reactions may form interdependent networks when the rate of catalyzed reactions per molecule approaches one, creating a self-organizing chemical cycle which he calls an ‘autocatalytic set’. When the rate of catalyses per reagent is low, only small local reaction chains form, but as the rate approaches one, the reaction chains in the soup suddenly ‘freeze’, so that what was a group of chains or islands in the soup now connects into one large interdependent network, constituting an ‘autocatalytic set’. Such an interdependent reaction network constitutes the core of the body definition unfolding in Kauffman, and its cyclic character forms the basic precondition for self-sustainment. An ‘autonomous agent’ is an autocatalytic set able to reproduce and to undertake at least one thermodynamic work cycle.
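The sudden ‘freezing’ is, in graph-theoretic terms, the emergence of a giant connected component near one link per node. A toy simulation can show the flavor of the transition; here an Erdős–Rényi random graph stands in for the catalysis network, a deliberate simplification of Kauffman’s chemistry:

```python
import random

def largest_component_fraction(n, mean_degree, seed=0):
    """Build a random graph with n nodes and the given expected degree,
    then return the size of the largest connected component as a fraction of n."""
    rng = random.Random(seed)
    p = mean_degree / (n - 1)          # edge probability giving that mean degree
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, best = [False] * n, 0
    for s in range(n):                 # depth-first search over all components
        if seen[s]:
            continue
        stack, size = [s], 0
        seen[s] = True
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        best = max(best, size)
    return best / n

# Below one link per node only small islands form; above it, one big network.
for c in (0.5, 1.0, 1.5, 2.0):
    print(c, round(largest_component_fraction(2000, c), 2))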

This definition implies two things: reproduction possibility, and the appearance of completely new, interdependent goals in work cycles. The latter idea requires the ability of the autocatalytic set to save energy in order to spend it in its own self-organization, in its search for reagents necessary to uphold the network. These goals evidently introduce a – restricted, to be sure – teleology defined simply by the survival of the autocatalytic set itself: actions supporting this have a local teleological character. Thus, the autocatalytic set may, as it evolves, enlarge its cyclic network by recruiting new subcycles supporting and enhancing it in a developing structure of subcycles and sub-sub-cycles. 

Kauffman proposes that the concept of ‘autonomous agent’ implies a whole new cluster of interdependent concepts. Thus, the autonomy of the agent is defined by ‘catalytic closure’ (any reaction in the network demanding catalysis will get it) which is a genuine Gestalt property in the molecular system as a whole – and thus not in any way derivable from the chemistry of single chemical reactions alone.

Kauffman’s definitions on the basis of speculative chemistry thus entail not only the Kantian cyclic structure, but also the primitive perception and action phases of Uexküll’s functional circle. Thus, Kauffman’s definition of the organism in terms of an ‘autonomous agent’ basically builds on an Uexküllian intuition, namely the idea that the most basic property in a body is metabolism: the constrained, organizing processing of high-energy chemical material and the correlated perception and action performed to localize and utilize it – all of this constituting a metabolic cycle coordinating the organism’s inside and outside, defining teleological action. Perception and action phases are, so to speak, the extension of the cyclical structure of the closed catalytic set to encompass parts of its surroundings, so that the circle of metabolism may only be completed by means of successful perception and action parts.

The evolution of autonomous agents is taken as the empirical basis for the hypothesis of a general thermodynamic regularity based on non-ergodicity: the Big Bang universe (and, consequently, the biosphere) is not at equilibrium and will not reach equilibrium during the life-time of the universe. This gives rise to Kauffman’s idea of the ‘adjacent possible’. At a given point in evolution, one can define the set of chemical substances which do not yet exist in the universe – but which are at a distance of only one chemical reaction from a substance already existing in the universe. Biological evolution has, evidently, led to an enormous growth of types of organic macromolecules, and new such substances come into being every day. Maybe there is a sort of chemical potential leading from the actually realized substances into the adjacent possible which is in some sense driving the evolution? In any case, Kauffman advances the hypothesis that the biosphere as such is supercritical in the sense that there is, in general, more than one reaction catalyzed by each reagent. Cells, in order not to be destroyed by this chemical storm, must be internally subcritical (even if close to the critical boundary). But if the biosphere as such is, in fact, supercritical, then this distinction seemingly a priori necessitates the existence of a boundary of the agent, protecting it against the environment.
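The adjacent possible can be made concrete in the binary polymer toy model often used in this literature; the representation of molecules as bit strings, and the length cap, are illustrative assumptions. The frontier is simply the set of strings reachable by one ligation or cleavage from what already exists:

```python
def adjacent_possible(actual, max_len=8):
    """One-reaction neighborhood of the existing polymers: every ligation of
    two existing strings and every cleavage of one, minus what already exists."""
    new = set()
    for a in actual:
        for b in actual:
            if len(a) + len(b) <= max_len:
                new.add(a + b)        # ligation: a + b -> ab
    for s in actual:
        for k in range(1, len(s)):
            new.add(s[:k])            # cleavage: s -> prefix + suffix
            new.add(s[k:])
    return new - set(actual)

frontier = adjacent_possible({"0", "1", "01"})
print(sorted(frontier))
```

Iterating `actual |= adjacent_possible(actual)` expands the realized set step by step, so the frontier itself keeps moving outward – the combinatorial growth Kauffman points to.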

The Biological Kant. Note Quote.

The biological treatise takes as its object the realm of physics left out of Kant’s critical demarcations of scientific, that is, mathematical and mechanistic, physics. Here, the main idea was that scientifically understandable Nature was defined by lawfulness. In his Metaphysical Foundations of Natural Science, this idea was taken further in the following claim:

I claim, however, that there is only as much proper science to be found in any special doctrine on nature as there is mathematics therein, and further ‘a pure doctrine on nature about certain things in nature (doctrine on bodies and doctrine on minds) is only possible by means of mathematics’.

The basic idea is thus to identify Nature’s lawfulness with its ability to be studied by means of mathematical schemata uniting understanding and intuition. The central schema, to Kant, was number, so apt for the understanding of mechanically caused movement. But already here, Kant is very well aware that a whole series of aspects of spontaneously experienced Nature are left out of sight by the concentration on matter in movement, and he calls for these further realms of Nature to be studied by a continuation of the Copernican turn, by the mind’s further study of the utmost limits of itself. Why do we spontaneously see natural purposes in Nature? Purposiveness is wholly different from necessity, which is crucial to Kant’s definition of Nature. There is no reason in the general concept of Nature (as lawful) to assume that nature’s objects may serve each other as purposes. Nevertheless, we do not stop assuming just that. But what we do when we ascribe purposes to Nature is use the faculties of mind in another way than in science, much closer to the way we use them in the appreciation of beauty and art, the object of the first part of the book immediately before the treatment of teleological judgment. This judgment is characterized by a central distinction, already widely argued in that first part: the difference between determinative and reflective judgments. While the determinative judgment is used scientifically to decide whether a specific case follows a certain rule, explains by means of a derivation from a principle, and thus constitutes the objectivity of the object in question, the reflective judgment lacks all these features. It does not proceed by means of explanation, but by mere analogy; it is not constitutive, but merely regulative; it does not prove anything but merely judges; and it has no principle of reason to rest its head upon but the very act of judging itself.
These ideas are now elaborated throughout the critique of teleological judgment.

In the section Analytik der teleologischen Urteilskraft, Kant gradually approaches the question: he first treats merely formal purposiveness. We may ascribe purposes to geometry in so far as it is useful to us, just as rivers carrying fertile soils with them for trees to grow in may be ascribed purposes; these are, however, merely contingent purposes, dependent on an external telos. The crucial point is the existence of objects which are only possible as such in so far as they are defined by purposes:

That its form is not possible after mere natural laws, that is, such things which may not be known by us through understanding applied to objects of the senses; on the contrary that even the empirical knowledge about them, regarding their cause and effect, presupposes concepts of reason.

The idea here is that in order to conceive of objects which may not be explained with reference to understanding and its (in this case, mechanical) concepts only, these must be grasped by the non-empirical ideas of reason itself. If causes are perceived as being interlinked in chains, then such contingencies are to be thought of only as small causal circles on the chain, that is, as things being their own cause. Hence Kant’s definition of the Idea of a natural purpose:

an object exists as natural purpose, when it is cause and effect of itself.

This can be thought of as an idea without contradiction, Kant maintains, but not conceived. This circularity (the small causal circles) is a very important feature in Kant’s tentative schematization of purposiveness. Another way of coining this Idea is: things as natural purposes are organized beings. This entails that naturally purposeful objects must possess a certain spatio-temporal construction: the parts of such a thing must be possible only through their relation to the whole – and, conversely, the parts must actively connect themselves to this whole. Thus, the corresponding idea can be summed up as the Idea of the Whole which is necessary to pass judgment on any empirical organism, and it is very interesting to note that Kant sums up the determination of any part of a Whole by all other parts in the phrase that a natural purpose is possible only as an organized and self-organizing being. This is probably the very birth certificate of the metaphysics of self-organization. It is important to keep in mind that Kant does not feel any vitalist temptation to suppose any organizing power or any autonomy on the part of the whole, which may come into being only by this process of self-organization between its parts. When Kant talks about the forming power in the formation of the Whole, it is thus nothing outside of this self-organization of its parts.

This leads to Kant’s final definition: an organized being is that in which all that is alternating is ends and means. This idea is extremely important as a formalization of the idea of teleology: natural purposes do not imply that there exist given, stable ends for nature to pursue; on the contrary, they are locally defined by causal cycles in which every part interchangeably assumes the role of ends and means. Thus, there is no absolute end in this construal of nature’s teleology; it analyzes teleology formally at the same time as it relativizes it with respect to substance. Kant takes care to note that this maxim need not be restricted to the beings – animals – which we spontaneously tend to judge as purposeful. The idea of natural purposes thus entails that there might exist a plan in nature rendering purposeful even processes which we have every reason to find repugnant. In this vision, teleology might embrace causality – and even aesthetics:

Also natural beauty, that is, its harmony with the free play of our epistemological faculties in the experience and judgment of its appearance can be seen in the way of objective purposivity of nature in its totality as system, in which man is a member.

An important consequence of Kant’s doctrine is that teleology is, so to speak, secularized in two ways: (1) it is formal, and (2) it is local. It is formal because self-organization does not ascribe any special, substantial goal for organisms to pursue – other than the sustainment of self-organization itself. Teleology is thus merely a formal property in certain types of systems. This is why teleology is also local: it is to be found in certain systems where the causal chains form loops, as Kant metaphorically describes the cycles involved in self-organization – it is no overarching goal governing organisms from the outside. Teleology is a local, bottom-up process only.

Kant does not in any way doubt the existence of organized beings; what is at stake is the possibility of dealing with them scientifically in terms of mechanics. Even if they exist as given things in experience, natural purposes cannot receive any concept. This implies that biology is evident in so far as the existence of organisms cannot be doubted, yet biology will never rise to the heights of science: its attempts at doing so are delimited beforehand, all scientific explanations of organisms being bound to be mechanical. Following this line of argument, the position corresponds very well to present-day reductionism in biology, which tries to reduce all problems of phenotypical characters, organization, morphogenesis, behavior, ecology, etc. to the biochemistry of genetics. But the other side of the argument is that no matter how successful this reduction may prove, it will never be able to reduce or replace the teleological point of view necessary in order to understand the organism as such in the first place.

Evidently, there is something deeply unsatisfactory in this conclusion, which is why most biologists have hesitated to adopt it, clinging instead either to full-blown reductionism or to some brand of vitalism, subjecting themselves to the dangers of ‘transcendental illusion’ and allowing for some Goethe-like intuitive idea without any schematization. Kant tries to soften the question by philosophical means, by establishing a crossing over from metaphysics to physics, or, from the metaphysical constraints on mechanical physics to physics in its empirical totality, including the organized beings of biology. Pure mechanics leaves physics as a whole unorganized, and this organization is sought to be established by means of ‘mediating concepts’. Among them is the formative power, which is not conceived of in a vitalist, substantialist manner, but is rather a notion referring to the means by which matter manages to self-organize. It thus comprehends not only biological organization, but macrophysical solid-matter physics as well. Here, he adds an important argument to the critique of judgment:

Because man is conscious of himself as a self-moving machine, without being able to further understand such a possibility, he can, and is entitled to, introduce a priori organic-moving forces of bodies into the classification of bodies in general and thus to distinguish mere mechanical bodies from self-propelled organic bodies.

Fallibilist a priori. Thought of the Day 127.0

[Figure: Peirce’s ten classes of sign represented in terms of three core functions]

Kant’s ‘transcendental subject’ is pragmatized in this notion in Peirce, transcending any delimitation of reason to the human mind: the ‘anybody’ is operational and refers to anything which is able to undertake reasoning’s formal procedures. In the same way, Kant’s synthetic a priori notion is pragmatized in Peirce’s account:

Kant declares that the question of his great work is ‘How are synthetical judgments a priori possible?’ By a priori he means universal; by synthetical, experiential (i.e., relating to experience, not necessarily derived wholly from experience). The true question for him should have been, ‘How are universal propositions relating to experience to be justified?’ But let me not be understood to speak with anything less than profound and almost unparalleled admiration for that wonderful achievement, that indispensable stepping-stone of philosophy. (The Essential Peirce Selected Philosophical Writings)

Synthetic a priori is interpreted as experiential and universal, or, to put it another way, observational and general – thus Peirce’s rationalism in demanding rational relations is connected to his scholastic realism posing the existence of real universals.

But we do not make a diagram simply to represent the relation of killer to killed, though it would not be impossible to represent this relation in a Graph-Instance; and the reason why we do not is that there is little or nothing in that relation that is rationally comprehensible. It is known as a fact, and that is all. I believe I may venture to affirm that an intelligible relation, that is, a relation of thought, is created only by the act of representing it. I do not mean to say that if we should some day find out the metaphysical nature of the relation of killing, that intelligible relation would thereby be created. [ ] No, for the intelligible relation has been signified, though not read by man, since the first killing was done, if not long before. (The New Elements of Mathematics)

Peirce’s pragmatizing of Kant enables him to escape the threatening subjectivism: rational relations are inherent in the universe and are not our inventions, but we must know (some of) them in order to think. The relation of killer to killed is not, however, given our present knowledge, one of those rational relations, even if we might later become able to produce a rational diagram of aspects of it. Such a relation is, as Peirce says, a mere fact. On the other hand, rational relations are – even if inherent in the universe – not only facts. Their extension is rather that of mathematics as such, which can be seen from the fact that the rational relations are what make necessary reasoning possible – at the same time as Peirce subscribes to his father’s definition of mathematics: mathematics is the science that draws necessary conclusions – with Peirce’s addendum that these conclusions are always hypothetical. This conforms to Kant’s idea that the results of synthetic a priori judgments comprised mathematics as well as the sciences built on applied mathematics. Thus, in constructing diagrams, we have all the possible relations in mathematics (which is inexhaustible, following Gödel’s 1931 incompleteness theorem) at our disposal. Moreover, the idea that we might later learn about the rational relations involved in killing entails a historical, fallibilist rendering of the a priori notion. Unlike the case in Kant, the a priori is thus removed from a privileged connection to the knowing subject and its transcendental faculties. Thus, Peirce rather anticipates a fallibilist notion of the a priori.

Metaphysical Continuity in Peirce. Thought of the Day 122.0

Continuity has wide implications in the different parts of Peirce’s architectonics of theories. Time and time again, Peirce refers to his ‘principle of continuity’, which has nothing immediately to do with Poncelet’s famous principle of the same name in geometry, but is rather a metaphysical implication taken to follow from fallibilism: if all more or less distinct phenomena swim in a vague sea of continuity, then it is no wonder that fallibilism must be accepted. And if the world is basically continuous, we should not expect conceptual borders to be definitive but should rather conceive of terminological distinctions as relative to an underlying, monist continuity. In this system, mathematics is the first science. Thereafter follows philosophy, which is distinguished from purely hypothetical mathematics by having an empirical basis. Philosophy, in turn, has three parts: phenomenology, the normative sciences, and metaphysics. The first investigates solely ‘the Phaneron’, which is all that could be imagined to appear as an object for experience: ‘by the word phaneron I mean the collective total of all that is in any way or in any sense present to the mind, quite regardless whether it corresponds to any real thing or not.’ (Collected Papers of Charles Sanders Peirce) As is evident, this definition of Peirce’s ‘phenomenology’ parallels Husserl’s phenomenological reduction in bracketing the issue of the existence of the phenomenon in question. Even if it is thus built on introspection and general experience, it is – analogous to Husserl and other Brentano disciples at the same time – conceived in a completely antipsychological manner: ‘It religiously abstains from all speculation as to any relations between its categories and physiological facts, cerebral or other.’ and ‘I abstain from psychology which has nothing to do with ideoscopy.’ (Letter to Lady Welby).
The normative sciences fall into three: aesthetics, ethics, and logic, in that order (and hence of decreasing generality), among which Peirce does not spend very much time on the former two. Aesthetics is the investigation of which possible goals may be aimed at (the Good, Truth, Beauty, etc.), and ethics of how they may be reached. Logic is concerned with the grasping and conservation of Truth and takes up the larger part of Peirce’s interest among the normative sciences. As it deals with how truth can be obtained by means of signs, it is also called semiotics (‘logic is formal semiotics’), which is thus coextensive with the theory of science – logic in this broad sense contains all parts of philosophy of science, contexts of discovery as well as contexts of justification. Semiotics has, in turn, three branches: grammatica speculativa (or stekheiotics), critical logic, and methodeutic (inspired by the mediaeval trivium: grammar, logic, and rhetoric). The middle of these three lies closest to our present-day conception of logic; it is concerned with the formal conditions for truth in symbols – that is, propositions, arguments, their validity and how to calculate them – including Peirce’s many developments of the logic of his time: quantifiers, the logic of relations, abduction, deduction, and induction, logical notation systems, etc. All of these, however, presuppose the existence of simple signs, which are investigated by what is often seen as semiotics proper, the grammatica speculativa; it may also be called formal grammar. It investigates the formal conditions for symbols having meaning, and it is here we find Peirce’s definition of signs and his trichotomies of different types of sign aspects.
Methodeutic or formal rhetoric, on the other hand, concerns the pragmatic use of the former two branches, that is, the study of how to use logic in a fertile way in research, the formal conditions for the ‘power’ of symbols, that is, their reference to their interpretants; here can be found, e.g., Peirce’s famous definitions of pragmati(ci)sm and his directions for scientific investigation. To phenomenology – again in analogy to Husserl – logic adds the interest in signs and their truth. After logic, metaphysics follows in Peirce’s system, concerning the inventory of existing objects, conceived in general – and strongly influenced by logic, in the Kantian tradition of seeing metaphysics as mirroring logic. Here too, Peirce has several proposals for subtypologies, even if none of them seems stable, and under this headline classical metaphysical issues mix freely with generalizations of scientific results and cosmological speculations.

Peirce himself saw this classification in an almost sociological manner, so that the criteria of distinction do not stem directly from the natural kinds of the implied objects, but from which groups of persons study which objects: ‘the only natural lines of demarcation between nearly related sciences are the divisions between the social groups of devotees of those sciences’. Science collects scientists into bundles, because they are defined by their causa finalis, a teleological intention demanding of them to solve a central problem.

Measured against this definition, one has to say that Peirce himself was not modest: not only does he continuously transgress such boundaries in his production, he frequently does so even within the scope of single papers. In his writings, there is always only a brief distance from mathematics to metaphysics – or between any other two issues in mathematics and philosophy – and this implies, first, that the investigation of continuity and generality in Peirce’s system is more systematic than any actually existing exposition of these issues in Peirce’s texts, and second, that the discussion must constantly rely on cross-references. This has the structural motivation that as soon as you are below the level of mathematics in Peirce’s system (itself inspired by the Comtean system), each single science receives determinations from three different directions, every science consisting of material and formal aspects alike. First, it receives formal directives ‘from above’, from those more general sciences which stand above it, providing the general frameworks in which it must unfold. Second, it receives material determinations from its own object, requiring it to make certain choices in its use of formal insights from the higher sciences. The cosmological issue of the character of empirical space, for instance, can take from mathematics the different (non-)Euclidean geometries and investigate which of these are fit to describe spatial aspects of our universe, but mathematics does not, in itself, decide the question. Finally, the single sciences receive in practice determinations ‘from below’, from more specific sciences, when their results, by means of abstraction, prescission, induction, and other procedures, provide insights on the more general, material level. Even if cosmology is, for instance, part of metaphysics, it receives influences from the empirical results of physics (or of biology, from which Peirce takes the generalized principle of evolution).
The distinction between formal and material is thus level specific: what is material on one level is a formal bundle of possibilities for the level below; what is formal on one level is material on the level above.

For these reasons, each single step on the ladder of sciences is only partially independent in Peirce, hence also the tendency of his own investigations to zigzag between the levels. His architecture of theories thus forms a sort of phenomenological theory of aspects: the hierarchy of sciences is an architecture of more and less general aspects of the phenomena, not of completely independent domains. Finally, Peirce’s realism results in a somewhat disturbing style of thinking: many of his central concepts receive many, often highly different determinations, which has often led interpreters to assume inconsistencies or theoretical developments in Peirce where none necessarily exist. When Peirce, for instance, determines the icon as the sign possessing a similarity to its object, and elsewhere determines it as the sign by the contemplation of which it is possible to learn more about its object, these are not conflicting definitions. Peirce’s determinations of concepts are rarely definitions at all in the sense of providing necessary and sufficient conditions exhausting the phenomenon in question. His determinations should rather be seen as descriptions from different perspectives of a real (and maybe ideal) object – without these descriptions necessarily conflicting. This style of thinking can, however, be seen as motivated by metaphysical continuity. When continuous grading between concepts is the rule, definitions in terms of necessary and sufficient conditions should not be expected to be exhaustive.

The Second Trichotomy. Thought of the Day 120.0

[Figure: Peirce’s triple trichotomy]

The second trichotomy (here is the first) is probably the most well-known piece of Peirce’s semiotics: it distinguishes three possible relations between the sign and its (dynamical) object. This relation may be motivated by similarity, by actual connection, or by general habit – giving rise to the sign classes icon, index, and symbol, respectively.

According to the second trichotomy, a Sign may be termed an Icon, an Index, or a Symbol.

An Icon is a sign which refers to the Object that it denotes merely by virtue of characters of its own, and which it possesses, just the same, whether any such Object actually exists or not. It is true that unless there really is such an Object, the Icon does not act as a sign; but this has nothing to do with its character as a sign. Anything whatever, be it quality, existent individual, or law, is an Icon of anything, in so far as it is like that thing and used as a sign of it.

An Index is a sign which refers to the Object that it denotes by virtue of being really affected by that Object. It cannot, therefore, be a Qualisign, because qualities are whatever they are independently of anything else. In so far as the Index is affected by the Object, it necessarily has some Quality in common with the Object, and it is in respect to these that it refers to the Object. It does, therefore, involve a sort of Icon, although an Icon of a peculiar kind; and it is not the mere resemblance of its Object, even in these respects which makes it a sign, but it is the actual modification of it by the Object. 

A Symbol is a sign which refers to the Object that it denotes by virtue of a law, usually an association of general ideas, which operates to cause the Symbol to be interpreted as referring to that Object. It is thus itself a general type or law, that is, a Legisign. As such it acts through a Replica. Not only is it general in itself, but the Object to which it refers is of general nature. Now that which is general has its being in the instances it will determine. There must, therefore, be existent instances of what the Symbol denotes, although we must here understand by ‘existent’, existent in the possibly imaginary universe to which the Symbol refers. The Symbol will indirectly, through the association or other law, be affected by those instances; and thus the Symbol will involve a sort of Index, although an Index of a peculiar kind. It will not, however, be by any means true that the slight effect upon the Symbol of those instances accounts for the significant character of the Symbol.

The icon refers to its object solely by means of its own properties. This implies that an icon potentially refers to an indefinite class of objects, namely all those objects which have, in some respect, a relation of similarity to it. In recent semiotics it has often been remarked, by Nelson Goodman among others, that any phenomenon can be said to be like any other phenomenon in some respect, if the criterion of similarity is chosen sufficiently generally, just as the establishment of any convention immediately implies a similarity relation. If Nelson Goodman picks out two otherwise very different objects, then they are immediately similar to the extent that they now have the same relation to Nelson Goodman. Goodman and others have for this reason deemed the similarity relation insignificant – and consequently put the whole burden of semiotics on the shoulders of conventional signs only. But the counterargument against this rejection of the relevance of the icon lies close at hand. Given a tertium comparationis, a measuring stick, it is no longer possible to make anything be like anything else. This lies in Peirce’s observation that ‘It is true that unless there really is such an Object, the Icon does not act as a sign’. The icon only functions as a sign to the extent that it is, in fact, used to refer to some object – and when it does that, some criterion for similarity, a measuring stick (or, at least, a delimited bundle of possible measuring sticks), is given in and with the comparison. In the quote just given, it is of course the immediate object Peirce refers to – there is no claim that there should in fact exist such an object as the icon refers to.
Goodman and others are of course right in claiming that as ‘Anything whatever ( ) is an Icon of anything ’, the universe is pervaded by a continuum of possible similarity relations back and forth; but as soon as some phenomenon is in fact used as an icon for an object, a specific bundle of similarity relations is picked out: ‘ in so far as it is like that thing.’

Just like the qualisign, the icon is a limit category. ‘A possibility alone is an Icon purely by virtue of its quality; and its object can only be a Firstness.’ (Charles S. Peirce, The Essential Peirce: Selected Philosophical Writings). Strictly speaking, a pure icon may only refer one possible Firstness to another. The pure icon would be an identity relation between possibilities. Consequently, the icon must, as soon as it functions as a sign, be more than iconic. The icon is typically an aspect of a more complicated sign, even if very often a most important aspect, because it provides the predicative aspect of that sign. This Peirce records by his notion of ‘hypoicon’: ‘But a sign may be iconic, that is, may represent its object mainly by its similarity, no matter what its mode of being. If a substantive is wanted, an iconic representamen may be termed a hypoicon’. Hypoicons are signs which to a large extent make use of iconic means as meaning-givers: images, paintings, photos, diagrams, etc. But the iconic meaning realized in hypoicons has an immensely fundamental role in Peirce’s semiotics. As icons are the only signs that look like their objects, they are at the same time the only signs realizing meaning. Thus any higher sign, index and symbol alike, must contain, or, by association or inference terminate in, an icon. If a symbol cannot give an iconic interpretant as a result, it is empty. In that respect, Peirce’s doctrine parallels that of Husserl, where merely signitive acts require fulfillment by intuitive (‘anschauliche’) acts. This is actually Peirce’s continuation of Kant’s famous claim that intuitions without concepts are blind, while concepts without intuitions are empty.
When Peirce observes that ‘With the exception of knowledge, in the present instant, of the contents of consciousness in that instant (the existence of which knowledge is open to doubt) all our thought and knowledge is by signs’ (Letters to Lady Welby), these signs necessarily involve iconic components. Peirce has often been attacked for his tendency towards a pan-semiotism which lets all mental and physical processes take place via signs – but in the quote just given he claims, analogously to Husserl, that there must be a basic evidence anterior to the sign; just as in Husserl, this evidence before the sign must be based on a ‘metaphysics of presence’ – the ‘present instant’ provides what is not yet mediated by signs. But icons provide the connection of signs, logic and science to this foundation for Peirce’s phenomenology: the icon is the only sign providing evidence (Charles S. Peirce, The New Elements of Mathematics Vol. 4). The icon is, through its timeless similarity, apt to communicate aspects of an experience ‘in the present instant’. Thus, the typical index contains an icon (more or less elaborated, it is true): any symbol intends an iconic interpretant. Continuity is at stake in relation to the icon to the extent that the icon, while not in itself general, is the bearer of a potential generality. The infinitesimal generality is decisive for the higher sign types’ possibility to give rise to thought: the symbol thus contains a bundle of general icons defining its meaning. A special icon providing the condition of possibility for general and rigorous thought is, of course, the diagram.

The index connects the sign directly with its object via connection in space and time; as an actual sign connected to its object, the index is turned towards the past: the action which has left the index as a mark must be located in time earlier than the sign, so that the index presupposes, at least, the continuity of time and space without which an index might occur spontaneously and without any connection to a preceding action. Perhaps surprisingly, in the Peircean doctrine the index falls into two subtypes: designators vs. reagents. Reagents are the simplest – here the sign is caused by its object in one way or another. Designators, on the other hand, are more complex: the index finger as pointing to an object or the demonstrative pronoun as the subject of a proposition are prototypical examples. Here, the index presupposes an intention – the will to point out the object for some receiver. Designators, it must be argued, presuppose reagents: it is only possible to designate an object if you have already been in reagent contact (simulated or not) with it (this forming the rational kernel of causal reference theories of meaning). The closer determination of the object of an index, however, invariably involves selection on the background of continuities.

On the level of the symbol, continuity and generality play a main role – as always when approaching issues defined by Thirdness. The symbol is, in itself, a legisign, that is, it is a general object which exists only due to its actual instantiations. The symbol itself is a real and general recipe for the production of similar instantiations in the future. But apart from thus being a legisign, it is connected to its object thanks to a habit, or regularity. Sometimes, this is taken to mean ‘due to a convention’ – in an attempt to distinguish conventional as opposed to motivated sign types. This, however, rests on a misunderstanding of Peirce’s doctrine, in which the trichotomies record aspects of signs, not mutually exclusive, independent classes of signs: symbols and icons do not form opposed, autonomous sign classes; rather, the content of the symbol is constructed from indices and general icons. The habit realized by a symbol connects it, as a legisign, to an object which is also general – an object which just like the symbol itself exists in instantiations, be they real or imagined. The symbol is thus a connection between two general objects, each of them being actualized through replicas, tokens – a connection between two continua, that is:

Definition 1. Any Blank is a symbol which could not be vaguer than it is (although it may be so connected with a definite symbol as to form with it, a part of another partially definite symbol), yet which has a purpose.

Axiom 1. It is the nature of every symbol to blank in part. [ ]

Definition 2. Any Sheet would be that element of an entire symbol which is the subject of whatever definiteness it may have, and any such element of an entire symbol would be a Sheet. (‘Sketch of Dichotomic Mathematics’, The New Elements of Mathematics Vol. 4: Mathematical Philosophy)

The symbol’s generality can be described as its always having blanks, which have the character of being indefinite parts of its continuous sheet. Thus, the continuity of its blank parts is what grants its generality. The symbol determines its object according to some rule, granting that the object satisfies that rule – but leaving the object indeterminate in all other respects. It is tempting to take the typical symbol to be a word, but it should rather be taken to be the argument – the predicate and the proposition being degenerate versions of arguments with further continuous blanks inserted by erasure, so to speak, forming the third trichotomy of term, proposition, argument.

Kant and Non-Euclidean Geometries. Thought of the Day 94.0


The argument that non-Euclidean geometries contradict Kant’s doctrine on the nature of space apparently goes back to Hermann Helmholtz and was taken up by several philosophers of science such as Hans Reichenbach (The Philosophy of Space and Time), who devoted much work to this subject. In an essay written in 1870, Helmholtz argued that the axioms of geometry are not a priori synthetic judgments (in the sense given by Kant), since they can be subjected to experiments. Given that Euclidean geometry is not the only possible geometry, as was believed in Kant’s time, it should be possible to determine by means of measurements whether, for instance, the sum of the three angles of a triangle is 180 degrees or whether two straight parallel lines always keep the same distance between them. If this were not the case, then it would have been demonstrated experimentally that space is not Euclidean. Thus the possibility of verifying the axioms of geometry would prove that they are empirical and not given a priori.

Helmholtz developed his own version of a non-Euclidean geometry on the basis of what he believed to be the fundamental condition for all geometries: “the possibility of figures moving without change of form or size”; without this possibility, it would be impossible to define what a measurement is. According to Helmholtz:

the axioms of geometry are not concerned with space-relations only but also at the same time with the mechanical deportment of solidest bodies in motion.

Nevertheless, he was aware that a strict Kantian might argue that the rigidity of bodies is an a priori property, but

then we should have to maintain that the axioms of geometry are not synthetic propositions… they would merely define what qualities and deportment a body must have to be recognized as rigid.

At this point, it is worth noticing that Helmholtz’s formulation of geometry is a rudimentary version of what was later developed as the theory of Lie groups. As for the transport of rigid bodies, it is well known that rigid motion cannot be defined in the framework of the theory of relativity: since there is no absolute simultaneity of events, it is impossible to move all parts of a material body in a coordinated and simultaneous way. What is defined as the length of a body depends on the reference frame from where it is observed. Thus, it is meaningless to invoke the rigidity of bodies as the basis of a geometry that pretends to describe the real world; it is only in the mathematical realm that the rigid displacement of a figure can be defined in terms of what mathematicians call a congruence.

Arguments similar to those of Helmholtz were given by Reichenbach in his attempt to refute Kant’s doctrine on the nature of space and time. Essentially, the argument boils down to the following: Kant assumed that the axioms of geometry are given a priori and he only had classical geometry in mind; Einstein demonstrated that space is not Euclidean and that this could be verified empirically; ergo, Kant was wrong. However, Kant did not state that space must be Euclidean; instead, he argued that it is a pure form of intuition. As such, space has no physical reality of its own, and therefore it is meaningless to ascribe physical properties to it. Actually, Kant never mentioned Euclid directly in his work, but he did refer many times to the physics of Newton, which is based on classical geometry. Kant had in mind the axioms of this geometry, which is a most powerful tool of Newtonian mechanics. Actually, he did not even exclude the possibility of other geometries, as can be seen in his early speculations on the dimensionality of space.

The important point missed by Reichenbach is that Riemannian geometry is necessarily based on Euclidean geometry. More precisely, a Riemannian space must be considered as locally Euclidean in order to be able to define basic concepts such as distance and parallel transport; this is achieved by defining a flat tangent space at every point, and then extending all properties of this flat space to the globally curved space (Luther Pfahler Eisenhart, Riemannian Geometry). To begin with, the structure of a Riemannian space is given by its metric tensor g_{μν}, from which the (differential) length is defined as ds² = g_{μν} dx^μ dx^ν; but this is nothing less than a generalization of the usual Pythagorean theorem in Euclidean space. As for the fundamental concept of parallel transport, it is taken directly from its analogue in Euclidean space: it refers to the transport of abstract (not material, as Helmholtz believed) figures in such a space. Thus Riemann’s geometry cannot be free of synthetic a priori propositions, because it is entirely based upon concepts such as length and congruence taken from Euclid. We may conclude that Euclid’s geometry is the condition of possibility for a more general geometry, such as Riemann’s, simply because it is the natural geometry adapted to our understanding; Kant would say that it is our form of grasping space intuitively. The possibility of constructing abstract spaces does not refute Kant’s thesis; on the contrary, it reinforces it.
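The sense in which the line element generalizes Pythagoras can be made concrete with a small numerical sketch (illustrative only, not part of the original argument): with the identity matrix as metric, ds² reduces to the Pythagorean sum of squares, while other metrics, here the familiar polar-coordinate metric on the plane, yield other quadratic forms from the very same formula.

```python
import math

# Toy illustration: the Riemannian line element ds^2 = g_{mu nu} dx^mu dx^nu,
# computed as a double sum over the components of the metric tensor g
# and the coordinate displacements dx.

def line_element(g, dx):
    """Compute ds^2 = sum_{i,j} g[i][j] * dx[i] * dx[j]."""
    n = len(dx)
    return sum(g[i][j] * dx[i] * dx[j] for i in range(n) for j in range(n))

# Euclidean metric in two dimensions: g is the identity matrix,
# so ds^2 = dx^2 + dy^2 -- the Pythagorean theorem.
g_euclid = [[1.0, 0.0], [0.0, 1.0]]
ds2 = line_element(g_euclid, [3.0, 4.0])
assert math.isclose(ds2, 25.0)  # 3^2 + 4^2 = 5^2

# Polar coordinates (r, theta) on the same flat plane: g = diag(1, r^2),
# so ds^2 = dr^2 + r^2 dtheta^2 -- same formula, different metric.
r = 2.0
g_polar = [[1.0, 0.0], [0.0, r * r]]
ds2_polar = line_element(g_polar, [0.0, 0.1])  # purely angular displacement
assert math.isclose(ds2_polar, 0.04)  # (r * dtheta)^2 = (0.2)^2
```

The Euclidean case is recovered as the special case of a constant identity metric, which is exactly the point at issue: the general formula presupposes the Euclidean quadratic form in each tangent space.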

Something Out of Almost Nothing. Drunken Risibility.

Kant’s first antinomy makes the error of the excluded third option, i.e. it is not impossible that the universe could have both a beginning and an eternal past. If some kind of metaphysical realism is true, including an observer-independent and relational time, then a solution of the antinomy is conceivable. It is based on the distinction between a microscopic and a macroscopic time scale. Only the latter is characterized by an asymmetry of nature under a reversal of time, i.e. the property of having a global (coarse-grained) evolution – an arrow of time – or many arrows, if they are independent from each other. Thus, the macroscopic scale is by definition temporally directed – otherwise it would not exist.

On the microscopic scale, however, only local, statistically distributed events without dynamical trends, i.e. a global time-evolution or an increase of entropy density, exist. This is the case if one or both of the following conditions are satisfied: First, if the system is in thermodynamic equilibrium (e.g. there is degeneracy). And/or second, if the system is in an extremely simple ground state or meta-stable state. (Meta-stable states have a local, but not a global minimum in their potential landscape and, hence, they can decay; ground states might also change due to quantum uncertainty, i.e. due to local tunneling events.) Some still speculative theories of quantum gravity permit the assumption of such a global, macroscopically time-less ground state (e.g. quantum or string vacuum, spin networks, twistors). Due to accidental fluctuations, which exceed a certain threshold value, universes can emerge out of that state. Due to some also speculative physical mechanism (like cosmic inflation) they acquire – and, thus, are characterized by – directed non-equilibrium dynamics, specific initial conditions, and, hence, an arrow of time.

It is a matter of debate whether such an arrow of time is

1) irreducible, i.e. an essential property of time,

2) governed by some unknown fundamental and not only phenomenological law,

3) the effect of specific initial conditions or

4) of consciousness (if time is in some sense subjective), or

5) even an illusion.

Many physicists favour special initial conditions, though there is no consensus about their nature and form. But in the context at issue it is sufficient to note that such a macroscopic global time-direction is the main ingredient of Kant’s first antinomy, for the question is whether this arrow has a beginning or not.

If time’s arrow were inevitably subjective, ontologically irreducible, fundamental and not merely a kind of illusion – if, for instance, some form of metaphysical idealism were true – then physical cosmology about a time before time would be mistaken or quite irrelevant. However, if we do not want to neglect an observer-independent physical reality and adopt solipsism or other forms of idealism – and there are strong arguments in favor of some form of metaphysical realism – Kant’s rejection seems hasty. Furthermore, if a Kantian is not willing to give up some kind of metaphysical realism, namely the belief in a “Ding an sich“, a thing in itself – and some philosophers, the German idealists for instance, actually insisted that this is superfluous – he has to admit that time is a subjective illusion or that there is a dualism between an objective timeless world and a subjective arrow of time. Contrary to Kant’s thoughts, there are reasons to believe that it is possible, at least conceptually, that time both has a beginning – in the macroscopic sense of an arrow – and is eternal – in the microscopic sense of a steady state with statistical fluctuations.

Is there also some physical support for this proposal?

Surprisingly, quantum cosmology offers a possibility that the arrow has a beginning and that it nevertheless emerged out of an eternal state without any macroscopic time-direction. (Note that there are some parallels to a theistic conception of the creation of the world here, e.g. in the Augustinian tradition which claims that time together with the universe emerged out of a time-less God; but such a cosmological argument is quite controversial, especially in a modern form.) So this possible overcoming of the first antinomy is not only a philosophical conceivability but is already motivated by modern physics. At least some scenarios of quantum cosmology, quantum geometry/loop quantum gravity, and string cosmology can be interpreted as examples for such a local beginning of our macroscopic time out of a state with microscopic time, but with an eternal, global macroscopic timelessness.

To put this in a more general but abstract framework, and to get a sketchy illustration, consider the figure.

[Figure: potential landscape of a field, with a metastable “false vacuum” depression and a deeper “true vacuum” ground state]

Physical dynamics can be described using “potential landscapes” of fields. For simplicity, here only the variable potential (or energy density) of a single field is shown. To illustrate the dynamics, one can imagine a ball moving along the potential landscape. Depressions stand for states which are stable, at least temporarily. Due to quantum effects, the ball can “jump over” or “tunnel through” the hills. The deepest depression represents the ground state.

In the common theories the state of the universe – the product of all its matter and energy fields, roughly speaking – evolves out of a metastable “false vacuum” into a “true vacuum”, which is a state of lower energy (potential). There might exist many (perhaps even infinitely many) true vacua which would correspond to universes with different constants or laws of nature. It is more plausible to start with a ground state which is the minimum of what physically can exist. According to this view an absolute nothingness is impossible. There is something rather than nothing because something cannot come out of absolutely nothing, and something does obviously exist. Thus, something can only change, and this change might be described with physical laws. Hence, the ground state is almost “nothing”, but can become thoroughly “something”. Possibly, our universe – and, independently of this, many others, probably most of them having different physical properties – arose from such a phase transition out of a quasi-atemporal quantum vacuum (and, perhaps, got disconnected completely). Tunneling back might be prevented by the exponential expansion of this brand-new space. Because of this cosmic inflation the universe not only became gigantic but simultaneously the potential hill broadened enormously and became (almost) impassable. This preserves the universe from relapsing into its non-existence. On the other hand, if there is no physical mechanism that prevents the tunneling back, or at least makes it very improbable, there is still another option: if infinitely many universes originated, some of them could be long-lived for statistical reasons alone. But this possibility is less predictive and therefore an inferior kind of explanation for not tunneling back.
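The false-vacuum/true-vacuum picture can be sketched numerically. The following toy example (purely illustrative; the quartic potential and its coefficients are assumptions, not anything from the text) scans a tilted double-well “potential landscape” and identifies its two depressions: the higher-lying local minimum plays the role of the metastable false vacuum, the global minimum that of the true vacuum.

```python
# Toy sketch of a one-dimensional potential landscape with a metastable
# false vacuum (local minimum) and a true vacuum (global minimum).
# The potential V(phi) = phi^4 - 2*phi^2 + 0.5*phi is an assumed example:
# the linear tilt breaks the symmetry of the double well.

def V(phi):
    return phi**4 - 2 * phi**2 + 0.5 * phi

# Scan the landscape on a fine grid and pick out the strict local minima.
grid = [i / 1000 for i in range(-2000, 2001)]
values = [V(x) for x in grid]
minima = [
    (grid[i], values[i])
    for i in range(1, len(grid) - 1)
    if values[i] < values[i - 1] and values[i] < values[i + 1]
]

# The higher-lying depression is the metastable false vacuum; a "ball"
# sitting there can only reach the deeper true vacuum by crossing or
# tunneling through the intervening hill.
false_vac, true_vac = sorted(minima, key=lambda m: m[1], reverse=True)
assert len(minima) == 2
assert false_vac[1] > true_vac[1]          # false vacuum has higher energy
assert false_vac[0] > 0 > true_vac[0]      # minima on opposite sides of the hill
```

The decay described in the text corresponds to the transition from the first minimum to the second; inflation’s broadening of the hill would amount to widening the barrier between them.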

Another crucial question remains even if universes could come into being out of fluctuations of (or in) a primitive substrate, i.e. some patterns of superposition of fields with local overdensities of energy: Is spacetime part of this primordial stuff, or is it also a product of it? Or, more specifically: Does such a primordial quantum vacuum have a semi-classical spacetime structure, or is it made up of more fundamental entities? Unique-universe accounts, especially the modified Eddington models – the soft bang/emergent universe – presuppose some kind of semi-classical spacetime. The same is true for some multiverse accounts describing our universe, where Minkowski space, a tiny closed, finite space or the infinite de Sitter space is assumed. The same goes for string-theory-inspired models like the pre-big bang account, because string and M-theory are still formulated in a background-dependent way, i.e. require the existence of a semi-classical spacetime. A different approach is the assumption of “building blocks” of spacetime, a kind of pregeometry; the twistor approach of Roger Penrose and the cellular automata approach of Stephen Wolfram also belong to this line of reasoning. Its most elaborated account is quantum geometry (loop quantum gravity). Here, “atoms of space and time” underlie everything.

Though the question whether semiclassical spacetime is fundamental or not is crucial, an answer might nevertheless be neutral with respect to the micro-/macrotime distinction. In both kinds of quantum vacuum accounts the macroscopic time scale is not present. And the microscopic time scale in some respect has to be there, because fluctuations represent change (or are manifestations of change). This change, reversibly and relationally conceived, does not occur “within” microtime but constitutes it. Out of a total stasis nothing new and different can emerge, because an uncertainty principle – fundamental for all quantum fluctuations – would not be realized. In an almost, but not completely, static quantum vacuum, however, macroscopically nothing changes either, but there are microscopic fluctuations.

The pseudo-beginning of our universe (and probably infinitely many others) is a viable alternative both to initial and past-eternal cosmologies and philosophically very significant. Note that this kind of solution bears some resemblance to a possibility of avoiding the spatial part of Kant’s first antinomy, i.e. his claimed proof of both an infinite space without limits and a finite, limited space: The theory of general relativity describes what was considered logically inconceivable before, namely that there could be universes with finite, but unlimited space, i.e. this part of the antinomy also makes the error of the excluded third option. This offers a middle course between the Scylla of a mysterious, secularized creatio ex nihilo, and the Charybdis of an equally inexplicable eternity of the world.

In this context it is also possible to defuse some explanatory problems of the origin of “something” (or “everything”) out of “nothing” as well as a – merely assumable, but never provable – eternal cosmos or even an infinitely often recurring universe. But that does not offer a final explanation or a sufficient reason, and it cannot eliminate the ultimate contingency of the world.

Transcendentally Realist Modality. Thought of the Day 78.1


Let us start at the beginning! Though the fact is not mentioned in Genesis, the first thing God said on the first day of creation was ‘Let there be necessity’. And there was necessity. And God saw necessity, that it was good. And God divided necessity from contingency. And only then did He say ‘Let there be light’. Several days later, Adam and Eve were introducing names for the animals into their language, and during a break between the fish and the birds, introduced also into their language modal auxiliary verbs, or devices that would be translated into English using modal auxiliary verbs, and rules for their use, rules according to which it can be said of some things that they ‘could’ have been otherwise, and of other things that they ‘could not’. In so doing they were merely putting labels on a distinction that was no more their creation than were the fishes of the sea or the beasts of the field or the birds of the air.

And here is the rival view. The failure of Genesis to mention any command ‘Let there be necessity’ is to be explained simply by the fact that no such command was issued. We have no reason to suppose that the language in which God speaks to the angels contains modal auxiliary verbs or any equivalent device. Sometime after the Tower of Babel some tribes found that their purposes would be better served by introducing into their language certain modal auxiliary verbs, and fixing certain rules for their use. When we say that this is necessary while that is contingent, we are applying such rules, rules that are products of human, not divine intelligence.

This theological language would have been the natural way for seventeenth- or eighteenth-century philosophers, who nearly all were or professed to be theists or deists, to discuss the matter. For many today, such language cannot be literally accepted; and even if it is taken only metaphorically, it is at least better than speaking figuratively and framing the question as that of whether the ‘origin’ of necessity lies outside us or within us. So let us drop the theological language, and try again.

Well, here is the first view: Ultimately, reality as it is in itself, independently of our attempts to conceptualize and comprehend it, contains both facts about what is, and superfacts about what not only is but had to have been. Our modal usages, for instance the distinction between the simple indicative ‘is’ and the construction ‘had to have been’, simply reflect this fundamental distinction in the world, a distinction that is, and from the beginning always was, there, independently of us and our concerns.

And here is the second view: We have reasons, connected with our various purposes in life, to use certain words, including ‘would’ and ‘might’, in certain ways, and thereby to make certain distinctions. The distinction between those things in the world that would have been no matter what and those that might have failed to be, if only circumstances had been different, is a projection of the distinctions made in our language. Our saying there were necessities there before us is a retroactive application to the pre-human world of a way of speaking invented and created by human beings in order to solve human problems.

Well, that was the second try. With it, even if one has gotten rid of theology, one has unfortunately not gotten rid of all metaphors. The key remaining metaphor is the optical one: reflection vs. projection. Perhaps the attempt should be to get rid of all metaphors, and to admit that the two views are not so much philosophical theses or doctrines as ‘metaphilosophical’ attitudes or orientations: a stance that finds the ‘reflection’ metaphor congenial, and a stance that finds the ‘projection’ metaphor congenial. So, let’s try a third time to describe the distinction between the two outlooks in literal terms, avoiding optics as well as theology.

To begin with, both sides grant that there is a correspondence or parallelism between two items. On the one hand, there are facts about the contrast between what is necessary and what is contingent. On the other hand, there are facts about our usage of modal auxiliary verbs such as ‘would’ and ‘might’, and these include, for instance, the fact that we have no use for questions of the form ‘Would 29 still have been a prime number if such-and-such?’ but may have use for questions of the form ‘Would 29 still have been the number of years it takes for Saturn to orbit the sun if such-and-such?’ The difference between the two sides concerns the order of explanation of the relation between the two parallel ranges of facts.

And what is meant by that? Well, both sides grant that ‘29 is necessarily prime’, for instance, is a proper thing to say, but they differ in the explanation why it is a proper thing to say. Asked why, the first side will say that ultimately it is simply because 29 is necessarily prime. That makes the proposition that 29 is necessarily prime true, and since the sentence ‘29 is necessarily prime’ expresses that proposition, it is true also, and a proper thing to say. The second side will say instead that ‘29 is necessarily prime’ is a proper thing to say because there is a rule of our language according to which it is a proper thing to say. This formulation of the difference between the two sides gets rid of metaphor, though it does put an awful lot of weight on the perhaps fragile ‘why’ and ‘because’.
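One way to see what makes ‘29 is necessarily prime’ a natural test case is that its truth is settled by pure calculation, with no empirical input whatsoever. As an illustration only (assuming a Lean 4 environment with Mathlib, which supplies `Nat.Prime` and the `norm_num` tactic; this is not part of the original argument), the claim is mechanically checkable:

```lean
-- Illustrative sketch, assuming Lean 4 with Mathlib.
-- "29 is prime" is a theorem derivable by mere computation; no observation
-- of the world is consulted -- one gloss on why questions of the form
-- "would 29 still have been prime if such-and-such?" get no grip.
import Mathlib.Tactic.NormNum.Prime

example : Nat.Prime 29 := by norm_num
```

Both sides can accept the formal derivation; what they dispute is whether its propriety ultimately answers to a superfact about 29 or to a rule of our (mathematical) language.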

Note that the adherents of the second view need not deny that 29 is necessarily prime. On the contrary, having said that the sentence ‘29 is necessarily prime’ is, per rules of our language, a proper thing to say, they will go on to say it. Nor need the adherents of the first view deny that recognition of the propriety of saying ‘29 is necessarily prime’ is enshrined in a rule of our language. The adherents of the first view need not even deny that proximately, as individuals, we learn that ‘29 is necessarily prime’ is a proper thing to say by picking up the pertinent rule in the course of learning our language. But the adherents of the first view will maintain that the rule itself is only proper because collectively, as the creators of the language, we or our remote ancestors have, in setting up the rule, managed to achieve correspondence with a pre-existing fact, or rather, a pre-existing superfact, the superfact that 29 is necessarily prime. The difference between the two views lies in the order of explanation.

As for labels for the two sides, or ‘metaphilosophical’ stances, rather than inventing new ones, we may simply take two of the most overworked terms in the philosophical lexicon and give them one more job to do, calling the reflection view ‘realism’ about modality, and the projection view ‘pragmatism’. That at least will be easy to remember, since ‘realism’ and ‘reflection’ begin with the same first two letters, as do ‘pragmatism’ and ‘projection’. The realist/pragmatist distinction has bearing across a range of issues and problems, and above all it has bearing on the meta-issue of which issues are significant. For the two sides will, or ought to, recognize quite different questions as the central unsolved problems in the theory of modality.

For those on the realist side, the old problem of the ultimate source of our knowledge of modality remains, even if it is granted that the proximate source lies in knowledge of linguistic conventions. For knowledge of linguistic conventions constitutes knowledge of a reality independent of us only insofar as our linguistic conventions reflect, at least to some degree, such an ultimate reality. So for the realist the problem remains of explaining how such degree of correspondence as there is between distinctions in language and distinctions in the world comes about. If the distinction in the world is something primary and independent, and not a mere projection of the distinction in language, then how the distinction in language comes to be even imperfectly aligned with the distinction in the world remains to be explained. For it cannot be said that we have faculties responsive to modal facts independent of us – not in any sense of ‘responsive’ implying that if the facts had been different, then our language would have been different, since modal facts couldn’t have been different. What then is the explanation? This is the problem of the epistemology of modality as it confronts the realist, and addressing it is or ought to be at the top of the realist agenda.

As for the pragmatist side, a chief argument of thinkers from Kant to Ayer and Strawson and beyond for their anti-realist stance has been precisely that if the distinction we perceive in reality is taken to be merely a projection of a distinction created by ourselves, then the epistemological problem dissolves. That seems more like a reason for hoping the Kantian or Ayerite or Strawsonian view is the right one than for believing that it is; but in any case, even supposing the pragmatist view is the right one, and the problems of the epistemology of modality are dissolved, the pragmatist side still has an important unanswered question of its own to address. The pragmatist account begins by saying that we have certain reasons, connected with our various purposes in life, to use certain words, including ‘would’ and ‘might’, in certain ways, and thereby to make certain distinctions. What the pragmatist owes us is an account of what these purposes are, and how the rules of our language help us to achieve them. Addressing that issue is, or ought to be, at the top of the pragmatists’ to-do list.

While the positivist Ayer dismisses all metaphysics, the ordinary-language philosopher Strawson distinguishes good metaphysics, which he calls ‘descriptive’, from bad metaphysics, which he calls ‘revisionary’, but which might better be called ‘transcendental’ (without intending any specifically Kantian connotations). Descriptive metaphysics aims to provide an explicit account of our ‘conceptual scheme’, of the most general categories of commonsense thought, as embodied in ordinary language. Transcendental metaphysics aims to get beyond or behind all merely human conceptual schemes and representations to ultimate reality as it is in itself, an aim that Ayer and Strawson agree is infeasible and probably unintelligible. The descriptive/transcendental divide in metaphysics is a paradigmatically ‘metaphilosophical’ issue, one about what philosophy is about. Realists about modality are paradigmatic transcendental metaphysicians. Pragmatists must in the first instance be descriptive metaphysicians, since we must first understand, much better than we currently do, how our modal distinctions work and what work they do for us, before proposing any revisions or reforms. And so the difference between realists and pragmatists goes beyond the question of what issue should come first on the philosopher’s agenda, being as it is an issue about what philosophical agendas are about.

The Mystery of Modality. Thought of the Day 78.0

sixdimensionquantificationalmodallogic.01

The ‘metaphysical’ notion of what would have been no matter what (the necessary) was conflated with the epistemological notion of what independently of sense-experience can be known to be (the a priori), which in turn was identified with the semantical notion of what is true by virtue of meaning (the analytic), which in turn was reduced to a mere product of human convention. And what motivated these reductions?

The mystery of modality, for early modern philosophers, was how we can have any knowledge of it. Here is how the question arises. We think that when things are some way, in some cases they could have been otherwise, and in other cases they couldn’t. That is the modal distinction between the contingent and the necessary.

How do we know that the examples are examples of that of which they are supposed to be examples? And why should this question be considered a difficult problem, a kind of mystery? Well, that is because, on the one hand, when we ask about most other items of purported knowledge how it is we can know them, sense-experience seems to be the source, or anyhow the chief source of our knowledge, but, on the other hand, sense-experience seems able only to provide knowledge about what is or isn’t, not what could have been or couldn’t have been. How do we bridge the gap between ‘is’ and ‘could’? The classic statement of the problem was given by Immanuel Kant, in the introduction to the second or B edition of his first critique, The Critique of Pure Reason: ‘Experience teaches us that a thing is so, but not that it cannot be otherwise.’

Note that this formulation allows that experience can teach us that a necessary truth is true; what it is not supposed to be able to teach is that it is necessary. The problem becomes more vivid if one adopts the language that was once used by Leibniz, and much later re-popularized by Saul Kripke in his famous work on model theory for formal modal systems, the usage according to which the necessary is that which is ‘true in all possible worlds’. In these terms the problem is that the senses only show us this world, the world we live in, the actual world as it is called, whereas when we claim to know about what could or couldn’t have been, we are claiming knowledge of what is going on in some or all other worlds. For that kind of knowledge, it seems, we would need a kind of sixth sense, or extrasensory perception, or nonperceptual mode of apprehension, to see beyond the world in which we live to these various other worlds.
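The Leibniz–Kripke picture just sketched, on which the necessary is what is ‘true in all possible worlds’ and the possible what is true in some, can be rendered as a toy computation. Everything below (the dictionary representation of a world, the `raining` predicate, the function names) is my own illustrative choice, a minimal sketch rather than any standard modal-logic implementation:

```python
def necessarily(p, worlds):
    """'Necessarily p' holds iff p is true in every possible world."""
    return all(p(w) for w in worlds)

def possibly(p, worlds):
    """'Possibly p' holds iff p is true in at least one possible world."""
    return any(p(w) for w in worlds)

# Toy model: each world settles one matter of fact.
worlds = [{"raining": True}, {"raining": False}]
raining = lambda w: w["raining"]

print(necessarily(raining, worlds))  # False: true in this world, not in all
print(possibly(raining, worlds))     # True: true in at least one world
```

The epistemological point survives the formalization: the senses, at best, tell us the truth-value of `raining` at the one actual world, while the quantification ranges over all the others.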

Kant concludes that our knowledge of necessity must be what he calls a priori knowledge or knowledge that is ‘prior to’ or before or independent of experience, rather than what he calls a posteriori knowledge or knowledge that is ‘posterior to’ or after or dependent on experience. And so the problem of the origin of our knowledge of necessity becomes for Kant the problem of the origin of our a priori knowledge.

Well, that is not quite the right way to describe Kant’s position, since there is one special class of cases where Kant thinks it isn’t really so hard to understand how we can have a priori knowledge. He doesn’t think all of our a priori knowledge is mysterious, but only most of it. He distinguishes what he calls analytic from what he calls synthetic judgments, and holds that a priori knowledge of the former is unproblematic, since it is not really knowledge of external objects, but only knowledge of the content of our own concepts, a form of self-knowledge.

We can generate any number of examples of analytic truths by the following three-step process. First, take a simple logical truth of the form ‘Anything that is both an A and a B is a B’, for instance, ‘Anyone who is both a man and unmarried is unmarried’. Second, find a synonym C for the phrase ‘thing that is both an A and a B’, for instance, ‘bachelor’ for ‘one who is both a man and unmarried’. Third, substitute the shorter synonym for the longer phrase in the original logical truth to get the truth ‘Any C is a B’, or in our example, the truth ‘Any bachelor is unmarried’. Our knowledge of such a truth seems unproblematic because it seems to reduce to our knowledge of the meanings of our own words.
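The three-step recipe is mechanical enough to mimic with string substitution, which brings out why such knowledge seems to reduce to knowledge of the meanings of words. The sketch below is purely illustrative; the sentence strings and variable names are my own:

```python
# Step 1: a logical truth of the form 'Anything that is both an A and a B is a B'.
logical_truth = "Anything that is both a man and unmarried is unmarried"

# Step 2: a synonym for the longer phrase ('bachelor' for
# 'thing that is both a man and unmarried').
longer = "thing that is both a man and unmarried"
shorter = "bachelor"

# Step 3: substitute the shorter synonym for the longer phrase.
analytic_truth = logical_truth.replace("Any" + longer, "Any " + shorter)
print(analytic_truth)  # Any bachelor is unmarried
```

The only extra-logical input is the synonymy recorded in step 2; everything else is form, which is exactly the sense in which the resulting truth is supposed to be knowable through meaning alone.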

So the problem for Kant is not exactly how knowledge a priori is possible, but more precisely how synthetic knowledge a priori is possible. Kant thought we do have examples of such knowledge. Arithmetic, according to Kant, was supposed to be synthetic a priori, and geometry, too – all of pure mathematics. In his Prolegomena to Any Future Metaphysics, Kant listed ‘How is pure mathematics possible?’ as the first question for metaphysics, for the branch of philosophy concerned with space, time, substance, cause, and other grand general concepts – including modality.

Kant offered an elaborate explanation of how synthetic a priori knowledge is supposed to be possible, an explanation reducing it to a form of self-knowledge, but later philosophers questioned whether there really were any examples of the synthetic a priori. Geometry, so far as it is about the physical space in which we live and move – and that was the original conception, and the one still prevailing in Kant’s day – came to be seen as, not synthetic a priori, but rather a posteriori. The mathematician Carl Friedrich Gauß had already come to suspect that geometry is a posteriori, like the rest of physics. Since the time of Einstein in the early twentieth century the a posteriori character of physical geometry has been the received view (whence the need for border-crossing from mathematics into physics if one is to pursue the original aim of geometry).

As for arithmetic, the logician Gottlob Frege in the late nineteenth century claimed that it was not synthetic a priori, but analytic – of the same status as ‘Any bachelor is unmarried’, except that to obtain something like ‘29 is a prime number’ one needs to substitute synonyms in a logical truth of a form much more complicated than ‘Anything that is both an A and a B is a B’. This view was subsequently adopted by many philosophers in the analytic tradition of which Frege was a forerunner, whether or not they immersed themselves in the details of Frege’s program for the reduction of arithmetic to logic.

Once Kant’s synthetic a priori has been rejected, the question of how we have knowledge of necessity reduces to the question of how we have knowledge of analyticity, which in turn resolves into a pair of questions: On the one hand, how do we have knowledge of synonymy, which is to say, how do we have knowledge of meaning? On the other hand, how do we have knowledge of logical truths? As to the first question, presumably we acquire knowledge, explicit or implicit, conscious or unconscious, of meaning as we learn to speak; by the time we are able to ask whether this is a synonym of that, we have the answer. But what about knowledge of logic? That question didn’t loom large in Kant’s day, when only a very rudimentary logic existed, but after Frege vastly expanded the realm of logic – only by doing so could he find any prospect of reducing arithmetic to logic – the question loomed larger.

Many philosophers, however, convinced themselves that knowledge of logic also reduces to knowledge of meaning, namely, of the meanings of logical particles, words like ‘not’ and ‘and’ and ‘or’ and ‘all’ and ‘some’. To be sure, there are infinitely many logical truths, in Frege’s expanded logic. But they all follow from or are generated by a finite list of logical rules, and philosophers were tempted to identify knowledge of the meanings of logical particles with knowledge of rules for using them: Knowing the meaning of ‘or’, for instance, would be knowing that ‘A or B’ follows from A and follows from B, and that anything that follows both from A and from B follows from ‘A or B’. So in the end, knowledge of necessity reduces to conscious or unconscious knowledge of explicit or implicit semantical rules or linguistic conventions or whatever.
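The rules for ‘or’ just described are the familiar introduction and elimination rules for disjunction, and in a proof assistant they can be stated almost word for word. A sketch in Lean 4, where `Or.inl`, `Or.inr`, and `Or.elim` are the standard names for these rules:

```lean
-- Introduction: 'A or B' follows from A, and follows from B.
example (A B : Prop) (a : A) : A ∨ B := Or.inl a
example (A B : Prop) (b : B) : A ∨ B := Or.inr b

-- Elimination: anything that follows both from A and from B
-- follows from 'A or B'.
example (A B C : Prop) (h : A ∨ B) (f : A → C) (g : B → C) : C :=
  Or.elim h f g
```

On the view under discussion, mastering these three rules just is knowing what ‘or’ means; the rules exhaust the particle’s contribution to the truths it figures in.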

Such is the sort of picture that had become the received wisdom in philosophy departments in the English-speaking world by the middle decades of the last century. For instance, A. J. Ayer, the notorious logical positivist, and P. F. Strawson, the notorious ordinary-language philosopher, disagreed with each other across a whole range of issues, and for many mid-century analytic philosophers such disagreements were considered the main issues in philosophy (though some observers would speak of the ‘narcissism of small differences’ here). And people like Ayer and Strawson in the 1920s through 1960s would sometimes go on to speak as if linguistic convention were the source not only of our knowledge of modality, but of modality itself, and go on further to speak of the source of language lying in ourselves. Individually, as children growing up in a linguistic community, or foreigners seeking to enter one, we must consciously or unconsciously learn the explicit or implicit rules of the communal language as something with a source outside us to which we must conform. But by contrast, collectively, as a speech community, we do not so much learn as create the language with its rules. And so if the origin of modality, of necessity and its distinction from contingency, lies in language, it therefore lies in a creation of ours, and so in us. ‘We, the makers and users of language’ are the ground and source and origin of necessity. Well, this is not a literal quotation from any one philosophical writer of the last century, but a pastiche of paraphrases of several.

Mathematical Reductionism: A Case via C. S. Peirce’s Hypothetical Realism.

mathematical-beauty

During the 20th century, the following epistemology of mathematics was predominant: a sufficient condition for the possibility of the cognition of objects is that these objects can be reduced to set theory. The conditions for the possibility of the cognition of the objects of set theory (the sets), in turn, can be given in various manners; in any event, the objects reduced to sets do not need an additional epistemological discussion – they “are” sets. Hence, such an epistemology relies ultimately on ontology. Frege conceived the axioms as descriptions of how we actually manipulate extensions of concepts in our thinking (and in this sense as inevitable and intuitive “laws of thought”). Hilbert admitted the use of intuition exclusively in metamathematics where the consistency proof is to be done (by which the appropriateness of the axioms would be established); Bourbaki takes the axioms as mere hypotheses. Hence, Bourbaki’s concept of justification is the weakest of the three: “it works as long as we encounter no contradiction”; nevertheless, it is still epistemology, because from this hypothetical-deductive point of view, one insists that at least a proof of relative consistency (i.e., a proof that the hypotheses are consistent with the frequently tested and approved framework of set theory) should be available.

Doing mathematics, one tries to give proofs for propositions, i.e., to deduce the propositions logically from other propositions (premisses). Now, in the reductionist perspective, a proof of a mathematical proposition yields an insight into the truth of the proposition, if the premisses are already established (if one has already an insight into their truth); this can be done by giving in turn proofs for them (in which new premisses will occur which ask again for an insight into their truth), or by agreeing to put them at the beginning (to consider them as axioms or postulates). The philosopher tries to understand how the decision about what propositions to take as axioms is arrived at, because he or she is dissatisfied with the reductionist claim that it is on these axioms that the insight into the truth of the deduced propositions rests. Actually, this epistemology might contain a shortcoming, since Poincaré (and Wittgenstein) stressed that to have a proof of a proposition is by no means the same as to have an insight into its truth.

Attempts to disclose the ontology of mathematical objects reveal the following tendency in the epistemology of mathematics: Mathematics is seen as suffering from a lack of ontological “determinateness”, namely that this science (contrary to many others) does not concern material data, so that the concept of material truth is not available (especially in the case of the infinite). This tendency is embarrassing since, on the other hand, mathematical cognition is very often presented as cognition of the “greatest possible certainty” precisely because it seems not to be bound to material evidence, let alone experimental check.

The technical apparatus developed by the reductionist and set-theoretical approach nowadays serves other purposes, partly for the reason that tacit beliefs about sets were challenged; the explanations of the science which it provides are considered irrelevant by the practitioners of this science. There is doubt that the above-mentioned sufficient condition is also necessary; it is not even accepted throughout as a sufficient one. But what happens if some objects, as in the case of category theory, do not fulfill the condition? It seems that the reductionist approach, so to say, has been undocked from the historical development of the discipline in several respects; an alternative is required.

Anterior to Peirce, epistemology was dominated by the idea of a grasp of objects; since Descartes, intuition was considered throughout as a particular, innate capacity of cognition (even if idealists thought that it concerns the general, and empiricists that it concerns the particular). The task of this particular capacity was the foundation of epistemology; already with Aristotle’s first premisses of the syllogism, the aim was to go back to something first. In this traditional approach, it is by the ontology of the objects that one hopes to answer the fundamental question concerning the conditions for the possibility of the cognition of these objects. One hopes that there are simple “basic objects” to which the more complex objects can be reduced and whose cognition is possible by common sense – be this an innate or otherwise distinguished capacity of cognition common to all human beings. Here, epistemology is “wrapped up” in (or rests on) ontology; to do epistemology one has to do ontology first.

Peirce shares Kant’s opinion according to which the object depends on the subject; however, he does not agree that reason is the crucial means of cognition to be criticised. In his paper “Questions concerning certain faculties claimed for man”, he points out the basic assumption of pragmatist philosophy: every cognition is semiotically mediated. He says that there is no immediate cognition (a cognition which “refers immediately to its object”), but that every cognition “has been determined by a previous cognition” of the same object. Correspondingly, Peirce replaces critique of reason by critique of signs. He thinks that Kant’s distinction between the world of things per se (Dinge an sich) and the world of apparition (Erscheinungswelt) is not fruitful; he rather distinguishes the world of the subject and the world of the object, connected by signs; his position consequently is a “hypothetical realism” in which all cognitions are only valid with reservations. This position does not negate (nor assert) that the object per se (with the semiotical mediation stripped off) exists, since such assertions of “pure” existence are seen as necessarily hypothetical (that means, not withstanding philosophical criticism).

By his basic assumption, Peirce was led to reveal a problem concerning the subject matter of epistemology, since this assumption means in particular that there is no intuitive cognition in the classical sense (which is synonymous to “immediate”). Hence, one could no longer consider cognitions as objects; there is no intuitive cognition of an intuitive cognition. Intuition can be no more than a relation. “All the cognitive faculties we know of are relative, and consequently their products are relations”. According to this new point of view, intuition cannot any longer serve to found epistemology, in departure from the former reductionist attitude. A central argument of Peirce against reductionism or, as he puts it,

the reply to the argument that there must be a first is as follows: In retracing our way from our conclusions to premisses, or from determined cognitions to those which determine them, we finally reach, in all cases, a point beyond which the consciousness in the determined cognition is more lively than in the cognition which determines it.

Peirce gives some examples derived from physiological observations about perception, like the fact that the third dimension of space is inferred, and the blind spot of the retina. In this situation, the process of reduction loses its legitimacy since it no longer fulfills the function of cognition justification. At such a place, something happens which I would like to call an “exchange of levels”: the process of reduction is interrupted in that the things exchange the roles performed in the determination of a cognition: what was originally considered as determining is now determined by what was originally considered as asking for determination.

The idea that contents of cognition are necessarily provisional has an effect on the very concept of conditions for the possibility of cognitions. It seems that one can infer from Peirce’s words that what vouches for a cognition is not necessarily the cognition which determines it but the liveliness of our consciousness in the cognition. Here, “to vouch for a cognition” means no longer what it meant before (which was much the same as “to determine a cognition”), but it still means that the cognition is (provisionally) reliable. This conception of the liveliness of our consciousness roughly might be seen as a substitute for the capacity of intuition in Peirce’s epistemology – but only roughly, since it has a different coverage.