Husserl’s Flip-Flop on Arithmetic Axiomatics. Thought of the Day 118.0


Husserl’s position in his Philosophy of Arithmetic (Psychological and Logical Investigations with Supplementary Texts) was resolutely anti-axiomatic. He attacked those who fell into remote, artificial constructions which, with the intent of building the elementary arithmetic concepts out of their ultimate definitional properties, interpret and change their meaning so much that totally strange, practically and scientifically useless conceptual formations finally result. Especially targeted was Frege’s ideal of the

founding of arithmetic on a sequence of formal definitions, out of which all the theorems of that science could be deduced purely syllogistically.

As soon as one comes to the ultimate, elemental concepts, Husserl reasoned, all defining has to come to an end. All one can then do is to point to the concrete phenomena from or through which the concepts are abstracted and show the nature of the abstractive process. A verbal explanation should place us in the proper state of mind for picking out, in inner or outer intuition, the abstract moments intended and for reproducing in ourselves the mental processes required for the formation of the concept. He said that his analyses had shown with incontestable clarity that the concepts of multiplicity and unity rest directly upon ultimate, elemental psychical data, and so belong among the indefinable concepts. Since the concept of number was so closely joined to them, one could scarcely speak of defining it either. All these points are made on the only pages of Philosophy of Arithmetic that Husserl ever explicitly retracted.

In On the Concept of Number, Husserl had set out to anchor arithmetical concepts in direct experience by analyzing the actual psychological processes to which he thought the concept of number owed its genesis. To obtain the concept of number of a concrete set of objects, say A, A, and A, he explained, one abstracts from the particular characteristics of the individual contents collected, considering and retaining each one only insofar as it is a something or a one. Regarding their collective combination, one thus obtains the general form of the set belonging to the set in question: one and one and … and one, to which a number name is assigned.

The enthusiastic espousal of psychologism of On the Concept of Number is not found in Philosophy of Arithmetic. Husserl later confessed that doubts about basic differences between the concept of number and the concept of collecting, which was all that could be obtained from reflection on acts, had troubled and tormented him from the very beginning and had eventually extended to all categorial concepts and to concepts of objectivities of any sort whatsoever, ultimately to include modern analysis and the theory of manifolds, and simultaneously to mathematical logic and the entire field of logic in general. He did not see how one could reconcile the objectivity of mathematics with psychological foundations for logic.

In sharp contrast to Brouwer, who denounced logic as a source of truth, Husserl from the mid-1890s on defended the view, which he attributed to Frege’s teacher Hermann Lotze, that pure arithmetic was basically no more than a branch of logic that had undergone independent development. He bade students not to be “scared” by that thought and to grow used to Lotze’s initially strange idea that arithmetic was only a particularly highly developed piece of logic.

Years later, Husserl would explain in Formal and Transcendental Logic that his

war against logical psychologism was meant to serve no other end than the supremely important one of making the specific province of analytic logic visible in its purity and ideal particularity, freeing it from the psychologizing confusions and misinterpretations in which it had remained enmeshed from the beginning.

He had come to see arithmetic truths as being analytic, as grounded in meanings independently of matters of fact. He had come to believe that the entire overthrowing of psychologism through phenomenology showed that his analyses in On the Concept of Number and Philosophy of Arithmetic had to be considered a pure a priori analysis of essence. For him, pure arithmetic, pure mathematics, and pure logic were a priori disciplines entirely grounded in conceptual essentialities, where truth was nothing other than the analysis of essences or concepts. Pure mathematics as pure arithmetic investigated what is grounded in the essence of number. Pure mathematical laws were laws of essence.

He is said to have told his students that it was to be stressed repeatedly and emphatically that the ideal entities so unpleasant for empiricistic logic, and so consistently disregarded by it, had not been artificially devised either by himself, or by Bolzano, but were given beforehand by the meaning of the universal talk of propositions and truths indispensable in all the sciences. This, he said, was an indubitable fact that had to be the starting point of all logic. All purely mathematical propositions, he taught, express something about the essence of what is mathematical. Their denial is consequently an absurdity. Denying a proposition of the natural sciences, a proposition about real matters of fact, never means an absurdity, a contradiction in terms. In denying the law of gravity, I cast experience to the wind. I violate the evident, extremely valuable probability that experience has established for the laws. But, I do not say anything “unthinkable,” absurd, something that nullifies the meaning of the word as I do when I say that 2 × 2 is not 4, but 5.

Husserl taught that every judgment either is a truth or cannot be a truth, that every presentation either accorded with a possible experience adequately redeeming it, or was in conflict with the experience, and that grounded in the essence of agreement was the fact that it was incompatible with the conflict, and grounded in the essence of conflict that it was incompatible with agreement. For him, that meant that truth ruled out falsehood and falsehood ruled out truth. And, likewise, existence and non-existence, correctness and incorrectness cancelled one another out in every sense. He believed that that became immediately apparent as soon as one had clarified the essence of existence and truth, of correctness and incorrectness, of Evidenz as consciousness of givenness, of being and not-being in fully redeeming intuition.

At the same time, Husserl contended, one grasps the “ultimate meaning” of the basic logical law of contradiction and of the excluded middle. When we state the law of validity that of any two contradictory propositions one holds and the other does not hold, when we say that for every proposition there is a contradictory one, Husserl explained, then we are continually speaking of the proposition in its ideal unity and not at all about mental experiences of individuals, not even in the most general way. With talk of truth it is always a matter of propositions in their ideal unity, of the meaning of statements, a matter of something identical and atemporal. What lies in the identically-ideal meaning of one’s words, what one cannot deny without invalidating the fixed meaning of one’s words has nothing at all to do with experience and induction. It has only to do with concepts. In sharp contrast to this, Brouwer saw intuitionistic mathematics as deviating from classical mathematics because the latter uses logic to generate theorems and in particular applies the principle of the excluded middle. He believed that Intuitionism had proven that no mathematical reality corresponds to the affirmation of the principle of the excluded middle and to conclusions derived by means of it. He reasoned that “since logic is based on mathematics – and not vice versa – the use of the Principle of the Excluded Middle is not permissible as part of a mathematical proof.”
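Brouwer’s point can be made concrete in a modern proof assistant. In the following Lean 4 sketch (a present-day illustration, found nowhere in Brouwer’s or Husserl’s texts), the double negation of excluded middle is constructively derivable, while the principle itself enters only as a postulated classical axiom:

```lean
-- Constructively derivable: the double negation of excluded middle.
theorem not_not_em (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun p => h (Or.inl p)))

-- Excluded middle itself is not derivable from the constructive rules;
-- Lean supplies it as a classical axiom.
#check (Classical.em : ∀ (P : Prop), P ∨ ¬P)
```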

Fictionalism. Drunken Risibility.


Applied mathematics is often used as a source of support for platonism. How else but by becoming platonists can we make sense of the success of applied mathematics in science? As an answer to this question, the fictionalist empiricist will note that applied mathematics does not always work. In several cases, it doesn’t work as initially intended, and it works only when accompanied by suitable empirical interpretations of the mathematical formalism. For example, when Dirac found negative energy solutions to the equation that now bears his name, he had to devise physically meaningful interpretations of these solutions. His first inclination was to dismiss the negative energy solutions as not being physically significant, taking them to be just an artifact of the mathematics – as is commonly done in similar cases in classical mechanics. Later, however, he identified a physically meaningful interpretation of these negative energy solutions in terms of “holes” in a sea of electrons. But the resulting interpretation was empirically inadequate, since it entailed that protons and electrons had the same mass. Given this difficulty, Dirac rejected that interpretation and formulated another: he interpreted the negative energy solutions in terms of a new particle that had the same mass as the electron but opposite charge. A couple of years after Dirac’s final interpretation was published, Carl Anderson detected something that could be interpreted as the particle Dirac had posited. Asked whether he was aware of Dirac’s papers, Anderson replied that he knew of the work, but was so busy with his instruments that, as far as he was concerned, the discovery of the positron was entirely accidental.
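The origin of those negative energy solutions is visible already in the relativistic energy–momentum relation the Dirac equation was built to respect – a standard textbook observation, added here only to make the example concrete:

$$E^2 = p^2c^2 + m^2c^4 \quad\Longrightarrow\quad E = \pm\sqrt{p^2c^2 + m^2c^4}$$

For each momentum p there is, alongside the positive branch, a branch of solutions with E ≤ −mc²; it is this second branch that Dirac first dismissed as an artifact and then successively reinterpreted, ultimately as the positron.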

The application of mathematics is ultimately a matter of using the vocabulary of mathematical theories to express relations among physical entities. Given that, for the fictionalist empiricist, the truth of the various theories involved – mathematical, physical, biological, and whatnot – is never asserted, no commitment to the existence of the entities posited by such theories is forthcoming. But if the theories in question – and, in particular, the mathematical theories – are not taken to be true, how can they be successfully applied? There is no mystery here. First, even in science, false theories can have true consequences. The situation here is analogous to what happens in fiction. Novels can, and often do, provide insightful, illuminating descriptions of phenomena of various kinds – for example, psychological or historical events – that help us understand the events in question in new, unexpected ways, despite the fact that the novels in question are not true. Second, given that mathematical entities are not subject to spatiotemporal constraints, it’s not surprising that they have no active role in applied contexts. Mathematical theories need only provide a framework that, suitably interpreted, can be used to describe the behavior of various types of phenomena – whether the latter are physical, chemical, biological, or whatnot. Having such a descriptive function is clearly compatible with the (interpreted) mathematical framework not being true, as Dirac’s case illustrates so powerfully. After all, as was just noted, one of the interpretations of the mathematical formalism was empirically inadequate.

On the fictionalist empiricist account, mathematical discourse is clearly taken on a par with scientific discourse. There is no change in the semantics. Mathematical and scientific statements are treated in exactly the same way. Both sorts of statements are truth-apt, and are taken as describing (correctly or not) the objects and relations they are about. The only shift is in the aim of the research. After all, on the fictionalist empiricist proposal, the goal is not truth, but something weaker: empirical adequacy – or truth only with respect to the observable phenomena. However, once again, this goal matters to both science and (applied) mathematics, and the semantic uniformity between the two fields is still preserved. According to the fictionalist empiricist, mathematical discourse is also taken literally. If a mathematical theory states that “There are differentiable functions such that…”, the theory is not going to be reformulated in any way to avoid reference to these functions. The truth of the theory, however, is never asserted. There’s no need for that, given that only the empirical adequacy of the overall theoretical package is required.

Utopia as Emergence Initiating a Truth. Thought of the Day 104.0


It is true that, in our contemporary world, traditional utopian models have withered, but today a new utopia of canonical majority has taken over the space of any action transformative of current social relations. Instead of radicalness, conformity has become the main expression of solidarity for the subject abandoned to her consecrated individuality. Where past utopias inscribed a collective vision to be fulfilled for future generations, the present utopia confiscates the future of the individual, unless she registers in a collective, popularized expression of the norm that reaps culture, politics, morality, and the like. The ideological outcome of the canonical utopia is the belief that the majority constitutes a safety net for individuality. If the future of the individual is bleak, at least there is some hope in saving his/her present.

This condition reiterates Ernst Bloch’s distinction between anticipatory and compensatory utopia, with the latter gaining ground today (Ruth Levitas). By discarding the myth of a better future for all, the subject succumbs to the immobilizing myth of a safe present for herself (the ultimate transmutation of individuality into individualism). On this view, the world can surmount Difference simply by taking away its painful radicalness and replacing it with a non-violent, pluralistic, and multi-cultural present – a stance Žižek harshly criticized for its anti-rational status. In line with Badiou and Jameson, Žižek discerns behind the multitude of identities and lifestyles in our world the dominance of the One and the eradication of Difference (the void of antagonism). It would have been ideal if pluralism were not translated into populism and the non-violent into a sanctimonious respect for Otherness.

Badiou also points to the nihilism that permeates modern ethicology that puts forward the “recognition of the other”, the respect of “differences”, and “multi-culturalism”. Such ethics is supposed to protect the subject from discriminatory behaviours on the basis of sex, race, culture, religion, and so on, as one must display “tolerance” towards others who maintain different thinking and behaviour patterns. For Badiou, this ethical discourse is far from effective and truthful, as is revealed by the competing axes it forges (e.g., opposition between “tolerance” and “fanaticism”, “recognition of the other” and “identitarian fixity”).

Badiou denounces the decomposed religiosity of current ethical discourse, in the face of the pharisaic advocates of the right to difference who are “clearly horrified by any vigorously sustained difference”. The pharisaism of this respect for difference lies in the fact that it suggests the acceptance of the other in so far as s/he is a “good other”; in other words, in so far as s/he is the same as everyone else. Such an ethical attitude ironically affirms the hegemonic identity of those who opt for integration of the different other, which is to say, the other is requested to suppress his/her difference so that s/he partakes in the “Western identity”.

Rather than equating being with the One, the law of being is the multiple “without one”, that is, every multiple being is a multiple of multiples, stretching alterity into infinity; alterity is simply “what there is” and our experience is “the infinite deployment of infinite differences”. Only the void can discontinue this multiplicity of being, through the event that “breaks” with the existing order and calls for a “new way of being”. Thus, a radical utopian gesture needs to emerge from the perspective of the event, initiating a truth process.

Constructivism. Note Quote.


Constructivism, as portrayed by its adherents, “is the idea that we construct our own world rather than it being determined by an outside reality”. Indeed, a common ground among constructivists of different persuasions lies in a commitment to the idea that knowledge is actively built up by the cognizing subject. But, whereas individualistic constructivism (which is most clearly enunciated by radical constructivism) focuses on the biological/psychological mechanisms that lead to knowledge construction, sociological constructivism focuses on the social factors that influence learning.

Let us briefly consider certain fundamental assumptions of individualistic constructivism. The first issue a constructivist theory of cognition ought to elucidate concerns, of course, the raw materials out of which knowledge is constructed. On this issue, von Glasersfeld, an eminent representative of radical constructivism, gives a categorical answer: “from the constructivist point of view, the subject cannot transcend the limits of individual experience” (Michael R. Matthews, Constructivism in Science Education: A Philosophical Examination). This statement presents the keystone of constructivist epistemology, which conclusively asserts that “the only tools available to a ‘knower’ are the senses … [through which] the individual builds a picture of the world”. What is more, the mental pictures so formed do not depict a world ‘external’ to the subject, but the distinct personal reality of each individual. And this of course entails, in its turn, that the responsibility for the gained knowledge lies with the constructor; it cannot be shifted to a pre-existing world. As Ranulph Glanville confesses, “reality is what I sense, as I sense it, when I’m being honest about it”.

In this way, individualistic constructivism estranges the cognizing subject from the external world. Cognition is not considered as aiming at the discovery and investigation of an ‘independent’ world; it is viewed as a ‘tool’ that exclusively serves the adaptation of the subject to the world as it is experienced. From this perspective, ‘knowledge’ acquires an entirely new meaning. In the words of von Glasersfeld,

the word ‘knowledge’ refers to conceptual structures that epistemic agents, given the range of present experience, within their tradition of thought and language, consider viable…. [Furthermore] concepts have to be individually built up by reflective abstraction; and reflective abstraction is not a matter of looking closer but of operating mentally in a way that happens to be compatible with the perceptual material at hand.

To put it briefly, ‘knowledge’ signifies nothing more than an adequate organization of the experiential world, one that makes the cognizing subject capable of effectively manipulating its perceptual experience.

It is evident that such insights, precluding any external point of reference, have impacts on knowledge evaluation. Indeed, the ascertainment that “for constructivists there are no structures other than those which the knower forms by its own activity” (Michael R. Matthews, Constructivism in Science Education: A Philosophical Examination) unavoidably yields the conclusion drawn by Gerard De Zeeuw that “there is no mind-independent yardstick against which to measure the quality of any solution”. Hence, knowledge claims should not be evaluated by reference to a supposed ‘external’ world, but only by reference to their internal consistency and personal utility. This is precisely the reason that leads von Glasersfeld to suggest substituting the notion of “truth” with the notion of “viability” or “functional fit”: knowledge claims are appraised as “true” if they “functionally fit” into the subject’s experiential world; and to find a “fit” simply means not to notice any discrepancies. This functional adaptation of ‘knowledge’ to experience is what finally secures the intended “viability”.

In accordance with the constructivist view, the notion of ‘object’, far from indicating any kind of ‘existence’, explicitly refers to a strictly personal construction of the cognizing subject. Specifically, “any item of the furniture of someone’s experiential world can be called an ‘object’” (von Glasersfeld). From this point of view, the supposition that “the objects one has isolated in his experience are identical with those others have formed … is an illusion”. This of course deprives language of any rigorous criterion of objectivity; its physical-object statements, being dependent upon elements derived from personal experience, cannot be considered to reveal attributes of the objects as they factually are. Incorporating concepts whose meaning is closely tied to the individual experience of the cognizing subject, these statements form in the end a person-specific description of the world. Conclusively, for constructivists the term ‘objectivity’ “shows no more than a relative compatibility of concepts” in situations where individuals have had occasion to compare their “individual uses of the particular words”.

From the viewpoint of radical constructivism, science, being a human enterprise, is amenable, by its very nature, to human limitations. It is then naturally inferred on constructivist grounds that “science cannot transcend [just as individuals cannot] the domain of experience” (von Glasersfeld). This statement, indicating that there is no essential differentiation between personal and scientific knowledge, permits, for instance, John Staver to assert that “for constructivists, observations, objects, events, data, laws and theory do not exist independent of observers. The lawful and certain nature of natural phenomena is a property of us, those who describe, not of nature, what is described”. Accordingly, by virtue of the preceding premise, one may argue that “scientific theories are derived from human experience and formulated in terms of human concepts” (von Glasersfeld).

Within the framework of social constructivism, if one accepts that the term ‘knowledge’ means no more than “what is collectively endorsed” (David Bloor, Knowledge and Social Imagery), one will probably come to the conclusion that “the natural world has a small or non-existent role in the construction of scientific knowledge” (Collins). Or, in a weaker form, one can postulate that “scientific knowledge is symbolic in nature and socially negotiated. The objects of science are not the phenomena of nature but constructs advanced by the scientific community to interpret nature” (Rosalind Driver et al.). It is worth remarking that both views of constructivism eliminate, or at least downplay, the role of the natural world in the construction of scientific knowledge.

It is evident that the foregoing considerations lead most versions of constructivism to conclude that the very word ‘existence’ has no meaning in itself. It acquires meaning only by reference to individuals or human communities. The acknowledgement of this fact subsequently renders the notion of an ‘external’ physical reality useless and therefore redundant. As Riegler puts it, within the constructivist framework, “an external reality is neither rejected nor confirmed, it must be irrelevant”.

Platonist Assertory Mathematics. Thought of the Day 88.0


Traditional Platonism, according to which our mathematical theories are bodies of truths about a realm of mathematical objects, assumes that only some amongst consistent theory candidates succeed in correctly describing the mathematical realm. For Platonists, while mathematicians may contemplate alternative consistent extensions of the axioms for ZF (Zermelo–Fraenkel) set theory, for example, at most one such extension can correctly describe how things really are with the universe of sets. Thus, according to Platonists such as Kurt Gödel, intuition together with quasi-empirical methods (such as the justification of axioms by appeal to their intuitively acceptable consequences) can guide us in discovering which amongst alternative axiom candidates for set theory has things right about set-theoretic reality. Alternatively, according to empiricists such as Quine, who hold that our belief in the truth of mathematical theories is justified by their role in empirical science, empirical evidence can choose between alternative consistent set theories. In Quine’s view, we are justified in believing the truth of the minimal amount of set theory required by our most attractive scientific account of the world.

Despite their differences at the level of detail, both of these versions of Platonism share the assumption that mere consistency is not enough for a mathematical theory: for such a theory to be true, it must correctly describe a realm of objects, where the existence of these objects is not guaranteed by consistency alone. Such a view of mathematical theories requires that we have some grasp of the intended interpretation of an axiomatic theory that is independent of our axiomatization – otherwise inquiry into whether our axioms “get things right” about this intended interpretation would be futile. Hence, it is natural to see these Platonist views of mathematics as following Frege in holding that axioms

. . . must not contain a word or sign whose sense and meaning, or whose contribution to the expression of a thought, was not already completely laid down, so that there is no doubt about the sense of the proposition and the thought it expresses. The only question can be whether this thought is true and what its truth rests on. (Frege to Hilbert, in Gottlob Frege: The Philosophical and Mathematical Correspondence)

On such an account, our mathematical axioms express genuine assertions (thoughts), which may or may not succeed in asserting truths about their subject matter. These Platonist views are “assertory” views of mathematics. Assertory views of mathematics make room for a gap between our mathematical theories and their intended subject matter, and the possibility of such a gap leads to at least two difficulties for traditional Platonism, both articulated in Paul Benacerraf’s well-known papers. The first difficulty comes from the realization that our mathematical theories, even when axioms are supplemented with less formal characterizations of their subject matter, may be insufficient to choose between alternative interpretations. For example, assertory views hold that the Peano axioms for arithmetic aim to assert truths about the natural numbers. But there are many candidate interpretations of these axioms, and nothing in the axioms, or in our wider mathematical practices, seems to suffice to pin down one interpretation over any other as the correct one. The view of mathematical theories as assertions about a specific realm of objects seems to force there to be facts about the correct interpretation of our theories even if, so far as our mathematical practice goes (for example, in the case of arithmetic), any ω-sequence would do.
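Benacerraf’s first worry is easy to exhibit concretely: the two classical set-theoretic reductions of the natural numbers, von Neumann’s and Zermelo’s, both satisfy the Peano axioms, yet they disagree over questions such as whether 1 is a member of 3 – a question the axioms never settle. A small Python sketch (purely illustrative, not drawn from Benacerraf’s papers):

```python
# Two set-theoretic "implementations" of the natural numbers.
# Von Neumann: 0 = {}, succ(n) = n ∪ {n}.  Zermelo: 0 = {}, succ(n) = {n}.

def von_neumann(n):
    num = frozenset()
    for _ in range(n):
        num = num | frozenset([num])
    return num

def zermelo(n):
    num = frozenset()
    for _ in range(n):
        num = frozenset([num])
    return num

one = von_neumann(1)            # {∅}, the same set under both encodings
print(one in von_neumann(3))    # True:  on von Neumann's reduction, 1 ∈ 3
print(one in zermelo(3))        # False: on Zermelo's, it is not
```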

Benacerraf’s second worry is perhaps even more pressing for assertory views. The possibility of a gap between our mathematical theories and their intended subject matter raises the question, “How do we know that our mathematical theories have things right about their subject matter?”. To answer this, we need to consider the nature of the purported objects about which our theories are supposed to assert truths. It seems that our best characterization of mathematical objects is negative: to account for the extent of our mathematical theories, and the timelessness of mathematical truths, it seems reasonable to suppose that mathematical objects are non-physical, non-spatiotemporal (and, it is sometimes added, mind- and language-independent) objects – in short, mathematical objects are abstract. But this negative characterization makes it difficult to say anything positive about how we could know anything about how things are with these objects. Assertory, Platonist views of mathematics are thus challenged to explain just how we are meant to evaluate our mathematical assertions – just how do the kinds of evidence these Platonists present in support of their theories succeed in ensuring that these theories track the truth?

The Mystery of Modality. Thought of the Day 78.0


The ‘metaphysical’ notion of what would have been no matter what (the necessary) was conflated with the epistemological notion of what independently of sense-experience can be known to be (the a priori), which in turn was identified with the semantical notion of what is true by virtue of meaning (the analytic), which in turn was reduced to a mere product of human convention. And what motivated these reductions?

The mystery of modality, for early modern philosophers, was how we can have any knowledge of it. Here is how the question arises. We think that when things are some way, in some cases they could have been otherwise, and in other cases they couldn’t: the total number of planets could have been different from what it is, say, whereas two times two could not have been anything but four. That is the modal distinction between the contingent and the necessary.

How do we know that the examples are examples of that of which they are supposed to be examples? And why should this question be considered a difficult problem, a kind of mystery? Well, that is because, on the one hand, when we ask about most other items of purported knowledge how it is we can know them, sense-experience seems to be the source, or anyhow the chief source of our knowledge, but, on the other hand, sense-experience seems able only to provide knowledge about what is or isn’t, not what could have been or couldn’t have been. How do we bridge the gap between ‘is’ and ‘could’? The classic statement of the problem was given by Immanuel Kant, in the introduction to the second or B edition of his first critique, The Critique of Pure Reason: ‘Experience teaches us that a thing is so, but not that it cannot be otherwise.’

Note that this formulation allows that experience can teach us that a necessary truth is true; what it is not supposed to be able to teach is that it is necessary. The problem becomes more vivid if one adopts the language that was once used by Leibniz, and much later re-popularized by Saul Kripke in his famous work on model theory for formal modal systems, the usage according to which the necessary is that which is ‘true in all possible worlds’. In these terms the problem is that the senses only show us this world, the world we live in, the actual world as it is called, whereas when we claim to know about what could or couldn’t have been, we are claiming knowledge of what is going on in some or all other worlds. For that kind of knowledge, it seems, we would need a kind of sixth sense, or extrasensory perception, or nonperceptual mode of apprehension, to see beyond the world in which we live to these various other worlds.

Kant concludes that our knowledge of necessity must be what he calls a priori knowledge, or knowledge that is ‘prior to’ or before or independent of experience, rather than what he calls a posteriori knowledge, or knowledge that is ‘posterior to’ or after or dependent on experience. And so the problem of the origin of our knowledge of necessity becomes for Kant the problem of the origin of our a priori knowledge.

Well, that is not quite the right way to describe Kant’s position, since there is one special class of cases where Kant thinks it isn’t really so hard to understand how we can have a priori knowledge. He doesn’t think all of our a priori knowledge is mysterious, but only most of it. He distinguishes what he calls analytic from what he calls synthetic judgments, and holds that a priori knowledge of the former is unproblematic, since it is not really knowledge of external objects, but only knowledge of the content of our own concepts, a form of self-knowledge.

We can generate any number of examples of analytic truths by the following three-step process. First, take a simple logical truth of the form ‘Anything that is both an A and a B is a B’, for instance, ‘Anyone who is both a man and unmarried is unmarried’. Second, find a synonym C for the phrase ‘thing that is both an A and a B’, for instance, ‘bachelor’ for ‘one who is both a man and unmarried’. Third, substitute the shorter synonym for the longer phrase in the original logical truth to get the truth ‘Any C is a B’, or in our example, the truth ‘Any bachelor is unmarried’. Our knowledge of such a truth seems unproblematic because it seems to reduce to our knowledge of the meanings of our own words.
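The recipe is mechanical enough to be carried out by a program. A toy Python sketch, using the text’s own example (illustrative only; nothing philosophical hangs on the string manipulation):

```python
# Step 1: a logical truth of the form 'Anything that is both an A and a B is a B'.
logical_truth = "Anyone who is both a man and unmarried is unmarried"

# Steps 2 and 3: substitute the synonym 'bachelor' for the longer phrase.
analytic_truth = logical_truth.replace(
    "Anyone who is both a man and unmarried", "Any bachelor")

print(analytic_truth)  # Any bachelor is unmarried
```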

So the problem for Kant is not exactly how knowledge a priori is possible, but more precisely how synthetic knowledge a priori is possible. Kant thought we do have examples of such knowledge. Arithmetic, according to Kant, was supposed to be synthetic a priori, and geometry, too – all of pure mathematics. In his Prolegomena to Any Future Metaphysics, Kant listed ‘How is pure mathematics possible?’ as the first question for metaphysics, for the branch of philosophy concerned with space, time, substance, cause, and other grand general concepts – including modality.

Kant offered an elaborate explanation of how synthetic a priori knowledge is supposed to be possible, an explanation reducing it to a form of self-knowledge, but later philosophers questioned whether there really were any examples of the synthetic a priori. Geometry, so far as it is about the physical space in which we live and move – and that was the original conception, and the one still prevailing in Kant’s day – came to be seen as, not synthetic a priori, but rather a posteriori. The mathematician Carl Friedrich Gauß had already come to suspect that geometry is a posteriori, like the rest of physics. Since the time of Einstein in the early twentieth century the a posteriori character of physical geometry has been the received view (whence the need for border-crossing from mathematics into physics if one is to pursue the original aim of geometry).

As for arithmetic, the logician Gottlob Frege in the late nineteenth century claimed that it was not synthetic a priori, but analytic – of the same status as ‘Any bachelor is unmarried’, except that to obtain something like ‘29 is a prime number’ one needs to substitute synonyms in a logical truth of a form much more complicated than ‘Anything that is both an A and a B is a B’. This view was subsequently adopted by many philosophers in the analytic tradition of which Frege was a forerunner, whether or not they immersed themselves in the details of Frege’s program for the reduction of arithmetic to logic.

Once Kant’s synthetic a priori has been rejected, the question of how we have knowledge of necessity reduces to the question of how we have knowledge of analyticity, which in turn resolves into a pair of questions: On the one hand, how do we have knowledge of synonymy, which is to say, how do we have knowledge of meaning? On the other hand, how do we have knowledge of logical truths? As to the first question, presumably we acquire knowledge, explicit or implicit, conscious or unconscious, of meaning as we learn to speak; by the time we are able to ask whether this is a synonym of that, we have the answer. But what about knowledge of logic? That question didn’t loom large in Kant’s day, when only a very rudimentary logic existed, but after Frege vastly expanded the realm of logic – only by doing so could he find any prospect of reducing arithmetic to logic – the question loomed larger.

Many philosophers, however, convinced themselves that knowledge of logic also reduces to knowledge of meaning, namely, of the meanings of logical particles, words like ‘not’ and ‘and’ and ‘or’ and ‘all’ and ‘some’. To be sure, there are infinitely many logical truths in Frege’s expanded logic. But they all follow from or are generated by a finite list of logical rules, and philosophers were tempted to identify knowledge of the meanings of logical particles with knowledge of rules for using them: knowing the meaning of ‘or’, for instance, would be knowing that ‘A or B’ follows from A and follows from B, and that anything that follows both from A and from B follows from ‘A or B’. So in the end, knowledge of necessity reduces to conscious or unconscious knowledge of explicit or implicit semantical rules or linguistic conventions or whatever.
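The rules just sketched for ‘or’ are exactly the introduction and elimination rules of natural deduction, and they can be written out directly, for instance in Lean 4 (a sketch of the standard rules, not of any particular philosopher’s formulation):

```lean
-- 'A or B' follows from A, and follows from B (introduction):
example (A B : Prop) (a : A) : A ∨ B := Or.inl a
example (A B : Prop) (b : B) : A ∨ B := Or.inr b

-- whatever follows both from A and from B follows from 'A or B' (elimination):
example (A B C : Prop) (h : A ∨ B) (f : A → C) (g : B → C) : C :=
  Or.elim h f g
```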

Such is the sort of picture that had become the received wisdom in philosophy departments in the English speaking world by the middle decades of the last century. For instance, A. J. Ayer, the notorious logical positivist, and P. F. Strawson, the notorious ordinary-language philosopher, disagreed with each other across a whole range of issues, and for many mid-century analytic philosophers such disagreements were considered the main issues in philosophy (though some observers would speak of the ‘narcissism of small differences’ here). And people like Ayer and Strawson in the 1920s through 1960s would sometimes go on to speak as if linguistic convention were the source not only of our knowledge of modality, but of modality itself, and go on further to speak of the source of language lying in ourselves. Individually, as children growing up in a linguistic community, or foreigners seeking to enter one, we must consciously or unconsciously learn the explicit or implicit rules of the communal language as something with a source outside us to which we must conform. But by contrast, collectively, as a speech community, we do not so much learn as create the language with its rules. And so if the origin of modality, of necessity and its distinction from contingency, lies in language, it therefore lies in a creation of ours, and so in us. ‘We, the makers and users of language’ are the ground and source and origin of necessity. Well, this is not a literal quotation from any one philosophical writer of the last century, but a pastiche of paraphrases of several.

Intuition


During his attempt to axiomatize the category of all categories, Lawvere says

Our intuition tells us that whenever two categories exist in our world, then so does the corresponding category of all natural transformations between the functors from the first category to the second (The Category of Categories as a Foundation).

However, if one tries to reduce categorial constructions to set theory, one faces some serious problems in the case of a category of functors. Lawvere (who, given his aim of axiomatization, is not concerned by such a reduction) relies here on “intuition” to stress that those working with categorial concepts despite these problems have the feeling that the envisaged construction is clear, meaningful and legitimate. It is not reducibility to set theory, but an “intuition” yet to be specified, that answers for the clarity, meaningfulness and legitimacy of a construction emerging in a mathematical working situation. In particular, Lawvere relies on a collective intuition, a common sense – for he explicitly says “our intuition”. Further, one obviously has to deal here with common sense on a technical level, for the “we” can only extend to a community used to working with the concepts concerned.

In the tradition of philosophy, “intuition” means immediate, i.e., not conceptually mediated cognition. The use of the term in the context of validity (immediate insight into the truth of a proposition) is to be thoroughly distinguished from its use in the sensual context (the German Anschauung). Now, language is a manner of representation too, but contrary to language, in the context of images the concept of validity is meaningless.

Obviously, the aspect of cognition guiding is touched on here. Especially the sensual intuition can take on the guiding (or heuristic) function. There have been many working situations in the history of mathematics in which making the objects of investigation accessible to a sensual intuition (by providing a Veranschaulichung) yielded considerable progress in the development of the knowledge concerning these objects. As an example, take the following account by Emil Artin of Emmy Noether’s contribution to the theory of algebras:

Emmy Noether introduced the concept of representation space – a vector space upon which the elements of the algebra operate as linear transformations, the composition of the linear transformations reflecting the multiplication in the algebra. By doing so she enables us to use our geometric intuition.

Similarly, Fréchet believed he had really “powered” research in the theory of functions and functionals by introducing a “geometrical” terminology:

One can […] consider the numbers of the sequence [of coefficients of a Taylor series] as coordinates of a point in a space […] of infinitely many dimensions. There are several advantages to proceeding thus, for instance the advantage which is always present when geometrical language is employed, since this language is so appropriate to intuition due to the analogies it gives birth to.

Mathematical terminology often stems from a current language usage whose (intuitive, sensual) connotation is welcomed and serves to give the user an “intuition” of what is intended. While Category Theory is often classified as a highly abstract matter quite remote from intuition, in reality it yields, together with its applications, a multitude of examples for the role of current language in mathematical conceptualization.

This notwithstanding, there is naturally also a tendency in contemporary mathematics to eliminate as much as possible commitments to (sensual) intuition in the erection of a theory. It seems that algebraic geometry fulfills only in the language of schemes that essential requirement of all contemporary mathematics: to state its definitions and theorems in their natural abstract and formal setting in which they can be considered independent of geometric intuition (D. Mumford and J. Fogarty, Geometric Invariant Theory).

In the pragmatist approach, intuition is seen as a relation. This means: one uses a piece of language in an intuitive manner (or not); intuitive use depends on the situation of utterance, and it can be learned and transformed. The reason for this relational point of view consists in the pragmatist conviction that each cognition of an object depends on the means of cognition employed – this means that for pragmatism there is no intuitive (in the sense of “immediate”) cognition; the term “intuitive” has to be given a new meaning.

What does it mean to use something intuitively? Heinzmann makes the following proposal: one uses language intuitively if one does not even have the idea to question its validity. Hence, the term intuition in the Heinzmannian reading of pragmatism takes on a different meaning and no longer signifies an immediate grasp. However, it is yet to be explained what it means for objects in general (and not only for propositions) to “question the validity of a use”. One uses an object intuitively if one is not concerned with how the rules of constitution of the object have been arrived at, if one does not focus on the materialization of these rules but only on the benefits of an application of the object in the present context. “In principle”, the cognition of an object is determined by another cognition, and this determination finds its expression in the “rules of constitution”; one uses the object intuitively (one does not bother about the being-determined of its cognition) if one does not question the rules of constitution (does not focus on the cognition which determines it). This is precisely what one does when using an object as a tool – because in doing so, one does not (yet) ask which cognition determines the object. When something is used as a tool, this constitutes an intuitive use, whereas the use of something as an object does not (this defines tool and object). Here, each concept can in principle play both roles; of two concepts, one may happen to be used intuitively before and the other after the progress of insight. Note that with respect to a given cognition, Peirce, when saying “the cognition which determines it”, always thinks of a previous cognition, because he thinks of a determination of a cognition in our thought by previous thoughts. In the conceptual history of mathematics, however, an object was most often introduced first as a tool, and only after having done so did it come to one’s mind to ask for “the cognition which determines the cognition of this object” (that is, to ask how the use of this object can be legitimized).

The idea that it could depend on the situation whether validity is questioned or not had formerly been overlooked, perhaps because one always looked for a reductionist epistemology in which the capacity called intuition is used exclusively at the last level of regression; in a pragmatist epistemology, to the contrary, intuition is used at every level in the form of the not-thematized tools. In classical systems, intuition was not simply conceived as a capacity; it was conceived as a capacity common to all human beings. “But the power of intuitively distinguishing intuitions from other cognitions has not prevented men from disputing very warmly as to which cognitions are intuitive”. Moreover, Peirce strongly criticises Cartesian individualism (which has it that the individual has the capacity to find the truth). We could sum up this philosophy thus: we cannot reach definite truth, only provisional truth; significant progress is made not individually but only collectively; one cannot pretend that the history of thought did not take place and start from scratch, but every cognition is determined by a previous cognition (maybe by other individuals); one cannot uncover the ultimate foundation of our cognitions; rather, the fact that we sometimes reach a new level of insight, “deeper” than those thought of as fundamental before, merely indicates that there is no “deepest” level. The feeling that something is “intuitive” indicates a prejudice which can be philosophically criticised (even if this does not occur to us at the beginning).

In our approach, intuitive use is collectively determined: it depends on the particular usage of the community of users whether validity criteria are or are not questioned in a given situation of language use. However, it is acknowledged that, for example, scientific communities develop usages that make them communities of language users in their own right. Hence, situations of language use are not only partitioned into those where it comes to the users’ minds to question validity criteria and those where it does not; this partition is moreover specific to a particular community (actually, the community of language users is established partly through a peculiar partition; this is a definition of the term “community of language users”). The existence of different communities with different common senses can lead to the following situation: something is used intuitively by one group, but not intuitively by another. In this case, discussions inside the discipline occur; one has to cope with competing common senses (which are therefore not really “common”). This constitutes a task for the historian.

Mathematical Reductionism: A Case Via C. S. Peirce’s Hypothetical Realism.


During the 20th century, the following epistemology of mathematics was predominant: a sufficient condition for the possibility of the cognition of objects is that these objects can be reduced to set theory. The conditions for the possibility of the cognition of the objects of set theory (the sets), in turn, can be given in various manners; in any event, the objects reduced to sets do not need an additional epistemological discussion – they “are” sets. Hence, such an epistemology relies ultimately on ontology. Frege conceived the axioms as descriptions of how we actually manipulate extensions of concepts in our thinking (and in this sense as inevitable and intuitive “laws of thought”). Hilbert admitted the use of intuition exclusively in metamathematics, where the consistency proof is to be done (by which the appropriateness of the axioms would be established); Bourbaki took the axioms as mere hypotheses. Hence, Bourbaki’s concept of justification is the weakest of the three: “it works as long as we encounter no contradiction”; nevertheless, it is still epistemology, because from this hypothetical-deductive point of view, one insists that at least a proof of relative consistency (i.e., a proof that the hypotheses are consistent with the frequently tested and approved framework of set theory) should be available.

Doing mathematics, one tries to give proofs for propositions, i.e., to deduce the propositions logically from other propositions (premisses). Now, in the reductionist perspective, a proof of a mathematical proposition yields an insight into the truth of the proposition if the premisses are already established (if one already has an insight into their truth); this can be done by giving in turn proofs for them (in which new premisses will occur which again ask for an insight into their truth), or by agreeing to put them at the beginning (to consider them as axioms or postulates). The philosopher tries to understand how the decision about which propositions to take as axioms is arrived at, because he or she is dissatisfied with the reductionist claim that it is on these axioms that the insight into the truth of the deduced propositions rests. Actually, this epistemology might contain a shortcoming, since Poincaré (and Wittgenstein) stressed that to have a proof of a proposition is by no means the same as to have an insight into its truth.

Attempts to disclose the ontology of mathematical objects reveal the following tendency in the epistemology of mathematics: mathematics is seen as suffering from a lack of ontological “determinateness”, namely that this science (contrary to many others) does not concern material data, so that the concept of material truth is not available (especially in the case of the infinite). This tendency is embarrassing since, on the other hand, mathematical cognition is very often presented as cognition of the “greatest possible certainty” precisely because it seems not to be bound to material evidence, let alone experimental check.

The technical apparatus developed by the reductionist, set-theoretical approach nowadays serves other purposes, partly because tacit beliefs about sets were challenged; the explanations of the science which it provides are considered irrelevant by the practitioners of this science. There is doubt that the above-mentioned sufficient condition is also necessary; it is not even accepted throughout as a sufficient one. But what happens if some objects, as in the case of category theory, do not fulfill the condition? It seems that the reductionist approach has, so to say, been undocked from the historical development of the discipline in several respects; an alternative is required.

Anterior to Peirce, epistemology was dominated by the idea of a grasp of objects; since Descartes, intuition had been conceived throughout as a particular, innate capacity of cognition (even if idealists thought that it concerns the general, and empiricists that it concerns the particular). The task of this particular capacity was the foundation of epistemology; already with Aristotle’s first premisses of the syllogism, the aim was to go back to something first. In this traditional approach, it is by the ontology of the objects that one hopes to answer the fundamental question concerning the conditions for the possibility of the cognition of these objects. One hopes that there are simple “basic objects” to which the more complex objects can be reduced and whose cognition is possible by common sense – be this an innate or otherwise distinguished capacity of cognition common to all human beings. Here, epistemology is “wrapped up” in (or rests on) ontology; to do epistemology, one has to do ontology first.

Peirce shares Kant’s opinion according to which the object depends on the subject; however, he does not agree that reason is the crucial means of cognition to be criticised. In his paper “Questions concerning certain faculties claimed for man”, he points out the basic assumption of pragmatist philosophy: every cognition is semiotically mediated. He says that there is no immediate cognition (a cognition which “refers immediately to its object”), but that every cognition “has been determined by a previous cognition” of the same object. Correspondingly, Peirce replaces the critique of reason by a critique of signs. He thinks that Kant’s distinction between the world of things per se (Dinge an sich) and the world of apparition (Erscheinungswelt) is not fruitful; he rather distinguishes the world of the subject and the world of the object, connected by signs; his position consequently is a “hypothetical realism” in which all cognitions are valid only with reservations. This position neither negates nor asserts that the object per se (with the semiotical mediation stripped off) exists, since such assertions of “pure” existence are seen as necessarily hypothetical (that is, as not withstanding philosophical criticism).

By his basic assumption, Peirce was led to reveal a problem concerning the subject matter of epistemology, since this assumption means in particular that there is no intuitive cognition in the classical sense (synonymous with “immediate”). Hence, one could no longer consider cognitions as objects; there is no intuitive cognition of an intuitive cognition. Intuition can be no more than a relation. “All the cognitive faculties we know of are relative, and consequently their products are relations”. According to this new point of view, intuition can no longer serve to found epistemology, in departure from the former reductionist attitude. A central argument of Peirce against reductionism, or, as he puts it,

the reply to the argument that there must be a first is as follows: In retracing our way from our conclusions to premisses, or from determined cognitions to those which determine them, we finally reach, in all cases, a point beyond which the consciousness in the determined cognition is more lively than in the cognition which determines it.

Peirce gives some examples derived from physiological observations about perception, like the fact that the third dimension of space is inferred, and the blind spot of the retina. In this situation, the process of reduction loses its legitimacy since it no longer fulfills the function of justifying cognition. At such a place, something happens which I would like to call an “exchange of levels”: the process of reduction is interrupted in that the things exchange the roles performed in the determination of a cognition: what was originally considered as determining is now determined by what was originally considered as asking for determination.

The idea that contents of cognition are necessarily provisional has an effect on the very concept of conditions for the possibility of cognitions. It seems one can infer from Peirce’s words that what vouches for a cognition is not necessarily the cognition which determines it but the liveliness of our consciousness in the cognition. Here, “to vouch for a cognition” no longer means what it meant before (which was much the same as “to determine a cognition”), but it still means that the cognition is (provisionally) reliable. This conception of the liveliness of our consciousness might roughly be seen as a substitute for the capacity of intuition in Peirce’s epistemology – but only roughly, since it has a different coverage.

Tarski, Wittgenstein and Undecidable Sentences in Affine Relation to Gödel’s. Thought of the Day 65.0


I imagine someone asking my advice; he says: “I have constructed a proposition (I will use ‘P’ to designate it) in Russell’s symbolism, and by means of certain definitions and transformations it can be so interpreted that it says: ‘P is not provable in Russell’s system.’ Must I not say that this proposition on the one hand is true, and on the other hand is unprovable? For suppose it were false; then it is true that it is provable. And that surely cannot be! And if it is proved, then it is proved that it is not provable. Thus it can only be true, but unprovable.” — Wittgenstein

Any language of such a sort, say Peano Arithmetic (PA) (or Russell and Whitehead’s Principia Mathematica, or ZFC), expresses – in a finite, unambiguous, and communicable manner – relations between concepts that are external to the language PA (or to Principia, or to ZFC). Each such language is, thus, essentially two-valued, since a relation either holds or does not hold externally (relative to the language).

Further, a selected, finite number of primitive formal assertions about a finite set of selected primitive relations of, say, PA are defined as axiomatically PA-provable; all other assertions about relations that can be effectively defined in terms of the primitive relations are termed PA-provable if, and only if, there is a finite sequence of assertions of PA, each of which is either a primitive assertion or can effectively be determined, in a finite number of steps, to be an immediate consequence of two assertions preceding it in the sequence by a finite set of rules of consequence.
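
To fix intuitions, here is a minimal sketch of this notion of proof-as-finite-sequence in Python. It is my own illustration, not any actual proof calculus: is_proof, the string-based modus_ponens rule, and the toy axioms are all invented for the example.

# A "proof" is a finite sequence of assertions, each of which is either
# an axiom or an immediate consequence (by some rule) of assertions
# occurring earlier in the sequence.
def is_proof(sequence, axioms, rules):
    for i, assertion in enumerate(sequence):
        if assertion in axioms:
            continue
        # Is this assertion derivable by some rule from two earlier lines?
        derivable = any(
            rule(a, b) == assertion
            for rule in rules
            for a in sequence[:i]
            for b in sequence[:i]
        )
        if not derivable:
            return False
    return True

# Toy modus-ponens-like rule over strings: from p and "p -> r", infer r.
def modus_ponens(p, q):
    prefix = p + " -> "
    return q[len(prefix):] if q.startswith(prefix) else None

axioms = {"A", "A -> B"}
print(is_proof(["A", "A -> B", "B"], axioms, [modus_ponens]))  # True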

The philosophical dimensions of this emerge if we take M as the standard arithmetical interpretation of PA, where (a toy evaluator sketched after the list illustrates these clauses):

(a)  the set of non-negative integers is the domain,

(b)  the integer 0 is the interpretation of the symbol “0” of PA,

(c)  the successor operation (addition of 1) is the interpretation of the “′” function,

(d)  ordinary addition and multiplication are the interpretations of “+” and “.”,

(e) the interpretation of the predicate letter “=” is the equality relation.
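
Read together, clauses (a)–(e) specify a recursive evaluation of PA terms over the non-negative integers. The following toy evaluator is my own sketch of that idea; the tuple encoding and the tags "zero", "succ", "plus", "times" are invented for the illustration.

# Evaluate a PA term in the standard interpretation M:
# "0" denotes the integer 0, the successor symbol denotes +1,
# "+" and "." denote ordinary addition and multiplication.
def evaluate(term):
    tag = term[0]
    if tag == "zero":
        return 0
    if tag == "succ":
        return evaluate(term[1]) + 1
    if tag == "plus":
        return evaluate(term[1]) + evaluate(term[2])
    if tag == "times":
        return evaluate(term[1]) * evaluate(term[2])
    raise ValueError("unknown term tag: %r" % (tag,))

two = ("succ", ("succ", ("zero",)))     # the numeral 0''
three = ("succ", two)                   # the numeral 0'''
print(evaluate(("plus", two, three)))   # 5
print(evaluate(("times", two, three)))  # 6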

Now, post-Gödel, the standard interpretation of classical theory seems to be that:

(f) PA can, indeed, be interpreted in M;

(g) assertions in M are decidable – that is, determinately true or false – by Tarski’s definitions of satisfiability and truth;

(h) Tarskian truth and satisfiability are, however, not effectively verifiable in M.

Tarski made clear his indebtedness to Gödel’s methods:

We owe the method used here to Gödel, who employed it for other purposes in his recently published work (Gödel 1931). This exceedingly important and interesting article is not directly connected with the theme of our work – it deals with strictly methodological problems, the consistency and completeness of deductive systems – nevertheless we shall be able to use the methods and in part also the results of Gödel’s investigations for our purpose.

On the other hand, Tarski strongly emphasized that his results were obtained independently, even though Tarski’s theorem on the undefinability of truth implies the existence of undecidable sentences, and hence Gödel’s first incompleteness theorem. Shifting gears: how close, really, is the Wittgenstein quote to Gödel’s reasoning? The question implicit in Wittgenstein’s argument, regarding the possibility of a semantic contradiction in Gödel’s reasoning, is this: how can we assert that a PA-assertion (whether PA-provable or not) is true under interpretation in M, so long as such truth remains effectively unverifiable in M? Since the issue is not resolved unambiguously by Gödel in his paper (nor, apparently, by subsequent standard interpretations of his formal reasoning and conclusions), Wittgenstein’s quote can be taken to argue that, although we may validly draw various conclusions from Gödel’s formal reasoning, the existence of a true or false assertion of M cannot be amongst them.

Conjuncted: Occam’s Razor and Nomological Hypothesis. Thought of the Day 51.1.1

A temporally evolving system must possess a sufficiently rich set of symmetries to allow us to infer general laws from a finite set of empirical observations. But what justifies this hypothesis?

This question is central to the entire scientific enterprise. Why are we justified in assuming that scientific laws are the same in different spatial locations, or that they will be the same from one day to the next? Why should replicability of other scientists’ experimental results be considered the norm, rather than a miraculous exception? Why is it normally safe to assume that the outcomes of experiments will be insensitive to irrelevant details? Why, for that matter, are we justified in the inductive generalizations that are ubiquitous in everyday reasoning?

In effect, we are assuming that the scientific phenomena under investigation are invariant under certain symmetries – both temporal and spatial, including translations, rotations, and so on. But where do we get this assumption from? The answer lies in the principle of Occam’s Razor.

Roughly speaking, this principle says that, if two theories are equally consistent with the empirical data, we should prefer the simpler theory:

Occam’s Razor: Given any body of empirical evidence about a temporally evolving system, always assume that the system has the largest possible set of symmetries consistent with that evidence.

Making this more precise, we begin by explaining what it means for a particular symmetry to be “consistent” with a body of empirical evidence. Formally, our total body of evidence can be represented as a subset E of H: namely, the set of all logically possible histories that are not ruled out by that evidence. Note that we cannot assume that our evidence is a subset of Ω; when we scientifically investigate a system, we do not normally know what Ω is. Hence we can only assume that E is a subset of the larger set H of logically possible histories.

Now let ψ be a transformation of H, and suppose that we are testing the hypothesis that ψ is a symmetry of the system. For any positive integer n, let ψn be the transformation obtained by applying ψ repeatedly, n times in a row. For example, if ψ is a rotation about some axis by angle θ, then ψn is the rotation by the angle nθ. For any such transformation ψn, we write ψ–n(E) to denote the inverse image in H of E under ψn. We say that the transformation ψ is consistent with the evidence E if the intersection

E ∩ ψ–1(E) ∩ ψ–2(E) ∩ ψ–3(E) ∩ …

is non-empty. This means that the available evidence (i.e., E) does not falsify the hypothesis that ψ is a symmetry of the system.
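
On a finite toy model this test is effectively computable. The following Python sketch is my own construction – the finite history space H, the dictionary encoding of ψ, and the function name consistent are illustrative assumptions – and it decides non-emptiness of the infinite intersection by exploiting the fact that, on a finite H, every orbit of ψ is eventually periodic:

def consistent(psi, E, H):
    # Is the infinite intersection E ∩ psi^(-1)(E) ∩ psi^(-2)(E) ∩ ...
    # non-empty?  Equivalently: does some history h satisfy
    # psi^k(h) ∈ E for every k ≥ 0?  On a finite H, every orbit is
    # eventually periodic, so len(H) iterations suffice.
    for h in E:
        x, ok = h, True
        for _ in range(len(H)):
            x = psi[x]
            if x not in E:
                ok = False
                break
        if ok:
            return True
    return False

# Toy usage: four histories, psi a "rotation" of the history space.
H = {0, 1, 2, 3}
psi = {h: (h + 1) % 4 for h in H}
print(consistent(psi, {0, 1, 2}, H))  # False: every orbit eventually leaves E
print(consistent(psi, H, H))          # True: all of H is psi-invariant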

For example, suppose we are interested in whether cosmic microwave background radiation is isotropic, i.e., the same in every direction. Suppose we measure a background radiation level of x1 when we point the telescope in direction d1, and a radiation level of x2 when we point it in direction d2. Call these events E1 and E2. Thus, our experimental evidence is summarized by the event E = E1 ∩ E2. Let ψ be a spatial rotation that rotates d1 to d2. Then, focusing for simplicity just on the first two terms of the infinite intersection above,

E ∩ ψ–1(E) = E1 ∩ E2 ∩ ψ–1(E1) ∩ ψ–1(E2).

If x1 = x2, we have E1 = ψ–1(E2), and the expression for E ∩ ψ–1(E) simplifies to E1 ∩ E2 ∩ ψ–1(E1), which has at least a chance of being non-empty, meaning that the evidence has not (yet) falsified isotropy. But if x1 ≠ x2, then E1 and ψ–1(E2) are disjoint. In that case, the intersection E ∩ ψ–1(E) is empty, and the evidence is inconsistent with isotropy. As it happens, we know from recent astronomy that x1 ≠ x2 in some cases, so cosmic microwave background radiation is not isotropic, and ψ is not a symmetry.
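
Continuing the sketch above (with made-up radiation levels 0 and 1), the isotropy test becomes the same emptiness check: encode a history as the pair of levels observed in directions d1 and d2, and let ψ rotate d1 into d2, which for two directions is just a swap:

# Histories record the radiation levels seen in directions d1 and d2;
# psi2 "rotates" d1 into d2 (for two directions, a swap of coordinates).
H2 = {(a, b) for a in (0, 1) for b in (0, 1)}
psi2 = {h: (h[1], h[0]) for h in H2}
print(consistent(psi2, {(0, 1)}, H2))  # False: x1 ≠ x2 falsifies isotropy
print(consistent(psi2, {(0, 0)}, H2))  # True: x1 = x2 is consistent with it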

Our version of Occam’s Razor now says that we should postulate as symmetries of our system a maximal monoid of transformations consistent with our evidence. Formally, a monoid Ψ of transformations (where each ψ in Ψ is a function from H into itself) is consistent with evidence E if the intersection

⋂ψ∈Ψ ψ–1(E)

is non-empty. This is the generalization of the infinite intersection that appeared in our definition of an individual transformation’s consistency with the evidence. Further, a monoid Ψ that is consistent with E is maximal if no proper superset of Ψ forms a monoid that is also consistent with E.

Occam’s Razor (formal): Given any body E of empirical evidence about a temporally evolving system, always assume that the set of symmetries of the system is a maximal monoid Ψ consistent with E.

What is the significance of this principle? We define Γ to be the set of all symmetries of our temporally evolving system. In practice, we do not know Γ. A monoid Ψ that passes the test of Occam’s Razor, however, can be viewed as our best guess as to what Γ is.

Furthermore, if Ψ is this monoid, and E is our body of evidence, the intersection

⋂ψ∈Ψ ψ–1(E)

can be viewed as our best guess as to what the set of nomologically possible histories is. It consists of all those histories among the logically possible ones that are not ruled out by the postulated symmetry monoid Ψ and the observed evidence E. We thus call this intersection our nomological hypothesis and label it Ω(Ψ,E).
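
In the toy setting above, the nomological hypothesis is a one-line computation (again my own sketch; nomological_hypothesis is an invented name):

# Omega(Psi, E): the histories not ruled out by the postulated
# symmetry monoid Psi together with the evidence E.
def nomological_hypothesis(Psi, E, H):
    return {h for h in H if all(psi[h] in E for psi in Psi)}

identity, const0 = {0: 0, 1: 1}, {0: 0, 1: 0}
print(nomological_hypothesis([identity, const0], {0}, (0, 1)))  # {0}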

To see that this construction is not completely far-fetched, note that, under certain conditions, our nomological hypothesis does indeed reflect the truth about nomological possibility. If the hypothesized symmetry monoid Ψ is a subset of the true symmetry monoid Γ of our temporally evolving system – i.e., we have postulated some of the right symmetries – then the true set Ω of all nomologically possible histories will be a subset of Ω(Ψ,E). So, our nomological hypothesis will be consistent with the truth and will, at most, be logically weaker than the truth.

Given the hypothesized symmetry monoid Ψ, we can then assume provisionally (i) that any empirical observation we make, corresponding to some event D, can be generalized to a Ψ-invariant law and (ii) that unconditional and conditional probabilities can be estimated from empirical frequency data using a suitable version of the Ergodic Theorem.