Individuation. Thought of the Day 91.0


The first distinction is between two senses of the word “individuation” – one semantic, the other metaphysical. In the semantic sense of the word, to individuate an object is to single it out for reference in language or in thought. By contrast, in the metaphysical sense of the word, the individuation of objects has to do with “what grounds their identity and distinctness.” Sets are often used to illustrate the intended notion of “grounding.” The identity or distinctness of sets is said to be “grounded” in accordance with the principle of extensionality, which says that two sets are identical iff they have precisely the same elements:

SET(x) ∧ SET(y) → [x = y ↔ ∀u(u ∈ x ↔ u ∈ y)]
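As an illustrative sketch (not from the text), Python's frozenset behaves extensionally in exactly this sense: identity and distinctness of such sets are settled by their elements alone.

```python
# Extensionality in miniature: frozensets compare by elements alone.
a = frozenset([1, 2, 3])
b = frozenset([3, 2, 1, 2])  # order and repetition are irrelevant
assert a == b                # identical: precisely the same elements
c = frozenset([1, 2])
assert a != c                # distinct: their elements differ
```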

The metaphysical and semantic senses of individuation are quite different notions, neither of which appears to be reducible to or fully explicable in terms of the other. Since sufficient sense cannot be made of the notion of “grounding of identity” on which the metaphysical notion of individuation is based, focusing on the semantic notion of individuation is an easy way out. This choice of focus means that our investigation is a broadly empirical one, drawing on empirical linguistics and psychology.

What is the relation between the semantic notion of individuation and the notion of a criterion of identity? It is by means of criteria of identity that semantic individuation is effected. Singling out an object for reference involves being able to distinguish this object from other possible referents with which one is directly presented. The final distinction is between two types of criteria of identity. A one-level criterion of identity says that two objects of some sort F are identical iff they stand in some relation R_F:

Fx ∧ Fy → [x = y ↔ R_F(x,y)]

Criteria of this form operate at just one level in the sense that the condition for two objects to be identical is given by a relation on these objects themselves. An example is the set-theoretic principle of extensionality.

A two-level criterion of identity relates the identity of objects of one sort to some condition on entities of another sort. The former sort of objects are typically given as functions of items of the latter sort, in which case the criterion takes the following form:

f(α) = f(β) ↔ α ≈ β

where the variables α and β range over the latter sort of item and ≈ is an equivalence relation on such items. An example is Frege’s famous criterion of identity for directions:

d(l1) = d(l2) ↔ l1 || l2

where the variables l1 and l2 range over lines or other directed items. An analogous two-level criterion relates the identity of geometrical shapes to the congruence of things or figures having the shapes in question. The decision to focus on the semantic notion of individuation makes it natural to focus on two-level criteria. For two-level criteria of identity are much more useful than one-level criteria when we are studying how objects are singled out for reference. A one-level criterion provides little assistance in the task of singling out objects for reference. In order to apply a one-level criterion, one must already be capable of referring to objects of the sort in question. By contrast, a two-level criterion promises a way of singling out an object of one sort in terms of an item of another and less problematic sort. For instance, when Frege investigated how directions and other abstract objects “are given to us”, although “we cannot have any ideas or intuitions of them”, he proposed that we relate the identity of two directions to the parallelism of the two lines in terms of which these directions are presented. This would be explanatory progress since reference to lines is less puzzling than reference to directions.
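A minimal sketch, not from the text, of how such a two-level criterion works in practice. Lines are encoded here as direction vectors (a hypothetical representation chosen for illustration), and the helper d maps each line to a canonical representative of its parallelism class, so that d(l1) = d(l2) holds exactly when l1 ∥ l2.

```python
from fractions import Fraction

# A line is given by a direction vector (dx, dy); two lines are parallel
# iff their direction vectors are scalar multiples of each other.
def parallel(l1, l2):
    (dx1, dy1), (dx2, dy2) = l1, l2
    return dx1 * dy2 == dx2 * dy1  # the cross product vanishes

# d(l): the "direction" of a line, as a canonical representative of its
# parallelism class (here: the slope, with None for vertical lines).
def d(line):
    dx, dy = line
    return None if dx == 0 else Fraction(dy, dx)

l1, l2, l3 = (1, 2), (2, 4), (1, 3)
# Frege's criterion: d(l1) = d(l2) iff l1 is parallel to l2.
assert (d(l1) == d(l2)) == parallel(l1, l2)  # parallel lines, same direction
assert (d(l1) == d(l3)) == parallel(l1, l3)  # both sides fail together
```

The design point mirrors the text: equality of directions (the problematic sort) is settled entirely by an equivalence relation on lines (the less problematic sort).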

Carnap, c-notions. Thought of the Day 87.0


A central distinction for Carnap is that between definite and indefinite notions. A definite notion is one that is recursive, such as “is a formula” and “is a proof of φ”. An indefinite notion is one that is non-recursive, such as “is an ω-consequence of PA” and “is true in Vω+ω”. This leads to a distinction between (i) the method of derivation (or d-method), which investigates the semi-definite (recursively enumerable) metamathematical notions, such as demonstrable, derivable, refutable, resoluble, and irresoluble, and (ii) the method of consequence (or c-method), which investigates the (typically) non-recursively enumerable metamathematical notions such as consequence, analytic, contradictory, determinate, and synthetic.

A language for Carnap is what we would today call a formal axiom system. The rules of the formal system are definite (recursive) and Carnap is fully aware that a given language cannot include its own c-notions. The logical syntax of a language is what we would today call metatheory. It is here that one formalizes the c-notions for the (object) language. From among the various c-notions Carnap singles out one as central, namely, the notion of (direct) consequence; from this c-notion all of the other c-notions can be defined in routine fashion.

We now turn to Carnap’s account of his fundamental notions, most notably, the analytic/synthetic distinction and the division of primitive terms into ‘logico-mathematical’ and ‘descriptive’. Carnap actually has two approaches. The first approach occurs in his discussion of specific languages – Languages I and II. Here he starts with a division of primitive terms into ‘logico-mathematical’ and ‘descriptive’ and upon this basis defines the c-notions, in particular the notions of being analytic and synthetic. The second approach occurs in the discussion of general syntax. Here Carnap reverses procedure: he starts with a specific c-notion – namely, the notion of direct consequence – and he uses it to define the other c-notions and draw the division of primitive terms into ‘logico-mathematical’ and ‘descriptive’.

In the first approach Carnap introduces two languages – Language I and Language II. The background languages (in the modern sense) of Language I and Language II are quite general – they include expressions that we would call ‘descriptive’. Carnap starts with a demarcation of primitive terms into ‘logico-mathematical’ and ‘descriptive’. The expressions he classifies as ‘logico-mathematical’ are exactly those included in the modern versions of these systems; the remaining expressions are classified as ‘descriptive’. Language I is a version of Primitive Recursive Arithmetic and Language II is a version of finite type theory built over Peano Arithmetic. The d-notions for these languages are the standard proof-theoretic ones.

For Language I Carnap starts with a consequence relation based on two rules – (i) the rule that allows one to infer φ if T ⊢ φ (where T is some fixed Σ⁰₁-complete formal system) and (ii) the ω-rule. It is then easily seen that one has a complete theory for the logico-mathematical fragment, that is, for any logico-mathematical sentence φ, either φ or ¬φ is a consequence of the null set. The other c-notions are then defined in the standard fashion. For example, a sentence is analytic if it is a consequence of the null set; contradictory if its negation is analytic; and so on.

For Language II Carnap starts by defining analyticity. His definition is a notational variant of the Tarskian truth definition with one important difference – namely, it involves an asymmetric treatment of the logico-mathematical and descriptive expressions. For the logico-mathematical expressions his definition really just is a notational variant of the Tarskian truth definition. But descriptive expressions must pass a more stringent test to count as analytic – they must be such that if one replaces all descriptive expressions in them by variables of the appropriate type, then the resulting logico-mathematical expression is analytic, that is, true. In other words, to count as analytic a descriptive expression must be a substitution-instance of a general logico-mathematical truth. With this definition in place the other c-notions are defined in the standard fashion.

The content of a sentence is defined to be the set of its non-analytic consequences. It then follows immediately from the definitions that logico-mathematical sentences (of both Language I and Language II) are analytic or contradictory and (assuming consistency) that analytic sentences are without content.

In the second approach, for a given language, Carnap starts with an arbitrary notion of direct consequence and from this notion he defines the other c-notions in the standard fashion. More importantly, in addition to defining the other c-notions, Carnap also uses the primitive notion of direct consequence (along with the derived c-notions) to effect the classification of terms into ‘logico-mathematical’ and ‘descriptive’. The guiding idea is that “the formally expressible distinguishing peculiarity of logical symbols and expressions [consists] in the fact that each sentence constructed solely from them is determinate”. However the guiding idea is implemented, the actual division between ‘logico-mathematical’ and ‘descriptive’ expressions that one obtains as output is sensitive to the scope of the direct consequence relation with which one starts.

With this basic division in place, Carnap can now draw various derivative divisions, most notably, the division between analytic and synthetic statements: Suppose φ is a consequence of Γ. Then φ is said to be an L-consequence of Γ if either (i) φ and the sentences in Γ are logico-mathematical, or (ii) letting φ′ and Γ′ be the result of unpacking all descriptive symbols, for every result φ″ and Γ″ of replacing every (primitive) descriptive symbol in φ′ and Γ′ by an expression of the same genus, maintaining equal expressions for equal symbols, we have that φ″ is a consequence of Γ″. Otherwise φ is a P-consequence of Γ. This division of the notion of consequence into L-consequence and P-consequence induces a division of the notion of demonstrable into L-demonstrable and P-demonstrable, and of the notion of valid into L-valid and P-valid, and likewise for all of the other d-notions and c-notions. The terms ‘analytic’, ‘contradictory’, and ‘synthetic’ are used for ‘L-valid’, ‘L-contravalid’, and ‘L-indeterminate’.

It follows immediately from the definitions that logico-mathematical sentences are analytic or contradictory and that analytic sentences are without content. The trouble with the first approach is that the definitions of analyticity that Carnap gives for Languages I and II are highly sensitive to the original classification of terms into ‘logico-mathematical’ and ‘descriptive’. And the trouble with the second approach is that the division between ‘logico-mathematical’ and ‘descriptive’ expressions (and hence division between ‘analytic’ and ‘synthetic’ truths) is sensitive to the scope of the direct consequence relation with which one starts. This threatens to undermine Carnap’s thesis that logico-mathematical truths are analytic and hence without content. 

In the first approach, the original division of terms into ‘logico-mathematical’ and ‘descriptive’ is made by stipulation and if one alters this division one thereby alters the derivative division between analytic and synthetic sentences. For example, consider the case of Language II. If one calls only the primitive terms of first-order logic ‘logico-mathematical’ and then extends the language by adding the machinery of arithmetic and set theory, then, upon running the definition of ‘analytic’, one will have the result that true statements of first-order logic are without content while (the distinctive) statements of arithmetic and set theory have content. For another example, if one takes the language of arithmetic, calls the primitive terms ‘logico-mathematical’ and then extends the language by adding the machinery of finite type theory, calling the basic terms ‘descriptive’, then, upon running the definition of ‘analytic’, the result will be that statements of first-order arithmetic are analytic or contradictory while (the distinctive) statements of second- and higher-order arithmetic are synthetic and hence have content. In general, by altering the input, one alters the output, and Carnap adjusts the input to achieve his desired output.

In the second approach, there are no constraints on the scope of the direct consequence relation with which one starts and if one alters it one thereby alters the derivative division between ‘logico-mathematical’ and ‘descriptive’ expressions. Logical symbols and expressions have the feature that sentences composed solely of them are determinate. The trouble is that the resulting division of terms into ‘logico-mathematical’ and ‘descriptive’ will be highly sensitive to the scope of the direct consequence relation with which one starts. For example, let S be first-order PA and for the direct consequence relation take “provable in PA”. Under this assignment Fermat’s Last Theorem will be deemed descriptive, synthetic, and to have non-trivial content. For an example at the other extreme, let S be an extension of PA that contains a physical theory and let the notion of direct consequence be given by a Tarskian truth definition for the language. Since in the metalanguage one can prove that every sentence is true or false, every sentence will be either analytic (and so have null content) or contradictory (and so have total content). To overcome such counter-examples and get the classification that Carnap desires one must ensure that the consequence relation is (i) complete for the sublanguage consisting of expressions that one wants to come out as ‘logico-mathematical’ and (ii) not complete for the sublanguage consisting of expressions that one wants to come out as ‘descriptive’. Once again, by altering the input, one alters the output.

Carnap merely provides us with a flexible piece of technical machinery involving free parameters that can be adjusted to yield a variety of outcomes concerning the classifications of analytic/synthetic, contentful/non-contentful, and logico-mathematical/descriptive. In his own case, he has adjusted the parameters in such a way that the output is a formal articulation of his logicist view of mathematics that the truths of mathematics are analytic and without content. And one can adjust them differently to articulate a number of other views, for example, the view that the truths of first-order logic are without content while the truths of arithmetic and set theory have content. The point, however, is that we have been given no reason for fixing the parameters one way rather than another. The distinctions are thus not principled distinctions. It is trivial to prove that mathematics is trivial if one trivializes the claim.

Carnap is perfectly aware that to define c-notions like analyticity one must ascend to a stronger metalanguage. However, there is a distinction that he appears to overlook, namely, the distinction between (i) having a stronger system S that can define ‘analytic in S’ and (ii) having a stronger system S that can, in addition, evaluate a given statement of the form ‘φ is analytic in S’. It is an elementary fact that two systems S1 and S2 can employ the same definition (from an intensional point of view) of ‘analytic in S’ (using either the definition given for Language I or Language II) but differ on their evaluation of ‘φ is analytic in S’ (that is, differ on the extension of ‘analytic in S’). Thus, to determine whether ‘φ is analytic in S’ holds one needs to access much more than the “syntactic design” of φ – in addition to ascending to an essentially richer metalanguage one must move to a sufficiently strong system to evaluate ‘φ is analytic in S’.

In fact, to answer ‘Is φ analytic in Language I?’ is just to answer the question φ itself; and, in the more general setting, to answer all questions of the form ‘Is φ analytic in S?’ (for various mathematical φ and S), where ‘analytic’ is defined as Carnap defines it for Language II, is just to answer all questions of mathematics. The same, of course, applies to the c-notion of consequence. So, when in first stating the Principle of Tolerance Carnap tells us that we can choose our system S arbitrarily and that ‘no question of justification arises at all, but only the question of the syntactical consequences to which one or other of the choices leads’, it is the c-notion of consequence that he means.



During his attempt to axiomatize the category of all categories, Lawvere says:

Our intuition tells us that whenever two categories exist in our world, then so does the corresponding category of all natural transformations between the functors from the first category to the second (The Category of Categories as a Foundation).

However, if one tries to reduce categorial constructions to set theory, one faces some serious problems in the case of a category of functors. Lawvere (who, given his aim of axiomatization, is not concerned with such a reduction) relies here on “intuition” to stress that those who work with categorial concepts have, despite these problems, the feeling that the envisaged construction is clear, meaningful and legitimate. It is not reducibility to set theory but an “intuition” yet to be specified that vouches for the clarity, meaningfulness and legitimacy of a construction emerging in a mathematical working situation. In particular, Lawvere relies on a collective intuition, a common sense – for he explicitly says “our intuition”. Further, one obviously has to deal here with common sense on a technical level, for the “we” can only extend to a community used to working with the concepts concerned.

In the tradition of philosophy, “intuition” means immediate, i.e., not conceptually mediated, cognition. The use of the term in the context of validity (immediate insight into the truth of a proposition) is to be thoroughly distinguished from its use in the sensual context (the German Anschauung). Language, too, is a manner of representation, but contrary to language, in the context of images the concept of validity is meaningless.

Obviously, the cognition-guiding aspect of intuition is touched on here. The sensual intuition, especially, can take on this guiding (or heuristic) function. There have been many working situations in the history of mathematics in which making the objects of investigation accessible to a sensual intuition (by providing a Veranschaulichung) yielded considerable progress in the development of the knowledge concerning these objects. As an example, take the following account by Emil Artin of Emmy Noether’s contribution to the theory of algebras:

Emmy Noether introduced the concept of representation space – a vector space upon which the elements of the algebra operate as linear transformations, the composition of the linear transformation reflecting the multiplication in the algebra. By doing so she enables us to use our geometric intuition.

Similarly, Fréchet believed he had really “powered” research in the theory of functions and functionals by introducing a “geometrical” terminology:

One can […] consider the numbers of the sequence [of coefficients of a Taylor series] as coordinates of a point in a space […] of infinitely many dimensions. There are several advantages to proceeding thus, for instance the advantage which is always present when geometrical language is employed, since this language is so appropriate to intuition due to the analogies it gives birth to.

Mathematical terminology often stems from a current language usage whose (intuitive, sensual) connotation is welcomed and serves to give the user an “intuition” of what is intended. While Category Theory is often classified as a highly abstract matter quite remote from intuition, in reality it yields, together with its applications, a multitude of examples for the role of current language in mathematical conceptualization.

This notwithstanding, there is naturally also a tendency in contemporary mathematics to eliminate as much as possible commitments to (sensual) intuition in the erection of a theory. It seems that algebraic geometry fulfills only in the language of schemes that essential requirement of all contemporary mathematics: to state its definitions and theorems in their natural abstract and formal setting in which they can be considered independent of geometric intuition (Mumford D., Fogarty J. Geometric Invariant Theory).

In the pragmatist approach, intuition is seen as a relation. This means: one uses a piece of language in an intuitive manner (or not); intuitive use depends on the situation of utterance, and it can be learned and transformed. The reason for this relational point of view consists in the pragmatist conviction that each cognition of an object depends on the means of cognition employed – this means that for pragmatism there is no intuitive (in the sense of “immediate”) cognition; the term “intuitive” has to be given a new meaning.

What does it mean to use something intuitively? Heinzmann makes the following proposal: one uses language intuitively if one does not even have the idea to question its validity. Hence, the term intuition in the Heinzmannian reading of pragmatism takes on a different meaning: it no longer signifies an immediate grasp. However, it is yet to be explained what it means for objects in general (and not only for propositions) to “question the validity of a use”. One uses an object intuitively if one is not concerned with how the rules of constitution of the object have been arrived at, if one does not focus on the materialization of these rules but only on the benefits of an application of the object in the present context. “In principle”, the cognition of an object is determined by another cognition, and this determination finds its expression in the “rules of constitution”; one uses the object intuitively (one does not bother about the determination of its cognition) if one does not question the rules of constitution (does not focus on the cognition which determines it). This is precisely what one does when using an object as a tool – for in doing so, one does not (yet) ask which cognition determines the object. When something is used as a tool, this constitutes an intuitive use, whereas the use of something as an object does not (this defines tool and object). Each concept can in principle play both roles; of two concepts, one may happen to be used intuitively before the progress of insight and the other after it. Note that with respect to a given cognition, Peirce, when saying “the cognition which determines it”, always thinks of a previous cognition, because he thinks of a determination of a cognition in our thought by previous thoughts.
In the conceptual history of mathematics, however, an object was most often introduced first as a tool, and only afterwards did it come to one’s mind to ask for “the cognition which determines the cognition of this object” (that is, to ask how the use of this object can be legitimized).

The idea that it could depend on the situation whether validity is questioned or not has formerly been overlooked, perhaps because one always looked for a reductionist epistemology in which the capacity called intuition is used exclusively at the last level of regression; in a pragmatist epistemology, to the contrary, intuition is used at every level, in the form of the not-yet-thematized tools. In classical systems, intuition was not simply conceived as a capacity; it was actually conceived as a capacity common to all human beings. “But the power of intuitively distinguishing intuitions from other cognitions has not prevented men from disputing very warmly as to which cognitions are intuitive”. Moreover, Peirce strongly criticises Cartesian individualism (which has it that the individual has the capacity to find the truth). We could sum up this philosophy thus: we cannot reach definitive truth, only provisional truth; significant progress is made not individually but only collectively; one cannot pretend that the history of thought did not take place and start from scratch, for every cognition is determined by a previous cognition (maybe that of other individuals); one cannot uncover the ultimate foundation of our cognitions; rather, the fact that we sometimes reach a new level of insight, “deeper” than those thought of as fundamental before, merely indicates that there is no “deepest” level. The feeling that something is “intuitive” indicates a prejudice which can be philosophically criticised (even if this does not occur to us at the beginning).

In our approach, intuitive use is collectively determined: it depends on the particular usage of the community of users whether validity criteria are or are not questioned in a given situation of language use. However, it is acknowledged that for example scientific communities develop usages making them communities of language users on their own. Hence, situations of language use are not only partitioned into those where it comes to the users’ mind to question validity criteria and those where it does not, but moreover this partition is specific to a particular community (actually, the community of language users is established partly through a peculiar partition; this is a definition of the term “community of language users”). The existence of different communities with different common senses can lead to the following situation: something is used intuitively by one group, not intuitively by another. In this case, discussions inside the discipline occur; one has to cope with competing common senses (which are therefore not really “common”). This constitutes a task for the historian.

Reductionism of Numerical Complexity: A Wittgensteinian Excursion


Wittgenstein’s criticism of Russell’s logicist foundation of mathematics, contained in his Remarks on the Foundations of Mathematics, consists in saying that it is not the formalized version of mathematical deduction which vouches for the validity of the intuitive version, but conversely.

If someone tries to shew that mathematics is not logic, what is he trying to shew? He is surely trying to say something like: If tables, chairs, cupboards, etc. are swathed in enough paper, certainly they will look spherical in the end.

He is not trying to shew that it is impossible that, for every mathematical proof, a Russellian proof can be constructed which (somehow) ‘corresponds’ to it, but rather that the acceptance of such a correspondence does not lean on logic.

Taking up Wittgenstein’s criticism, Hao Wang (Computation, Logic, Philosophy) discusses the view that mathematics “is” axiomatic set theory as one of several possible answers to the question “What is mathematics?”. Wang points out that this view is epistemologically worthless, at least as far as the task of understanding the cognition-guiding feature is concerned:

Mathematics is axiomatic set theory. In a definite sense, all mathematics can be derived from axiomatic set theory. [ . . . ] There are several objections to this identification. [ . . . ] This view leaves unexplained why, of all the possible consequences of set theory, we select only those which happen to be our mathematics today, and why certain mathematical concepts are more interesting than others. It does not help to give us an intuitive grasp of mathematics such as that possessed by a powerful mathematician. By burying, e.g., the individuality of natural numbers, it seeks to explain the more basic and the clearer by the more obscure. It is a little analogous to asserting that all physical objects, such as tables, chairs, etc., are spherical if we swathe them with enough stuff.

Reductionism is an age-old project; a close forerunner of its incarnation in set theory was the arithmetization program of the 19th century. It is interesting that one of its prominent representatives, Richard Dedekind (Essays on the Theory of Numbers), exhibited a quite distanced attitude towards a thoroughgoing carrying out of the program:

It appears as something self-evident and not new that every theorem of algebra and higher analysis, no matter how remote, can be expressed as a theorem about natural numbers [ . . . ] But I see nothing meritorious [ . . . ] in actually performing this wearisome circumlocution and insisting on the use and recognition of no other than rational numbers.

Perec wrote a detective novel without using the letter ‘e’ (La Disparition, English: A Void), thus proving not only that such an enormous enterprise is indeed possible but also that formal constraints sometimes have great aesthetic appeal. The translation of mathematical propositions into a poorer linguistic framework can easily be compared with such painful lipogrammatical exercises. In principle all logical connectives can be simulated in a framework exclusively using Sheffer’s stroke, and all cuts (in Gentzen’s sense) can be eliminated; one can do entirely without ordinary language in mathematics and formalize everything, and so on: in principle, one could leave out a whole lot of things. However, in doing so one would depart from the true way of thinking employed by the mathematician (who really uses “and” and “not” and cuts, and who does not reduce many things to formal systems). Obviously, it is the proof theorist as a working mathematician who is interested in things like the reduction to Sheffer’s stroke, since it allows for more concise proofs by induction in the analysis of a logical calculus. Hence this proof theorist has much the same motives as a mathematician working on other problems who avoids a completely formalized treatment of those problems because he is not interested in their proof-theoretical aspect.
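As a sketch of the reduction just mentioned (using only the standard NAND definitions, nothing specific to the text), every propositional connective can indeed be simulated by Sheffer’s stroke:

```python
# Sheffer's stroke: p | q means "not both p and q" (NAND).
def nand(p, q):
    return not (p and q)

def neg(p):          # ¬p      =  p | p
    return nand(p, p)

def conj(p, q):      # p ∧ q   =  (p | q) | (p | q)
    return nand(nand(p, q), nand(p, q))

def disj(p, q):      # p ∨ q   =  (p | p) | (q | q)
    return nand(nand(p, p), nand(q, q))

# Check the simulation against the usual truth tables.
for p in (True, False):
    assert neg(p) == (not p)
    for q in (True, False):
        assert conj(p, q) == (p and q)
        assert disj(p, q) == (p or q)
```

Note how the simulation trades a small primitive basis for longer formulæ: a single conjunction already costs three occurrences of the stroke, which is exactly the growth in complexity discussed below.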

There might be quite similar reasons for the interest of some set theorists in expressing usual mathematical constructions exclusively with the expressive means of ZF (i.e., in terms of ∈). But beyond this, is there any philosophical interpretation of such a reduction? In the last analysis, mathematicians always transform (and that means: change) their objects of study in order to make them accessible to certain mathematical treatments. If one considers a mathematical concept as a tool, one not only uses it in a way different from the one in which it would be used if it were considered as an object; moreover, in its semiotic representation it is given a form which differs in the two cases. In this sense, the proof theorist has to “change” the mathematical proof (which is his or her object of study, to be treated with mathematical tools). When stating that something is used as object or as tool, we always have to ask: in which situation, or by whom?

A second observation is that the translation of propositional formulæ in terms of Sheffer’s stroke in general yields quite complicated new formulæ. What is “simple” here is the particularly small number of primitive symbols needed; but neither does the semantics become clearer (p|q means “not both p and q”; cognitively, this looks more complex than “p and q”, and so on), nor are the formulæ one gets “short”. What is looked for in this case, hence, is a reduction of numerical complexity, while the primitive basis attained by the reduction cognitively looks less “natural” than the original situation (or, as Peirce expressed it, “the consciousness in the determined cognition is more lively than in the cognition which determines it”); similarly in the case of cut elimination. In contrast to this, many philosophers are convinced that the primitive basis of operating with sets really constitutes a “natural” basis of mathematical thinking, i.e., such operations are seen as the “standard bricks” of which this thinking is actually made – while no one will reasonably claim that expressions of the type p|q play a similar role for propositional logic. And yet: reduction to set theory does not really have the task of “explanation”. It is true that one thus reduces propositions about “complex” objects to propositions about “simple” objects; the propositions themselves, however, thereby become in general more complex. Couched in Fregean terms, one can perhaps more easily grasp their denotation (since the denotation of a proposition is its truth value) but not their meaning. A more involved conceptual framework, however, might lead to simpler propositions (and in most cases has actually been introduced just in order to do so). A parallel argument concerns deductions: in its totality, a deduction becomes more complex (and less intelligible) through a decomposition into elementary steps.

Now, it is open to discussion whether, in the case of some set operations, it is admissible at all to claim that they are basic for thinking (which is certainly true in the case of the connectives of propositional logic). It is perfectly possible that the common sense which organizes the acceptance of certain operations as a natural basis relies on something different, something not having the character of eternal laws of thought: it relies on training.

Is it possible to observe that a surface is coloured red and blue, and not to observe that it is red? Imagine that a kind of colour adjective were used for things that are half red and half blue: they are said to be ‘bu’. Now might not someone be trained to observe whether something is bu, and not to observe whether it is also red? Such a man would then only know how to report: “bu” or “not bu”. And from the first report we could draw the conclusion that the thing was partly red.

Mathematical Reductionism: A Case via C. S. Peirce’s Hypothetical Realism.


During the 20th century, the following epistemology of mathematics was predominant: a sufficient condition for the possibility of the cognition of objects is that these objects can be reduced to set theory. The conditions for the possibility of the cognition of the objects of set theory (the sets), in turn, can be given in various manners; in any event, the objects reduced to sets do not need an additional epistemological discussion – they “are” sets. Hence, such an epistemology relies ultimately on ontology. Frege conceived the axioms as descriptions of how we actually manipulate extensions of concepts in our thinking (and in this sense as inevitable and intuitive “laws of thought”). Hilbert admitted the use of intuition exclusively in metamathematics where the consistency proof is to be done (by which the appropriateness of the axioms would be established); Bourbaki takes the axioms as mere hypotheses. Hence, Bourbaki’s concept of justification is the weakest of the three: “it works as long as we encounter no contradiction”; nevertheless, it is still epistemology, because from this hypothetical-deductive point of view, one insists that at least a proof of relative consistency (i.e., a proof that the hypotheses are consistent with the frequently tested and approved framework of set theory) should be available.

Doing mathematics, one tries to give proofs for propositions, i.e., to deduce the propositions logically from other propositions (premisses). Now, in the reductionist perspective, a proof of a mathematical proposition yields an insight into the truth of the proposition, if the premisses are already established (if one already has an insight into their truth); this can be done by giving in turn proofs for them (in which new premisses will occur which ask again for an insight into their truth), or by agreeing to put them at the beginning (to consider them as axioms or postulates). The philosopher tries to understand how the decision about what propositions to take as axioms is arrived at, because he or she is dissatisfied with the reductionist claim that it is on these axioms that the insight into the truth of the deduced propositions rests. Actually, this epistemology might contain a shortcoming, since Poincaré (and Wittgenstein) stressed that to have a proof of a proposition is by no means the same as to have an insight into its truth.

Attempts to disclose the ontology of mathematical objects reveal the following tendency in the epistemology of mathematics: mathematics is seen as suffering from a lack of ontological “determinateness”, namely that this science (contrary to many others) does not concern material data, so that the concept of material truth is not available (especially in the case of the infinite). This tendency is embarrassing since, on the other hand, mathematical cognition is very often presented as cognition of the “greatest possible certainty” precisely because it seems not to be bound to material evidence, let alone experimental check.

The technical apparatus developed by the reductionist and set-theoretical approach nowadays serves other purposes, partly because tacit beliefs about sets were challenged; the explanations of the science which it provides are considered irrelevant by the practitioners of this science. There is doubt that the above-mentioned sufficient condition is also necessary; it is not even accepted throughout as a sufficient one. But what happens if some objects, as in the case of category theory, do not fulfill the condition? It seems that the reductionist approach has, so to say, been undocked from the historical development of the discipline in several respects; an alternative is required.

Anterior to Peirce, epistemology was dominated by the idea of a grasp of objects; since Descartes, intuition was considered throughout as a particular, innate capacity of cognition (even if idealists thought that it concerns the general, and empiricists that it concerns the particular). The task of this particular capacity was the foundation of epistemology; already in Aristotle’s doctrine of the first premisses of syllogism, the aim was to go back to something first. In this traditional approach, it is by the ontology of the objects that one hopes to answer the fundamental question concerning the conditions for the possibility of the cognition of these objects. One hopes that there are simple “basic objects” to which the more complex objects can be reduced and whose cognition is possible by common sense – be this an innate or otherwise distinguished capacity of cognition common to all human beings. Here, epistemology is “wrapped up” in (or rests on) ontology; to do epistemology one has to do ontology first.

Peirce shares Kant’s opinion according to which the object depends on the subject; however, he does not agree that reason is the crucial means of cognition to be criticised. In his paper “Questions concerning certain faculties claimed for man”, he points out the basic assumption of pragmatist philosophy: every cognition is semiotically mediated. He says that there is no immediate cognition (a cognition which “refers immediately to its object”), but that every cognition “has been determined by a previous cognition” of the same object. Correspondingly, Peirce replaces critique of reason by critique of signs. He thinks that Kant’s distinction between the world of things per se (Dinge an sich) and the world of apparition (Erscheinungswelt) is not fruitful; he rather distinguishes the world of the subject and the world of the object, connected by signs; his position consequently is a “hypothetical realism” in which all cognitions are only valid with reservations. This position does not negate (nor assert) that the object per se (with the semiotical mediation stripped off) exists, since such assertions of “pure” existence are seen as necessarily hypothetical (that means, notwithstanding philosophical criticism).

By his basic assumption, Peirce was led to reveal a problem concerning the subject matter of epistemology, since this assumption means in particular that there is no intuitive cognition in the classical sense (which is synonymous to “immediate”). Hence, one could no longer consider cognitions as objects; there is no intuitive cognition of an intuitive cognition. Intuition can be no more than a relation. “All the cognitive faculties we know of are relative, and consequently their products are relations”. According to this new point of view, intuition cannot any longer serve to found epistemology, in departure from the former reductionist attitude. A central argument of Peirce against reductionism or, as he puts it,

the reply to the argument that there must be a first is as follows: In retracing our way from our conclusions to premisses, or from determined cognitions to those which determine them, we finally reach, in all cases, a point beyond which the consciousness in the determined cognition is more lively than in the cognition which determines it.

Peirce gives some examples derived from physiological observations about perception, like the fact that the third dimension of space is inferred, and the blind spot of the retina. In this situation, the process of reduction loses its legitimacy since it no longer fulfills the function of justifying cognition. At such a place, something happens which I would like to call an “exchange of levels”: the process of reduction is interrupted in that the things exchange the roles performed in the determination of a cognition: what was originally considered as determining is now determined by what was originally considered as asking for determination.

The idea that contents of cognition are necessarily provisional has an effect on the very concept of conditions for the possibility of cognitions. It seems that one can infer from Peirce’s words that what vouches for a cognition is not necessarily the cognition which determines it but the liveliness of our consciousness in the cognition. Here, “to vouch for a cognition” no longer means what it meant before (which was much the same as “to determine a cognition”), but it still means that the cognition is (provisionally) reliable. This conception of the liveliness of our consciousness might roughly be seen as a substitute for the capacity of intuition in Peirce’s epistemology – but only roughly, since it has a different coverage.

Conjuncted: Indiscernibles – Philosophical Constructibility. Thought of the Day 48.1


Conjuncted here.

“Thought is nothing other than the desire to finish with the exorbitant excess of the state” (Being and Event). Since Cantor’s theorem implies that this excess cannot be removed or reduced to the situation itself, the only way left is to take control of it. A basic, paradigmatic strategy for achieving this goal is to subject the excess to the power of language. Its essence has been expressed by Leibniz in the form of the principle of indiscernibles: there cannot exist two things whose difference cannot be marked by a describable property. In this manner, language assumes the role of a “law of being”, postulating identity where it cannot find a difference. Meanwhile – according to Badiou – the generic truth is indiscernible: there is no property expressible in the language of set theory that characterizes the elements of the generic set. Truth is beyond the power of knowledge; only the subject can support a procedure of fidelity by deciding what belongs to a truth. This key thesis is established using purely formal means, so it should be regarded as one of the peak moments of the mathematical method employed by Badiou.

Badiou composes the indiscernible out of as many as three different mathematical notions. First of all, he decides that it corresponds to the concept of the inconstructible. Later, however, he writes that “a set δ is discernible (…) if there exists (…) an explicit formula λ(x) (…) such that ‘belong to δ’ and ‘have the property expressed by λ(x)’ coincide”. Finally, at the outset of the argument designed to demonstrate the indiscernibility of truth he brings in yet another definition: “let us suppose the contrary: the discernibility of G. A formula thus exists λ(x, a1,…, an) with parameters a1…, an belonging to M[G] such that for an inhabitant of M[G] it defines the multiple G”. In short, discernibility is understood as:

  1. constructibility
  2. definability by a formula F(y) with one free variable and no parameters. In this approach, a set a is definable if there exists a formula F(y) such that b is an element of a iff F(b) holds.
  3. definability by a formula F(y, z1,…, zn) with parameters. This time, a set a is definable if there exists a formula F(y, z1,…, zn) and sets a1,…, an such that after substituting z1 = a1,…, zn = an, an element b belongs to a iff F(b, a1,…, an) holds.

Even though in “Being and Event” Badiou does not explain the reasons for this variation, it clearly follows from his other writings (Alain Badiou, Conditions) that he is convinced that these notions are equivalent. It should be emphasized, then, that this is not true: a set may be discernible in one sense but indiscernible in another. First of all, the last definition has probably been included by mistake, because it is trivial. Every set in M[G] is discernible in this sense because for every set a the formula F(y, x), defined as “y belongs to x”, defines a after substituting x = a. Accepting this version of indiscernibility would lead to the conclusion that truth is always discernible, while Badiou claims that it is not so.
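The triviality of the third notion can be checked directly in a toy universe of hereditarily finite sets (a minimal Python sketch; the universe and the encoding of formulas as predicates are my own illustration, not Badiou's):

```python
# A tiny transitive universe of hereditarily finite sets.
e  = frozenset()               # ∅
s1 = frozenset({e})            # {∅}
s2 = frozenset({e, s1})        # {∅, {∅}}
universe = [e, s1, s2]

def defined_by(formula):
    """The subset of the universe carved out by a unary predicate."""
    return frozenset(y for y in universe if formula(y))

# With parameters, definability is trivial: for every a, the formula
# F(y, x) := "y ∈ x" with parameter x = a defines exactly a.
for a in universe:
    assert defined_by(lambda y, a=a: y in a) == a

# Parameter-free definability is genuinely restrictive: "y is empty"
# defines {∅}, but nothing guarantees a defining formula for every set.
assert defined_by(lambda y: len(y) == 0) == s1
```

Every set defines itself once it may appear as its own parameter, which is why this version of discernibility makes every truth discernible.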

Is it not possible to choose the second option and identify discernibility with definability by a formula with no parameters? After all, this notion is most similar to Leibniz’s original idea: intuitively, the formula F(y) expresses a property characterizing the elements of the set it defines. Unfortunately, this solution does not warrant indiscernibility of the generic set either. As a matter of fact, assuming that in ontology, that is, in set theory, discernibility corresponds to constructibility, Badiou is right that the generic set is necessarily indiscernible. However, constructibility is a highly technical notion, and its philosophical interpretation seems very problematic. Let us take a closer look at it.

The class of constructible sets – usually denoted by the letter L – forms a hierarchy indexed, or numbered, by ordinal numbers. The lowest level L0 is simply the empty set. Assuming that some level – let us denote it by Lα – has already been constructed, the next level Lα+1 is constructed by choosing all subsets of Lα that can be defined by a formula (possibly with parameters) bounded to the lower level Lα.

Bounding a formula to Lα means that its parameters must belong to Lα and that its quantifiers are restricted to elements of Lα. For instance, the formula ‘there exists z such that z is in y’ simply says that y is not empty. After bounding it to Lα this formula takes the form ‘there exists z in Lα such that z is in y’, so it says that y is not empty, and some element from Lα witnesses it. Accordingly, the set defined by it consists of precisely those sets in Lα that contain an element from Lα.
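The effect of bounding can be checked concretely (a hypothetical Python sketch with frozensets standing in for sets; `L_alpha` here is a small hand-picked collection, not a real stage of the constructible hierarchy):

```python
e  = frozenset()               # ∅
s1 = frozenset({e})            # {∅}
L_alpha = [e, s1]              # a toy "lower level"

def nonempty_bounded(y, level):
    """The formula '∃z (z ∈ y)' bounded to `level`:
    there exists z *in level* such that z ∈ y."""
    return any(z in y for z in level)

s2 = frozenset({s1})           # {{∅}} — not an element of L_alpha
y1 = frozenset({e})            # nonempty, with a witness in L_alpha
y2 = frozenset({s2})           # nonempty, but with no witness in L_alpha

assert nonempty_bounded(y1, L_alpha)
assert not nonempty_bounded(y2, L_alpha)   # the bounded formula fails...
assert len(y2) > 0                         # ...although y2 is not empty
```

The bounded formula thus defines a smaller set than its unbounded reading: only those nonempty sets whose emptiness is witnessed from within Lα.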

After constructing an infinite sequence of levels, the level directly above them all is simply the set of all elements constructed so far. For example, the first infinite level Lω consists of all elements constructed on levels L0, L1, L2,….

As a result of applying this inductive definition, on each level of the hierarchy all the formulas are used, so that two distinct sets may be defined by the same formula. On the other hand, only bounded formulas take part in the construction. The definition of constructibility offers too little and too much at the same time. This technical notion resembles the Leibnizian discernibility only in so far as it refers to formulas. In set theory there are more notions of this type though.

To realize the difficulties involved in attempts to philosophically interpret constructibility, one may consider a slight, purely technical extension of it. Let us also accept sets that can be defined by a formula F(y, z1,…, zn) with constructible parameters, that is, parameters coming from L. Such a step does not lead further away from the common understanding of Leibniz’s principle than constructibility itself: if parameters coming from lower levels of the hierarchy are admissible when constructing a new set, why not admit others as well, especially since this condition has no philosophical justification?

Actually, one can accept parameters coming from an even more restricted class, e.g., the class of ordinal numbers. Then we will obtain the notion of definability from ordinal numbers. This minor modification of the concept of constructibility – a relaxation of the requirement that the procedure of construction has to be restricted to lower levels of the hierarchy – results in drastic consequences.

Evental Sites. Thought of the Day 48.0


According to Badiou, the undecidable truth is located beyond the boundaries of authoritative claims of knowledge. At the same time, undecidability indicates that truth has a post-evental character: “the heart of the truth is that the event in which it originates is undecidable” (Being and Event). Badiou explains that, in terms of forcing, undecidability means that the conditions belonging to the generic set force sentences that are not consequences of the axioms of set theory. If in the domains of specific languages (of politics, science, art or love) the effects of the event are not visible, then the content of “Being and Event” is an empty exercise in abstraction.

Badiou distances himself from a narrow interpretation of the function played by axioms. He rather regards them as collections of basic convictions that organize situations – the conceptual or ideological framework of a historical situation. An event, named by an intervention, is, at the theoretical site indexed by a proposition A, a new apparatus, demonstrative or axiomatic, such that A is henceforth clearly admissible as a proposition of the situation. Accordingly, the undecidability of a truth would consist in transcending the theoretical framework of a historical situation, or even breaking with it, in the sense that the faithful subject accepts beliefs that are impossible to reconcile with the old mode of thinking.

However, if one consistently identifies the effect of the event with the structure of the generic extension, one needs to conclude that these historical situations are by no means the effects of the event. This is because a crucial property of every generic extension is that the axioms of set theory remain valid within it. It is the very core of the method of forcing. Without this assumption, Cohen’s original construction would have no raison d’être, because it would not establish the undecidability of the cardinality of infinite power sets. Every generic extension satisfies the axioms of set theory. In reference to historical situations, it must be conceded that a procedure of fidelity may modify a situation by forcing undecidable sentences; nonetheless, it never overrules its organizing principles.

Another notion which cannot be located within the generic theory of truth without extreme consequences is evental site. An evental site – an element “on the edge of the void” – opens up a situation to the possibility of an event. Ontologically, it is defined as “a multiple such that none of its elements are presented in the situation”. In other words, it is a set such that neither itself nor any of its subsets are elements of the state of the situation. As the double meaning of this word indicates, the state in the context of historical situations takes the shape of the State. A paradigmatic example of a historical evental site is the proletariat – entirely dispossessed, and absent from the political stage.

The existence of an evental site in a situation is a necessary requirement for an event to occur. Badiou is very strict about this point: “we shall posit once and for all that there are no natural events, nor are there neutral events” – and it should be clarified that situations are divided into natural, neutral, and those that contain an evental site. The very matheme of the event – its formal definition is of no importance here – is based on the evental site. The event raises the evental site to the surface, making it represented on the level of the state of the situation. Moreover, a novelty that has the structure of the generic set but does not emerge from the void of an evental site leads to a simulacrum of truth, which is one of the figures of Evil.

However, if one takes the mathematical framework of Badiou’s concept of the event seriously, it turns out that there is no place for the evental site there – it is forbidden by the assumption of the transitivity of the ground model M. This ingredient plays a fundamental role in forcing, and its removal would ruin the whole construction of the generic extension. As is known, transitivity means that if a set belongs to M, all its elements also belong to M. However, an evental site is a set none of whose elements belongs to M. Therefore, contrary to Badiou’s intentions, there cannot exist evental sites in the ground model. Using Badiou’s terminology, one can say that forcing may only be the theory of the simulacrum of truth.

Conjuncted: Operations of Truth. Thought of the Day 47.1


Conjuncted here.

Let us consider only the power set of the set of all natural numbers, which is the smallest infinite set – the countable infinity. By a model of set theory we understand a set in which – if we restrict ourselves to its elements only – all axioms of set theory are satisfied. It follows from Gödel’s completeness theorem that, as long as set theory is consistent, no statement which is true in some model of set theory can contradict logical consequences of its axioms. If the cardinality of p(N) were such a consequence, there would exist a cardinal number κ such that the sentence “the cardinality of p(N) is κ” would be true in all the models. However, for every cardinal κ the technique of forcing allows for finding a model M where the cardinality of p(N) is not equal to κ. Thus, for no κ is the sentence “the cardinality of p(N) is κ” a consequence of the axioms of set theory, i.e., they do not decide the cardinality of p(N).

The starting point of forcing is a model M of set theory – called the ground model – which is countably infinite and transitive. As a matter of fact, the existence of such a model cannot be proved but it is known that there exists a countable and transitive model for every finite subset of axioms.

A characteristic subtlety can be observed here. From the perspective of an inhabitant of the universe, that is, if all the sets are considered, the model M is only a small part of this universe. It is deficient in almost every respect; for example, all of its elements are countable, even though the existence of uncountable sets is a consequence of the axioms of set theory. However, from the point of view of an inhabitant of M, that is, if elements outside of M are disregarded, everything is in order. Some elements of M appear uncountable from inside M, because in this model there are no functions establishing a one-to-one correspondence between them and ω0. One could say that M simulates the properties of the whole universe.

The main objective of forcing is to build a new model M[G] based on M, which contains M and satisfies certain additional properties. The model M[G] is called the generic extension of M. In order to accomplish this goal, a particular set is distinguished in M, and its elements are referred to as conditions, which will be used to determine basic properties of the generic extension. In the case of the forcing that proves the undecidability of the cardinality of p(N), the set of conditions codes finite fragments of a function witnessing the correspondence between p(N) and a fixed cardinal κ.

In the next step, an appropriately chosen set G is added to M as well as other sets that are indispensable in order for M[G] to satisfy the axioms of set theory. This set – called generic – is a subset of the set of conditions that always lies outside of M. The construction of M[G] is exceptional in the sense that its key properties can be described and proved using M only, or just the conditions, thus, without referring to the generic set. This is possible for three reasons. First of all, every element x of M[G] has a name existing already in M (that is, an element in M that codes x in some particular way). Secondly, based on these names, one can design a language called the forcing language or – as Badiou terms it – the subject language that is powerful enough to express every sentence of set theory referring to the generic extension. Finally, it turns out that the validity of sentences of the forcing language in the extension M[G] depends on the set of conditions: the conditions force validity of sentences of the forcing language in a precisely specified sense. As has already been said, the generic set G consists of some of the conditions, so even though G is outside of M, its elements are in M. Recognizing which of them will end up in G is not possible for an inhabitant of M; however, in some cases the following can be proved: provided that the condition p is an element of G, the sentence S is true in the generic extension constructed using this generic set G. We say then that p forces S.
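The combinatorics of conditions can be sketched in miniature. Below, conditions are finite partial 0/1-functions on ω (the simplest Cohen-style forcing, adding a single subset of ω), and the chain construction meeting countably many dense sets is in the spirit of the Rasiowa–Sikorski lemma. This is an illustration of the bookkeeping only, not of the model-theoretic machinery; all names are mine:

```python
def compatible(p, q):
    """Two conditions are compatible if they agree wherever both are defined."""
    return all(q[n] == v for n, v in p.items() if n in q)

def extends(q, p):
    """q is a stronger condition than p if q ⊇ p as a partial function."""
    return all(q.get(n) == v for n, v in p.items())

# D_n = {conditions defined at n} is dense: any condition extends into it.
def extend_into(p, n):
    q = dict(p)
    q.setdefault(n, 0)         # decide the value at n if still undecided
    return q

# A descending chain meeting D_0, ..., D_9; its union approximates the
# "generic" object (a genuine generic set always lies outside the model).
p, chain = {}, []
for n in range(10):
    p = extend_into(p, n)
    chain.append(dict(p))

for i in range(len(chain) - 1):
    assert extends(chain[i + 1], chain[i])     # each step strengthens
assert all(n in chain[-1] for n in range(10))  # all ten dense sets met
```

The point of the sketch is structural: each condition decides only finitely much, every dense requirement can be met by strengthening, and no single condition (hence no inhabitant of M) ever sees the whole generic object.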

In this way, with the aid of the forcing language, one can prove that every generic set of the Cohen forcing codes an entire function defining a one-to-one correspondence between elements of p(N) and a fixed (uncountable) cardinal number – it turns out that all the conditions force the sentence stating this property of G, so regardless of which conditions end up in the generic set, it is always true in the generic extension. On the other hand, the existence of a generic set in the model M cannot follow from the axioms of set theory; otherwise they would decide the cardinality of p(N).

The method of forcing is of fundamental importance for Badiou’s philosophy. The event escapes ontology; it is “that-which-is-not-being-qua-being”, so it has no place in set theory or the forcing construction. However, the post-evental truth that enters and modifies the situation is presented by forcing in the form of a generic set leading to an extension of the ground model. In other words, the situation, understood as the ground model M, is transformed by a post-evental truth identified with a generic set G, and becomes the generic model M[G]. Moreover, the knowledge of the situation is interpreted as the language of set theory, serving to discern elements of the situation, and as the axioms of set theory, deciding the validity of statements about the situation. Knowledge, understood in this way, does not decide the existence of a generic set in the situation, nor can it point to its elements. A generic set is always undecidable and indiscernible.

Therefore, from the perspective of knowledge, it is not possible to establish whether a situation is still the ground model or whether it has undergone a generic extension resulting from the occurrence of an event; only the subject can interventionally decide this. And it is only the subject who decides about the belonging of particular elements to the generic set (i.e., the truth). A procedure of truth or procedure of fidelity (Alain Badiou, Being and Event) supported in this way gives rise to the subject language. It consists of sentences of set theory, so in this respect it is a part of knowledge, although the veridicity of the subject language originates from decisions of the faithful subject. Consequently, a procedure of fidelity forces statements about the situation as it is after being extended and modified by the operation of truth.

Conjuncted: Internal Logic. Thought of the Day 46.1


So, what exactly is an internal logic? The concept of topos is a generalization of the concept of set. In the categorial language of topoi, the universe of sets is just a topos. The consequence of this generalization is that the universe, or better the conglomerate, of topoi is of overwhelming amplitude. In set theory, the logic employed in the derivation of its theorems is classical. For this reason, the propositions about the different properties of sets are two-valued. There can only be true or false propositions. The traditional fundamental principles – identity, contradiction and the excluded middle – are absolutely valid.

But if the concept of a topos is a generalization of the concept of set, it is obvious that the logic needed to study, by means of deduction, the properties of all non-set-theoretical topoi cannot be classical. If it were so, all topoi would coincide with the universe of sets. This fact suggests that to deductively study the properties of a topos, a non-classical logic must be used. And this logic cannot be other than the internal logic of the topos. We know, presently, that the internal logic of all topoi is intuitionistic logic as formalized by Heyting (a disciple of Brouwer). It is very interesting to compare the formal system of classical logic with the intuitionistic one. If both systems are axiomatized, the axioms of classical logic encompass the axioms of intuitionistic logic. The latter has all the axioms of the former, except one: the axiom that formally corresponds to the principle of the excluded middle. This difference can be shown in all kinds of equivalent versions of both logics. But, as Mac Lane says, “in the long run, mathematics is essentially axiomatic.” (Mac Lane). And it is remarkable that, just by suppressing an axiom of classical logic, the soundness of the theory (i.e., intuitionistic logic) can be demonstrated only through the existence of a potentially infinite set of truth-values.
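The failure of the excluded middle can be exhibited in the smallest non-classical case: the open sets of the Sierpiński space form a three-valued Heyting algebra. The algebra below is the standard one; the Python encoding is a minimal sketch of my own:

```python
# Open sets of the Sierpiński space X = {0, 1}: ∅, {1}, and X itself.
empty, mid, top = frozenset(), frozenset({1}), frozenset({0, 1})
opens = [empty, mid, top]

def neg(u):
    """Heyting negation: the largest open set disjoint from u
    (the interior of the complement), not the set-theoretic complement."""
    return max((v for v in opens if not (v & u)), key=len)

assert neg(empty) == top and neg(top) == empty
assert neg(mid) == empty              # interior of {0} is ∅
assert (mid | neg(mid)) != top        # p ∨ ¬p fails at the middle value
assert neg(neg(mid)) == top           # and ¬¬p need not equal p
```

With three truth values instead of two, a proposition and its negation no longer exhaust the algebra, which is precisely the axiom whose suppression separates intuitionistic from classical logic.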

We see, then, that the appellation “internal” is due to the fact that the logic by means of which we study the properties of a topos is a logic that functions within the topos, just as classical logic functions within set theory. As a matter of fact, classical logic is the internal logic of the universe of sets.

Another consequence of the fact that the general internal logic of every topos is the intuitionistic one is that many different axioms can be added to the axioms of intuitionistic logic. This possibility enriches the internal logic of topoi. Through its application it reveals many new and quite unexpected properties of topoi. This enrichment of logic cannot be made in classical logic because, if we add one or more axioms to it, the new system becomes redundant or inconsistent. This does not happen with intuitionistic logic. So, topos theory shows that classical logic, although very powerful concerning the number of resulting theorems, is limited in its mathematical applications. It cannot be applied to study the properties of a mathematical system that cannot be reduced to the system of sets. Of course, if we want, we can utilize classical logic to study the properties of a topos. But then there are important properties of the topos that cannot be known; they remain hidden in the interior of the topos. Classical logic remains external to the topos.

Badiou Contra Grothendieck Functorially. Note Quote.

What makes categories historically remarkable and, in particular, what demonstrates that the categorical change is genuine? On the one hand, Badiou fails to show that category theory is not genuine. But, on the other, it is another thing to say that mathematics itself does change, and that the ‘Platonic’ a priori in Badiou’s endeavour is insufficient, which could be demonstrated empirically.

Yet the empirical does not need to stand only in a way opposed to mathematics. Rather, it relates to results that stemmed from and would have been impossible to comprehend without the use of categories. It is only through experience that we are taught the meaning and use of categories. An experience obviously absent from Badiou’s habituation in mathematics.

To contrast, Grothendieck opened up a new regime of algebraic geometry by generalising the notion of a space, first scheme-theoretically (with sheaves) and then in terms of groupoids and higher categories. Topos theory became synonymous with the study of categories satisfying the so-called Giraud axioms, based on Grothendieck’s geometric machinery. By utilising such tools, Pierre Deligne was able to prove the so-called Weil conjectures, mod-p analogues of the famous Riemann hypothesis.

These conjectures – anticipated already by Gauss – concern the so-called local ζ-functions that derive from counting the number of points of an algebraic variety over a finite field, an algebraic structure similar to that of, for example, the rational numbers Q or the real numbers R, but with only a finite number of elements. By representing algebraic varieties in polynomial terms, it is possible to analyse geometric structures analogous to the Riemann hypothesis but over finite fields Z/pZ (the whole numbers modulo p). Such ‘discrete’ varieties had previously been excluded from topological and geometric inquiry, whereas it now turned out that geometry was no longer overshadowed by the need to decide between ‘discrete’ and ‘continuous’ modalities of the subject (which Badiou still separates).

Along with the continuous ones, discrete varieties could then be studied on the basis of Betti numbers, and, similarly to what Cohen’s argument made manifest in set theory, there seemed to be ‘deeper’, topological precursors that had remained invisible under the classical formalism. In particular, so-called étale cohomology allowed topological concepts (e.g., neighbourhood) to be studied in the context of algebraic geometry, whose classical, Zariski description was too rigid to allow a meaningful interpretation. By introducing such concepts on the basis of Jean-Pierre Serre’s suggestion, Alexander Grothendieck revolutionized the field of geometry, paving the way for Pierre Deligne’s proof of the Weil conjectures, not to mention Wiles’s work on Fermat’s last theorem that subsequently followed.

Grothendieck’s crucial insight drew on his observation that if morphisms of varieties were considered through their ‘adjoint’ fields of functions, it was possible to consider geometric morphisms as equivalent to algebraic ones. The algebraic category was restrictive, however, because field morphisms are always monomorphisms, which constrains the corresponding geometric morphisms accordingly: to generalize the notion of a neighbourhood to the algebraic category, he needed to embed algebraic fields into the larger category of rings. While a traditional Kuratowski covering space is locally ‘split’, as mathematicians call it, the same was not true for the dual category of fields. In other words, the category of fields did not have an operator analogous to pull-backs (fibre products) unless considered as embedded within rings, where pull-backs have a co-dual expressed by the tensor operator ⊗. Grothendieck thus realized he could replace ‘incorporeal’ or contained neighbourhoods U ↪ X by a more relational description: maps U → X that are not necessarily monic, but which correspond to ring morphisms instead.
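The duality the passage alludes to can be stated compactly; the following standard formulation is added here for reference and is not verbatim from the original:

```latex
% Pullbacks (fibre products) of affine schemes correspond to tensor
% products of rings: the geometric fibre product over Spec C is
% computed algebraically by the tensor product over C.
\[
  \operatorname{Spec}(A \otimes_{C} B)
  \;\cong\;
  \operatorname{Spec}(A) \times_{\operatorname{Spec}(C)} \operatorname{Spec}(B)
\]
% Accordingly, a not-necessarily-monic map U -> X may serve as a
% generalized 'neighbourhood' of X, in place of an open inclusion
% U \hookrightarrow X.
```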

Topos theory applies a similar insight, not only in the context of specific varieties but for the entire theory of sets. Ultimately, Lawvere and Tierney realized the importance of these ideas for the concepts of classification and truth in general. The classification of elements between two sets comes down to a question: does this element belong to a given set or not? In the category of Sets this question calls for a binary answer: true or false. Not so in a general topos, in which the structure of the subobject classifier is more geometric.
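In Sets the subobject classifier is just the two-element set, and classification reduces to ordinary characteristic functions; a minimal illustration (all names hypothetical):

```python
# In Sets, the subobject classifier is Omega = {False, True}: every
# subset A of X corresponds to exactly one characteristic map
# chi_A : X -> Omega, and A is recovered as the preimage of True.

def chi(A: set, X: set) -> dict:
    """Characteristic map of the subobject A >-> X, as a dict X -> bool."""
    assert A <= X, "A must be a subset of X"
    return {x: x in A for x in X}

X = {1, 2, 3, 4}
A = {2, 4}
chi_A = chi(A, X)
recovered = {x for x, v in chi_A.items() if v}
assert recovered == A  # subobjects of X <-> maps X -> Omega
```

In a general topos this bijection between subobjects and maps into Ω survives, but Ω itself need not be two-valued.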

Indeed, Lawvere and Tierney then considered this characteristic ‘either/or’ map as a categorical relationship, without referring to its ‘contents’. It was the structural form of this morphism (which they called ‘true’), as contrasted with other relationships, that marked the beginning of geometric logic. They thus rephrased the binary, complete Heyting algebra of classical truth with its categorical version Ω, defined as an object satisfying a specific pull-back condition. The crux of topos theory was then to secure an explicit set of elementary axioms with which to formalize it, much as the so-called Freyd–Mitchell embedding theorem had done for abelian categories. The Freyd–Mitchell embedding theorem says that every abelian category is a full subcategory of a category of modules over some ring R, and that the embedding is an exact functor. It is easy to see that not every abelian category is equivalent to RMod for some ring R: RMod has all small limits and colimits, whereas, for instance, the category of finitely generated R-modules is an abelian category that lacks these properties.
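The pull-back condition defining Ω can be spelled out; the following is a standard statement, added here as a sketch rather than quoted from the original:

```latex
% Subobject classifier: a morphism true : 1 -> Omega such that for
% every monomorphism m : U >-> X there is a unique characteristic map
% chi_U : X -> Omega making the square a pullback.
\[
\begin{array}{ccc}
  U & \longrightarrow & 1 \\
  {\scriptstyle m}\downarrow & & \downarrow{\scriptstyle \mathrm{true}} \\
  X & \xrightarrow{\;\chi_U\;} & \Omega
\end{array}
\]
% In Sets, Omega = {false, true} and chi_U is the ordinary
% characteristic function; in a general topos Omega is an internal
% Heyting algebra, so truth values need not be two-valued.
```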

But to understand its significance as a link between geometry and language, it is useful to see how the characteristic map (‘either/or’) behaves in set theory. In particular, by expressing truth in this way, it became possible to reduce the Axiom of Comprehension, which states that any suitable formal condition λ gives rise to a corresponding set {x | λ(x)}, to a rather elementary statement regarding adjoint functors.
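One standard way to spell out this reduction, added here for orientation and not taken verbatim from the source, runs through the bijection between subobjects and characteristic maps:

```latex
% Comprehension in a topos: a condition lambda with free variable x
% over X is interpreted as a map X -> Omega, and the subobject it
% carves out is the pullback of true along that map.
\[
  \{\, x \in X \mid \lambda(x) \,\}
  \;=\;
  \lambda^{-1}(\mathrm{true}),
  \qquad
  \mathrm{Sub}(X) \;\cong\; \mathrm{Hom}(X,\Omega)
\]
```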

At the same time, many mathematical structures became expressible not only as general topoi but in terms of the more specific class of Grothendieck topoi. There, too, the ‘way of doing mathematics’ is different, in the sense that the subobject classifier is categorically defined and mathematics starts from the terminal object 1 rather than from the empty set (the initial object). However, there is a material way to express the ‘difference’ such topoi make in terms of set theory: every such topos has a sheaf-form enabling it to be expressed as a category of sheaves on a category C equipped with a specific Grothendieck topology.
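The sheaf-form mentioned above can be made precise; a standard statement (added here for reference) is:

```latex
% A Grothendieck topos is a category equivalent to Sh(C, J), the
% category of sheaves on a small category C carrying a Grothendieck
% topology J. In the simplest case, presheaves on a topological
% space: F is a sheaf iff for every open cover U = \bigcup_i U_i
\[
  F(U) \longrightarrow \prod_i F(U_i)
  \rightrightarrows \prod_{i,j} F(U_i \cap U_j)
\]
% is an equalizer, i.e. compatible local sections glue uniquely.
```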