Mathematical Reductionism: A Case via C. S. Peirce’s Hypothetical Realism.


During the 20th century, the following epistemology of mathematics was predominant: a sufficient condition for the possibility of the cognition of objects is that these objects can be reduced to set theory. The conditions for the possibility of the cognition of the objects of set theory (the sets), in turn, can be given in various manners; in any event, the objects reduced to sets do not need an additional epistemological discussion – they “are” sets. Hence, such an epistemology relies ultimately on ontology. Frege conceived the axioms as descriptions of how we actually manipulate extensions of concepts in our thinking (and in this sense as inevitable and intuitive “laws of thought”). Hilbert admitted the use of intuition exclusively in metamathematics, where the consistency proof is to be carried out (and by which the appropriateness of the axioms would be established). Bourbaki took the axioms as mere hypotheses. Hence, Bourbaki’s concept of justification is the weakest of the three: “it works as long as we encounter no contradiction”. Nevertheless, it is still epistemology, because from this hypothetical-deductive point of view one insists that at least a proof of relative consistency (i.e., a proof that the hypotheses are consistent with the frequently tested and approved framework of set theory) should be available.

Doing mathematics, one tries to give proofs for propositions, i.e., to deduce the propositions logically from other propositions (premisses). Now, in the reductionist perspective, a proof of a mathematical proposition yields an insight into the truth of the proposition if the premisses are already established (if one already has an insight into their truth); this can be done by giving proofs for them in turn (in which new premisses will occur that again ask for an insight into their truth), or by agreeing to put them at the beginning (to consider them as axioms or postulates). The philosopher tries to understand how the decision about which propositions to take as axioms is arrived at, because he or she is dissatisfied with the reductionist claim that it is on these axioms that the insight into the truth of the deduced propositions rests. Actually, this epistemology might contain a shortcoming, since Poincaré (and Wittgenstein) stressed that to have a proof of a proposition is by no means the same as to have an insight into its truth.

Attempts to disclose the ontology of mathematical objects reveal the following tendency in the epistemology of mathematics: mathematics is seen as suffering from a lack of ontological “determinateness”, namely that this science (contrary to many others) does not concern material data, so that the concept of material truth is not available (especially in the case of the infinite). This tendency is embarrassing, since on the other hand mathematical cognition is very often presented as cognition of the “greatest possible certainty” precisely because it seems not to be bound to material evidence, let alone experimental check.

The technical apparatus developed by the reductionist, set-theoretical approach nowadays serves other purposes, partly because tacit beliefs about sets were challenged; the explanations of the science which it provides are considered irrelevant by the practitioners of that science. There is doubt that the above-mentioned sufficient condition is also necessary; it is not even accepted throughout as a sufficient one. And what happens if some objects, as in the case of category theory, do not fulfill the condition? It seems that the reductionist approach has, so to say, been undocked from the historical development of the discipline in several respects; an alternative is required.

Anterior to Peirce, epistemology was dominated by the idea of a grasp of objects; since Descartes, intuition was considered throughout as a particular, innate capacity of cognition (even if idealists thought that it concerns the general, and empiricists that it concerns the particular). The task of this particular capacity was the foundation of epistemology; already with Aristotle’s first premisses of the syllogism, the aim was to go back to something first. In this traditional approach, it is by the ontology of the objects that one hopes to answer the fundamental question concerning the conditions for the possibility of the cognition of these objects. One hopes that there are simple “basic objects” to which the more complex objects can be reduced and whose cognition is possible by common sense – be this an innate or otherwise distinguished capacity of cognition common to all human beings. Here, epistemology is “wrapped up” in (or rests on) ontology; to do epistemology one has to do ontology first.

Peirce shares Kant’s opinion according to which the object depends on the subject; however, he does not agree that reason is the crucial means of cognition to be criticised. In his paper “Questions concerning certain faculties claimed for man”, he points out the basic assumption of pragmatist philosophy: every cognition is semiotically mediated. He says that there is no immediate cognition (a cognition which “refers immediately to its object”), but that every cognition “has been determined by a previous cognition” of the same object. Correspondingly, Peirce replaces the critique of reason by a critique of signs. He thinks that Kant’s distinction between the world of things per se (Dinge an sich) and the world of apparition (Erscheinungswelt) is not fruitful; he rather distinguishes the world of the subject and the world of the object, connected by signs; his position consequently is a “hypothetical realism” in which all cognitions are only valid with reservations. This position neither negates nor asserts that the object per se (with the semiotical mediation stripped off) exists, since such assertions of “pure” existence are seen as necessarily hypothetical (that is, as unable to withstand philosophical criticism).

By this basic assumption, Peirce was led to reveal a problem concerning the subject matter of epistemology, since the assumption means in particular that there is no intuitive cognition in the classical sense (in which “intuitive” is synonymous with “immediate”). Hence, one could no longer consider cognitions as objects; there is no intuitive cognition of an intuitive cognition. Intuition can be no more than a relation: “All the cognitive faculties we know of are relative, and consequently their products are relations”. According to this new point of view, intuition can no longer serve to found epistemology, in departure from the former reductionist attitude. A central argument of Peirce against reductionism, or, as he puts it,

the reply to the argument that there must be a first is as follows: In retracing our way from our conclusions to premisses, or from determined cognitions to those which determine them, we finally reach, in all cases, a point beyond which the consciousness in the determined cognition is more lively than in the cognition which determines it.

Peirce gives some examples derived from physiological observations about perception, like the fact that the third dimension of space is inferred, and the blind spot of the retina. In this situation, the process of reduction loses its legitimacy since it no longer fulfills the function of justifying cognition. At such a place, something happens which I would like to call an “exchange of levels”: the process of reduction is interrupted in that the things exchange the roles performed in the determination of a cognition: what was originally considered as determining is now determined by what was originally considered as asking for determination.

The idea that contents of cognition are necessarily provisional has an effect on the very concept of conditions for the possibility of cognitions. It seems that one can infer from Peirce’s words that what vouches for a cognition is not necessarily the cognition which determines it but the liveliness of our consciousness in the cognition. Here, “to vouch for a cognition” no longer means what it meant before (which was much the same as “to determine a cognition”), but it still means that the cognition is (provisionally) reliable. This conception of the liveliness of our consciousness might roughly be seen as a substitute for the capacity of intuition in Peirce’s epistemology – but only roughly, since it has a different coverage.


Task of the Philosopher. Thought of the Day 75.0


Poincaré, in Science and Method, discusses how “reasonable” axioms (theories) are chosen. In a section intended to cool down the expectations placed in the “logistic” project, he points out the problem as follows:

Even admitting that it has been established that all theorems can be deduced by purely analytical processes, by simple logical combinations of a finite number of axioms, and that these axioms are nothing but conventions, the philosopher would still retain the right to seek the origin of these conventions, and to ask why they were judged preferable to the contrary conventions.

[…] A selection must be made out of all the constructions that can be combined with the materials furnished by logic. The true geometrician makes this decision judiciously, because he is guided by a sure instinct, or by some vague consciousness of I know not what profounder and more hidden geometry, which alone gives a value to the constructed edifice.

Hence, Poincaré sees the task of the philosopher as the explanation of how conventions came to be. At the end of the quotation, Poincaré tries to give such an explanation himself, namely by referring to an “instinct” (in the sequel, he mentions briefly that one can obviously ask where such an instinct comes from, but he gives no answer to this question). The pragmatist position to be developed will lead to an essentially similar, but more complete and clear point of view.

According to Poincaré’s definition, the task of the philosopher starts where that of the mathematician ends: for a mathematician, a result is right if he or she has a proof, that is, if the result can be logically deduced from the axioms; that one has to adopt some axioms is seen as a necessary evil, and one perhaps puts some energy into the project of minimizing the number of axioms (this might have been how set theory came to be thought of as a foundation of mathematics). A philosopher, however, wants to understand why exactly these axioms and no others were chosen. In particular, the philosopher is concerned with the question whether the chosen axioms actually grasp the intended model. This question is justified since formal definitions are not automatically sufficient to grasp the intention of a concept; at the same time, the question is methodologically very hard, since ultimately a concept is available in mathematical proof only through a formal explication. At any rate, it becomes clear that the task of the philosopher is related to a criterion problem.

Georg Kreisel thinks that we do indeed have the capacity to decide whether a given model was intended or not:

many formal independence proofs consist in the construction of models which we recognize to be different from the intended notion. It is a fact of experience that one can be honest about such matters! When we are shown a ‘non-standard’ model we can honestly say that it was not intended. […] If it so happens that the intended notion is not formally definable this may be a useful thing to know about the notion, but it does not cast doubt on its objectivity.

Poincaré could not yet know (though he was experienced enough a mathematician to “feel” it) that axiom systems quite often fail to grasp the intended model. Pointing out such discrepancies was seldom the work of professional philosophers; more often it was a byproduct of actual mathematical work.

Following Kant, one defines the task of epistemology thus: to determine the conditions of the possibility of the cognition of objects. Now, what is meant by “cognition of objects”? What is meant is that we have an insight into (the truth of) propositions about the objects (we can then speak about the propositions as facts); and epistemology asks what the conditions for the possibility of such an insight are. Hence, epistemology is not concerned with what objects are (ontology), but with what (and how) we can know about them (ways of access). This notwithstanding, both things are intimately related, especially in the Peircean stream of pragmatist philosophy. The 19th century (in particular Helmholtz) stressed against Kant the importance of physiological conditions for this access to objects. Nevertheless, epistemology is concerned with logic and not with the brain. Pragmatism puts the accent on the means of cognition – to which the brain also belongs.

Kant in his epistemology stressed that the object depends on the subject, or, more precisely, that the cognition of an object depends on the means of cognition used by the subject. For him, the decisive means of cognition was reason; thus, his epistemology was to a large degree a critique of reason. Other philosophers disagreed about this special role of reason but shared the view that the task of philosophy is to criticise the means of cognition. For all of them, philosophy has to point out what we can speak about “legitimately”. Such a critical approach is implicitly contained in Poincaré’s description of the task of the philosopher.

Reichenbach decomposes the task of epistemology into different parts: guiding, justification and limitation of cognition. While justification is usually considered the most important of the three aspects, the “task of the philosopher” as specified above following Poincaré is not limited to it. Indeed, the question why just certain axioms and no others were chosen is obviously a question concerning the guiding principles of cognition: which criteria are at work? Mathematics presents itself at its various historical stages as the result of a series of decisions on questions of the kind “Which objects should we consider? Which definitions should we make? Which theorems should we try to prove?” etc. – in short: instances of the “criterion problem”. Epistemology thus has the task of evoking these criteria – used but not evoked by the researchers themselves. For after all, these criteria cannot be without effect on the conditions for the possibility of cognition of the objects which one has decided to consider. (In turn, the conditions for this possibility in general determine the range of objects from which one has to choose.) However, such an epistemology does not have the task of resolving the criterion problem normatively (that is, of prescribing to the scientist which choices he has to make).

Evental Sites. Thought of the Day 48.0


According to Badiou, the undecidable truth is located beyond the boundaries of authoritative claims of knowledge. At the same time, undecidability indicates that truth has a post-evental character: “the heart of the truth is that the event in which it originates is undecidable” (Being and Event). Badiou explains that, in terms of forcing, undecidability means that the conditions belonging to the generic set force sentences that are not consequences of the axioms of set theory. If in the domains of specific languages (of politics, science, art or love) the effects of the event are not visible, the content of Being and Event is an empty exercise in abstraction.

Badiou distances himself from a narrow interpretation of the function played by axioms. He rather regards them as collections of basic convictions that organize situations, the conceptual or ideological framework of a historical situation. An event, named by an intervention, is at the theoretical site indexed by a proposition A, a new apparatus, demonstrative or axiomatic, such that A is henceforth clearly admissible as a proposition of the situation. Accordingly, the undecidability of a truth would consist in transcending the theoretical framework of a historical situation or even breaking with it, in the sense that the faithful subject accepts beliefs that are impossible to reconcile with the old mode of thinking.

However, if one consistently identifies the effect of the event with the structure of the generic extension, one must conclude that such historical situations are by no means effects of the event. This is because a crucial property of every generic extension is that the axioms of set theory remain valid within it; this is the very core of the method of forcing. Without this assumption, Cohen’s original construction would have no raison d’être, because it would not establish the undecidability of the cardinality of infinite power sets. Every generic extension satisfies the axioms of set theory. In reference to historical situations, it must be conceded that a procedure of fidelity may modify a situation by forcing undecidable sentences; nonetheless, it never overrules its organizing principles.

Another notion which cannot be located within the generic theory of truth without extreme consequences is evental site. An evental site – an element “on the edge of the void” – opens up a situation to the possibility of an event. Ontologically, it is defined as “a multiple such that none of its elements are presented in the situation”. In other words, it is a set such that neither itself nor any of its subsets are elements of the state of the situation. As the double meaning of this word indicates, the state in the context of historical situations takes the shape of the State. A paradigmatic example of a historical evental site is the proletariat – entirely dispossessed, and absent from the political stage.

The existence of an evental site in a situation is a necessary requirement for an event to occur. Badiou is very strict about this point: “we shall posit once and for all that there are no natural events, nor are there neutral events” – and it should be clarified that situations are divided into natural, neutral, and those that contain an evental site. The very matheme of the event – its formal definition is of no importance here – is based on the evental site. The event raises the evental site to the surface, making it represented on the level of the state of the situation. Moreover, a novelty that has the structure of the generic set but does not emerge from the void of an evental site leads to a simulacrum of truth, which is one of the figures of Evil.

However, if one takes the mathematical framework of Badiou’s concept of the event seriously, it turns out that there is no place for the evental site there – it is forbidden by the assumption of the transitivity of the ground model M. This ingredient plays a fundamental role in forcing, and its removal would ruin the whole construction of the generic extension. As is known, transitivity means that if a set belongs to M, all its elements also belong to M. However, an evental site is a set none of whose elements belongs to M. Therefore, contrary to Badiou’s intentions, there cannot exist evental sites in the ground model. Using Badiou’s terminology, one can say that forcing may only be the theory of the simulacrum of truth.
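The argument compresses to a two-step observation; here is a sketch in standard set-theoretic notation (my formalization of the reasoning above, writing S for a candidate evental site):

```latex
\begin{align*}
&\text{Transitivity of } M: && x \in M \implies x \subseteq M\\
&\text{Evental site } S \text{ in } M: && S \in M \ \text{ and } \ S \cap M = \varnothing\\
&\text{Together:} && S \subseteq M \ \text{ and } \ S \cap M = \varnothing \implies S = \varnothing
\end{align*}
```

So inside a transitive ground model only the empty set – the void itself – could satisfy the definition, which is exactly the exclusion stated above.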

Conjuncted: Operations of Truth. Thought of the Day 47.1


Conjuncted here.

Let us consider the power set p(N) of the set of all natural numbers, the latter being the smallest infinite set – the countable infinity. By a model of set theory we understand a set in which – if we restrict ourselves to its elements only – all axioms of set theory are satisfied. It follows from Gödel’s completeness theorem that as long as set theory is consistent, no statement which is true in some model of set theory can contradict logical consequences of its axioms. If the cardinality of p(N) were such a consequence, there would exist a cardinal number κ such that the sentence “the cardinality of p(N) is κ” would be true in all the models. However, for every cardinal κ the technique of forcing allows for finding a model M where the cardinality of p(N) is not equal to κ. Thus, for no κ is the sentence “the cardinality of p(N) is κ” a consequence of the axioms of set theory, i.e. they do not decide the cardinality of p(N).
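Schematically, the argument runs as follows; this is a compact restatement in standard notation (writing 𝒫(ℕ) for p(N)), adding nothing beyond the paragraph above:

```latex
\begin{align*}
&\text{Completeness:} && \mathrm{ZFC} \vdash \varphi \iff \varphi \text{ holds in every model of } \mathrm{ZFC}\\
&\text{Forcing (Cohen):} && \text{for every cardinal } \kappa \text{ there is } M \models \mathrm{ZFC} \text{ with } M \models |\mathcal{P}(\mathbb{N})| \neq \kappa\\
&\text{Hence:} && \text{for no } \kappa \text{ is } |\mathcal{P}(\mathbb{N})| = \kappa \text{ a theorem: the axioms do not decide it}
\end{align*}
```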

The starting point of forcing is a model M of set theory – called the ground model – which is countably infinite and transitive. As a matter of fact, the existence of such a model cannot be proved, but it is known that there exists a countable and transitive model for every finite subset of the axioms.
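Why models of finite fragments are always available is standard; for orientation, a sketch of the usual three-step route (reflection, downward Löwenheim–Skolem, Mostowski collapse), which is one way to cash out the claim:

```latex
\begin{align*}
&\text{Reflection:} && \text{for any finite } \Phi \subseteq \mathrm{ZFC} \text{ there is an ordinal } \alpha \text{ with } V_\alpha \models \Phi\\
&\text{L\"owenheim--Skolem:} && V_\alpha \text{ has a countable elementary submodel } N \preceq V_\alpha\\
&\text{Mostowski collapse:} && N \text{ collapses to a countable transitive } M \cong N, \text{ so } M \models \Phi
\end{align*}
```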

A characteristic subtlety can be observed here. From the perspective of an inhabitant of the universe, that is, if all the sets are considered, the model M is only a small part of this universe. It is deficient in almost every respect; for example, all of its elements are countable, even though the existence of uncountable sets is a consequence of the axioms of set theory. However, from the point of view of an inhabitant of M, that is, if elements outside of M are disregarded, everything is in order. Some elements of M pass for uncountable within M, because in this model there are no functions establishing a one-to-one correspondence between them and ω0. One could say that M simulates the properties of the whole universe.

The main objective of forcing is to build a new model M[G] based on M, which contains M and satisfies certain additional properties. The model M[G] is called the generic extension of M. In order to accomplish this goal, a particular set is distinguished in M, and its elements are referred to as conditions, which will be used to determine basic properties of the generic extension. In the case of the forcing that proves the undecidability of the cardinality of p(N), the set of conditions codes finite fragments of a function witnessing the correspondence between p(N) and a fixed cardinal κ.
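To make “conditions code finite fragments of a function” concrete, here is a small illustrative sketch in Python; the encoding (pairs (α, n) as keys, and the names `Condition`, `extends`, `compatible`, `merge`) is mine, not the text’s:

```python
# Sketch: Cohen-style forcing conditions as finite partial functions from
# pairs (alpha, n) -- "the n-th bit of the alpha-th new subset of N" --
# into {0, 1}. A condition q is stronger than p iff q extends p; two
# conditions are compatible iff they agree wherever both are defined.

Condition = dict  # keys: (alpha, n) pairs, values: 0 or 1

def extends(q: Condition, p: Condition) -> bool:
    """q is a stronger condition than p: it decides everything p decides, identically."""
    return all(k in q and q[k] == v for k, v in p.items())

def compatible(p: Condition, q: Condition) -> bool:
    """p and q never contradict each other, so their union is again a condition."""
    return all(q[k] == v for k, v in p.items() if k in q)

def merge(p: Condition, q: Condition) -> Condition:
    """Common extension of two compatible conditions."""
    assert compatible(p, q)
    return {**p, **q}

p = {(0, 0): 1, (0, 1): 0}   # bits 0 and 1 of the 0th new real
q = {(0, 1): 0, (1, 0): 1}   # bit 1 of the 0th real, bit 0 of the 1st
r = {(0, 1): 1}              # disagrees with p on bit (0, 1)

print(compatible(p, q))          # True
print(extends(merge(p, q), p))   # True: the merge is stronger than p
print(compatible(p, r))          # False
```

In these terms, the generic set G of the next paragraph is a collection of pairwise compatible conditions whose union glues together into one total function.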

In the next step, an appropriately chosen set G is added to M, as well as other sets that are indispensable in order for M[G] to satisfy the axioms of set theory. This set – called generic – is a subset of the set of conditions that always lies outside of M. The construction of M[G] is exceptional in the sense that its key properties can be described and proved using M only, or just the conditions, thus without referring to the generic set. This is possible for three reasons. First of all, every element x of M[G] has a name existing already in M (that is, an element of M that codes x in some particular way). Secondly, based on these names, one can design a language called the forcing language or – as Badiou terms it – the subject language, which is powerful enough to express every sentence of set theory referring to the generic extension. Finally, it turns out that the validity of sentences of the forcing language in the extension M[G] depends on the set of conditions: the conditions force the validity of sentences of the forcing language in a precisely specified sense. As has already been said, the generic set G consists of some of the conditions, so even though G is outside of M, its elements are in M. Recognizing which of them will end up in G is not possible for an inhabitant of M; however, in some cases the following can be proved: provided that the condition p is an element of G, the sentence S is true in the generic extension constructed using this generic set G. We say then that p forces S.
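The two standard facts this paragraph relies on can be stated compactly (the definitions below are the textbook ones, not the author’s own notation):

```latex
\begin{align*}
&\text{Genericity:} && G \subseteq P \text{ is generic over } M \iff G \text{ is a filter and } G \cap D \neq \varnothing \text{ for every dense } D \in M\\
&\text{Forcing theorem:} && M[G] \models S \iff \exists p \in G : p \Vdash S, \quad \text{where } \Vdash \text{ is definable within } M
\end{align*}
```

The definability of ⊩ within M is what lets the inhabitants of M reason about M[G] without ever seeing G.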

In this way, with the aid of the forcing language, one can prove that every generic set of the Cohen forcing codes a total function defining a one-to-one correspondence between elements of p(N) and a fixed (uncountable) cardinal number – it turns out that all the conditions force the sentence stating this property of G, so regardless of which conditions end up in the generic set, it is always true in the generic extension. On the other hand, the existence of a generic set in the model M cannot follow from the axioms of set theory, for otherwise they would decide the cardinality of p(N).

The method of forcing is of fundamental importance for Badiou’s philosophy. The event escapes ontology; it is “that-which-is-not-being-qua-being”, so it has no place in set theory or the forcing construction. However, the post-evental truth that enters and modifies the situation is presented by forcing in the form of a generic set leading to an extension of the ground model. In other words, the situation, understood as the ground model M, is transformed by a post-evental truth identified with a generic set G, and becomes the generic model M[G]. Moreover, the knowledge of the situation is interpreted as the language of set theory, serving to discern elements of the situation, and as the axioms of set theory, deciding the validity of statements about the situation. Knowledge, understood in this way, neither decides the existence of a generic set in the situation nor can it point to its elements. A generic set is always undecidable and indiscernible.

Therefore, from the perspective of knowledge, it is not possible to establish whether a situation is still the ground model or whether it has undergone a generic extension resulting from the occurrence of an event; only the subject can interventionally decide this. And it is only the subject who decides about the belonging of particular elements to the generic set (i.e. the truth). A procedure of truth or procedure of fidelity (Alain Badiou, Being and Event) supported in this way gives rise to the subject language. It consists of sentences of set theory, so in this respect it is a part of knowledge, although the veridicity of the subject language originates from the decisions of the faithful subject. Consequently, a procedure of fidelity forces statements about the situation as it is after being extended and modified by the operation of truth.

Categorial Logic – Paracompleteness versus Paraconsistency. Thought of the Day 46.2


The fact that logic is content-dependent opens a new horizon concerning the relationship of logic to ontology (or objectology). Although the classical concepts of a priori and a posteriori propositions (or judgments) have lately become rather blurred, there is an undeniable fact: it is certain that the distant origin of mathematics lies in empirical practical knowledge, but nobody can claim that higher mathematics is empirical.

Thanks to category theory, it is an established fact that some very important logical systems – the classical and the intuitionistic (with all their axiomatically enriched subsystems) – can be interpreted through topoi. And this possibility permits us to consider topoi, be it in a Noneist or in a Platonist way, as universes, that is, as ontologies or as objectologies. Now, the association of a topos with its corresponding ontology (or objectology) is quite different from the association of theoretical terms with empirical concepts. Within the frame of a physical theory, if a new fact is discovered in the laboratory, it must be explained through logical deduction (with the due initial conditions and some other details). If a logical conclusion is inferred from the fundamental hypotheses, it must be corroborated through empirical observation. And if the corroboration fails, the theory must be readjusted or even rejected.

In the case of categorial logic, the situation bears some similarity to the former case; but we must be careful not to be influenced by apparent coincidences. If we add, as an axiom, the tertium non datur to formalized intuitionistic logic, we obtain classical logic. That is, we can formally pass from the one to the other just by adding or suppressing the tertium. This fact could induce us to think that, just as in physics, if a logical theory – let us say intuitionistic logic – cannot include a true proposition, then its axioms must be readjusted to make it possible to include it among its theorems. But there is a radical difference: in the semantics of intuitionistic logic, and of any logic, the point of departure is not a set of hypothetical propositions that must be corroborated through experiment; it is a set of propositions that are true under some interpretation. This set can be axiomatic or it can consist in rules of inference, but the theorems of the system are not submitted to verification. The derived propositions are just true, and nothing more. The logician surely tries to find new true propositions but, when they are found (through some effective method, which can be intuitive, as it is in Gödel’s theorem), there are only three possible cases: they can be formally derivable, they can be formally refutable, or they can be neither derivable nor refutable, that is, undecidable. But undecidability does not induce the logician to readjust or to reject the theory. Nobody tries to add axioms or to diminish them. In physics, when we are handling a theory T, and a new describable phenomenon is found that cannot be deduced from the axioms (plus initial or some other conditions), T must be readjusted or even rejected. A classical logician will never think of changing the axioms or rules of inference of classical logic because some proposition turns out to be undecidable in it. And an intuitionist logician would not care at all to add the tertium to the axioms of Heyting’s system just because it cannot be derived within it.
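The asymmetry described here can be seen in miniature. A minimal sketch in Python, assuming only the standard fact that the open sets of a topological space form a Heyting algebra: in the two-point Sierpiński space the tertium non datur fails at one truth value, yet this is a feature of the semantics, not a defect calling for repair.

```python
# Open sets of the Sierpinski space X = {0, 1}, with {1} open but {0} not:
# a three-element Heyting algebra, the truth values of an intuitionistic logic.
X = frozenset({0, 1})
OPENS = [frozenset(), frozenset({1}), X]

def neg(u):
    """Heyting negation: the union of all open sets disjoint from u."""
    result = frozenset()
    for v in OPENS:
        if not (u & v):
            result |= v
    return result

for u in OPENS:
    lem = u | neg(u)   # the truth value of "U or not-U"
    status = "tertium holds" if lem == X else "tertium fails"
    print(sorted(u), "->", sorted(lem), f"({status})")

# Output: the law fails exactly at U = {1}, where not-U = {} and
# "U or not-U" has value {1} != X. Adding tertium non datur as an axiom
# collapses the admissible semantics to Boolean algebras: classical logic.
```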

The foregoing considerations sufficiently show that in logic and mathematics there is something that, with full right, can be called “a priori“. And although, as we have said, we must acknowledge that the concepts of a priori and a posteriori are not clear-cut, in some cases we can rightly speak of synthetical a priori knowledge. For instance, Gödel’s proposition that affirms its own underivability is synthetical and a priori. But there are other propositions, for instance mathematical induction, that can also be considered as synthetical and a priori. And a great many mathematical definitions that are not abbreviations are synthetical. For instance, the definition of a monoid action is synthetical (and, of course, a priori) because the concept of a monoid does not have among its characterizing traits the concept of an action, and vice versa.

Categorial logic is the deepest knowledge of logic that has ever been achieved. But its scope does not encompass the whole field of logic. There are other kinds of logic that are also important and, if we intend to know, as much as possible, what logic is and how it is related to mathematics and ontology (or objectology), we must pay attention to them. From a mathematical and a philosophical point of view, the most important non-paracomplete logical systems are the paraconsistent ones. These systems are something like a dual to paracomplete logics. They are employed in inconsistent theories without producing triviality (in this sense relevant logics are also paraconsistent). In intuitionistic logic there are interpretations that, with respect to some topoi, include two false contradictory propositions; whereas in paraconsistent systems we can find interpretations in which there are two contradictory true propositions.
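A minimal illustration of “two contradictory true propositions” without triviality; the sketch uses Priest’s three-valued Logic of Paradox, one paraconsistent system among those alluded to here, and the numeric encoding is my own:

```python
from itertools import product

# Priest's Logic of Paradox (LP): values F < B < T, where B ("both true
# and false") and T are designated, i.e. count as true for validity.
F, B, T = 0.0, 0.5, 1.0
VALUES = (F, B, T)
DESIGNATED = {B, T}

def neg(a):      return 1.0 - a      # negation flips, fixing B
def conj(a, b):  return min(a, b)    # conjunction is the minimum

def valid(premise, conclusion):
    """LP-validity: every valuation designating the premise designates the conclusion."""
    return all(conclusion(a, q) in DESIGNATED
               for a, q in product(VALUES, repeat=2)
               if premise(a, q) in DESIGNATED)

# A contradiction can be true: give A the value B.
a = B
print(conj(a, neg(a)) in DESIGNATED)                        # True

# Yet explosion ("from A and not-A infer any Q") is not valid,
# so the designated contradiction does not trivialize the system:
print(valid(lambda a, q: conj(a, neg(a)), lambda a, q: q))  # False
```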

There is, though, a difference between paracompleteness and paraconsistency. Insofar as mathematics is concerned, paracomplete systems had to be devised to cope with very deep problems. The paraconsistent ones, on the other hand, although they have been applied with success to mathematical theories, were conceived for purely philosophical and, in some cases, even for political and ideological motivations. The common point of them all was the need to construct a logical system able to cope with contradictions. That means: to have at one’s disposal a deductive method which offered the possibility of deducing consistent conclusions from inconsistent premisses. Of course, the inconsistency of the premisses had to comply with some (although very wide) conditions to avoid triviality. But these conditions made it possible to cope with paradoxes or antinomies with precision and mathematical sense.

But, philosophically, paraconsistent logic has another very important property: it is used in a spontaneous way to formalize the naive set theory, that is, the kind of theory that pre-Zermelian mathematicians had always employed. And it is, no doubt, important to try to develop mathematics within the frame of naive, spontaneous, mathematical thought, without falling into the artificiality of modern set theory. The formalization of the naive way of mathematical thinking, although every formalization is unavoidably artificial, has opened the possibility of coping with dialectical thought.

Hegel and Topos Theory. Thought of the Day 46.0


The intellectual feat of Lawvere is as important as Gödel’s formal undecidability theorem, perhaps even more so. But there is a difference between the two results: whereas Gödel’s led to a blind alley, Lawvere’s has displayed a new and fascinating panorama to be explored by mathematicians and philosophers. Referring to the positive results of topos theory, Lawvere says:

A science student naively enrolling in a course styled “Foundations of Mathematics” is more likely to receive sermons about unknowability… than to receive the needed philosophical guide to a systematic understanding of the concrete richness of pure and applied mathematics as it has been and will be developed. (Categories of space and quantity)

One of the major philosophical results of elementary topos theory is that the way Hegel looked at logic was, after all, on the right track. According to Hegel, formal mathematical logic was but a superficial tautologous script. True logic was dialectical, and this logic ruled the gigantic process of the development of the Idea. Inasmuch as the Idea was realizing itself through the opposition of theses and antitheses, logic was changing, but not as an arbitrary change of inferential rules. Briefly, in the dialectical system of Hegel, logic was content-dependent.

Now, the fact that every topos has a corresponding internal logic shows that logic is, in quite a precise way, content-dependent; it depends on the structure of the topos. Every topos has its own internal logic, and this logic is materially dependent on the characterization of the topos. This correspondence throws new light on the relation of logic to ontology. Classically, logic was considered as ontologically aseptic. There could be a multitude of different ontologies, but there was only one logic: the classical. Of course, there were some mathematicians who proposed a different logic: the intuitionists. But this proposal was due to not very clear speculative epistemic reasons: they said they could not understand the meaning of the attributive expression “actual infinite”. These mathematicians formed a minority within the professional mathematical community. They were seen as outsiders who had queer ideas about the exact sciences. However, as soon as intuitionistic logic was recognized as the universal internal logic of topoi, its importance became astronomical, because it provided, for the first time, a new vision of the interplay of logic with mathematics. Something had definitively changed in the philosophical panorama.

Rhizomatic Topology and Global Politics. A Flirtatious Relationship.

 


Deleuze and Guattari see concepts as rhizomes, biological entities endowed with unique properties. They see concepts as spatially representable, where the representation contains principles of connection and heterogeneity: any point of a rhizome must be connected to any other. Deleuze and Guattari list the possible benefits of spatial representation of concepts, including the ability to represent complex multiplicity, the potential to free a concept from foundationalism, and the ability to show both breadth and depth. In this view, geometric interpretations move away from the insidious understanding of the world in terms of dualisms, dichotomies, and lines, to understand conceptual relations in terms of space and shapes. The ontology of concepts is thus, in their view, appropriately geometric: a multiplicity defined not by its elements, nor by a center of unification and comprehension, but instead measured by its dimensionality and its heterogeneity. The conceptual multiplicity is already composed of heterogeneous terms in symbiosis, and is continually transforming itself, such that it is possible to follow, and map, not only the relationships between ideas but how they change over time. In fact, the authors claim that there are further benefits to geometric interpretations of understanding concepts which are unavailable in other frames of reference. They outline the unique contribution of geometric models to the understanding of contingent structure:

Principle of cartography and decalcomania: a rhizome is not amenable to any structural or generative model. It is a stranger to any idea of genetic axis or deep structure. A genetic axis is like an objective pivotal unity upon which successive stages are organized; deep structure is more like a base sequence that can be broken down into immediate constituents, while the unity of the product passes into another, transformational and subjective, dimension. (Deleuze and Guattari)

The word that Deleuze and Guattari use for ‘multiplicities’ can also be translated by the topological term ‘manifold.’ If we thought about their multiplicities as manifolds, there are a virtually unlimited number of things one could come to know, in geometric terms, about (and with) our object of study, abstractly speaking. Among those unlimited things we could learn are properties of groups (homological, cohomological, and homeomorphic), complex directionality (maps, morphisms, isomorphisms, and orientability), dimensionality (codimensionality, structure, embeddedness), partiality (differentiation, commutativity, simultaneity), and shifting representation (factorization, ideal classes, reciprocity). Each of these functions allows for a different, creative, and potentially critical representation of global political concepts, events, groupings, and relationships. This is how concepts are to be looked at: as manifolds. With such a dimensional understanding of concept-formation, it is possible to deal with complex interactions of like entities, and interactions of unlike entities. Critical theorists have emphasized the importance of such complexity in representation a number of times, speaking about it in terms compatible with mathematical methods if not mathematically. For example, Foucault’s declaration that “practicing criticism is a matter of making facile gestures difficult” both reflects and is reflected in many critical theorists’ projects of revealing the complexity in (apparently simple) concepts deployed in global politics. This leads to a shift in the concept of danger as well, where danger is not an objective condition but “an effect of interpretation”. Critical thinking about how-possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses, sending a wave of added complexity through other concepts as well. This work of seeking complexity serves one of the major underlying functions of critical theorizing: finding invisible injustices in (modernist, linear, structuralist) givens in the operation and analysis of global politics.

In a geometric sense, this complexity could be thought about as multidimensional mapping. In theoretical geometry, the process of mapping conceptual spaces is not primarily empirical, but serves the purpose of representing and reading the relationships between information, including identification, similarity, differentiation, and distance. The reason for defining topological spaces in math, the essence of the definition, is that there is no absolute scale for describing the distance or relation between certain points, yet it makes sense to say that an (infinite) sequence of points approaches some other point (though again there is no way to describe how quickly or from what direction one might be approaching). This seemingly weak relationship, which is defined purely ‘locally’, i.e., in a small locale around each point, is often surprisingly powerful: using only the relationship of approaching parts, one can distinguish between, say, a balloon, a sheet of paper, a circle, and a dot.

To each delineated concept, one should distinguish and associate a topological space, in a (necessarily) non-explicit yet definite manner. Whenever one has a relationship between concepts (here we think of the primary relationship as being that of constitution, but not restrictively), we ‘specify’ a function (or inclusion, or relation) between the topological spaces associated to the concepts. In these terms, a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented. Such an approach can be leveraged for thinking about conceptual components, dimensionality, and structure. In these terms, dimensions can be thought of as properties or qualities, each with their own (often multidimensional) properties or qualities. That a key goal of the modeling of conceptual space is representation means that a key (mathematical and theoretical) goal of concept space mapping is

associationism, where associations between different kinds of information elements carry the main burden of representation. (Conceptual_Spaces_as_a_Framework_for_Knowledge_Representation)

To this end,

objects in conceptual space are represented by points, in each domain, that characterize their dimensional values. (A concept geometry for conceptual spaces)

These dimensional values can be arranged in relation to each other, as Gardenfors explains that

distances represent degrees of similarity between objects represented in space and therefore conceptual spaces are “suitable for representing different kinds of similarity relation”. (Concept)

These similarity relationships can be explored across ideas of a concept and across contexts, but also over time, since “with the aid of a topological structure, we can speak about continuity, e.g., a continuous change” – a possibility which can be found only in treating concepts as topological structures and not in linguistic descriptions or set-theoretic representations.
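Read constructively, the passage suggests a very simple formal model. Here is a toy sketch under my own assumptions (two invented quality dimensions, Euclidean distance, exponentially decaying similarity in the style of Shepard and Gärdenfors), not a reconstruction of any particular system:

```python
import math

# A toy conceptual space: each concept is a point whose coordinates are
# quality dimensions (both dimensions here are invented for illustration).
space = {
    "state":       (0.9, 0.2),
    "institution": (0.8, 0.1),
    "nation":      (0.7, 0.4),
    "movement":    (0.2, 0.9),
}

def similarity(a, b, decay=1.0):
    """Similarity as an exponentially decaying function of distance."""
    return math.exp(-decay * math.dist(space[a], space[b]))

for other in ("institution", "nation", "movement"):
    print(f"state ~ {other}: {similarity('state', other):.3f}")

# Nearby points ("state", "institution") come out more similar than distant
# ones ("state", "movement"): the distance relation is the similarity relation.
```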

Meillassoux’s Principle of Unreason Towards an Intuition of the Absolute In-itself. Note Quote.


The principle of reason such as it appears in philosophy is a principle of contingent reason: not only how philosophical reason concerns difference instead of identity, but also why the Principle of Sufficient Reason can no longer be understood in terms of absolute necessity. In other words, Deleuze disconnects the Principle of Sufficient Reason from the ontotheological tradition no less than from its Heideggerian deconstruction. What remains, then, of Meillassoux’s criticism in After Finitude: An Essay on the Necessity of Contingency that Deleuze, no less than Hegel, hypostatizes or absolutizes the correlation between thinking and being and thus brings back a vitalist version of speculative idealism through the back door?

At stake in Meillassoux’s criticism of the Principle of Sufficient Reason is a double problem: the conditions of possibility of thinking and knowing an absolute and, subsequently, the conditions of possibility of rational ideology critique. The first problem is primarily epistemological: how can philosophy justify scientific knowledge claims about a reality that is anterior to our relation to it and that is hence not given in the transcendental object of possible experience (the arche-fossil)? This is a problem for all post-Kantian epistemologies that hold that we can only ever know the correlate of being and thought. Instead of confronting this weak correlationist position head on, however, Meillassoux seeks a solution in the even stronger correlationist position that denies not only the knowability of the in-itself, but also its very thinkability or imaginability. Simplified: if strong correlationists such as Heidegger or Wittgenstein insist on the historicity or facticity (non-necessity) of the correlation of reason and ground in order to demonstrate the impossibility of thought’s self-absolutization, then the very force of their argument, if it is not to contradict itself, implies more than they are willing to accept: the necessity of the contingency of the transcendental structure of the for-itself. As a consequence, correlationism is incapable of demonstrating itself to be necessary. This is what Meillassoux calls the principle of factiality or the principle of unreason. It says that it is possible to think two things that exist independently of thought’s relation to them: contingency as such and the principle of non-contradiction. The principle of unreason thus enables the intellectual intuition of something that is absolutely in itself, namely the absolute impossibility of a necessary being. And this in turn implies the real possibility of the completely random and unpredictable transformation of all things from one moment to the next. Logically speaking, the absolute is thus a hyper-chaos or something akin to Time, in which nothing is impossible, except necessary beings or necessary temporal experiences such as the laws of physics.

There is, moreover, nothing mysterious about this chaos. Contingency – and Meillassoux consistently refers to this as Hume’s discovery – is a purely logical and rational necessity, since without the principle of non-contradiction not even the principle of factiality would be absolute. It is thus a rational necessity that puts the Principle of Sufficient Reason out of action, since it would be irrational to claim that it is a real necessity, as everything that is is devoid of any reason to be as it is. This leads Meillassoux to the surprising conclusion that “[t]he Principle of Sufficient Reason is thus another name for the irrational… The refusal of the Principle of Sufficient Reason is not the refusal of reason, but the discovery of the power of chaos harboured by its fundamental principle (non-contradiction)” (Meillassoux 2007: 61). The principle of factiality thus legitimates or founds the rationalist requirement that reality be perfectly amenable to conceptual comprehension, at the same time that it opens up “[a] world emancipated from the Principle of Sufficient Reason” (Meillassoux) but founded only on that of non-contradiction.

This emancipation brings us to the practical problem Meillassoux tries to solve, namely the possibility of ideology critique. Correlationism is essentially a discourse on the limits of thought, for which the deabsolutization of the Principle of Sufficient Reason marks reason’s discovery of its own essential inability to uncover an absolute. Thus, if the Galilean-Copernican revolution of modern science meant the paradoxical unveiling of thought’s capacity to think what there is regardless of whether thought exists or not, then Kant’s correlationist version of the Copernican revolution was in fact a Ptolemaic counterrevolution. Since Kant, and even more since Heidegger, philosophy has been averse precisely to the speculative import of modern science as a formal, mathematical knowledge of nature. Its unintended consequence is therefore that questions of ultimate reasons have been dislocated from the domain of metaphysics into that of non-rational, fideist discourse. Philosophy has thus made the contemporary end of metaphysics complicit with the religious belief in the Principle of Sufficient Reason beyond its very thinkability. Whence Meillassoux’s counter-intuitive conclusion that the refusal of the Principle of Sufficient Reason furnishes the minimal condition for every critique of ideology, insofar as ideology cannot be identified with just any variety of deceptive representation, but is rather any form of pseudo-rationality whose aim is to establish that what exists as a matter of fact exists necessarily. In this way a speculative critique pushes skeptical rationalism’s relinquishment of the Principle of Sufficient Reason to the point where it affirms that there is nothing beneath or beyond the manifest gratuitousness of the given – nothing but the limitless and lawless power of its destruction, emergence, or persistence. Such an absolutizing, even though no longer absolutist, approach would be the minimal condition for every critique of ideology: to reject dogmatic metaphysics means to reject all real necessity, and a fortiori to reject the Principle of Sufficient Reason, as well as the ontological argument.

On the one hand, Deleuze’s criticism of Heidegger bears many similarities to that of Meillassoux when he redefines the Principle of Sufficient Reason in terms of contingent reason or with Nietzsche and Mallarmé: nothing rather than something such that whatever exists is a fiat in itself. His Principle of Sufficient Reason is the plastic, anarchic and nomadic principle of a superior or transcendental empiricism that teaches us a strange reason, that of the multiple, chaos and difference. On the other hand, however, the fact that Deleuze still speaks of reason should make us wary. For whereas Deleuze seeks to reunite chaotic being with systematic thought, Meillassoux revives the classical opposition between empiricism and rationalism precisely in order to attack the pre-Kantian, absolute validity of the Principle of Sufficient Reason. His argument implies a return to a non-correlationist version of Kantianism insofar as it relies on the gap between being and thought and thus upon a logic of representation that renders Deleuze’s Principle of Sufficient Reason unrecognizable, either through a concept of time, or through materialism.

Organic and the Orgiastic. Cartography of Ground and Groundlessness in Deleuze and Heidegger. Thought of the Day 43.0


In his last hermeneutical Erörterung of Leibniz, The Principle of Ground, Heidegger traces metaphysics back to its epochal destiny in the twofold or duplicity (Zwiefalt) of Being and Thought and thus follows the ground in its self-ungrounding (zugrundegehen). Since the foundation of thought is also the foundation of Being, reason and ground are not equal but belong together (zusammengehören) in the Same, as the ungrounded yet historical horizon of the metaphysical destiny of Being: “On the one hand we say: Being and ground: the Same. On the other hand we say: Being: the abyss (Ab-Grund). What is important is to think the univocity (Einsinnigkeit) of both Sätze, those Sätze that are no longer Sätze.” In Difference and Repetition, similarly, Deleuze tells us that sufficient reason is twisted into the groundless. He confirms that the Fold (Pli) is the differenciator of difference engulfed in groundlessness, always folding, unfolding, refolding: to ground is always to bend, to curve and recurve. He thus concludes:

Sufficient reason or ground is strangely bent: on the one hand, it leans towards what it grounds, towards the forms of representation; on the other hand, it turns and plunges into a groundless beyond the ground which resists all forms and cannot be represented.

Despite the fundamental similarity of their conclusions, however, our short overview of Deleuze’s transformation of the Principle of Sufficient Reason has already indicated that his argumentation is very different from Heideggerian hermeneutics. To ground, Deleuze agrees, is always to ground representation. But we should distinguish between two kinds of representation: organic or finite representation and orgiastic or infinite representation. What unites the classicisms of Kant, Descartes and Aristotle is that representation retains organic form as its principle and the finite as its element. Here the logical principle of identity always precedes ontology, such that the ground as element of difference remains undetermined and in itself. It is only with Hegel and Leibniz that representation discovers the ground as its principle and the infinite as its element. It is precisely the Principle of Sufficient Reason that enables thought to determine difference in itself. The ground is like a single and unique total moment, simultaneously the moment of the evanescence and production of difference, of disappearance and appearance. What the attempts at rendering representation infinite reveal, therefore, is that the ground has not only an Apollinian, orderly side, but also a hidden Dionysian, orgiastic side. Representation discovers within itself the limits of the organized; tumult, restlessness and passion underneath apparent calm. It rediscovers monstrosity.

The question, then, is how to evaluate this ambiguity that is essential to the ground. For Heidegger, the Zwiefalt is either naively interpreted from the perspective of its concave side, following the path of the history of Western thought as the belonging together of Being and thought in a common ground; or it is meditated from its convex side, excavating it from the history of the forgetting of Being – the decline of the Fold (Wegfall der Zwiefalt, Vorenthalt der Zwiefalt) – as the pivotal point of the Open in its unfolding, and following the path that leads from the ground to the abyss. Instead of this all-or-nothing approach, Deleuze takes up the question in a Nietzschean, i.e. genealogical, fashion. The attempt to represent difference in itself cannot be disconnected from its malediction, i.e. the moral representation of groundlessness as a completely undifferentiated abyss. As Bergson already observed, representational reason poses the problem of the ground in terms of the alternative between order and chaos. This goes in particular for the kind of representational reason that seeks to represent the irrepresentable: representation, especially when it becomes infinite, is imbued with a presentiment of groundlessness. Because it has become infinite in order to include difference within itself, however, it represents groundlessness as a completely undifferentiated abyss, a universal lack of difference, an indifferent black nothingness. Indeed, if Deleuze is so hostile to Hegel, it is because the latter embodies like no other the ultimate illusion inseparable from the Principle of Sufficient Reason insofar as it grounds representation, namely that groundlessness should lack differences, when in fact it swarms with them.


Meillassoux, Deleuze, and the Ordinal Relation Un-Grounding Hyper-Chaos. Thought of the Day 41.0


As Heidegger demonstrates in Kant and the Problem of Metaphysics, Kant limits the metaphysical hypostatization of the logical possibility of the absolute by subordinating the latter to a domain of real possibility circumscribed by reason’s relation to sensibility. In this way he turns the necessary temporal becoming of sensible intuition into the sufficient reason of the possible. Instead, the anti-Heideggerian thrust of Meillassoux’s intellectual intuition is that it absolutizes the a priori realm of pure logical possibility and disconnects the domain of mathematical intelligibility from sensibility (Ray Brassier, “The Enigma of Realism”, in Robin Mackay (ed.), Collapse: Philosophical Research and Development – Speculative Realism). Hence the chaotic structure of his absolute time: anything is possible. Whereas real possibility is bound to correlation and temporal becoming, logical possibility is bound only by non-contradiction. It is a pure or absolute possibility that points to a radical diachronicity of thinking and being: we can think of being without thought, but not of thought without being.

Deleuze clearly situates himself in the camp of Kant and Heidegger when he argues that time as pure auto-affection (folding) is the transcendental structure of thought. Whatever exists, in all its contingency, is grounded by the first two syntheses of time and ungrounded by the third, disjunctive synthesis in the implacable difference between past and future. For Deleuze, it is precisely the eternal return of the ordinal relation between what exists and what may exist that destroys necessity and guarantees contingency. As a transcendental empiricist, he thus agrees with the limitation of logical possibility to real possibility. On the one hand, he also agrees with Hume and Meillassoux that “[r]eality is not the result of the laws which govern it”. The law of entropy or degradation in thermodynamics, for example, is unveiled as nihilistic by Nietzsche’s eternal return, since it is based on a transcendental illusion in which difference [of temperature] is the sufficient reason of change only to the extent that the change tends to negate difference. On the other hand, Meillassoux’s absolute capacity-to-be-other relative to the given (Quentin Meillassoux, After Finitude: An Essay on the Necessity of Contingency, trans. Ray Brassier) falls away in the face of what is actual here and now. This is because although Meillassoux’s hyper-chaos may be like time, it also contains a tendency to undermine or even reject the significance of time. Thus one may wonder with Jon Roffe (Time_and_Ground_A_Critique_of_Meillassou) how time, as the sheer possibility of any future or different state of affairs, can provide the (non-)ground for the realization of this state of affairs in actuality. The problem is less that Meillassoux’s contingency is highly improbable than that his ontology includes no account of actual processes of transformation or development. As Peter Hallward has noted (in Levi Bryant, Nick Srnicek and Graham Harman (eds.), The Speculative Turn: Continental Materialism and Realism), the abstract logical possibility of change is an empty and indeterminate postulate, completely abstracted from all experience and worldly or material affairs. For this reason, the difference between Deleuze and Meillassoux seems to come down to what is more important (rather than what is more originary): the ordinal sequences of sensible intuition or the logical lack of reason.

But for Deleuze, time as the creatio ex nihilo of pure possibility is not just irrelevant in relation to real processes of chaosmosis, which are both chaotic and probabilistic, molecular and molar. Rather, because it puts the Principle of Sufficient Reason as principle of difference out of real action, it is either meaningless with respect to the real or it can only have a negative or limitative function. This is why Deleuze replaces the possible/real opposition with that of virtual/actual. Whereas conditions of possibility always relate asymmetrically and hierarchically to any real situation, the virtual as sufficient reason is no less real than the actual, since it is first of all its unconditioned or unformed potential of becoming-other.