# Reductionism of Numerical Complexity: A Wittgensteinian Excursion

Wittgenstein’s criticism of Russell’s logicist foundation of mathematics, contained in the Remarks on the Foundations of Mathematics, consists in saying that it is not the formalized version of mathematical deduction which vouches for the validity of the intuitive version, but conversely.

If someone tries to shew that mathematics is not logic, what is he trying to shew? He is surely trying to say something like: If tables, chairs, cupboards, etc. are swathed in enough paper, certainly they will look spherical in the end.

He is not trying to shew that it is impossible that, for every mathematical proof, a Russellian proof can be constructed which (somehow) ‘corresponds’ to it, but rather that the acceptance of such a correspondence does not lean on logic.

Taking up Wittgenstein’s criticism, Hao Wang (Computation, Logic, Philosophy) discusses the view that mathematics “is” axiomatic set theory as one of several possible answers to the question “What is mathematics?”. Wang points out that this view is epistemologically worthless, at least as far as the task of understanding the guiding features of cognition is concerned:

Mathematics is axiomatic set theory. In a definite sense, all mathematics can be derived from axiomatic set theory. [ . . . ] There are several objections to this identification. [ . . . ] This view leaves unexplained why, of all the possible consequences of set theory, we select only those which happen to be our mathematics today, and why certain mathematical concepts are more interesting than others. It does not help to give us an intuitive grasp of mathematics such as that possessed by a powerful mathematician. By burying, e.g., the individuality of natural numbers, it seeks to explain the more basic and the clearer by the more obscure. It is a little analogous to asserting that all physical objects, such as tables, chairs, etc., are spherical if we swathe them with enough stuff.

Reductionism is an age-old project; a close forerunner of its incarnation in set theory was the arithmetization program of the 19th century. It is interesting that one of its prominent representatives, Richard Dedekind (Essays on the Theory of Numbers), exhibited a quite distanced attitude towards a consistent carrying-out of the program:

It appears as something self-evident and not new that every theorem of algebra and higher analysis, no matter how remote, can be expressed as a theorem about natural numbers [ . . . ] But I see nothing meritorious [ . . . ] in actually performing this wearisome circumlocution and insisting on the use and recognition of no other than rational numbers.

Perec wrote a detective novel without using the letter ‘e’ (La disparition; English: A Void), thus proving not only that such an enormous enterprise is indeed possible but also that formal constraints sometimes have great aesthetic appeal. The translation of mathematical propositions into a poorer linguistic framework can easily be compared with such painful lipogrammatical exercises. In principle, all logical connectives can be simulated in a framework using exclusively Sheffer’s stroke, and all cuts (in Gentzen’s sense) can be eliminated; one can do entirely without ordinary language in mathematics and formalize everything, and so on: in principle, one could leave out a whole lot of things. However, in doing so one would depart from the way of thinking actually employed by the mathematician, who really uses “and” and “not” and cuts, and who does not reduce many things to formal systems. Obviously, it is the proof theorist, as a working mathematician, who is interested in things like the reduction to Sheffer’s stroke, since they allow for more concise induction proofs in the analysis of a logical calculus. Hence this proof theorist has much the same motives as a mathematician working on other problems who avoids a completely formalized treatment of those problems because he is not interested in their proof-theoretical aspect.
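The reduction to Sheffer’s stroke can be sketched concretely. A minimal illustration (my own construction, not from the text), checking over all truth-value assignments that the usual connectives are simulated by the stroke alone:

```python
# Hedged sketch: all function names and conventions here are mine.

def nand(p, q):
    """Sheffer's stroke p|q: "not both p and q"."""
    return not (p and q)

def not_(p):
    # ¬p simulated as p | p
    return nand(p, p)

def and_(p, q):
    # p ∧ q simulated as (p|q) | (p|q)
    return nand(nand(p, q), nand(p, q))

def or_(p, q):
    # p ∨ q simulated as (p|p) | (q|q)
    return nand(nand(p, p), nand(q, q))

# Exhaustive check over all assignments: the simulations agree with
# the intended semantics of "not", "and", "or".
for p in (False, True):
    assert not_(p) == (not p)
    for q in (False, True):
        assert and_(p, q) == (p and q)
        assert or_(p, q) == (p or q)
```

The exhaustive loop is the whole point: functional completeness of the stroke is verified by finite inspection, nothing more.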

There might be quite similar reasons for the interest of some set theorists in expressing usual mathematical constructions exclusively with the expressive means of ZF (i.e., in terms of ∈). But beyond this, is there any philosophical interpretation of such a reduction? In the last analysis, mathematicians always transform (and that means: change) their objects of study in order to make them accessible to certain mathematical treatments. If one considers a mathematical concept as a tool, one does not only use it in a way different from the one in which it would be used if it were considered as an object; moreover, its semiotical representation is given a different form in the two cases. In this sense, the proof theorist has to “change” the mathematical proof (which is his or her object of study, to be treated with mathematical tools). When stating that something is used as object or as tool, we always have to ask: in which situation, and by whom?

A second observation is that the translation of propositional formulæ in terms of Sheffer’s stroke in general yields quite complicated new formulæ. What is “simple” here is the particularly small number of symbols needed; but neither does the semantics become clearer (p|q means “not both p and q”; cognitively, this looks more complex than “p and q”, and so on), nor are the resulting formulæ “short”. What is sought in this case, hence, is a reduction of numerical complexity, while the primitive basis attained by the reduction looks cognitively less “natural” than the original situation (or, as Peirce expressed it, “the consciousness in the determined cognition is more lively than in the cognition which determines it”); similarly in the case of cut elimination. In contrast to this, many philosophers are convinced that the primitive basis of operating with sets really constitutes a “natural” basis of mathematical thinking, i.e., such operations are seen as the “standard bricks” of which this thinking is actually made – while no one will reasonably claim that expressions of the type p|q play a similar role for propositional logic. And yet reduction to set theory does not really have the task of “explanation”. It is true that one thus reduces propositions about “complex” objects to propositions about “simple” objects; the propositions themselves, however, thereby become in general more complex. Couched in Fregean terms, one can perhaps more easily grasp their denotation (since the denotation of a proposition is its truth value) but not their meaning. A more involved conceptual framework, however, might lead to simpler propositions (and in most cases has actually been introduced precisely in order to do so). A parallel argument concerns deductions: in its totality, a deduction becomes more complex (and less intelligible) through decomposition into elementary steps.
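The point about numerical complexity can be made tangible. The following sketch (an illustration of mine, with an ad hoc nested-tuple encoding of formulæ) rewrites a small formula into stroke-only form and compares symbol counts: the alphabet shrinks to one connective while the formula itself grows.

```python
# Hedged sketch: the nested-tuple encoding of formulae is my own convention.

def to_nand(f):
    """Rewrite f, built from variables (strings), ('not', a),
    ('and', a, b), ('or', a, b), into Sheffer-stroke-only form."""
    if isinstance(f, str):
        return f
    if f[0] == 'not':
        a = to_nand(f[1])
        return ('nand', a, a)
    if f[0] == 'and':
        n = ('nand', to_nand(f[1]), to_nand(f[2]))
        return ('nand', n, n)
    if f[0] == 'or':
        a, b = to_nand(f[1]), to_nand(f[2])
        return ('nand', ('nand', a, a), ('nand', b, b))

def size(f):
    """Count symbols: one per variable occurrence, one per connective."""
    return 1 if isinstance(f, str) else 1 + sum(size(x) for x in f[1:])

# (p ∧ q) ∨ ¬r: 6 symbols before translation, 23 after.
f = ('or', ('and', 'p', 'q'), ('not', 'r'))
assert size(f) == 6
assert size(to_nand(f)) == 23
```

The blow-up is typical: each connective eliminated duplicates its subformulæ, so the translated formula is larger and cognitively less transparent, exactly the trade-off the paragraph describes.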

Now, it will be subject to discussion whether in the case of some set operations it is admissible at all to claim that they are basic for thinking (which is certainly true in the case of the connectives of propositional logic). It is perfectly possible that the common sense which organizes the acceptance of certain operations as a natural basis relies on something different, not having the character of some eternal laws of thought: it relies on training.

Is it possible to observe that a surface is coloured red and blue; and not to observe that it is red? Imagine a kind of colour adjective were used for things that are half red and half blue: they are said to be ‘bu’. Now might not someone be trained to observe whether something is bu; and not to observe whether it is also red? Such a man would then only know how to report: “bu” or “not bu”. And from the first report we could draw the conclusion that the thing was partly red.
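The ‘bu’ thought experiment has a precise logical skeleton, which the following toy rendering (entirely my own; the predicate names are invented) tries to capture: the trained observer reports only ‘bu’ or ‘not bu’, and ‘partly red’ is drawn as a conclusion from the positive report alone.

```python
# Hedged toy: 'observe_bu' and 'infer_partly_red' are invented names for
# the roles in the thought experiment, nothing more.

def observe_bu(surface):
    """The trained observer: reports only whether the surface is 'bu',
    i.e. half red and half blue; redness as such is never reported."""
    return set(surface) == {'red', 'blue'}

def infer_partly_red(bu_report):
    """From the report 'bu' we may draw the conclusion 'partly red';
    from 'not bu' nothing about red follows."""
    return True if bu_report else None

# The observer reports 'bu'; partial redness is inferred, not observed.
assert infer_partly_red(observe_bu(['red', 'blue'])) is True
assert infer_partly_red(observe_bu(['green'])) is None
```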

# Mathematical Reductionism: A Case via C. S. Peirce’s Hypothetical Realism.

During the 20th century, the following epistemology of mathematics was predominant: a sufficient condition for the possibility of the cognition of objects is that these objects can be reduced to set theory. The conditions for the possibility of the cognition of the objects of set theory (the sets), in turn, can be given in various manners; in any event, the objects reduced to sets do not need an additional epistemological discussion – they “are” sets. Hence, such an epistemology relies ultimately on ontology. Frege conceived the axioms as descriptions of how we actually manipulate extensions of concepts in our thinking (and in this sense as inevitable and intuitive “laws of thought”). Hilbert admitted the use of intuition exclusively in metamathematics where the consistency proof is to be done (by which the appropriateness of the axioms would be established); Bourbaki takes the axioms as mere hypotheses. Hence, Bourbaki’s concept of justification is the weakest of the three: “it works as long as we encounter no contradiction”; nevertheless, it is still epistemology, because from this hypothetical-deductive point of view, one insists that at least a proof of relative consistency (i.e., a proof that the hypotheses are consistent with the frequently tested and approved framework of set theory) should be available.

Doing mathematics, one tries to give proofs for propositions, i.e., to deduce the propositions logically from other propositions (premisses). Now, in the reductionist perspective, a proof of a mathematical proposition yields an insight into the truth of the proposition if the premisses are already established (if one already has an insight into their truth); this can be done by giving proofs for them in turn (in which new premisses will occur which again ask for an insight into their truth), or by agreeing to put them at the beginning (to consider them as axioms or postulates). The philosopher tries to understand how the decision about which propositions to take as axioms is arrived at, because he or she is dissatisfied with the reductionist claim that it is on these axioms that the insight into the truth of the deduced propositions rests. Actually, this epistemology might contain a shortcoming, since Poincaré (and Wittgenstein) stressed that to have a proof of a proposition is by no means the same as to have an insight into its truth.

Attempts to disclose the ontology of mathematical objects reveal the following tendency in the epistemology of mathematics: mathematics is seen as suffering from a lack of ontological “determinateness”, namely that this science (contrary to many others) does not concern material data, so that the concept of material truth is not available (especially in the case of the infinite). This tendency is embarrassing since, on the other hand, mathematical cognition is very often presented as cognition of the “greatest possible certainty”, precisely because it seems not to be bound to material evidence, let alone experimental check.

The technical apparatus developed by the reductionist, set-theoretical approach nowadays serves other purposes, partly for the reason that tacit beliefs about sets were challenged; the explanations of the science which it provides are considered irrelevant by the practitioners of this science. There is doubt that the above-mentioned sufficient condition is also necessary; it is not even accepted throughout as a sufficient one. But what happens if some objects, as in the case of category theory, do not fulfill the condition? It seems that the reductionist approach has, so to say, been undocked from the historical development of the discipline in several respects; an alternative is required.

Anterior to Peirce, epistemology was dominated by the idea of a grasp of objects; since Descartes, intuition was considered throughout as a particular, innate capacity of cognition (even if idealists thought that it concerns the general, and empiricists that it concerns the particular). The task of this particular capacity was the foundation of epistemology; already with Aristotle’s first premisses of syllogism, the aim was to go back to something first. In this traditional approach, it is by the ontology of the objects that one hopes to answer the fundamental question concerning the conditions for the possibility of the cognition of these objects. One hopes that there are simple “basic objects” to which the more complex objects can be reduced and whose cognition is possible by common sense – be this an innate or otherwise distinguished capacity of cognition common to all human beings. Here, epistemology is “wrapped up” in (or rests on) ontology; to do epistemology, one has to do ontology first.

Peirce shares Kant’s opinion according to which the object depends on the subject; however, he does not agree that reason is the crucial means of cognition to be criticised. In his paper “Questions concerning certain faculties claimed for man”, he points out the basic assumption of pragmatist philosophy: every cognition is semiotically mediated. He says that there is no immediate cognition (a cognition which “refers immediately to its object”), but that every cognition “has been determined by a previous cognition” of the same object. Correspondingly, Peirce replaces critique of reason by critique of signs. He thinks that Kant’s distinction between the world of things per se (Dinge an sich) and the world of apparition (Erscheinungswelt) is not fruitful; he rather distinguishes the world of the subject and the world of the object, connected by signs; his position consequently is a “hypothetical realism” in which all cognitions are only valid with reservations. This position does not negate (nor assert) that the object per se (with the semiotical mediation stripped off) exists, since such assertions of “pure” existence are seen as necessarily hypothetical (that means, not withstanding philosophical criticism).

By his basic assumption, Peirce was led to reveal a problem concerning the subject matter of epistemology, since this assumption means in particular that there is no intuitive cognition in the classical sense (which is synonymous to “immediate”). Hence, one could no longer consider cognitions as objects; there is no intuitive cognition of an intuitive cognition. Intuition can be no more than a relation. “All the cognitive faculties we know of are relative, and consequently their products are relations”. According to this new point of view, intuition cannot any longer serve to found epistemology, in departure from the former reductionist attitude. A central argument of Peirce against reductionism or, as he puts it,

the reply to the argument that there must be a first is as follows: In retracing our way from our conclusions to premisses, or from determined cognitions to those which determine them, we finally reach, in all cases, a point beyond which the consciousness in the determined cognition is more lively than in the cognition which determines it.

Peirce gives some examples derived from physiological observations about perception, like the fact that the third dimension of space is inferred, and the blind spot of the retina. In this situation, the process of reduction loses its legitimacy since it no longer fulfills the function of justifying cognition. At such a place, something happens which I would like to call an “exchange of levels”: the process of reduction is interrupted in that the things exchange the roles performed in the determination of a cognition: what was originally considered as determining is now determined by what was originally considered as asking for determination.

The idea that contents of cognition are necessarily provisional has an effect on the very concept of conditions for the possibility of cognitions. It seems that one can infer from Peirce’s words that what vouches for a cognition is not necessarily the cognition which determines it but the liveliness of our consciousness in the cognition. Here, “to vouch for a cognition” no longer means what it meant before (which was much the same as “to determine a cognition”), but it still means that the cognition is (provisionally) reliable. This conception of the liveliness of our consciousness might roughly be seen as a substitute for the capacity of intuition in Peirce’s epistemology – but only roughly, since it has a different coverage.

# Tarski, Wittgenstein and Undecidable Sentences in Affine Relation to Gödel’s. Thought of the Day 65.0

I imagine someone asking my advice; he says: “I have constructed a proposition (I will use ‘P’ to designate it) in Russell’s symbolism, and by means of certain definitions and transformations it can be so interpreted that it says: ‘P is not provable in Russell’s system.’ Must I not say that this proposition on the one hand is true, and on the other hand is unprovable? For suppose it were false; then it is true that it is provable. And that surely cannot be! And if it is proved, then it is proved that it is not provable. Thus it can only be true, but unprovable.” — Wittgenstein

Any language of such a formal system, say Peano Arithmetic (PA) (or Russell and Whitehead’s Principia Mathematica, or ZFC), expresses – in a finite, unambiguous, and communicable manner – relations between concepts that are external to the language of PA (or of Principia, or of ZFC). Each such language is thus essentially two-valued, since a relation either holds or does not hold externally (relative to the language).

Further, a selected, finite number of primitive formal assertions about a finite set of selected primitive relations of, say, PA are defined as axiomatically PA-provable; all other assertions about relations that can be effectively defined in terms of the primitive relations are termed PA-provable if, and only if, there is a finite sequence of assertions of PA, each of which is either a primitive assertion, or can effectively be determined in a finite number of steps as an immediate consequence of any two assertions preceding it in the sequence by a finite set of rules of consequence.
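This notion of provability (a finite sequence, each of whose members is an axiom or an immediate consequence of earlier members by a rule) can be sketched as an effective checking procedure. The following is a hedged toy of mine, with modus ponens as the only rule and an ad hoc formula representation; it is not PA’s actual axiomatics.

```python
# Hedged toy: an implication is modelled as ('->', antecedent, consequent),
# other formulas as strings; modus ponens is the only rule of consequence.

def is_proof(sequence, axioms):
    """A 'proof' in the sense above: every member of the finite sequence
    is either an axiom or follows from two earlier members by the rule."""
    proved = []
    for line in sequence:
        ok = line in axioms or any(
            b == ('->', a, line)        # some earlier a together with a -> line
            for a in proved for b in proved
        )
        if not ok:
            return False
        proved.append(line)
    return True

axioms = {'p', ('->', 'p', 'q')}
assert is_proof(['p', ('->', 'p', 'q'), 'q'], axioms)   # q is provable
assert not is_proof(['q'], axioms)                      # no shortcut to q
```

Note that the check at each line is finite and mechanical, which is exactly what makes provability (unlike truth, as discussed below) effectively decidable for a given candidate sequence.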

The philosophical dimensions of this emerge if we take M as the standard arithmetical interpretation of PA, where:

(a)  the set of non-negative integers is the domain,

(b)  the integer 0 is the interpretation of the symbol “0” of PA,

(c)  the successor operation (addition of 1) is the interpretation of the “ ‘ ” function,

(d)  ordinary addition and multiplication are the interpretations of “+” and “.”,

(e) the interpretation of the predicate letter “=” is the equality relation.

Now, post-Gödel, the standard interpretation of classical theory seems to be that:

(f) PA can, indeed, be interpreted in M;

(g) assertions in M are decidable by Tarski’s definitions of satisfiability and truth;

(h) Tarskian truth and satisfiability are, however, not effectively verifiable in M.

Tarski made clear his indebtedness to Gödel’s methods,

We owe the method used here to Gödel, who employed it for other purposes in his recently published work [Gödel]. This exceedingly important and interesting article is not directly connected with the theme of our work – it deals with strictly methodological problems, the consistency and completeness of deductive systems – nevertheless, we shall be able to use the methods and in part also the results of Gödel’s investigations for our purpose.

On the other hand, Tarski strongly emphasized that his results were obtained independently, even though Tarski’s theorem on the undefinability of truth implies the existence of undecidable sentences, and hence Gödel’s first incompleteness theorem. Shifting gears: how close, really, was the Wittgensteinian quote to Gödel’s reasoning? The question implicit in Wittgenstein’s argument regarding the possibility of a semantic contradiction in Gödel’s reasoning then arises: how can we assert that a PA-assertion (whether such an assertion is PA-provable or not) is true under interpretation in M, so long as such truth remains effectively unverifiable in M? Since the issue is not resolved unambiguously by Gödel in his paper (nor, apparently, by subsequent standard interpretations of his formal reasoning and conclusions), Wittgenstein’s quote can be taken to argue that, although we may validly draw various conclusions from Gödel’s formal reasoning and conclusions, the existence of a true or false assertion of M cannot be amongst them.

# Wittgenstein’s Form is the Possibility of Structure

Given two arbitrary objects x and y, they can be understood as arguments of a basic ontological connection which, in turn, is either positive or negative. A priori there exist just four cases: the case of positive connection – MP; the case of negative connection – MI; the case that the connection is both positive and negative, hence incoherent – MPI; and, the most popular in combinatorial ontology, the case of mutual neutrality – N( , ). The first case is taken here to be fundamental.

Explication for σ

Now we can offer the following, rather natural explication for a powerful, nearly omnipotent, synthesizer: y is synthesizable from x iff it can be made possible from x:

σ(x) = {y : MP(x,y)}

Notice that the above explication connects the second approach (operator one) with the third (internal) approach to a general theory of analysis and synthesis.
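On a finite domain the explication can be modelled directly. The following sketch (the domain and the sample relations are invented, purely illustrative) renders σ(x) = {y : MP(x,y)} as a set comprehension, together with mutual neutrality N as the absence of both positive and negative connection.

```python
# Hedged toy: DOMAIN and the sample relations MP, MI are my inventions.

DOMAIN = {'a', 'b', 'c'}
MP = {('a', 'b'), ('a', 'c'), ('b', 'c')}   # positive connection (sample)
MI = {('b', 'a')}                           # negative connection (sample)

def sigma(x):
    """sigma(x) = {y : MP(x, y)}: everything synthesizable from x."""
    return {y for y in DOMAIN if (x, y) in MP}

def neutral(x, y):
    """N(x, y): neither positive nor negative connection."""
    return (x, y) not in MP and (x, y) not in MI

assert sigma('a') == {'b', 'c'}
assert sigma('c') == set()      # nothing is synthesizable from c
assert neutral('c', 'a')        # c and a are mutually neutral here
```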

Quoting one of the most mysterious theses of Wittgenstein’s Tractatus:

(2.033) Form is the possibility of structure.

Ask now what this possibility means. It has been pointed out by Frank Ramsey in his famous review of the Tractatus that it cannot be read as a logical modality (i.e., form cannot be treated as an alternative structure), for this reading would immediately make the Tractatus inconsistent.

But rather: ‘Form of x is what makes the structure of y possible’.

Formalization: MP(Form(x), Str(y)), hence – through suitable generalization – MP(x, y).

Wittgensteinian and Leibnizian clues make the nature of MP clearer: the form of x is determined by its substance, whereas the structurality of y means that y is a complex built up in such and such a way. Using the syntactical categorization of Leśniewski and Ajdukiewicz, we obtain therefore that MP has the category of a quantifier: s/n, s – which, as is easy to see, is of higher order and deeply modal.

Therefore MP is a modal quantifier, characterized after Wittgenstein’s clue by

MP(x, y) ↔ MP(S(x), y)

# Transgressive Determinism. Drunken Risibility.

Transgression: though used, or starting to be used, putatively in academic discourses (even the Sokal hoax had this term as an ingredient in the essay), it could variously stand for inter-disciplinarity/cross-disciplinarity or cross-pollination. This way, academics elsewhere believe that the humanities (especially philosophy) could derive benefits, as one of the ‘conatus’ (sorry for the Spinozism here) of philosophy would indicate it to break out of academia. But in its original meaning it stands for breaking out of a norm, which is somehow derisive if collated with the modern connotation. Nonetheless, meaning is so contextualized today that this present meaning finding its way into the technical jargon might never be ruled out.

A determination (falling well under the rubric of determinism) could be more eschatological than teleological, and here, for obvious reasons, a Wittgensteinian invocation is handy. But for similar reasons, pundits from the French/US alliance of post-structuralism would find habitat here as well. Metaphysics should shun ontological dependence for any kind of epistemic accessibility. Probably, we could import realism into the fold for a better understanding. The idea of realism entails the possibility of error, which is further articulated by mind-independence or attitude-independence: the former is ontological independence, while the latter is epistemological independence. A proper decisiveness obligated upon realism could be a handy tool for issues pertaining to Wittgenstein’s multiplicity of language games/forms of life.

# Phaneroscopy/Phanerology de-agentify

Yes, it is a limitation to break the world/universe, or what have you, into binaries. A resolution of the same ain’t possible unless one either exercises an asymptotic progression/regression machine on it, thus relegating the whole into an aporetic point of philosophical frustration that goes by the name of dialectics, or one somehow experiences an event of the binaries morphing into one another. Such a collapse of the one into the other gravitates the defining points of differences into identities, and this goes by the name of the Laruellean “decisionism-in-the-last-instance”. So, dialectics with the second method goes on a honeymoon where minds of the left spend countless nights trying to get it back to the realistic domain (pun intended!!!).

I’d be sorry to be getting into territories that speak the language of failed poets/prose writers, for otherwise I’d not be able to justify how bad a writer I really am!!! The lightened poetry of non-sense and/of Being: even if such a poetry did exist (for me, at least, it never did), then it was probably the romantic ideal of the bygone philosophical ages, and we seem to have come a long way out of it, but still cling on to the symptoms of such an era. Pity!! It is not a conjoining of the obscure with the nothingness, or the Other World. It is rather the tunnelling of the lyrical aspect into the nothingness, a Schellingian approach, as when he says that without confrontation, there is nothing of the creation possible. Dialectical, yes, in a way, but also the underside of it, which is considered a pariah, an outcast, an avoided and avoidable theory of creativity, or what I understand as Leper Creativity. Yes, losing identity could be viewed as relative here. But as I said, “could be”, and I refuse to truck with it imposed-consciously. And hereby I also answer a subsequent point: “it” is uncharacteristic of holding true to the pillars of what constitutes it. Far-fetchedly, “it” is like what Wittgenstein would say: rise up the rungs of the ladder and then discard it. But a difference is to be spotted here. For Wittgenstein, the climber discards the ladder, whereas in the present context, with each rise up a rung of the ladder, a sort of dehumanization takes place in terms of awe/sublime/incapacitation. In other words, a sense of belonging to the “it” is bred in the “we” (agents/agencies) undoubtedly, but is lost sight of due to the intense flows of the “it” in time. A sort of exponential hypertrophy of the “it” due to the “we”, or, loosely speaking, an emergentism in which the node/nodes of “we” are simply sucked in. So “we” build up the “it”, and lose it identity-wise in the process.

On similar lines, the knowledge of surplus is bluntly replaced by the awareness of it, an excess that is wasted more than it is used, a kind of “solar anus” in the Bataillean sense, truly. Philosophical aesthetics falling into the hands of terrorizing hermeneutics: yes, I concur on this. This is one of the reasons why I have started advocating phaneroscopy/phanerology over phenomenology, and it comes close to your recent studies on quantum physics. But then, do we have a choice? We are yet to be defeated by the exploding solar anus, even though we are well on our way to a crushing defeat. Analogically, when someone says that “a world without capitalism is possible”, I tell such Occupy/World Social Forum pundits that it is, but in a way that is stripped of agencies, and not otherwise. Sorry for the hubris here on my part, but my way of looking into these aspects could either mean that I am going a bit too far in my analysis, or getting a really cracked brain now. The polarity between order and disorder becomes unidentifiable when I speak of the lemniscate obscuring the horizon. Why do I say this? For me, order is nothing but an echo of a disordered anarchy that still reverberates. With this I quash ethics, and I have no qualms in doing so, for a whole new set of rules needs to be rewritten/rethought in this very darkness, which incidentally is still on the avoidable radar, but is making a stealthy invasion upon us, and will, before long, annihilate us and de-anthropocentrize. Can’t help feeling sorry for Kant now, for sure.

“It” is the cosmic “capitalism”.

…about meta-narrativizing (sorry for this nonlinear/bottom-up approach to the mail), I could only quip on post-modernism as a highly ineffectual, escapism-laden movement in its reactionary gesture towards modernism. Take, for instance, Lyotard and his turn from libidinal economy to post-modernism through paganism, before he culminates his journey in The Differend. He sure reached a roadblock in Libidinal Economy itself, when faced with his unflinching commitment to an ontology of events, since that raised dire issues for his epistemological affiliations. The resultant Freud–Marx marriage was filed for divorce. The way out that he imagined was to sort out matters by evening out differences with the incommensurable issues of justice, and that’s why he took up paganism. Even here, to begin with, he was in a quandary, since he took recourse to admitting irreducible differences plaguing the prevalent order of things (sorry for this Foucauldian noise!!), and paved the escape route by adhering to the principle of never trying one’s hand/mind, or whatever one could use, at reductionism. So far, so good. But was this turn towards micro-narrativizing proving a difficult ordeal? My reading of the thinker in question undoubtedly says YES. If one reads The Postmodern Condition or The Differend carefully, one notices his liberal borrowings from Wittgenstein’s language games, or what he prefers to call “phrase regimens”. These are used to negate his earlier commitments to an ontology of events by stressing more upon his epistemological ones, and are therefore invoked only as political motivators.
The crux of the matter is: to drive his point home forcefully, he negates critical theory and the unitary Being of society (both pillars of modernism, or meta-narratives in themselves), and substitutes for them a post-modern society built up of compositions of fragmented “phrase regimens” open to alteration in their attempts to pass the test of legitimate narratives. This debt to Wittgenstein is what I call a movement riddled with escapism, an exegesis that begins but has a real eschatological problem. I do not know if I have been able to show clearly, with this example, the fault-lines within micro-narratives.

[addendum]: if Wittgenstein is said to have some resemblances to postmodernism or, more importantly, poststructuralism, human imagination has transcended its sleep state.

Badiou’s truth is to be unearthed in mathematics. His equation mathematics = ontology becomes quite notorious to deal with; nonetheless, it holds the key to understanding his concept of truth. In Badiouian mathematics (can I really use this term???), what constitutes a transition from an inconsistent set to a consistent and definable one is only the subjective intervention to do so. This obviously is a regressive fall-back: why can a subjective intervention not slide into inconsistency? In a situation like this, Deleuze and Guattari would NEVER encourage an outside-the-situation intervention on behalf of a subjective agent who can profess and confess allegiance in order to force consistency onto this inconsistency, problematizing the given situation for a successful transformation of it. This is advocated through CONNECTIONS, between elements and the sets built up by the elements; and that is why their weight is on IMMANENCE. And hereafter, getting back to my first reply on the post: Badiou insists on invoking the void for any such consistency to take shape. Badiou gets away from IMMANENCE to construct his version of the void, the existence of which is NOT networked to the given situation in any way. Thereafter, he calls upon the subject to prove her allegiance by naturalizing these events to effectuate consistency. And that is the reason why I remarked that Badiou is accused by Deleuze and Guattari of invoking the ‘transcendent’. In any case, for Badiou, the truth has to be an archaeological stratum within the site of the event, and hence his mathematizing of it cannot be under any shadow of doubt.

Apart from this vision of truth in Badiou, I see no other, though I agree with your closing phrase about truth getting caught up in the wire-mesh of cold logic and rationality. Truth is an age-old problem for philosophy, which tries in vain to seek answers to questions asked rather badly, and, I even dare say, in a rather Manichean manner.

[addendum 2]: we need to break free from Kantian-infused anthropocentric philosophy. The German Idealism turn has been detrimental to doing philosophy, unless it can be freed of its symptoms. Once one talks about human subjectivity, it is difficult to ignore the extra onerous package of ideological practices. Either way, certain “isms” turn into spoilers.

How would an event emerge? For unless we have an event that has made its existence known, what point is there at all in talking about his version of Truth? We have to discern this something called an event, this ‘new’ situation, in a manner that does not hark back to any encyclopedic determinant under the rubric of inclusivity. Badiou makes this very clear in his mathematical treatment of sets while dealing with his take on constructivism in philosophy. Only with the emergence of such an element, or situation, or what have you, with the sole criterion of it belonging only to itself, is the event’s appearance stamped in reality; otherwise not. Badiou is declarative and not demonstrative as far as announcing the advent of such an “appearance” is concerned. This announcement is linguistic in nature. And the announcement of the appearance is subtractive, for it never belongs, as I said above, to any known determinants.
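The criterion of “belonging only to itself” can be put in set-theoretic notation. What follows is a hedged sketch of the matheme of the event as it is usually rendered from Being and Event, with $X$ standing for the evental site:

```latex
% Matheme of the event: the event e_X of a site X is composed of the
% elements of the site together with the event itself.
e_X = \{\, x \in X,\; e_X \,\}
% Hence the self-belonging e_X \in e_X. The axiom of foundation of ZF
% forbids any set from belonging to itself, so the event is not a set
% recognized by ontology -- which is why its advent can only be
% declared, not demonstrated.
```

This is why the announcement of the event remains linguistic and subtractive: within the ontology of sets there is, strictly speaking, nothing there to point at.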

Well, subtraction is not to be thought of as ‘stripping away’ (though your response points in that direction). For if that were the case, the obvious implication would be truth as congruent with representation. Even if Badiou scorns post-theories, he still retains an aversion to representation. Instead, truth is catalytic to the situation of the event, for it continuously transforms the structure of the situation by playing the role of an interventionist, a mediator. It is this punching of a hole in the fabric of knowledge, effecting a progressive transformation from within, that is subtractive in Badiou.

p.s. The mail in the first sentence cannot be produced for obvious reasons, but such talk from conformist pseudo-Marxists tries to put human imagination to sleep.

# Is Philosophy Revenant?

This piece is in no way trying to endorse the polemical happenings in philosophy on the continent and across the channel and the Atlantic in the English-speaking countries. The traditions of analytic philosophy and continental philosophy are indeed compossible, and also, in a way, in a state of cold war. But one thing running like a common thread through the minds of many philosophers is the proclamation of the ‘End of Philosophy’. I want to shy away from granting recognition to the eschatology philosophy is said to face, and hence to show that the death of philosophy is in no way in sight, for it would mean the tragic abandonment of reflection and meaning, and I doubt we would want to suffer such a loss. Indeed we do face a spate of intellectual terrorism and often badly defined and badly done philosophy, but our valiant attempt, to echo Oliver Wendell Holmes, ‘to churn void and make cheese’ isn’t here to stay.

We have heard that physics is nearing its end. Physicists are trying to set up a system of equations, together called Grand Unified Theories (GUTs), that would enable us to answer for all possible phenomena in the Universe. Although this claim has been made for a long time, the end as such is in no way in sight. Similarly, starting with the initial years of the last century, philosophical problems or systems have either been given the confident death knell or have been branching off to explore new fields. This, in a nutshell, lends legitimacy to what Ernest Gellner said in his Words and Things: “a cleric who loses his faith abandons his calling, but a philosopher who loses his redefines his subject.” On the other hand, there have been constant questions about the purposefulness of doing philosophy in the first place. The only philosophy one might engage in after all that has happened would no longer make any pretense of being in control of the absolute. Indeed, it would have to forbid itself to think the absolute, lest it betray the thought. And yet it must not allow anything to be taken away from the emphatic concept of truth. This contradiction, closely followed in the earlier days of the Frankfurt School critical-theory tradition, defined the precise element of the purpose of doing philosophy.

It is definitely not a case of growing contempt towards philosophy, but of a sense of decadence in doing it. This despondency should in no way be linked with a building up of contempt. Bertrand Russell, in his Unpopular Essays, thinks that if contempt for philosophy is developed to the point at which it becomes systematic, then it becomes a philosophy.

My intention in this talk is to side with what E. M. Forster once said: “Death destroys a man; the idea of death saves him.” In this saying, I wish to substitute philosophy for man. It is precisely this thought, the idea that philosophy is dead, that keeps the entire study of philosophy continuing in the process of ongoing history.

One must remember that when the Greeks spoke of the end of philosophy, they had telos in mind as the end, not, as in today’s usage, the cessation or terminal point of doing philosophy. From the days philosophy began, it always had one companion following it, and that was sophistry. That clearly does not mean that we need to read the history of philosophy along with a history of anti-philosophy.

Before going any further, I would like to quote from Kierkegaard’s Fear and Trembling:

“Heraclitus the obscure said, ‘One cannot pass twice through the same stream’. Heraclitus the obscure had a disciple who did not stop with that; he went further and added, ‘One cannot do it even once’. Poor Heraclitus, to have such a disciple! By this amendment the thesis of Heraclitus was so improved that it became an Eleatic thesis which denies movement, and yet the disciple decided only to be a disciple of Heraclitus… and to go further, not back to the position Heraclitus had abandoned.”

In the universities, where new courses in psychology, anthropology, applied sciences and business sciences are being set up rapidly, philosophy departments are seeing a major decrease in enrollment. Even the funds at the disposal of philosophy studies are being reduced. This could very well mean that philosophy is at an end. This phenomenon is precisely what Heidegger calls the growing impact of specialists, in the sense of more scientific and less democratic control over the various aspects of associated life. This train of thought could very well be linked with the failure of Plato’s philosopher-kings to materialize. Heidegger here expresses concern with the emergence of power vested in an uncontrolled manner, which he condemns as deceitful and dangerous, with the ever-increasing inevitability of ‘striking at the heart of the state’. This power, according to Heidegger, is democratic in format. Many contemporary philosophers try to label this scenario with a psychiatric metaphor by terming it schizophrenic.

The end could be thought of in two manners: the first being philosophy coming full circle, whereby an aporia is reached and, to do philosophy, one starts from where one originally began. This notion is Hegelian. The other is the doctrine of ‘Quietism’, which indicates the clarification of language such that philosophical problems are not solved but dissolved, the Wittgensteinian notion. He says in the Philosophical Investigations that we are seeking the complete clarity that gives philosophy peace, so that it is no longer tormented by questions which bring itself in question (PI §133). If this is achieved, it is possible to will a stoppage to doing philosophy. But that is not all. There is Deleuze, with his proclamation of the end of the verticality of ideas and its replacement by the horizontality of ideas, the rhizomatic. I’ll be concentrating on Deleuze’s treatment at the hands of Badiou.

On the continent, it was Nietzsche who was responsible for killing God. He never achieved any success in consummating philosophy, or in setting it any impossible task, but he showed the futility in the very act of doing philosophy. His non-acceptance of the traditional pillars of the ideas of the classical age persuaded the non-analytical philosophers to accept thinking as the systematic distortion of reality, and Heidegger further cemented his notions. If the philosophers on the continent subscribe to this stand, it indeed correlates with the Hegelian notion of ‘coming full circle’ and thus gets stuck in nostalgia. Heidegger’s notion of ‘metaphysics’ is precisely the idea that being is order, objectively given once and for all. If being is decidedly given once and for all, history is arrested and finds itself in a closed circuit, ruling out any possibility of openness. Heidegger cites, in his lecture on the end of philosophy, the overturning of metaphysics at the hands of Marx. Metaphysics is still spoken of by some philosophers, either as a continuation of classical thought or, in the analytical tradition, as connoting rigidified ‘regional ontologies’ deprived of the historicity that one traces in the Kantian and Husserlian transcendental as the condition for the possibility of any philosophy or science. The Heideggerian notion of metaphysics is largely rejected in contemporary philosophy.

As I promised earlier, my focus is on the philosophical thought of Deleuze, and his treatment at the hands of the French philosopher Badiou is my primary interest here. Badiou’s contribution could lead us into a framework by which we might find our way out of the labyrinth of this badly defined continental philosophy. This might not be a space of hope, as it could also play out on the flip side. There are occasions where traces, or rather traits, of the Heideggerian or Deleuzian doxa compel him to fall prey to them, cutting off a truer confrontation with the radicality of the work he starts off with.

Badiou talks of the reinvention of the categories of truth and subject against the Nietzschean critique, of eventuality, of politics vis-à-vis an ontology born again, and of the treatment of European nihilism and capitalism. He takes the cases of Heidegger and Deleuze in explicating these issues. In his treatment of Heidegger in the Manifesto and of Deleuze in The Clamor of Being, he has caricatured Heidegger’s opinion supporting the crypto-teleology of the ‘end of philosophy’, while opening up the thought of Deleuze for a conceptual confrontation. Badiou’s system echoes Deleuze’s philosophical injunctions in that he never believed metaphysics would die a natural death but insisted on its stifling at the hands of sophistry, in philosophical thought as immanently multiple, and in taking no recourse to nostalgia as far as explaining phenomena like nihilism.

For philosophy to be revenant, Badiou advocates a concept called the ‘Platonism of the multiple’. According to Badiou, the first cause responsible for the death is borrowed from Lacan’s concept of suture: philosophy sutures (binds) itself to non-philosophical conditions, i.e. the destiny and the praxis of philosophy are sutured to these conditions. His four conditions are politics, science, art and love. For instance, the political suture: Marxism, that is, philosophy binding itself to a particular political programme. If philosophy is to travel historically, it is extremely essential that these sutures be retained. The problem of the end of philosophy arises in the case of a ‘double suture’, when a belief in the complicity of the ‘metaphysics of subjectivity’ and technologically determined totalitarianism is maintained. Such complicities urge philosophy to abandon its consistency and thus compel a cadence of a kind. This, in a nutshell, is the jettisoning of the independent procedures philosophy is accustomed to.

Badiou demands that philosophy think the discontinuity in the productions of evental subjects as holes in the fabric of knowledge, thus undermining living philosophical traditions and reinventing Subject and Truth. Both reinvented categories are thought of as ‘event’ emerging out of the void (inconsistency) of any situation. His fidelity to the event as rare, to the subject as a finite fragment of the post-event objectless truth, and to truth as the event of the void of the situation has adverse ramifications. In his study of Deleuze, the only way of reinventing these categories is through the reinvention of meontology, that is, the equating of Being with the multiple-composition of the world through set theory. This is his Platonism of the Multiple. Badiou not only denies the phenomenological subject but also the continuity of Being, thus rejecting the notion of philosophical temporality. To that extent, even Deleuze was anti-phenomenological in his approach, in that he would take experience to its utmost consequences and then de-suture the subject/object distinction to make it impersonal.

Badiou took to set theory only to underwrite his denial of the concept of experience and of the primacy of language. If truth is to be reborn as objectless, the problem of the indiscernible must be dealt with. He takes the help of the set-theoretical approach to de-suture being and language. He defines truth as the singular and extra-linguistic production of multiplicity within one of the four conditions of philosophy, viz. politics, science, art and love. If truth is looked at as a supplement rather than as recourse to transcendence, then there is this inconsistency of the void in the form of an indiscernible (not nameable, but capable of conceptualization); but then are we not dealing with the truth of situations as such rather than the truth of this situation? What singularity can we attach to this inconsistency? Are truths only to be differentiated on the basis of the decisive intervention of meaning? Badiou’s taking to meontology fails in its defence of the singularity of the event. So it seems clear here that the very destination of Deleuze’s thought is the One, that the profusion of cases does not attest to their irreducible singularity, and that the alleged philosophy of the event is already there.
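The ‘indiscernible’ can be glossed set-theoretically. What follows is a hedged sketch, using standard notation rather than Badiou’s own symbols, of the idea (adapted from Cohen’s forcing) that a truth is a part of the situation that no predicate of the situation’s language can separate out:

```latex
% Let S be a situation (a set) and L its encyclopaedic language.
% A part G \subseteq S is discernible if some formula \lambda of L
% defines it:
G = \{\, x \in S : \lambda(x) \,\}
% A generic (indiscernible) part is one for which no such formula
% exists: for every \lambda of L,
G \neq \{\, x \in S : \lambda(x) \,\}
% On this construal a truth is a generic subset: present in the
% situation, yet subtracted from its language.
```

This is what the de-suturing of being and language amounts to in formal dress: the truth is in the situation without being nameable by it.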

As for Badiou’s treatment of the topicality of the Eternal Return, the opposition is Nietzsche contra Mallarmé, regarded on the basis of chance and accountable to the topology of the fold. Badiou opposes any conceptual probabilism that would allow Events to be tendentially captured by the entropy of the Same. Univocity must approve of divergence. However, Badiou is not too articulate in his distinction between the actual and the virtual with regard to Bergsonian duration. In Deleuze’s treatment of entropy (in Difference and Repetition), the thought holds both to the efficacy of the statistical reduction of events to identity and to the inability of this position to account for its own genesis, and for genesis itself, a sort of double bind. What is questionable, though, is the very transformation of entropy into simulacrum. The philosophical ‘plane of immanence’ and the scientific ‘plane of reference’ stand in a sort of unproblematic opposition, and this antagonism precisely is the continuity of the philosophical endeavour.

Both Badiou and Deleuze share an utter disdain for the ‘End of Philosophy’, and Badiou especially feels a deep scorn for the spreading ‘Empire of Opinion’; at one conference he said that ‘the freedom of opinion is the enemy of philosophy’.

Gerald Bruns mentions in his essays on the end of philosophy that philosophy is to be located at the level of the singular and irreplaceable rather than at the level of the universal and the necessary. He talks about this openness precisely in the sense of alterity, in that this openness finds a way of substituting for the sovereignty of the subject. Bruns believes that philosophy can recapture ‘an intimacy with the world’ of the kind Levinas spoke of in the relation of proximity. This means that our relation with the world is not confined to a purely theoretical one, but is a practical relation with those situated within an ongoing history. With primacy placed on the practical, ethics can be given a privileged position in establishing a dialogue between philosophy and literature. This thesis aims at subverting the inherited conception of philosophy as the foundation of knowledge by elevating the singular over the universal and the event over the law.

I do agree that a complete detour has been taken on the continent in the very practice of doing philosophy, and that was the reason I commented on Badiou being the protector. Postmodernism sounded the death knell for the classical way of thinking of philosophy in terms of grand narratives. Micro- or localized narratives are more sensibly thought of as answering the changing world scenario. Yet even before postmodernism could actually sink in by dethroning the ideas related to modernism, talk of ‘Performatism’ started to surface. This concept signifies the sign and the subject coming together in ways that create the aesthetic experience of transcendency, locating it in a place where meaning is constructed. Performatism is looked at as a ‘New Faith’. Together these new epochal ideas have come to be known as the ‘New Sincerity’, and are the talk, in the West, of a loose connection between cultural studies and philosophy post-9/11.

Thus one is to concur that philosophy as revenant is indeed what we are witnessing today, as the break from the ideas of the classical ages gone by is gaining more and more subscribers. All is not lost if we pay heed to deconstructive techniques, in the sense that the end is deferred and yet to come. We need to bring the old methodology back from its marginalized space to the center. This may be asking a lot, but it is the most viable way to encounter this apocalypse.

If philosophy is to be realized, it has to be eliminated. – Marx