Capital As Power.


One strand of Marxian thinking has the Erich Fromm angle of consciousness as linear and directly proportional to exploitation; the non-linearity creeps in from epistemology on the technological side, with something like, say, Moore’s Law, where the ascension of conscious thought is, or could be, likened to exponentials. Now, these exponentials are potent in ridding us of the pronouns, as in the “I” having a compossibility with the “We”, for if these are not gotten rid of, there is asphyxiation in continuing with them, an effort, an energy expended into the vestiges of waste, before Capitalism comes sweeping in over such deliberately pronounced islands of pronouns. This is where the sweep of the “IT” comes in. And this is emancipation of the highest order, where teleology would be replaced by eschatology, and alienation by emancipation. Teleology is alienating, whereas eschatology is emancipating. Agency would become un-agency. It is an emancipation from alienation, from being, into the arms of becoming, for the former is a mere snapshot of the illusory order, whereas the latter is a continuum of fluidity, the fluid dynamics of the deracinated from the illusory order. The “IT” is pure and brute materialism, the cosmic unfoldings beyond our understanding and, importantly, mirrored on the terrestrial. “IT” is not to be realized. “IT” is what engulfs us, kills us, and in the process emancipates us from alienation. “IT” is “Realism”, a philosophy without “we”, Capitalism’s excessive power. “IT” enslaves “us” to the point of our losing any identification. In a nutshell, the theory of capital is a catalogue of heresies to be welcomed, from the vantage of an intention to emancipate economic thought from the etherealized spheres of choice and behaviors, and from the paradigm of disembodied minds.

Jonathan Nitzan and Shimshon Bichler’s Capital as Power: A Study of Order and Creorder

Husserl’s Flip-Flop on Arithmetic Axiomatics. Thought of the Day 118.0


Husserl’s position in his Philosophy of Arithmetic (Psychological and Logical Investigations with Supplementary Texts) was resolutely anti-axiomatic. He attacked those who fell into remote, artificial constructions which, with the intent of building the elementary arithmetic concepts out of their ultimate definitional properties, interpret and change their meaning so much that totally strange, practically and scientifically useless conceptual formations finally result. Especially targeted was Frege’s ideal of the

founding of arithmetic on a sequence of formal definitions, out of which all the theorems of that science could be deduced purely syllogistically.

As soon as one comes to the ultimate, elemental concepts, Husserl reasoned, all defining has to come to an end. All one can then do is to point to the concrete phenomena from or through which the concepts are abstracted and show the nature of the abstractive process. A verbal explanation should place us in the proper state of mind for picking out, in inner or outer intuition, the abstract moments intended and for reproducing in ourselves the mental processes required for the formation of the concept. He said that his analyses had shown with incontestable clarity that the concepts of multiplicity and unity rest directly upon ultimate, elemental psychical data, and so belong among the indefinable concepts. Since the concept of number was so closely joined to them, one could scarcely speak of defining it either. All these points are made on the only pages of Philosophy of Arithmetic that Husserl ever explicitly retracted.

In On the Concept of Number, Husserl had set out to anchor arithmetical concepts in direct experience by analyzing the actual psychological processes to which he thought the concept of number owed its genesis. To obtain the concept of number of a concrete set of objects, say A, A, and A, he explained, one abstracts from the particular characteristics of the individual contents collected, only considering and retaining each one insofar as it is a something or a one. Regarding their collective combination, one thus obtains the general form of the set belonging to the set in question: one and one and . . . and one, to which a number name is assigned.

The enthusiastic espousal of psychologism of On the Concept of Number is not found in Philosophy of Arithmetic. Husserl later confessed that doubts about basic differences between the concept of number and the concept of collecting, which was all that could be obtained from reflection on acts, had troubled and tormented him from the very beginning and had eventually extended to all categorial concepts and to concepts of objectivities of any sort whatsoever, ultimately to include modern analysis and the theory of manifolds, and simultaneously to mathematical logic and the entire field of logic in general. He did not see how one could reconcile the objectivity of mathematics with psychological foundations for logic.

In sharp contrast to Brouwer who denounced logic as a source of truth, from the mid-1890s on, Husserl defended the view, which he attributed to Frege’s teacher Hermann Lotze, that pure arithmetic was basically no more than a branch of logic that had undergone independent development. He bid students not to be “scared” by that thought and to grow used to Lotze’s initially strange idea that arithmetic was only a particularly highly developed piece of logic.

Years later, Husserl would explain in Formal and Transcendental Logic that his

war against logical psychologism was meant to serve no other end than the supremely important one of making the specific province of analytic logic visible in its purity and ideal particularity, freeing it from the psychologizing confusions and misinterpretations in which it had remained enmeshed from the beginning.

He had come to see arithmetic truths as being analytic, as grounded in meanings independently of matters of fact. He had come to believe that the entire overthrowing of psychologism through phenomenology showed that his analyses in On the Concept of Number and Philosophy of Arithmetic had to be considered a pure a priori analysis of essence. For him, pure arithmetic, pure mathematics, and pure logic were a priori disciplines entirely grounded in conceptual essentialities, where truth was nothing other than the analysis of essences or concepts. Pure mathematics as pure arithmetic investigated what is grounded in the essence of number. Pure mathematical laws were laws of essence.

He is said to have told his students that it was to be stressed repeatedly and emphatically that the ideal entities so unpleasant for empiricistic logic, and so consistently disregarded by it, had not been artificially devised either by himself, or by Bolzano, but were given beforehand by the meaning of the universal talk of propositions and truths indispensable in all the sciences. This, he said, was an indubitable fact that had to be the starting point of all logic. All purely mathematical propositions, he taught, express something about the essence of what is mathematical. Their denial is consequently an absurdity. Denying a proposition of the natural sciences, a proposition about real matters of fact, never means an absurdity, a contradiction in terms. In denying the law of gravity, I cast experience to the wind. I violate the evident, extremely valuable probability that experience has established for the laws. But, I do not say anything “unthinkable,” absurd, something that nullifies the meaning of the word as I do when I say that 2 × 2 is not 4, but 5.

Husserl taught that every judgment either is a truth or cannot be a truth, that every presentation either accorded with a possible experience adequately redeeming it, or was in conflict with the experience, and that grounded in the essence of agreement was the fact that it was incompatible with the conflict, and grounded in the essence of conflict that it was incompatible with agreement. For him, that meant that truth ruled out falsehood and falsehood ruled out truth. And, likewise, existence and non-existence, correctness and incorrectness cancelled one another out in every sense. He believed that that became immediately apparent as soon as one had clarified the essence of existence and truth, of correctness and incorrectness, of Evidenz as consciousness of givenness, of being and not-being in fully redeeming intuition.

At the same time, Husserl contended, one grasps the “ultimate meaning” of the basic logical law of contradiction and of the excluded middle. When we state the law of validity that of any two contradictory propositions one holds and the other does not hold, when we say that for every proposition there is a contradictory one, Husserl explained, then we are continually speaking of the proposition in its ideal unity and not at all about mental experiences of individuals, not even in the most general way. With talk of truth it is always a matter of propositions in their ideal unity, of the meaning of statements, a matter of something identical and atemporal. What lies in the identically-ideal meaning of one’s words, what one cannot deny without invalidating the fixed meaning of one’s words has nothing at all to do with experience and induction. It has only to do with concepts. In sharp contrast to this, Brouwer saw intuitionistic mathematics as deviating from classical mathematics because the latter uses logic to generate theorems and in particular applies the principle of the excluded middle. He believed that Intuitionism had proven that no mathematical reality corresponds to the affirmation of the principle of the excluded middle and to conclusions derived by means of it. He reasoned that “since logic is based on mathematics – and not vice versa – the use of the Principle of the Excluded Middle is not permissible as part of a mathematical proof.”
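The disagreement over the excluded middle can be stated crisply in a proof assistant. Below is a minimal sketch in Lean 4 (an illustration assuming only Lean’s core library; it is not part of either author’s argument): the double negation of the principle is provable constructively, while the unrestricted principle itself enters only through a classical axiom, which is roughly the divide between Husserl’s “ideal meaning” of propositions and Brouwer’s refusal.

```lean
-- Minimal illustrative sketch (Lean 4 core assumed).
-- Constructively provable: the double negation of the excluded middle.
theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))

-- The unrestricted principle itself is available only via a classical axiom:
#check (Classical.em : ∀ p : Prop, p ∨ ¬p)
#print axioms Classical.em   -- lists Classical.choice among the axioms used
```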

Triadomania. Thought of the Day 117.0


Peirce’s famous ‘triadomania’ lets most of his decisive distinctions appear in threes, following the tripartition of his list of categories, the famous triad of First, Second, and Third, or Quality, Reaction, Representation, or Possibility, Actuality, Reality.

Firstness is the mode of being of that which is such as it is, positively and without reference to anything else.

Secondness is the mode of being of that which is such as it is, with respect to a second but regardless of any third.

Thirdness is the mode of being of that which is such as it is, in bringing a second and third into relation to each other.

Firstness constitutes the quality of experience: in order for something to appear at all, it must do so due to a certain constellation of qualitative properties. Peirce often uses sensory qualities as examples, but it is important for the understanding of his thought that the examples may refer to phenomena very far from our standard conception of ‘sensory data’, e.g. forms or the ‘feeling’ of a whole melody or of a whole mathematical proof, not to be taken in a subjective sense but as a concept for the continuity of melody or proof as a whole, apart from the analytical steps and sequences in which it may be, subsequently, subdivided. In short, all sorts of simple and complex Gestalt qualities also qualify as Firstnesses. Firstnesses tend to form continua of possibilities such as the continua of shape, color, tone, etc. These qualities, however, are, taken in themselves, pure possibilities and must necessarily be incarnated in phenomena in order to appear. Secondness is the phenomenological category of ‘incarnation’ which makes this possible: it is the insistency, then, with which the individuated, actualized, existent phenomenon appears. Thus, Secondness necessarily forms discontinuous breaks in Firstness, allowing for particular qualities to enter into existence. The mind may imagine anything whatever in all sorts of quality combinations, but something appears with an irrefutable insisting power, reacting, actively, yielding resistance. Peirce’s favorite example is the resistance of the closed door – which might be imagined reduced to the quality of resistance feeling and thus degenerate to pure Firstness, so that his theory would implode into a Hume-like solipsism – but to Peirce this resistance, surprise, event, this thisness, ‘haecceity’ as he calls it with a Scotist term, remains irreducible in the description of the phenomenon (a Kantian idea, at bottom: existence is no predicate). About Thirdness, Peirce may directly state that continuity represents it perfectly: ‘continuity and generality are two names of the same absence of distinction of individuals’. As against Secondness, Thirdness is general; it mediates between First and Second. The events of Secondness are never completely unique (such an event would be inexperienceable), but relate (3) to other events (2) due to certain features (1) in them; Thirdness is thus what facilitates understanding as well as pragmatic action, due to its continuous generality. With a famous example: if you dream about an apple pie, then the very qualities of that dream (taste, smell, warmth, crustiness, etc.) are pure Firstnesses, while the act of baking is composed of a series of actual Secondnesses. But their coordination is governed by a Thirdness: the recipe, being general, can never specify all properties in the individual apple pie; it has a schematic frame-character and subsumes an indefinite series – a whole continuum – of possible apple pies. Thirdness is thus necessarily general and vague. Of course, the recipe may be more or less precise, but no recipe exists which is able to determine each and every property in the cake, including date, hour, place, which tree the apples stem from, etc. – any recipe is necessarily general. In this case, the recipe (3) mediates between dream (1) and fulfilment (2) – its generality, symbolicity, relationality and future orientation are all characteristic of Thirdness.
An important aspect of Peirce’s realism is that continuous generality may be experienced directly in perceptual judgments: ‘Generality, Thirdness, pours in upon us in our very perceptual judgments’.

All these determinations remain purely phenomenological, even if the later semiotic and metaphysical interpretations clearly shine through. In a more general, non-Peircean terminology, his phenomenology can be seen as the description of minimum aspects inherent in any imaginable possible world – for this reason it is imaginability which is the main argument, and this might point in the direction that Peirce could be open to the critique of subjectivism so often aimed at Husserl’s project, which is in some respects analogous. The concept of consciousness is invoked as the basis of imaginability: phenomenology is the study of invariant properties in any phenomenon appearing for a mind. Peirce’s answer would here be, on the one hand, the research community which according to him defines reality – an argument which structurally corresponds to Husserl’s reference to intersubjectivity as a necessary ingredient in objectivity (an object is a phenomenon which is intersubjectively accessible). Peirce, however, has a further argument here, namely his consequent refusal to delimit his concept of mind exclusively to human subjects (a category the use of which he obviously tries to minimize): mind-like processes may take place in nature without any subject being responsible. Peirce will, for continuity reasons, never accept any hard distinction between subject and object and remains extremely parsimonious in the employment of such terms.

From Peirce’s New Elements of Mathematics (The New Elements of Mathematics Vol. 4),

But just as the qualities, which as they are for themselves, are equally unrelated to one another, each being mere nothing for any other, yet form a continuum in which and because of their situation in which they acquire more or less resemblance and contrast with one another; and then this continuum is amplified in the continuum of possible feelings of quality, so the accidents of reaction, which are waking consciousnesses of pairs of qualities, may be expected to join themselves into a continuum.

Since, then an accidental reaction is a combination or bringing into special connection of two qualities, and since further it is accidental and antigeneral or discontinuous, such an accidental reaction ought to be regarded as an adventitious singularity of the continuum of possible quality, just as two points of a sheet of paper might come into contact.

But although singularities are discontinuous, they may be continuous to a certain extent. Thus the sheet instead of touching itself in the union of two points may cut itself all along a line. Here there is a continuous line of singularity. In like manner, accidental reactions though they are breaches of generality may come to be generalized to a certain extent.

Secondness is now taken to actualize these quality possibilities based on an idea that any actual event involves a clash of qualities – in the ensuing argumentation Peirce underlines that the qualities involved in actualization need not be restrained to two but may be many, if they may only be ‘dissolved’ into pairs and hence do not break into the domain of Thirdness. This appearance of actuality, hence, has the property of singularities, spontaneously popping up in the space of possibilities and actualizing pairs of points in it. This transition from First to Second is conceived of along Aristotelian lines: as an actualization of a possibility – and this is expressed in the picture of a discontinuous singularity in the quality continuum. The topological fact that singularities must in general be defined with respect to the neighborhood of the manifold in which they appear, now becomes the argument for the fact that Secondness can never be completely discontinuous but still ‘inherits’ a certain small measure of continuity from the continuum of Firstness. Singularities, being discontinuous along certain dimensions, may be continuous in others, which provides the condition of possibility for Thirdness to exist as a tendency for Secondness to conform to a general law or regularity. As is evident, a completely pure Secondness is impossible in this continuous metaphysics – it remains a conceivable but unrealizable limit case, because a completely discontinuous event would amount to nothing. Thirdness already lies as a germ in the non-discontinuous aspects of the singularity. The occurrences of Secondness seem to be infinitesimal, then, rather than completely extensionless points.

Hegelian Marxism of Lukács: Philosophy as Systematization of Ideology and Politics as Manipulation of Ideology. Thought of the Day 80.0


In the Hegelian Marxism of Lukács, for instance, the historicist problematic begins from the relativisation of theory, whereby it is claimed that historical materialism is the “perspective” and “worldview” of the revolutionary class and that, in general, theory (philosophy) is only the coherent systematisation of the ideological worldview of a social group. No distinction of kind exists between theory and ideology, opening the path for the foundational character of ideology, expressed through the Lukácsian claim that the ideological consciousness of a historical subject is the expression of objective relations, and that, correlatively, this historical subject (the proletariat) alienates-expresses a free society by means of a transparent grasp of social processes. Society, as an expression of a single structure of social relations (where the commodity form and reified consciousness are theoretical equivalents), is an expressive totality, so that politics and ideology can be directly deduced from philosophical relations. According to Lukács’ directly Hegelian conception, the historical subject is the unified proletariat, which, as the “creator of the totality of [social] contents”, makes history according to its conception of the world, and thus functions as an identical subject-object of history. The identical subject-object and the transparency of praxis therefore form the telos of the historical process. Lukács reduces the multiplicity of social practices operative within the social formation to the model of an individual “making history,” through the externalisation of an intellectual conception of the world. Lukács therefore arrives at the final element of the historicist problematic, namely, a theorisation of social practice on the model of individual praxis, presented as the historical action of a “collective individual”. This structure of claims is vulnerable to philosophical deconstruction (Gasché) and leads to individualist political conclusions (Althusser).

In the light of the Gramscian provenance of postmarxism, it is important to note that while the explicit target of Althusser’s critique was the Hegelian totality, Althusser is equally critical of the aleatory posture of Gramsci’s “absolute historicism,” regarding it as exemplary of the impasse of radicalised historicism (Reading Capital). Althusser argues that Gramsci preserves the philosophical structure of historicism exemplified by Lukács and so the criticism of “expressive totality,” or spiritual holism, also applies to Gramsci. According to Gramsci, “the philosophy of praxis is absolute ‘historicism,’ the absolute secularisation and earthiness of thought, an absolute humanism of history”. Gramsci’s is an “absolute” historicism because it subjects the “absolute knowledge” supposed to be possible at the Hegelian “end of history” to historicisation-relativisation: instead of absolute knowledge, every truly universal worldview becomes merely the epochal totalisation of the present. Consequently, Gramsci rejects the conception that a social agent might aspire to “absolute knowledge” by adopting the “perspective of totality”. If anything, this exacerbates the problems of historicism by bringing the inherent relativism of the position to the surface. Ideology, conceptualised as the worldview of a historical subject (revolutionary proletariat, hegemonic alliance), forms the foundation of the social field, because in the historicist lens a social system is cemented by the ideology of the dominant group. Philosophy (and by extension, theory) represents only the systematisation of ideology into a coherent doctrine, while politics is based on ideological manipulation as its necessary precondition. Thus, for historicism, every “theoretical” intervention is immediately a political act, and correlatively, theory becomes the direct servant of ideology.

Something Out of Almost Nothing. Drunken Risibility.

Kant’s first antinomy makes the error of the excluded third option, i.e. it is not impossible that the universe could have both a beginning and an eternal past. If some kind of metaphysical realism is true, including an observer-independent and relational time, then a solution of the antinomy is conceivable. It is based on the distinction between a microscopic and a macroscopic time scale. Only the latter is characterized by an asymmetry of nature under a reversal of time, i.e. the property of having a global (coarse-grained) evolution – an arrow of time – or many arrows, if they are independent from each other. Thus, the macroscopic scale is by definition temporally directed – otherwise it would not exist.

On the microscopic scale, however, only local, statistically distributed events without dynamical trends, i.e. a global time-evolution or an increase of entropy density, exist. This is the case if one or both of the following conditions are satisfied: First, if the system is in thermodynamic equilibrium (e.g. there is degeneracy). And/or second, if the system is in an extremely simple ground state or meta-stable state. (Meta-stable states have a local, but not a global minimum in their potential landscape and, hence, they can decay; ground states might also change due to quantum uncertainty, i.e. due to local tunneling events.) Some still speculative theories of quantum gravity permit the assumption of such a global, macroscopically time-less ground state (e.g. quantum or string vacuum, spin networks, twistors). Due to accidental fluctuations, which exceed a certain threshold value, universes can emerge out of that state. Due to some also speculative physical mechanism (like cosmic inflation) they acquire – and, thus, are characterized by – directed non-equilibrium dynamics, specific initial conditions, and, hence, an arrow of time.
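A toy illustration of this micro/macro split (my own sketch, not from the text, using the classic Ehrenfest urn model as an assumed stand-in for a coarse-grained observable): the microscopic update rule is purely statistical and carries no built-in time direction, yet started from equilibrium the coarse-grained quantity merely fluctuates, while started from a special initial condition it exhibits a directed relaxation, i.e. an arrow.

```python
# Minimal toy sketch (illustrative assumption, not from the text): the
# Ehrenfest urn model. The microscopic rule (move one randomly chosen ball
# between the two urns) has no built-in time direction. Started in
# equilibrium (half/half) the occupation number only fluctuates; started
# from a special initial condition (all balls in one urn) it shows a
# directed, "macroscopic" relaxation towards equilibrium.
import random

def ehrenfest(n_balls, steps, start_in_left, seed=0):
    """Return the trajectory of the number of balls in the left urn."""
    rng = random.Random(seed)
    left = start_in_left
    trajectory = [left]
    for _ in range(steps):
        # a uniformly chosen ball sits in the left urn with probability
        # left / n_balls; it is then moved to the other urn
        if rng.random() < left / n_balls:
            left -= 1
        else:
            left += 1
        trajectory.append(left)
    return trajectory

N, STEPS = 1000, 5000
equilibrium = ehrenfest(N, STEPS, start_in_left=N // 2)
special     = ehrenfest(N, STEPS, start_in_left=N)

print("equilibrium start: min/max of left urn =", min(equilibrium), max(equilibrium))
print("special start    : initial value", special[0], "-> value after relaxation", special[-1])
```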

It is a matter of debate whether such an arrow of time is

1) irreducible, i.e. an essential property of time,

2) governed by some unknown fundamental and not only phenomenological law,

3) the effect of specific initial conditions or

4) of consciousness (if time is in some sense subjective), or

5) even an illusion.

Many physicists favour special initial conditions, though there is no consensus about their nature and form. But in the context at issue it is sufficient to note that such a macroscopic global time-direction is the main ingredient of Kant’s first antinomy, for the question is whether this arrow has a beginning or not.

If time’s arrow were inevitably subjective, ontologically irreducible, fundamental and not only a kind of illusion – if, for instance, some form of metaphysical idealism were true – then physical cosmology about a time before time would be mistaken or quite irrelevant. However, if we do not want to neglect an observer-independent physical reality and to adopt solipsism or other forms of idealism – and there are strong arguments in favor of some form of metaphysical realism – Kant’s rejection seems hasty. Furthermore, if a Kantian is not willing to give up some kind of metaphysical realism, namely the belief in a “Ding an sich“, a thing in itself – and some philosophers actually insisted that this is superfluous: the German idealists, for instance – he has to admit that time is a subjective illusion or that there is a dualism between an objective timeless world and a subjective arrow of time. Contrary to Kant’s thoughts: there are reasons to believe that it is possible, at least conceptually, that time both has a beginning – in the macroscopic sense with an arrow – and is eternal – in the microscopic notion of a steady state with statistical fluctuations.

Is there also some physical support for this proposal?

Surprisingly, quantum cosmology offers a possibility that the arrow has a beginning and that it nevertheless emerged out of an eternal state without any macroscopic time-direction. (Note that there are some parallels to a theistic conception of the creation of the world here, e.g. in the Augustinian tradition which claims that time together with the universe emerged out of a time-less God; but such a cosmological argument is quite controversial, especially in a modern form.) So this possible overcoming of the first antinomy is not only a philosophical conceivability but is already motivated by modern physics. At least some scenarios of quantum cosmology, quantum geometry/loop quantum gravity, and string cosmology can be interpreted as examples for such a local beginning of our macroscopic time out of a state with microscopic time, but with an eternal, global macroscopic timelessness.

To put it in a more general, but abstract framework and get a sketchy illustration, consider the figure.

[Figure: potential (energy density) landscape of a single field; depressions are (meta)stable states, the deepest depression the ground state]

Physical dynamics can be described using “potential landscapes” of fields. For simplicity, here only the variable potential (or energy density) of a single field is shown. To illustrate the dynamics, one can imagine a ball moving along the potential landscape. Depressions stand for states which are stable, at least temporarily. Due to quantum effects, the ball can “jump over” or “tunnel through” the hills. The deepest depression represents the ground state.

In the common theories the state of the universe – the product of all its matter and energy fields, roughly speaking – evolves out of a metastable “false vacuum” into a “true vacuum” which has a state of lower energy (potential). There might exist many (perhaps even infinitely many) true vacua which would correspond to universes with different constants or laws of nature. It is more plausible to start with a ground state which is the minimum of what physically can exist. According to this view an absolute nothingness is impossible. There is something rather than nothing because something cannot come out of absolutely nothing, and something does obviously exist. Thus, something can only change, and this change might be described with physical laws. Hence, the ground state is almost “nothing”, but can become thoroughly “something”. Possibly, our universe – and, independent from this, many others, probably most of them having different physical properties – arose from such a phase transition out of a quasi-atemporal quantum vacuum (and, perhaps, got disconnected completely). Tunneling back might be prevented by the exponential expansion of this brand new space. Because of this cosmic inflation the universe not only became gigantic; simultaneously the potential hill broadened enormously and became (almost) impassable. This preserves the universe from relapsing into its non-existence. On the other hand, if there is no physical mechanism to prevent the tunneling back, or at least to make it very improbable, there is still another option: if infinitely many universes originated, some of them could be long-lived for statistical reasons alone. But this possibility is less predictive and therefore an inferior kind of explanation for not tunneling back.
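A back-of-the-envelope sketch of the “broadened hill” argument (my own illustration with invented units and an assumed one-dimensional toy potential, not a cosmological calculation): in the WKB approximation the tunnelling rate is suppressed by exp(-B), and B grows with the width of the barrier, so broadening the hill makes relapse exponentially improbable.

```python
# Illustrative toy sketch (hbar = m = 1, parameters invented): a tilted
# double-well "landscape" with a false vacuum and a true vacuum, and the
# WKB suppression exponent B = 2 * integral of sqrt(2m(V - E)) dx over the
# classically forbidden region. Broadening the barrier grows B roughly
# linearly, so the tunnelling-back factor exp(-B) collapses exponentially.
import numpy as np

def potential(x, width=1.0):
    """False vacuum near x = -width, true vacuum near x = +width."""
    s = x / width
    return (s**2 - 1.0)**2 - 0.3 * s

def wkb_exponent(energy, width=1.0, n=20001):
    x = np.linspace(-width, width, n)
    barrier = np.clip(potential(x, width) - energy, 0.0, None)  # keep V > E only
    return 2.0 * np.sum(np.sqrt(2.0 * barrier)) * (x[1] - x[0])

E_false = potential(-1.0)            # energy of the metastable state (approx.)
for w in (1.0, 2.0, 4.0, 8.0):
    B = wkb_exponent(E_false, width=w)
    print(f"barrier width ~ {w:3.1f}: B = {B:6.2f}, exp(-B) = {np.exp(-B):.2e}")
```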

Another crucial question remains even if universes could come into being out of fluctuations of (or in) a primitive substrate, i.e. some patterns of superposition of fields with local overdensities of energy: Is spacetime part of this primordial stuff or is it also a product of it? Or, more specifically: Does such a primordial quantum vacuum have a semi-classical spacetime structure or is it made up of more fundamental entities? Unique-universe accounts, especially the modified Eddington models – the soft bang/emergent universe – presuppose some kind of semi-classical spacetime. The same is true for some multiverse accounts describing our universe, where Minkowski space, a tiny closed, finite space or the infinite de Sitter space is assumed. The same goes for string-theory-inspired models like the pre-big bang account, because string and M-theory are still formulated in a background-dependent way, i.e. they require the existence of a semi-classical spacetime. A different approach is the assumption of “building blocks” of spacetime, a kind of pregeometry – for instance the twistor approach of Roger Penrose, and the cellular automata approach of Stephen Wolfram. The most elaborated account in this line of reasoning is quantum geometry (loop quantum gravity). Here, “atoms of space and time” underlie everything.

Though the question whether semiclassical spacetime is fundamental or not is crucial, an answer might nevertheless be neutral with respect to the micro-/macrotime distinction. In both kinds of quantum vacuum accounts the macroscopic time scale is not present. And the microscopic time scale in some respect has to be there, because fluctuations represent change (or are manifestations of change). This change, reversible and relationally conceived, does not occur “within” microtime but constitutes it. Out of a total stasis nothing new and different can emerge, because an uncertainty principle – fundamental for all quantum fluctuations – would not be realized. In an almost, but not completely, static quantum vacuum, however, macroscopically nothing changes either, but there are microscopic fluctuations.

The pseudo-beginning of our universe (and probably infinitely many others) is a viable alternative both to initial and past-eternal cosmologies and philosophically very significant. Note that this kind of solution bears some resemblance to a possibility of avoiding the spatial part of Kant’s first antinomy, i.e. his claimed proof of both an infinite space without limits and a finite, limited space: The theory of general relativity describes what was considered logically inconceivable before, namely that there could be universes with finite, but unlimited space, i.e. this part of the antinomy also makes the error of the excluded third option. This offers a middle course between the Scylla of a mysterious, secularized creatio ex nihilo, and the Charybdis of an equally inexplicable eternity of the world.

In this context it is also possible to defuse some explanatory problems of the origin of “something” (or “everything”) out of “nothing” as well as a – merely assumable, but never provable – eternal cosmos or even an infinitely often recurring universe. But that does not offer a final explanation or a sufficient reason, and it cannot eliminate the ultimate contingency of the world.

Reductionism of Numerical Complexity: A Wittgensteinian Excursion


Wittgenstein’s criticism of Russell’s logicist foundation of mathematics, contained in his Remarks on the Foundations of Mathematics, consists in saying that it is not the formalized version of mathematical deduction which vouches for the validity of the intuitive version, but conversely.

If someone tries to shew that mathematics is not logic, what is he trying to shew? He is surely trying to say something like: If tables, chairs, cupboards, etc. are swathed in enough paper, certainly they will look spherical in the end.

He is not trying to shew that it is impossible that, for every mathematical proof, a Russellian proof can be constructed which (somehow) ‘corresponds’ to it, but rather that the acceptance of such a correspondence does not lean on logic.

Taking up Wittgenstein’s criticism, Hao Wang (Computation, Logic, Philosophy) discusses the view that mathematics “is” axiomatic set theory as one of several possible answers to the question “What is mathematics?”. Wang points out that this view is epistemologically worthless, at least as far as the task of understanding what guides cognition is concerned:

Mathematics is axiomatic set theory. In a definite sense, all mathematics can be derived from axiomatic set theory. [ . . . ] There are several objections to this identification. [ . . . ] This view leaves unexplained why, of all the possible consequences of set theory, we select only those which happen to be our mathematics today, and why certain mathematical concepts are more interesting than others. It does not help to give us an intuitive grasp of mathematics such as that possessed by a powerful mathematician. By burying, e.g., the individuality of natural numbers, it seeks to explain the more basic and the clearer by the more obscure. It is a little analogous to asserting that all physical objects, such as tables, chairs, etc., are spherical if we swathe them with enough stuff.
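Wang’s remark about “burying the individuality of natural numbers” can be made concrete with the standard set-theoretic encoding. The following is a minimal illustrative sketch (my own, using Python frozensets as a stand-in for pure sets; the von Neumann encoding itself is standard): a number becomes nothing but a depth of nested membership.

```python
# Minimal illustrative sketch (not from the text): the von Neumann encoding
# of natural numbers as pure sets, 0 = {}, n+1 = n U {n}, modelled here with
# Python frozensets. The "individuality" of a number disappears into nested
# set-brackets once everything is reduced to membership alone.
def von_neumann(n):
    """Return the von Neumann ordinal for n as a frozenset."""
    current = frozenset()               # 0 is the empty set
    for _ in range(n):
        current = current | {current}   # successor: n+1 = n U {n}
    return current

def depth(s):
    """Nesting depth of a pure set; for von Neumann ordinals it equals n."""
    return 0 if not s else 1 + max(depth(e) for e in s)

three = von_neumann(3)
print(len(three), depth(three))   # 3 3 : cardinality and nesting both encode "3"
print(three)                      # an opaque tangle of frozensets, nothing "three-ish"
```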

Reductionism is an age-old project; a close forerunner of its incarnation in set theory was the arithmetization program of the 19th century. It is interesting that one of its prominent representatives, Richard Dedekind (Essays on the Theory of Numbers), exhibited a quite distanced attitude towards a thoroughgoing carrying out of the program:

It appears as something self-evident and not new that every theorem of algebra and higher analysis, no matter how remote, can be expressed as a theorem about natural numbers [ . . . ] But I see nothing meritorious [ . . . ] in actually performing this wearisome circumlocution and insisting on the use and recognition of no other than rational numbers.

Perec wrote a detective novel without using the letter ‘e’ (La Disparition; in English, A Void), thus proving not only that such an enormous enterprise is indeed possible but also that formal constraints sometimes have great aesthetic appeal. The translation of mathematical propositions into a poorer linguistic framework can easily be compared with such painful lipogrammatical exercises. In principle all logical connectives can be simulated in a framework exclusively using Sheffer’s stroke, and all cuts (in Gentzen’s sense) can be eliminated; one can do without common language at all in mathematics and formalize everything and so on: in principle, one could leave out a whole lot of things. However, in doing so one would depart from the true way of thinking employed by the mathematician (who really uses “and” and “not” and cuts and who does not reduce many things to formal systems). Obviously, it is the proof theorist as a working mathematician who is interested in things like the reduction to Sheffer’s stroke, since they allow for more concise proofs by induction in the analysis of a logical calculus. Hence this proof theorist has much the same motives as a mathematician working on other problems who avoids a completely formalized treatment of these problems since he is not interested in the proof-theoretical aspect.
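A minimal sketch of that simulation (illustrative only; the definitions are the standard NAND reductions, the Python wrapping is mine): every connective can indeed be rewritten with the stroke alone, at the price of longer and less readable formulas.

```python
# Minimal illustrative sketch: the propositional connectives rewritten in
# terms of the Sheffer stroke (NAND) alone. Semantically nothing changes
# (the truth tables agree), but the resulting formulas are longer and
# cognitively less transparent than "and", "or", "not".
def nand(p, q):            # the Sheffer stroke as a truth function
    return not (p and q)

def NOT(p):                # not p      =  p | p
    return nand(p, p)

def AND(p, q):             # p and q    =  (p | q) | (p | q)
    return nand(nand(p, q), nand(p, q))

def OR(p, q):              # p or q     =  (p | p) | (q | q)
    return nand(nand(p, p), nand(q, q))

# Check agreement with the ordinary connectives on all valuations.
for p in (False, True):
    for q in (False, True):
        assert NOT(p) == (not p)
        assert AND(p, q) == (p and q)
        assert OR(p, q) == (p or q)

# The same rewriting at the level of formula strings: "p and q" becomes
# "((p|q)|(p|q))", one connective but a longer, less readable expression.
def and_as_stroke(a: str, b: str) -> str:
    return f"(({a}|{b})|({a}|{b}))"

print(and_as_stroke("p", "q"))
print(and_as_stroke(and_as_stroke("p", "q"), "r"))   # nesting grows quickly
```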

There might be quite similar reasons for the interest of some set theorists in expressing usual mathematical constructions exclusively with the expressive means of ZF (i.e., in terms of ∈). But beyond this, is there any philosophical interpretation of such a reduction? In the last analysis, mathematicians always transform (and that means: change) their objects of study in order to make them accessible to certain mathematical treatments. If one considers a mathematical concept as a tool, one does not only use it in a way different from the one in which it would be used if it were considered as an object; moreover, in semiotical representation of it, it is given a form which is different in both cases. In this sense, the proof theorist has to “change” the mathematical proof (which is his or her object of study to be treated with mathematical tools). When stating that something is used as object or as tool, we have always to ask: in which situation, or: by whom.

A second observation is that the translation of propositional formulæ in terms of Sheffer’s stroke in general yields quite complicated new formulæ. What is “simple” here is the particularly small number of symbols needed; but neither the semantics becomes clearer (p|q means “not both p and q”; cognitively, this looks more complex than “p and q” and so on), nor are the formulæ you get “short”. What is looked for in this case, hence, is a reduction of numerical complexity, while the primitive basis attained by the reduction cognitively looks less “natural” than the original situation (or, as Peirce expressed it, “the consciousness in the determined cognition is more lively than in the cognition which determines it”); similarly in the case of cut elimination. In contrast to this, many philosophers are convinced that the primitive basis of operating with sets constitutes really a “natural” basis of mathematical thinking, i.e., such operations are seen as the “standard bricks” of which this thinking is actually made – while no one will reasonably claim that expressions of the type p|q play a similar role for propositional logic. And yet: reduction to set theory does not really have the task of “explanation”. It is true, one thus reduces propositions about “complex” objects to propositions about “simple” objects; the propositions themselves, however, thus become in general more complex. Couched in Fregean terms, one can perhaps more easily grasp their denotation (since the denotation of a proposition is its truth value) but not their meaning. A more involved conceptual framework, however, might lead to simpler propositions (and in most cases has actually just been introduced in order to do so). A parallel argument concerns deductions: in its totality, a deduction becomes more complex (and less intelligible) by a decomposition into elementary steps.

Now, it will be subject to discussion whether in the case of some set operations it is admissible at all to claim that they are basic for thinking (which is certainly true in the case of the connectives of propositional logic). It is perfectly possible that the common sense which organizes the acceptance of certain operations as a natural basis relies on something different, not having the character of some eternal laws of thought: it relies on training.

Is it possible to observe that a surface is coloured red and blue; and not to observe that it is red? Imagine a kind of colour adjective were used for things that are half red and half blue: they are said to be ‘bu’. Now might not someone be trained to observe whether something is bu; and not to observe whether it is also red? Such a man would then only know how to report: “bu” or “not bu”. And from the first report we could draw the conclusion that the thing was partly red.

Mathematical Reductionism: A Case via C. S. Peirce’s Hypothetical Realism.


During the 20th century, the following epistemology of mathematics was predominant: a sufficient condition for the possibility of the cognition of objects is that these objects can be reduced to set theory. The conditions for the possibility of the cognition of the objects of set theory (the sets), in turn, can be given in various manners; in any event, the objects reduced to sets do not need an additional epistemological discussion – they “are” sets. Hence, such an epistemology relies ultimately on ontology. Frege conceived the axioms as descriptions of how we actually manipulate extensions of concepts in our thinking (and in this sense as inevitable and intuitive “laws of thought”). Hilbert admitted the use of intuition exclusively in metamathematics where the consistency proof is to be done (by which the appropriateness of the axioms would be established); Bourbaki takes the axioms as mere hypotheses. Hence, Bourbaki’s concept of justification is the weakest of the three: “it works as long as we encounter no contradiction”; nevertheless, it is still epistemology, because from this hypothetical-deductive point of view, one insists that at least a proof of relative consistency (i.e., a proof that the hypotheses are consistent with the frequently tested and approved framework of set theory) should be available.

Doing mathematics, one tries to give proofs for propositions, i.e., to deduce the propositions logically from other propositions (premisses). Now, in the reductionist perspective, a proof of a mathematical proposition yields an insight into the truth of the proposition, if the premisses are already established (if one has already an insight into their truth); this can be done by giving in turn proofs for them (in which new premisses will occur which ask again for an insight into their truth), or by agreeing to put them at the beginning (to consider them as axioms or postulates). The philosopher tries to understand how the decision about what propositions to take as axioms is arrived at, because he or she is dissatisfied with the reductionist claim that it is on these axioms that the insight into the truth of the deduced propositions rests. Actually, this epistemology might contain a short-coming since Poincaré (and Wittgenstein) stressed that to have a proof of a proposition is by no means the same as to have an insight into its truth.

Attempts to disclose the ontology of mathematical objects reveal the following tendency in epistemology of mathematics: Mathematics is seen as suffering from a lack of ontological “determinateness”, namely that this science (contrarily to many others) does not concern material data such that the concept of material truth is not available (especially in the case of the infinite). This tendency is embarrassing since on the other hand mathematical cognition is very often presented as cognition of the “greatest possible certainty” just because it seems not to be bound to material evidence, let alone experimental check.

The technical apparatus developed by the reductionist and set-theoretical approach nowadays serves other purposes, partly for the reason that tacit beliefs about sets were challenged; the explanations of the science which it provides are considered as irrelevant by the practitioners of this science. There is doubt that the above mentioned sufficient condition is also necessary; it is not even accepted throughout as a sufficient one. But what happens if some objects, as in the case of category theory, do not fulfill the condition? It seems that the reductionist approach, so to say, has been undocked from the historical development of the discipline in several respects; an alternative is required.

Anterior to Peirce, epistemology was dominated by the idea of a grasp of objects; since Descartes, intuition was considered throughout as a particular, innate capacity of cognition (even if idealists thought that it concerns the general, and empiricists that it concerns the particular). The task of this particular capacity was the foundation of epistemology; already from Aristotle’s first premisses of syllogism, what was aimed at was to go back to something first. In this traditional approach, it is by the ontology of the objects that one hopes to answer the fundamental question concerning the conditions for the possibility of the cognition of these objects. One hopes that there are simple “basic objects” to which the more complex objects can be reduced and whose cognition is possible by common sense – be this an innate or otherwise distinguished capacity of cognition common to all human beings. Here, epistemology is “wrapped up” in (or rests on) ontology; to do epistemology one has to do ontology first.

Peirce shares Kant’s opinion according to which the object depends on the subject; however, he does not agree that reason is the crucial means of cognition to be criticised. In his paper “Questions concerning certain faculties claimed for man”, he points out the basic assumption of pragmatist philosophy: every cognition is semiotically mediated. He says that there is no immediate cognition (a cognition which “refers immediately to its object”), but that every cognition “has been determined by a previous cognition” of the same object. Correspondingly, Peirce replaces critique of reason by critique of signs. He thinks that Kant’s distinction between the world of things per se (Dinge an sich) and the world of apparition (Erscheinungswelt) is not fruitful; he rather distinguishes the world of the subject and the world of the object, connected by signs; his position consequently is a “hypothetical realism” in which all cognitions are only valid with reservations. This position does not negate (nor assert) that the object per se (with the semiotical mediation stripped off) exists, since such assertions of “pure” existence are seen as necessarily hypothetical (that means, not withstanding philosophical criticism).

By his basic assumption, Peirce was led to reveal a problem concerning the subject matter of epistemology, since this assumption means in particular that there is no intuitive cognition in the classical sense (which is synonymous to “immediate”). Hence, one could no longer consider cognitions as objects; there is no intuitive cognition of an intuitive cognition. Intuition can be no more than a relation. “All the cognitive faculties we know of are relative, and consequently their products are relations”. According to this new point of view, intuition cannot any longer serve to found epistemology, in departure from the former reductionist attitude. A central argument of Peirce against reductionism or, as he puts it,

the reply to the argument that there must be a first is as follows: In retracing our way from our conclusions to premisses, or from determined cognitions to those which determine them, we finally reach, in all cases, a point beyond which the consciousness in the determined cognition is more lively than in the cognition which determines it.

Peirce gives some examples derived from physiological observations about perception, like the fact that the third dimension of space is inferred, and the blind spot of the retina. In this situation, the process of reduction loses its legitimacy since it no longer fulfills the function of justifying cognition. At such a place, something happens which I would like to call an “exchange of levels”: the process of reduction is interrupted in that the things exchange the roles performed in the determination of a cognition: what was originally considered as determining is now determined by what was originally considered as asking for determination.

The idea that contents of cognition are necessarily provisional has an effect on the very concept of conditions for the possibility of cognitions. It seems that one can infer from Peirce’s words that what vouches for a cognition is not necessarily the cognition which determines it but the liveliness of our consciousness in the cognition. Here, “to vouch for a cognition” means no longer what it meant before (which was much the same as “to determine a cognition”), but it still means that the cognition is (provisionally) reliable. This conception of the liveliness of our consciousness might roughly be seen as a substitute for the capacity of intuition in Peirce’s epistemology – but only roughly, since it has a different coverage.

Quantum Informational Biochemistry. Thought of the Day 71.0


A natural extension of the information-theoretic Darwinian approach for biological systems is obtained by taking into account that biological systems are constituted at their fundamental level by physical systems. Therefore it is through the interaction among elementary physical systems that the biological level is reached, after the size of the system increases by several orders of magnitude and only for certain associations of molecules – biochemistry.

In particular, this viewpoint lies at the foundation of the “quantum brain” project established by Hameroff and Penrose (Shadows of the Mind). They tried to lift quantum physical processes associated with microsystems composing the brain to the level of consciousness. Microtubules were considered as the basic quantum information processors. This project, as well as the general project of reducing biology to quantum physics, has its strong and weak sides. One of the main problems is that decoherence should quickly wash out the quantum features such as superposition and entanglement. (Hameroff and Penrose would disagree with this statement. They try to develop models of the hot and macroscopic brain preserving quantum features of its elementary micro-components.)

However, even if we assume that microscopic quantum physical behavior disappears with increasing size and number of atoms due to decoherence, it seems that the basic quantum features of information processing can survive in macroscopic biological systems (operating on temporal and spatial scales which are essentially different from the scales of the quantum micro-world). The associated information processor for the mesoscopic or macroscopic biological system would be a network of increasing complexity formed by the elementary probabilistic classical Turing machines of the constituents. Such a composite network of processors can exhibit special behavioral signatures which are similar to quantum ones. We call such biological systems quantum-like. In a series of works, Asano and others (Quantum Adaptivity in Biology: From Genetics to Cognition) developed an advanced formalism for modeling the behavior of quantum-like systems, based on the theory of open quantum systems and the more general theory of adaptive quantum systems. This formalism is known as quantum bioinformatics.
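One standard “behavioral signature” invoked in this literature is a violation of the classical law of total probability by an interference term. The following is a minimal numerical sketch (my own illustration with invented probabilities; it is not the authors’ formalism, only the textbook interference formula):

```python
# Minimal illustrative sketch (probabilities invented): classically
# P(B) = P(B|A)P(A) + P(B|A')P(A'); a "quantum-like" system adds a cosine
# interference correction controlled by a phase theta, which makes the
# observed P(B) over- or under-shoot the classical prediction.
import math

p_A = 0.5                       # probability of context/outcome A
p_notA = 1.0 - p_A
p_B_given_A = 0.7               # conditional probabilities (illustrative)
p_B_given_notA = 0.2

classical = p_B_given_A * p_A + p_B_given_notA * p_notA

def quantum_like(theta):
    interference = 2.0 * math.sqrt(p_B_given_A * p_A * p_B_given_notA * p_notA) * math.cos(theta)
    return classical + interference

print(f"classical total probability: {classical:.3f}")
for theta in (0.0, math.pi / 3, math.pi / 2, 2 * math.pi / 3):
    print(f"theta = {theta:4.2f} rad -> quantum-like P(B) = {quantum_like(theta):.3f}")
# theta = pi/2 recovers the classical value; other phases produce the
# deviation that the text calls a quantum-like behavioral signature.
```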

The present quantum-like model of biological behavior is of the operational type (as is the standard quantum mechanical model endowed with the Copenhagen interpretation). It cannot explain the physical and biological processes behind the quantum-like information processing. Clarification of the origin of quantum-like biological behavior is related, in particular, to understanding the nature of entanglement and its role in the process of interaction and cooperation in physical and biological systems. Qualitatively, the information-theoretic Darwinian approach supplies an interesting possibility of explaining the generation of quantum-like information processors in biological systems. Hence, it can serve as the bio-physical background for quantum bioinformatics. There is an intriguing point in the fact that if the information-theoretic Darwinian approach is right, then it would be possible to produce quantum information from optimal flows of past, present and anticipated classical information in any classical information processor endowed with a complex enough program. Thus the unified evolutionary theory would supply a physical basis to Quantum Information Biology.

Suspicion on Consciousness as an Immanent Derivative


The category of the subject (like that of the object) has no place in an immanent world. There can be no transcendent, subjective essence. What, then, is the ontological status of a body and its attendant instance of consciousness? In what would it exist? Sanford Kwinter (conjuncted here) offers:

It would exist precisely in the ever-shifting pattern of mixtures or composites: both internal ones – the body as a site marked and traversed by forces that converge upon it in continuous variation; and external ones – the capacity of any individuated substance to combine and recombine with other bodies or elements (ensembles), both influencing their actions and undergoing influence by them. The ‘subject’ … is but a synthetic unit falling at the midpoint or interface of two more fundamental systems of articulation: the first composed of the fluctuating microscopic relations and mixtures of which the subject is made up, the second of the macro-blocs of relations or ensembles into which it enters. The image produced at the interface of these two systems – that which replaces, yet is too often mistaken for, subjective essence – may in turn have its own individuality characterized with a certain rigor. For each mixture at this level introduces into the bloc a certain number of defining capacities that determine both what the ‘subject’ is capable of bringing to pass outside of itself and what it is capable of receiving (undergoing) in terms of effects.

This description is sufficient to explain the immanent nature of the subjective bloc as something entirely embedded in and conditioned by its surroundings. What it does not offer – and what is not offered in any detail in the entirety of the work – is an in-depth account of what, exactly, these “defining capacities” are. To be sure, it would be unfair to demand a complete description of these capacities. Kwinter himself has elsewhere referred to the states of the nervous system as “magically complex”. Regardless of the specificity with which these capacities can presently be defined, we must nonetheless agree that it is at this interface, as he calls it, at this location where so many systems are densely overlaid, that consciousness is produced. We may be convinced that this consciousness, this apparent internal space of thought, is derived entirely from immanent conditions and can only be granted the ontological status of an effect, but this effect still manages to produce certain difficulties when attempting to define modes of behavior appropriate to an immanent world.

There is a palpable suspicion of the role of consciousness throughout Kwinter’s work, at least insofar as it is equated with some kind of internal, subjective space. (In one text he optimistically awaits the day when this space will “be left utterly in shreds.”) The basis of this suspicion is multiple and obvious. Among the capacities of consciousness is the ability to attribute to itself the (false) image of a stable and transcendent essence. The workings of consciousness are precisely what allow the subjective bloc to orient itself in a sequence of time, separating itself from an absolute experience of the moment. It is within consciousness that limiting and arbitrary moral categories seem to most stubbornly lodge themselves. (To be sure this is the location of all critical thought.) And, above all, consciousness may serve as the repository for conditioned behaviors which believe themselves to be free of external determination. Consciousness, in short, contains within itself an enormous number of limiting factors which would retard the production of novelty. Insofar as it appears to possess the capacity for self-determination, this capacity would seem most productively applied by turning on itself – that is, precisely by making the choice not to make conscious decisions and instead to permit oneself to be seized by extra-subjective forces.

The Concern for Historical Materialism. Thought of the Day 53.0


The concern for historical materialism, in spite of Marx’s differentiation between history and pre-history, is that totalisation might not be historically groundable after all, and must instead be constituted in other ways: whether logically, transcendentally or naturally. The ‘Consciousness’ chapter of the Phenomenology, a blend of all three, becomes a transcendent(al) logic of phenomena – individual, universal, particular – and ceases to provide any genuine phenomenology of ‘the experience of consciousness’. Natural consciousness is not strictly speaking a standpoint (no real opposition), so it can offer no critical grounds of itself to confer synthetic unity upon the universal, that which is taken to a higher level in ‘Self-Consciousness’ (only to be retrospectively confirmed). Yet Hegel does just this from the outset. In ‘Perception’, we read that, ‘[o]n account of the universality [Allgemeinheit] of the property, I must … take the objective essence to be on the whole a community [Gemeinschaft]’. Universality always sides with community, the Allgemeine with the Gemeinschaft, as if the synthetic operation had taken place prior to its very operability. Unfortunately for Hegel, the ‘free matters’ of all possible properties pave the way for the ‘interchange of forces’ in ‘Force and the Understanding’, and hence infinity, life and – spirit. In the midst of the master-slave dialectic, Hegel admits that, ‘[i]n this movement we see repeated the process which represented itself as the play of forces, but repeated now in consciousness’ [sic].