Suspicion on Consciousness as an Immanent Derivative


The category of the subject (like that of the object) has no place in an immanent world. There can be no transcendent, subjective essence. What, then, is the ontological status of a body and its attendant instance of consciousness? In what would it exist? Sanford Kwinter (conjuncted here) offers:

It would exist precisely in the ever-shifting pattern of mixtures or composites: both internal ones – the body as a site marked and traversed by forces that converge upon it in continuous variation; and external ones – the capacity of any individuated substance to combine and recombine with other bodies or elements (ensembles), both influencing their actions and undergoing influence by them. The ‘subject’ … is but a synthetic unit falling at the midpoint or interface of two more fundamental systems of articulation: the first composed of the fluctuating microscopic relations and mixtures of which the subject is made up, the second of the macro-blocs of relations or ensembles into which it enters. The image produced at the interface of these two systems – that which replaces, yet is too often mistaken for, subjective essence – may in turn have its own individuality characterized with a certain rigor. For each mixture at this level introduces into the bloc a certain number of defining capacities that determine both what the ‘subject’ is capable of bringing to pass outside of itself and what it is capable of receiving (undergoing) in terms of effects.

This description is sufficient to explain the immanent nature of the subjective bloc as something entirely embedded in and conditioned by its surroundings. What it does not offer – and what is not offered in any detail in the entirety of the work – is an in-depth account of what, exactly, these “defining capacities” are. To be sure, it would be unfair to demand a complete description of these capacities. Kwinter himself has elsewhere referred to the states of the nervous system as “magically complex”. Regardless of the specificity with which these capacities can presently be defined, we must nonetheless agree that it is at this interface, as he calls it, at this location where so many systems are densely overlaid, that consciousness is produced. We may be convinced that this consciousness, this apparent internal space of thought, is derived entirely from immanent conditions and can only be granted the ontological status of an effect, but this effect still manages to produce certain difficulties when attempting to define modes of behavior appropriate to an immanent world.

There is a palpable suspicion of the role of consciousness throughout Kwinter’s work, at least insofar as it is equated with some kind of internal, subjective space. (In one text he optimistically awaits the day when this space will “be left utterly in shreds.”) The basis of this suspicion is multiple and obvious. Among the capacities of consciousness is the ability to attribute to itself the (false) image of a stable and transcendent essence. The workings of consciousness are precisely what allow the subjective bloc to orient itself in a sequence of time, separating itself from an absolute experience of the moment. It is within consciousness that limiting and arbitrary moral categories seem to most stubbornly lodge themselves. (To be sure this is the location of all critical thought.) And, above all, consciousness may serve as the repository for conditioned behaviors which believe themselves to be free of external determination. Consciousness, in short, contains within itself an enormous number of limiting factors which would retard the production of novelty. Insofar as it appears to possess the capacity for self-determination, this capacity would seem most productively applied by turning on itself – that is, precisely by making the choice not to make conscious decisions and instead to permit oneself to be seized by extra-subjective forces.

Deleuzian Grounds. Thought of the Day 42.0


With difference or intensity, instead of identity, as the ultimate philosophical principle, one arrives at the crux of Deleuze’s use of the Principle of Sufficient Reason in Difference and Repetition. At the beginning of the first chapter, he defines the quadruple yoke of conceptual representation – identity, analogy, opposition, resemblance – in correspondence with the four principal aspects of the Principle of Sufficient Reason: the form of the undetermined concept, the relation between ultimate determinable concepts, the relation between determinations within concepts, and the determined object of the concept itself. In other words, sufficient reason according to Deleuze is the very medium of representation, the element in which identity is conceptually determined. In itself, however, this medium or element remains different or unformed (albeit not formless): difference is the state in which one can speak of determination as such, i.e. determination in its occurrent quality of a difference being made, or rather making itself, in the sense of a unilateral distinction. It is with the event of difference that what appears to be a breakdown of representational reason is also a breakthrough of the rumbling ground as the differential element of determination (or individuation). Deleuze illustrates this with an example borrowed from Nietzsche:

Instead of something distinguished from something else, imagine something which distinguishes itself – and yet that from which it distinguishes itself does not distinguish itself from it. Lightning, for example, distinguishes itself from the black sky but must also trail behind it. It is as if the ground rose to the surface without ceasing to be the ground.

Between the abyss of the indeterminate and the superficiality of the determined, there thus appears an intermediate element, a field of potential or intensive depth, which perhaps in a way exceeds sufficient reason itself. This is a depth which Deleuze finds prefigured in Schelling’s and Schopenhauer’s different conceptualizations of the ground (Grund) as both ground (fond) and grounding (fondement). The ground attains an autonomous power that exceeds classical sufficient reason by including the grounding moment of sufficient reason for itself. Because this self-grounding ground remains groundless (sans-fond) in itself, however, Hegel famously ridiculed Schelling’s ground as the indeterminate night in which all cows are black. He opposed it to the surface of determined identities that are only negatively correlated to each other. By contrast, Deleuze interprets the self-grounding ground through Nietzsche’s eternal return of the same. Whereas the passive syntheses of habit (connective series) and memory (conjunctions of connective series) are the processes by which representational reason grounds itself in time, the eternal return (disjunctive synthesis of series) ungrounds (effonde) this ground by introducing the necessity of future becomings, i.e. of difference as ongoing differentiation. Far from being a denial of the Principle of Sufficient Reason, this threefold process of self-(un)grounding constitutes the positive, relational system that brings difference out of the night of the Identical, and with finer, more varied and more terrifying flashes of lightning than those of contradiction: progressivity.

The breakthrough of the ground in the process of ungrounding itself in sheer distinction-production of the multiple against the indistinguishable is what Deleuze calls violence or cruelty, as it determines being or nature in a necessary system of asymmetric relations of intensity by the acausal action of chance, like an ontological game in which the throw of the dice is the only rule or principle. But it is also the vigil, the insomnia of thought, since it is here that reason or thought achieves its highest power of determination. It becomes a pure creativity or virtuality in which no well-founded identity (God, World, Self) remains: [T]hought is that moment in which determination makes itself one, by virtue of maintaining a unilateral and precise relation to the indeterminate. Since it produces differential events without subjective or objective remainder, however, Deleuze argues that thought belongs to the pure and empty form of time, a time that is no longer subordinate to (cosmological, psychological, eternal) movement in space. Time qua form of transcendental synthesis is the ultimate ground of everything that is, reasons and acts. It is the formal element of multiple becoming, no longer in the sense of finite a priori conditioning, but in the sense of a transfinite a posteriori synthesizer: an empty interiority in ongoing formation and materialization. As Deleuze and Guattari define the synthesizer in A Thousand Plateaus: The synthesizer, with its operation of consistency, has taken the place of the ground in a priori synthetic judgment: its synthesis is of the molecular and the cosmic, material and force, not form and matter, Grund and territory.

Some content on this page was disabled on May 4, 2020 as a result of a DMCA takedown notice from Columbia University Press. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

The Locus of Renormalization. Note Quote.


Since symmetries and the related conservation properties have a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken or new ones are obtained after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is the passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to a sudden common orientation of spins, up to the completely ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is, the iteration of a statistically invariant motif. Similarly for the formation of a snowflake . . . . In all these circumstances, a “new physical object of observation” is formed. Most of the current analyses deal with transitions at equilibrium; the less studied and more challenging case of far-from-equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions; that is, mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).
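
The percolation example lends itself to a minimal simulation (our illustrative sketch, not part of the original analysis; the grid size, probabilities, and function names are all assumptions): on a square lattice of randomly occupied sites, a cluster spanning the system appears abruptly near the critical occupation probability p_c ≈ 0.593 – the point-wise transition to a new “coherence structure” described above.

```python
import random
from collections import deque

def spans(n, p, seed):
    """Site percolation on an n x n grid: is there an occupied
    cluster connecting the top row to the bottom row?"""
    rng = random.Random(seed)
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    frontier = deque((0, j) for j in range(n) if grid[0][j])
    seen = set(frontier)
    while frontier:  # breadth-first search downward from the top row
        i, j = frontier.popleft()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                frontier.append((a, b))
    return False

def spanning_prob(n, p, trials=200):
    """Fraction of random grids that percolate top-to-bottom."""
    return sum(spans(n, p, s) for s in trials * [0] and range(trials)) / trials

# Crossing the critical point p_c ~ 0.593, the spanning probability
# jumps from near 0 to near 1: the abrupt appearance of the new
# coherent structure at the transition.
low = spanning_prob(40, 0.45)
high = spanning_prob(40, 0.70)
```

Exactly at p_c the spanning cluster is a statistically self-similar fractal – the “iteration of a statistically invariant motif” the text mentions – which is why scale symmetry, not any fundamental lattice scale, carries the physics there.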

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. relevant infinity or divergences) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interaction of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.
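
The idea of analyzing interactions scale by scale can be made concrete with the simplest exact case (a textbook sketch, not drawn from the text; the notation and function names are ours): decimating every other spin of the one-dimensional Ising chain yields an effective coupling K′ with tanh K′ = tanh² K, and iterating this map shows any finite coupling flowing to the trivial fixed point K = 0.

```python
import math

def decimate(K):
    """One step of real-space renormalization for the 1D Ising chain:
    summing out every other spin gives an effective coupling K' with
    tanh(K') = tanh(K)**2 (an exact, standard result)."""
    return math.atanh(math.tanh(K) ** 2)

# Iterate the map: the coupling shrinks at every step, exhibiting the
# absence of a finite-temperature phase transition in one dimension.
K = 2.0
flow = [K]
for _ in range(10):
    K = decimate(K)
    flow.append(K)
```

Note how this matches the text’s point: each step produces a new object (a chain with half the spins and a new coupling), and the intelligibility lies not in any one scale but in the stability of the equational form under the flow.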

In this situation, the intelligibility still has an “upward” flavor, since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories the parts are not really separable from the whole (this would mean separating an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot be summed over (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained through the scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking at molecules as the only pertinent observable of life is worse than reductionist; it is against the history of physics and its audacious unifications and inventions of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system – a cell’s genetic regulatory network, the brain and brain slices … – is “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done for the profound mathematical frame for physics. This does not forbid a scientific analysis of life. This may just be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as in physics. The point is that entailment and causality are better understood and correlated, since the work of Noether and Weyl in the last century, as symmetries in the intended equations, where they express the underlying invariants and invariant-preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and in cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expression, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections . . . . And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions in an organism, in a niche – but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure; that is, it changes the internal symmetries: the fish with the new bladder will swim differently, its heart-vascular system will relevantly change ….

Organisms transform the ecosystem while transforming themselves, and they can withstand and do this because they have an internal preserved universe. Its stability is maintained also by slightly, yet constantly, changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into the permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to the biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human. It has some 150 different microbial species in the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not yet understand, the “community” in the intestines co-creates their worlds together, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained even if it may change by immigration of new microbial species into the community and extinction of old species in the community. With such community membership turnover, or community assembly, the phase space of the system is undergoing continual and open-ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing, as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new unprestatable functions, where functions and their names make sense only in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One picks out a feature of a niche because one observes that, if it goes away, the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing these spaces and their internal relations over time. Mathematically, no symmetries (groups) nor (partial) orders (semigroups) organize the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity “of the definition” of niches is more than the circularity “in the definitions”. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical or logical problem in contemporary logic (this is so also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own border, presents technical difficulties for a careful equational description of its dynamics, but no mathematical or logical impossibility: one has to optimize a highly nonlinear and large action/reaction system, yielding a dynamically constructed geodesic, the river path, in perfectly known phase spaces (momentum and space, or energy and time, say, as pertinent observables and variables).
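
Both of these harmless circularities can be sketched in a few lines (an illustration of ours, with invented function names): a numerical function defined by recursion, and a “river-like” self-shaping state computed as a fixed point x = f(x) inside a fully pre-given phase space.

```python
import math

def nf(n):
    """A numerical function defined by recursion: the definition
    mentions nf itself, yet the decreasing argument makes it
    well-founded, so the circularity poses no logical problem."""
    return 1 if n == 0 else n * nf(n - 1)

def fixed_point(f, x0, tol=1e-12):
    """The 'river' case in miniature: a state that shapes its own
    boundary condition, modeled as a fixed point x = f(x) and found
    by plain iteration in a perfectly known phase space."""
    x = x0
    while abs(f(x) - x) > tol:
        x = f(x)
    return x

stable = fixed_point(math.cos, 1.0)  # the classic x = cos(x) equilibrium
```

In both cases the observables and the space are fixed in advance; only their values are circularly determined. That is exactly what fails, on the text’s account, for the circularity “of the definitions” discussed next.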

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so that the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not just a matter of prestatable yet circular equations within the theory (ordinary recursion and extended nonlinear dynamics), but of the ever-changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.

Bayesianism in Game Theory. Thought of the Day 24.0


Bayesianism in game theory can be characterised as the view that it is always possible to define probabilities for anything that is relevant for the players’ decision-making. In addition, it is usually taken to imply that the players use Bayes’ rule for updating their beliefs. If the probabilities are to be always definable, one also has to specify what the players’ beliefs are before the play is supposed to begin. The standard assumption is that such prior beliefs are the same for all players. This common prior assumption (CPA) means that the players have the same prior probabilities for all those aspects of the game for which the description of the game itself does not specify different probabilities. Common priors are usually justified with the so-called Harsanyi doctrine, according to which all differences in probabilities are to be attributed solely to differences in the experiences that the players have had. Different priors for different players would imply that there are some factors that affect the players’ beliefs even though they have not been explicitly modelled. The CPA is sometimes considered to be equivalent to the Harsanyi doctrine, but there seems to be a difference between them: the Harsanyi doctrine is best viewed as a metaphysical doctrine about the determination of beliefs, and it is hard to see why anybody would be willing to argue against it: if everything that might affect the determination of beliefs is included in the notion of ‘experience’, then experience alone does determine the beliefs. The Harsanyi doctrine has some affinity to certain convergence theorems in Bayesian statistics: if individuals are fed with similar information indefinitely, their probabilities will ultimately be the same, irrespective of the original priors.
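
The convergence result alluded to can be sketched with a conjugate Beta–Bernoulli update (our toy example, not from the text; the priors, true bias, and sample size are arbitrary assumptions): two players with sharply different priors, fed the same long run of observations, end up with nearly identical posteriors.

```python
import random

def posterior_mean(a, b, data):
    """Beta(a, b) prior on a coin's bias, updated by Bayes' rule on
    a sequence of 0/1 observations; returns the posterior mean."""
    heads = sum(data)
    return (a + heads) / (a + b + len(data))

rng = random.Random(0)
data = [int(rng.random() < 0.3) for _ in range(10_000)]  # true bias 0.3

optimist = posterior_mean(8, 2, data)   # prior mean 0.8
pessimist = posterior_mean(1, 9, data)  # prior mean 0.1
gap = abs(optimist - pessimist)         # shrinks as data accumulates
```

After ten thousand shared observations the two posterior means differ by well under one percent: the data swamp the priors, which is the intuition behind grounding common priors in common experience.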

The CPA, however, is a methodological injunction to include everything that may affect the players’ behaviour in the game: not just everything that motivates the players, but also everything that affects the players’ beliefs should be explicitly modelled by the game. If players had different priors, this would mean that the game structure was not completely specified, because there would be differences in the players’ behaviour that are not explained by the model. In a dispute over the status of the CPA, Faruk Gul essentially argues that the CPA does not follow from the Harsanyi doctrine. He does this by distinguishing between two different interpretations of the common prior, the ‘prior view’ and the ‘infinite hierarchy view’. The former is a genuinely dynamic story in which it is assumed that there really is a prior stage in time. The latter framework refers to Mertens and Zamir’s construction in which prior beliefs can be consistently formulated. This framework, however, is static in the sense that the players do not have any information on a prior stage; indeed, the ‘priors’ in this framework do not even pin down a player’s priors for his own types. Thus, the existence of a common prior in the latter framework does not have anything to do with the view that differences in beliefs reflect differences in information only.

It is agreed by everyone that for most (real-world) problems there is no prior stage in which the players know each other’s beliefs, let alone one in which those beliefs would be the same. The CPA, if understood as a modelling assumption, is clearly false. Robert Aumann, however, defends the CPA by arguing that whenever there are differences in beliefs, there must have been a prior stage in which the priors were the same, and from which the current beliefs can be derived by conditioning on the differentiating events. If players differ in their present beliefs, they must have received different information at some previous point in time, and they must have processed this information correctly. Based on this assumption, he further argues that players cannot ‘agree to disagree’: if a player knows that his opponents’ beliefs are different from his own, he should revise his beliefs to take the opponents’ information into account. The only case where the CPA would be violated, then, is when players have different beliefs, and have common knowledge about each other’s different beliefs and about each other’s epistemic rationality. Aumann’s argument seems perfectly legitimate if it is taken as a metaphysical one, but we do not see how it could be used as a justification for using the CPA as a modelling assumption in this or that application of game theory – and Aumann does not argue that it should.
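
Aumann’s picture – present differences in belief derived by conditioning a single common prior on different information – can be put in a toy model (our illustration; the states, partitions, and event are invented for the example):

```python
from fractions import Fraction as F

# A common prior over four equally likely states; the event of
# interest is E = {1, 4}.
prior = {s: F(1, 4) for s in (1, 2, 3, 4)}
E = {1, 4}

def posterior(cell):
    """Bayes' rule: the probability of E conditional on learning
    that the true state lies in one's information-partition cell."""
    return sum(prior[s] for s in cell if s in E) / sum(prior[s] for s in cell)

# Suppose the true state is 1. Player A's partition gives her the
# cell {1, 2}; player B's different partition gives him {1, 3, 4}.
p_A = posterior({1, 2})   # 1/2
p_B = posterior({1, 3, 4})  # 2/3
# Beliefs differ, yet both derive from the same prior: the difference
# reflects differences in information alone, as in Aumann's defence.
```

The sketch shows what the CPA buys as a consistency requirement; it says nothing about whether such a common prior stage ever actually existed, which is precisely the gap between the metaphysical and the modelling readings discussed above.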


Textual Temporality. Note Quote.


Time is essentially a self-opening and an expanding into the world. Heidegger says that it is, therefore, difficult to go any further here by comparisons. The interpretation of Dasein as temporality in a universal ontological way is an undecidable question which remains “completely unclear” to him. Time as a philosophical problem is a kind of question which no one knows how to raise because of its inseparability from our nature. As Gadamer notes, we can say what time is in virtue of a self-evident preconception of what is, for what is present is always understood by that preconception. Insofar as it makes no claim to provide a valid universality, philosophical discussion is not a systematic determination of time, i.e., one which requires going back beyond time (in its connection with other categories).

In his doctrine of the productivity of the hermeneutical circle in temporal being, Heidegger develops the primacy of futurity for the possible recollection and retention of what is already presented by history. History is present to us only in the light of futurity. In Gadamer’s interpretation, it is rather our prejudices that necessarily constitute our being. His view that prejudices are biases in our openness to the world does not assign to prejudices the character of an a priori text in the terms already assumed. On this basis, prejudices in this sense are not empty, but rather carry a significance which refers to being. Thus we can say that prejudices are our openness to being-in-the-world. That is, being destined to different forms of openness, we face the reference of our hermeneutical attributions. Therefore, the historicity of the temporal being is anything except what is past.

Clearly, the past is not some occurrence, not some incident in my Dasein, but its past; it is not some ‘what’ about Dasein, some event that happens to Dasein and alters it. This past is not a ‘what,’ but a ‘how,’ indeed it is the authentic ‘how’ (wie) of any temporal being. The past brings all ‘what,’ all taking care of and making plans, back into the ‘how’ which is the basic stand of a historical investigation.

Rather than encountering a past-oriented object, hermeneutical experience is a concern with the text (or texts) which has been presented to us. Understanding is not automatic: our part of the interpretation is realized only when a “text” is read as a fulfillment of all the requirements of the tradition.

For Gadamer and Ricoeur the past as a text always changes its meaning in relation to the ever-developing world of texts; so it seems that the future is recognized as textual – that the future itself has a textual character. In this sense the text itself is not tradition, but expectation. Over this text the hermeneutical difference can essentially be extended. Consequently, philosophy is no history of hermeneutical events; rather, the philosophical question evokes the historicity of our thinking and knowing. It is not by accident that Hegel, who tried to write the history of philosophy, raised history itself to the state of absolute mind.

What matters in the question concerning time is attaining an answer in terms in which the different ways of being temporal become comprehensible. What matters is allowing a possible connection between that which is in time and authentic temporality to become visible from the very beginning. However, the problem behind this theory still remains, even after lengthy exposition of the Heideggerian interpretation: whether Being-in-the-world results from temporal being or vice versa. After further hermeneutical investigation, it seems that Being-in-the-world can be comprehended only through Being-in-time.

But, in The Concept of Time, Heidegger has already taken into consideration the broader grasp of the text by considering Being as the origin of the hermeneutics of time. If human Being is in time in a distinctive sense, so that we can read from it what time is, then this Dasein must be characterized by the fundamental determinations of its Being. Indeed, then being temporal, correctly understood, would be the fundamental assertion of Dasein with respect to its Being.

As a result, only the interpretation of being as its reference by way of temporality can make clear why and how this feature of being earlier, of apriority, pertains to being. The a priori character of being as the origin of temporalization calls for a specific kind of approach to being-a-priori whose basic components constitute a phenomenology which is hermeneutical.

Heidegger notes that with regard to Dasein, self-understanding reopens the possibility for a theory of time that is not self-enclosed. Dasein comes back to that which it is and takes over as the being that it is. In coming back to itself, it brings everything that it is back again into its own most peculiar chosen can-be. It makes it clear that, although ontologically the text is closest to each and any of its interpretations in its own event, ontically it is closest to itself. But it must be remembered that this phenomenology does not determine completely references of the text by characterizing the temporalization of the text. Through phenomenological research regarding the text, in hermeneutics we are informed only of how the text gets exhibited and unveiled.