That philosophy must be an ontology of sense is a bold claim on Deleuze’s part, and although he takes it from Hegelian philosophy, the direction in which he develops it across the rest of his work is resolutely, if not infamously, opposed to Hegel. Whereas Hegel will construct a logic of sense which is fundamentally a logic of the concept, Deleuze will deny that sense is reducible to signification and its universal or general concepts. Deleuze will later provide his own logic of the concept, but for him, although the concept will posit itself, this will not be as the immanent thought of the sense or the content of the matter itself; it will rather function to extract or capture a pure event, or the sense at the surface of things. Similarly, although Deleuze will agree that “sense is becoming”, this will not be a becoming in an atemporal logical time, opposed to a historical time that would play it out, but a pure becoming without present, always divided between past and future, without arrow or telos, actualised in the present while never strictly ‘happening’. The most distinctive difference, however, will be Deleuze’s invocation of a nonsense that cannot be simply incorporated within sense, that will not be sublated and subsumed in the folds of the dialectic, a nonsense that is itself productive of sense. Moving beyond Hegel, Deleuze will deny the reducibility of sense not only to the universal meanings of signification, but also to the functions of reference or denotation. Moreover, he will deny its reducibility to the dimension of manifestation, or the meanings of the subject of enunciation – the ‘I’ who speaks. Sense can be found neither in universal concepts, nor in reference to the individual, nor in the intentions of the subject; it is rather that which grounds all three.
Month: April 2017
New Critique: From Hyper-heteronomy to Autonomy. Thought of the Day 13.0
The new critique is an invention of a new form of autonomy from hyper-heteronomy, a therapeutics of the pharmakon. This critique is dimensional in that it is pharmacological: a critique that consists in analyzing the specifics of the pharmaka, a critique that invests its energy in finding the toxic possibilities of individuation, through an approach that is both theoretical and absolute, without a particular context yet not totally context-free: as an organological approach it is always within a context, in the Nietzschean genealogical sense of the term, while at the same time remaining independent of any particular political situation.
Viral Load
Even if viruses have been quarantined on a user’s system, the user is often not allowed to access the quarantined files. The ostensible reason for this high level of secrecy is the claim that open access to computer virus code would result in people writing more computer viruses – a difficult claim for an antivirus company to make given that once they themselves have a copy of a virus then machines running their antivirus software should already be protected from that virus. A more believable explanation for antivirus companies’ unwillingness to release past virus programs is that a large part of their business model is predicated upon their ability to exclusively control stockpiles of past computer virus specimens as closely guarded intellectual property.
This absence of archival material is not helped by the fact that the concept of a computer virus is itself an ontologically ambiguous category. The majority of so-called malicious software entities that have plagued Internet users in the past few years have technically not been viruses but worms. Additionally, despite attempts to define clear nosological and epidemiological categories for computer viruses and worms, there is still no consistent system for stabilizing the terms themselves, let alone assessing their relative populations. Elizabeth Grosz commented during an interview with the editors of Found Object journal that part of the reason for the ontological ambiguity of computer viruses is that they are an application of a biological metaphor that is largely indeterminate itself. According to Grosz, we are as mystified, if not more so, by biological viruses as we are by computer viruses. Perhaps we know even more about computer viruses than we do about biological viruses! The same obscurities are there at the biological level that exist at the computer level (…)
As Grosz suggests, it is no wonder that computer viruses are so ontologically uncertain, given that their biological namesakes threaten to undermine many of the binarisms that anchor modern Western technoscience, such as distinctions between organic and inorganic, dead and living, matter and form, and sexual and asexual reproduction.
Hyperstructures
In many areas of mathematics there is a need to have methods taking local information and properties to global ones. This is mostly done by gluing techniques using open sets in a topology and associated presheaves. The presheaves form sheaves when local pieces fit together to global ones. This has been generalized to categorical settings based on Grothendieck topologies and sites.
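The gluing condition that turns a presheaf into a sheaf can be sketched concretely. The toy model below (all names hypothetical, not from any sheaf library) represents a section over an open set as a dictionary mapping points to values; local sections on a cover that agree on all overlaps glue to a unique global section, otherwise gluing fails:

```python
# Toy model of the sheaf gluing condition: a section over an open set
# (a set of points) is a dict {point: value}; restriction is dict slicing.

def restrict(section, subset):
    """Restrict a section to a smaller open set (a subset of its points)."""
    return {p: v for p, v in section.items() if p in subset}

def glue(cover_sections):
    """Glue local sections {open_set: section} into one global section,
    provided they agree on all pairwise overlaps; otherwise return None."""
    for u, s in cover_sections.items():
        for v, t in cover_sections.items():
            overlap = u & v
            if restrict(s, overlap) != restrict(t, overlap):
                return None  # sections disagree on an overlap: no gluing
    glued = {}
    for s in cover_sections.values():
        glued.update(s)  # compatibility guarantees this is well defined
    return glued

# Two local sections agreeing on the overlap {2} glue to a global section.
U = frozenset({1, 2})
V = frozenset({2, 3})
print(glue({U: {1: 'a', 2: 'b'}, V: {2: 'b', 3: 'c'}}))  # {1: 'a', 2: 'b', 3: 'c'}
```

The local-to-global passage discussed below generalizes exactly this pattern: local pieces plus a compatibility law yield a global object.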
The general problem of going from local to global situations is important also outside of mathematics. Consider collections of objects where we may have information or properties of objects or subcollections, and we want to extract global information.
To illustrate our intuition let us think of a society organized into a hyperstructure. Through levelwise democratic elections, leaders are elected at each level, and the democratic process eventually yields a “global” leader. In this sense democracy may be thought of as a sociological (or political) globalizer. This applies to decision making as well.
In “frustrated” spin systems in physics one may possibly think of the “frustration” being resolved by creating new levels and a suitable globalizer assigning a global state to the system corresponding to various exotic physical conditions like, for example, a kind of hyperstructured spin glass or magnet. Acting on both classical and quantum fields in physics may be facilitated by putting a hyperstructure on them.
There are also situations where we are given an object or a collection of objects with assignments of properties or states. To achieve a certain goal we need to change, let us say, the state. This may be very difficult and require a lot of resources. The idea is then to put a hyperstructure on the object or collection. By this we create levels of locality that we can glue together by a generalized Grothendieck topology.
It may often be much easier and require less resources to change the state at the lowest level and then use a globalizer to achieve the desired global change. Often it may be important to find a minimal hyperstructure needed to change a global state with minimal resources.
Again, to support our intuition let us think of the democratic society example. To change the global leader directly may be hard, but starting a “political” process at the lower individual levels may not require heavy resources and may propagate through the democratic hyperstructure leading to a change of leader.
Hence, hyperstructures facilitate local to global processes, but also global to local processes. Often these are called bottom-up and top-down processes. In the global to local or top-down process we put a hyperstructure on an object or system in such a way that it is represented by a top level bond in the hyperstructure. This means that to an object or system X we assign a hyperstructure
H = {B0, B1, …, Bn} in such a way that X = b_n for some b_n ∈ B_n binding a family {b_{i_1}^{n−1}} of B_{n−1}-bonds, each b_{i_1}^{n−1} binding a family {b_{i_2}^{n−2}} of B_{n−2}-bonds, etc., down to B0-bonds in H. Similarly for a local to global process. To a system, set or collection of objects X, we assign a hyperstructure H such that X = B0. A hyperstructure on a set (space) will create “global” objects, properties and states like those we see in organized societies, organizations, organisms, etc. The hyperstructure is the “glue” or the “law” of the objects. In a way, the globalizer creates a kind of higher order “condensate”. Hyperstructures represent a conceptual tool for translating organizational ideas like, for example, democracy or political parties into a mathematical framework where new types of arguments may be carried through.
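The democratic-election example above can be sketched as a toy globalizer. The following is an illustrative sketch only, not part of the cited mathematical framework: level-0 bonds are groups of voters, each bond elects a representative by majority, representatives are regrouped into next-level bonds, and iterating the levelwise election yields a single “global” leader.

```python
from collections import Counter

def elect(group):
    """Levelwise election: each member of the group holds a preferred
    candidate; the majority choice becomes the group's representative."""
    votes = Counter(member['vote'] for member in group)
    winner = votes.most_common(1)[0][0]
    return {'vote': winner}  # representative carries the winning preference

def globalize(level0_bonds):
    """Iterate levelwise elections up the levels until a single global
    leader remains. level0_bonds: list of groups of voters {'vote': c}."""
    level = [elect(group) for group in level0_bonds]  # B0-bonds -> representatives
    while len(level) > 1:
        # regroup representatives into the next level's bonds, three at a time
        bonds = [level[i:i + 3] for i in range(0, len(level), 3)]
        level = [elect(bond) for bond in bonds]
    return level[0]['vote']

voters = [
    [{'vote': 'A'}, {'vote': 'A'}, {'vote': 'B'}],  # bond 1 elects A
    [{'vote': 'B'}, {'vote': 'B'}, {'vote': 'A'}],  # bond 2 elects B
    [{'vote': 'A'}, {'vote': 'A'}, {'vote': 'A'}],  # bond 3 elects A
    [{'vote': 'A'}, {'vote': 'A'}, {'vote': 'B'}],  # bond 4 elects A
]
print(globalize(voters))  # → A
```

Changing a few votes at the lowest level can flip the global outcome, which is the point made above about effecting global change with minimal local resources.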
Deleuze on Right versus Left. Thought of the Day 12.0
Deleuze says to be on the Right is to perceive the world starting with identity, with self and family, and to move outward in concentric circles, to friends, city, nation, continent, world with diminishing affective investment in each circle, and with an abiding sense that the centre needs defending against the periphery. To be on the Left, on the contrary, is to start one’s perception on the periphery and to move inwards. It requires not the bolstering of the centre, but an appreciation that the centre is interlaced with the periphery, a process that undoes the distance between the two.
A Theosophist Reading of Spinoza and Bohm. For Whom the Bell Tolls.
To Spinoza mind and matter were parallel attributes of God or Substance, the great essence of the universe sometimes called in theosophical literature Svabhavat, primordial nature, mind-substance. Svabhavat (from the Sanskrit sva, “self” and bhu, “to become”) means self-becoming. Nothing can exist other than as an emanation from this primordial nature’s eternal action. Nothing, said Spinoza, can exist except this Substance and the unfolding of its attributes. This being so, “creation” had no beginning and will have no end; all things come forth from the Boundless and will therefore continue forever — theosophical ideas found also in Neoplatonism and Gnosticism.
With Spinoza we find emphasis on the essential unity and continuity of all existence, while Pythagoras, Plato, and Leibniz distinguish countless monads in it, centers of activity in every conceivable grade of self-expression. Combining the monad theory with Spinoza’s philosophy, a worldview emerges remarkably in accord with ideas from the Upanishads, Vedanta, Buddhism, and many a thinker from ancient Greece. We find corresponding ideas in the writings of theoretical physicist David Bohm, who also believed that the distinction between animate and inanimate nature is arbitrary, of use in some contexts but ultimately incorrect. He came to the conclusion that, far from being empty, space is an immense ocean of energy, and matter no more than a superficial ripple on that ocean. Everything lies concealed in an “implicate order” and comes forth from it. To illustrate this idea Bohm used the following experiment: the outer of two concentric cylinders is filled with a viscous fluid, such as glycerin, into which is placed a drop of insoluble ink. When the outer cylinder is rotated very slowly, the ink drop threads out, growing thinner and thinner, and eventually becomes invisible. The dye molecules become distributed among the molecules of the liquid as a grey shade. Rotating the cylinder in the opposite direction yields a surprising result: slender threads appear, growing thicker and thicker until, suddenly, the globule of ink is seen once more. This suggests that out of the “holomovement” of the ocean of energy comes forth the known universe with all that is in it.
From the “reality of the first order,” or implicate order, issues the explicate order, the world of forms and living things. In this “reality of the second order” these things have a relatively separate existence, as the Gulf Stream and other currents have a relatively separate existence within the Atlantic Ocean. From atoms to galaxies, all the phenomena of nature emerge from the ocean of the implicate order, make their appearance as “relatively autonomous subtotals,” and at the same time are linked with everything else.
Bohm regarded the universe as an undivided whole, a continuously ongoing process whose “ultimate ground of being is entirely unutterable, entirely implicit.” Space is not a nothingness but is in essence this ultimate ground of being. He employed the image of a crystal through which at absolute zero, according to quantum theory, electrons would pass as if it were empty space. The crystal would then be perfectly homogeneous and would seem nonexistent for the electrons, as space seems nonexistent for us. But when the temperature is raised, inhomogeneities appear, scattering the electrons. If one were to focus the electrons with an electron lens to make a picture of the crystal, it “would then appear that the inhomogeneities exist independently and that the main body of the crystal was sheer nothingness”. Like the school of Parmenides and Zeno in ancient Greece, Bohm regarded space as a plenum, utter fullness, the ground or substratum of all that exists. The matter that we sense is, like flaws in the crystal, inhomogeneities in space, which is the unity that includes both matter and consciousness.
Another physicist whose work endorsed the interconnectedness of things was John S. Bell. Two particles moving away from each other at the speed of light were thought to have lost contact forever, since no signal from one could overtake and influence the other. In 1964 Bell proposed his theory that particles like these do influence each other all the same and therefore, somehow, never lose contact. The theory was experimentally confirmed for the first time in 1972. Science seems to be overstepping its own boundaries, penetrating a realm where mystics have been long before. Not surprisingly modern thinkers are taking note of ancient ideas with amazement and admiration.
When H. P. Blavatsky published The Secret Doctrine in 1888, she stated that the ideas it contained were neither her own nor new. She sketched in bold strokes once again the existence of infinite Space, ground of countless universes, populated and ensouled by numberless monads: not as unconnected, separate things, but as differentiations within the whole. She spoke of “The fundamental identity of all Souls with the Universal Over-Soul,” and gave a vertiginous panorama of the evolutionary track, not of bodies, forms, but of centers of consciousness, monads, from their differentiation within the Oversoul to their grand consummatum est, the attainment of fully self-conscious realization of cosmic consciousness at the end of the world period. A work like The Secret Doctrine could not fail to cause a commotion in those days; recent developments have paved the way for us better to appreciate these thoughts and subscribe to the fundamental unity of man and universe.
The human mind is not extraneous to the mind of the universe. In fact, nothing is conceivable apart from the fundamental space-energy-mind to which the ancient Vedic poet would not give a name. Names indicate qualities, and so imply limitations because every quality excludes its opposite. So the Vedic sage spoke simply of tat, That. In the subtle logic of Buddhist thinking, the absolute fullness of space is called sunyata, emptiness: all that exists is as ripples in this boundless ocean which cannot be said to have this or that form, and which in that sense is “empty.” With their plenum or pleroma the Gnostics and other ancient Mediterranean thinkers emphasized its “fullness,” which comprises all worlds, our visible as well as numerous invisible ones. These worlds may be symbolized as rungs on the unending “ladder of being.” Whether the inhabitants of realms higher than ours are called aeons, angelic orders, or dhyani-buddhas makes no difference. The world is the interaction of a variety of monads, but not all monads necessarily express themselves on the physical level. Although in essence all monads are aspects of the ultimate ground of being, in their forms of manifestation they are infinitely varied. In their totality they constitute nature, the Jacob’s ladder of evolving beings, conjointly weaving the fabric of visible and invisible worlds, the multiplicity of “parallel universes” modern thinkers are beginning to surmise.
The Eclectics on Hyperstition. Collation Archives.
1. N u m o g r a m
Rigorous systematic unfolding of the Decimal Labyrinth and all its implexes (Zones, Currents, Gates, Lemurs, Pandemonium Matrix, Book of Paths …) and echoes (Atlantean Cross, Decadology …).
2. M y t h o s
Comprehensive attribution of all signal (discoveries, theories, problems and approaches) to artificial agencies, allegiances, cultures and continentities.
3. U n b e l i e f
Pragmatic skepticism or constructive escape from integrated thinking and all its forms of imposed unity (religious dogma, political ideology, scientific law, common sense …).
Each vortical sub-cycle of hyperstitional production announces itself through a communion with ‘the Thing’ coinciding with a “mystical consummation of uncertainty” or “attainment of positive unbelief.”
Conjuncted: Of Topos and Torsors
(Q1) Given a topological space X, what should we mean by a “sheaf of n-types” on X?
(Q3) What special features (if any) does Shv≤n(X) possess?
Our answers to questions (Q2) and (Q3) may be summarized as follows:
(A2) The collection Shv≤n(X) has the structure of an ∞-category.
Grothendieck’s vision has been realized in various ways, thanks to the work of a number of mathematicians (most notably Jardine), and their work can also be used to provide answers to questions (Q1) and (Q2). Question (Q3) has also been addressed (at least in the limiting case n = ∞) by Toën and Vezzosi.
Of Topos and Torsors
Let X be a topological space. One goal of algebraic topology is to study the topology of X by means of algebraic invariants, such as the singular cohomology groups Hn(X;G) of X with coefficients in an abelian group G. These cohomology groups have proven to be an extremely useful tool, due largely to the fact that they enjoy excellent formal properties, and the fact that they tend to be very computable. However, the usual definition of Hn(X;G) in terms of singular G-valued cochains on X is perhaps somewhat unenlightening. This raises the following question: can we understand the cohomology group Hn(X;G) in more conceptual terms?
As a first step toward answering this question, we observe that Hn(X;G) is a representable functor of X. That is, there exists an Eilenberg-MacLane space K(G,n) and a universal cohomology class η ∈ Hn(K(G,n);G) such that, for any topological space X, pullback of η determines a bijection
[X, K(G, n)] → Hn(X; G)
Here [X,K(G,n)] denotes the set of homotopy classes of maps from X to K(G,n). The space K(G,n) can be characterized up to homotopy equivalence by the above property, or by the formula πk K(G,n) ≃ ∗ if k ≠ n, and πk K(G,n) ≃ G if k = n.
In the case n = 1, we can be more concrete. An Eilenberg-MacLane space K(G,1) is called a classifying space for G, and is typically denoted by BG. The universal cover of BG is a contractible space EG, which carries a free action of the group G by covering transformations. We have a quotient map π : EG → BG. Each fiber of π is a discrete topological space, on which the group G acts simply transitively. We can summarize the situation by saying that EG is a G-torsor over the classifying space BG. For every continuous map X → BG, the fiber product X~ = EG ×_BG X has the structure of a G-torsor on X: that is, it is a space endowed with a free action of G and a homeomorphism X~/G ≃ X. This construction determines a map from [X,BG] to the set of isomorphism classes of G-torsors on X. If X is a well-behaved space (such as a CW complex), then this map is a bijection. We therefore have (at least) three different ways of thinking about a cohomology class η ∈ H1(X; G):
(1) As a G-valued singular cocycle on X, which is well-defined up to coboundaries.
(2) As a continuous map X → BG, which is well-defined up to homotopy.
(3) As a G-torsor on X, which is well-defined up to isomorphism.
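A worked example, supplied here for illustration (the computation is standard, not part of the original text): take X = S^1 and G = Z, so that BZ = K(Z,1) ≃ S^1, and the three descriptions coincide:

```latex
% For X = S^1 and G = \mathbb{Z}: \; B\mathbb{Z} = K(\mathbb{Z},1) \simeq S^1, so
H^1(S^1;\mathbb{Z})
  \;\cong\; [S^1, B\mathbb{Z}]
  \;\cong\; [S^1, S^1]
  \;\cong\; \mathbb{Z}.
% The class n \in \mathbb{Z} corresponds to:
% (1) a singular 1-cocycle pairing with the fundamental cycle to give n;
% (2) a map S^1 \to S^1 of degree n, up to homotopy;
% (3) the \mathbb{Z}-torsor over S^1 with monodromy n: traversing the
%     circle once shifts the fiber \mathbb{Z} by n.
```

The case n = 0 gives the trivial torsor S^1 × Z, matching the trivial class in cohomology.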
The singular cohomology of a space X is constructed using continuous maps from simplices ∆k into X. If there are not many maps into X (for example if every path in X is constant), then we cannot expect singular cohomology to tell us very much about X. The second definition uses maps from X into the classifying space BG, which (ultimately) relies on the existence of continuous real-valued functions on X. If X does not admit many real-valued functions, then the set of homotopy classes [X,BG] is also not a very useful invariant. For such spaces, the third approach is the most powerful: there is a good theory of G-torsors on an arbitrary topological space X.
There is another reason for thinking about H1(X;G) in the language of G-torsors: it continues to make sense in situations where the traditional ideas of topology break down. If X~ is a G-torsor on a topological space X, then the projection map X~ → X is a local homeomorphism; we may therefore identify X~ with a sheaf of sets F on X. The action of G on X~ determines an action of G on F. The sheaf F (with its G-action) and the space X~ (with its G-action) determine each other, up to canonical isomorphism. Consequently, we can formulate the definition of a G-torsor in terms of the category ShvSet(X) of sheaves of sets on X without ever mentioning the topological space X~ itself. The same definition makes sense in any category which bears a sufficiently strong resemblance to the category of sheaves on a topological space: for example, in any Grothendieck topos. This observation allows us to construct a theory of torsors in a variety of nonstandard contexts, such as the étale topology of algebraic varieties.
Describing the cohomology of X in terms of the sheaf theory of X has still another advantage, which comes into play even when the space X is assumed to be a CW complex. For a general space X, isomorphism classes of G-torsors on X are classified not by the singular cohomology H1sing(X;G), but by the sheaf cohomology H1sheaf(X; G) of X with coefficients in the constant sheaf G associated to G. This sheaf cohomology is defined more generally for any sheaf of groups G on X. Moreover, we have a conceptual interpretation of H1sheaf(X; G) in general: it classifies G-torsors on X (that is, sheaves F on X which carry an action of G and locally admit a G-equivariant isomorphism F ≃ G) up to isomorphism. The general formalism of sheaf cohomology is extremely useful, even if we are interested only in the case where X is a nice topological space: it includes, for example, the theory of cohomology with coefficients in a local system on X.
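Concretely, a G-torsor can be presented by transition functions on an open cover; this Čech description (standard, added here for illustration) exhibits H1sheaf(X;G) as cocycles modulo coboundaries:

```latex
% Cover X by open sets \{U_i\}; a G-torsor trivializes over each U_i.
% Comparing trivializations on overlaps gives sections
%   g_{ij} \in G(U_i \cap U_j)
% satisfying the cocycle condition on triple overlaps:
g_{ij}\, g_{jk} = g_{ik} \quad \text{on } U_i \cap U_j \cap U_k .
% Two cocycles define isomorphic torsors iff they differ by a coboundary:
g'_{ij} = h_i\, g_{ij}\, h_j^{-1}, \qquad h_i \in G(U_i).
% Hence H^1_{\mathrm{sheaf}}(X;G) = \{\text{cocycles}\}/\{\text{coboundaries}\}
% classifies G-torsors on X up to isomorphism.
```

When G is a sheaf of non-abelian groups the same formulas still make sense, which is one reason the torsor point of view is more flexible than singular cochains.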
Let us now attempt to obtain a similar interpretation for cohomology classes η ∈ H2(X;G). What should play the role of a G-torsor in this case? To answer this question, we return to the situation where X is a CW complex, so that η can be identified with a continuous map X → K(G,2). We can think of K(G,2) as the classifying space of a group: not the discrete group G, but instead the classifying space BG (which, if built in a sufficiently careful way, comes equipped with the structure of a topological abelian group). Namely, we can identify K(G,2) with the quotient E/BG, where E is a contractible space with a free action of BG. Any cohomology class η ∈ H2(X;G) determines a map X → K(G,2), and we can form the pullback X~ = E ×_BG X. We now think of X~ as a torsor over X: not for the discrete group G, but instead for its classifying space BG.
To complete the analogy with our analysis in the case n = 1, we would like to interpret the fibration X~ → X as defining some kind of sheaf F on the space X. This sheaf F should have the property that for each x ∈ X, the stalk Fx can be identified with the fiber X~x ≃ BG. Since the space BG is not discrete (or homotopy equivalent to a discrete space), the situation cannot be adequately described in the usual language of set-valued sheaves. However, the classifying space BG is almost discrete: since the homotopy groups πiBG vanish for i > 1, we can recover BG (up to homotopy equivalence) from its fundamental groupoid. This suggests that we might try to think about F as a “groupoid-valued sheaf” on X, or a stack (in groupoids) on X.
Political Ideology Chart