Hochschild Cohomology Tethers to Closed String Algebra by way of Cyclicity.


When we have an open and closed Topological Field Theory (TFT), each element ξ of the closed algebra C defines an endomorphism ξa = ιa(ξ) ∈ Oaa of each object a of B, and η ◦ ξa = ξb ◦ η for each morphism η ∈ Oba from a to b. The family {ξa} thus constitutes a natural transformation from the identity functor 1B : B → B to itself.

For any C-linear category B we can consider the ring E of natural transformations of 1B. It is automatically commutative, for if {ξa}, {ηa} ∈ E then ξa ◦ ηa = ηa ◦ ξa by the definition of naturality. (A natural transformation from 1B to 1B is a collection of elements {ξa ∈ Oaa} such that ξa ◦ f = f ◦ ξb for each morphism f ∈ Oab from b to a. But we can take a = b and f = ηa.) If B is a Frobenius category then there is a map πab : Obb → Oaa for each pair of objects a, b, and we can define jb : Obb → E by jb(η)a = πab(η) for η ∈ Obb. In other words, jb is defined so that the Cardy condition ιa ◦ jb = πab holds. But the question arises whether we can define a trace θ : E → C to make E into a Frobenius algebra, and with the property that

θa(ιa(ξ)η) = θ(ξja(η)) —– (1)

∀ ξ ∈ E and η ∈ Oaa. This is certainly true if B is a semisimple Frobenius category with finitely many simple objects, for then E is just the ring of complex-valued functions on the set of isomorphism classes of these simple objects, and we can readily define θ : E → C by θ(εa) = θa(1a)², where a is an irreducible object, and εa ∈ E is the characteristic function of the point a in the spectrum of E. Nevertheless, a Frobenius category need not be semisimple, and we cannot, unfortunately, take E as the closed string algebra in the general case. If, for example, B has just one object a, and Oaa is a commutative local ring of dimension greater than 1, then E = Oaa, and so ιa : E → Oaa is an isomorphism, and its adjoint map ja ought to be an isomorphism too. But that contradicts the Cardy condition, since πaa is multiplication by ∑ ψiψi (the sum of the products of a basis of Oaa with its dual basis), which must be nilpotent.

The commutative algebra E of natural endomorphisms of the identity functor of a linear category B is called the Hochschild cohomology HH0(B) of B in degree 0. The groups HHp(B) for p > 0 vanish if B is semisimple, but in the general case they appear to be relevant to the construction of a closed string algebra from B. For any Frobenius category B there is a natural homomorphism K(B) → HH0(B) from the Grothendieck group of B, which assigns to an object a the transformation whose value on b is πba(1a) ∈ Obb. In the semisimple case this homomorphism induces an isomorphism K(B) ⊗ C → HH0(B).

For any additive category B the Hochschild cohomology is defined as the cohomology of the cochain complex in which a k-cochain F is a rule that to each composable k-tuple of morphisms

Y0 →φ1 Y1 →φ2 ··· →φk Yk —– (2)

assigns F(φ1,…,φk) ∈ Hom(Y0,Yk). The differential in the complex is defined by

(dF)(φ1,…,φk+1) = F(φ2,…,φk+1) ◦ φ1 + ∑i=1k(−1)i F(φ1,…,φi+1 ◦ φi,…,φk+1) + (−1)k+1φk+1 ◦ F(φ1,…,φk) —– (3)

(Notice, in particular, that a 0-cochain assigns an endomorphism FY to each object Y, and is a cocycle if the endomorphisms form a natural transformation. Similarly, a 2-cochain F gives a possible infinitesimal deformation F(φ1, φ2) of the composition law (φ1, φ2) ↦ φ2 ◦ φ1 of the category, and the deformation preserves the associativity of composition iff F is a cocycle.)
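The cocycle conditions just described can be checked mechanically. Below is a minimal numerical sketch (our own illustration, not part of the original text): cochains on the one-object category whose endomorphism algebra is the 2×2 matrices, with composition written g ◦ f = g·f. Applying formula (3) twice to a random cochain returns zero, confirming d ◦ d = 0.

```python
import numpy as np

comp = lambda g, f: g @ f     # categorical composition: apply f first, then g

def d(F, k):
    """Hochschild differential (3) applied to a k-cochain F on the algebra
    of 2x2 matrices, viewed as a one-object category."""
    def dF(*phi):             # phi = (phi_1, ..., phi_{k+1})
        out = comp(F(*phi[1:]), phi[0])
        for i in range(1, k + 1):
            # replace the pair (phi_i, phi_{i+1}) by phi_{i+1} o phi_i
            merged = phi[:i - 1] + (comp(phi[i], phi[i - 1]),) + phi[i + 1:]
            out = out + (-1) ** i * F(*merged)
        return out + (-1) ** (k + 1) * comp(phi[k], F(*phi[:k]))
    return dF

rng = np.random.default_rng(0)
a, b, x, y, z = (rng.normal(size=(2, 2)) for _ in range(5))
F = lambda m: a @ m @ b       # an arbitrary 1-cochain
assert np.allclose(d(d(F, 1), 2)(x, y, z), 0)   # d(dF) = 0
```

In degree 0 the same code recovers the remark above: for a 0-cochain m, (dm)(φ) = m ◦ φ − φ ◦ m, which vanishes exactly when m is natural.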

In the case of a category B with a single object whose algebra of endomorphisms is O, the cohomology just described is usually called the Hochschild cohomology of the algebra O with coefficients in O regarded as an O-bimodule. This must be carefully distinguished from the Hochschild cohomology with coefficients in the dual O-bimodule O∗. But if O is a Frobenius algebra it is isomorphic as a bimodule to O∗, and the two notions of Hochschild cohomology need not be distinguished. The same applies to a Frobenius category B: because Hom(Yk, Y0) is the dual space of Hom(Y0, Yk), we can think of a k-cochain as a rule which associates to each composable k-tuple of morphisms a linear function of an element φ0 ∈ Hom(Yk, Y0). In other words, a k-cochain is a rule which to each “circle” of k + 1 morphisms

··· →φ0 Y0 →φ1 Y1 →φ2 ··· →φk Yk →φ0 ··· —– (4)

assigns a complex number F(φ0, φ1,…,φk).

If in this description we restrict ourselves to cochains which are cyclically invariant under rotating the circle of morphisms (φ0, φ1,…,φk), then we obtain a sub-cochain complex of the Hochschild complex whose cohomology is called the cyclic cohomology HC(B) of the category B. The cyclic cohomology, which evidently maps to the Hochschild cohomology, is a more natural candidate for the closed string algebra associated to B than is the Hochschild cohomology. A very natural Frobenius category on which to test these ideas is the category of holomorphic vector bundles on a compact Calabi-Yau manifold.

Individuation. Thought of the Day 91.0


The first distinction is between two senses of the word “individuation” – one semantic, the other metaphysical. In the semantic sense of the word, to individuate an object is to single it out for reference in language or in thought. By contrast, in the metaphysical sense of the word, the individuation of objects has to do with “what grounds their identity and distinctness.” Sets are often used to illustrate the intended notion of “grounding.” The identity or distinctness of sets is said to be “grounded” in accordance with the principle of extensionality, which says that two sets are identical iff they have precisely the same elements:

SET(x) ∧ SET(y) → [x = y ↔ ∀u(u ∈ x ↔ u ∈ y)]

The metaphysical and semantic senses of individuation are quite different notions, neither of which appears to be reducible to or fully explicable in terms of the other. Since sufficient sense cannot be made of the notion of “grounding of identity” on which the metaphysical notion of individuation is based, it is natural to focus instead on the semantic notion of individuation. This choice of focus means that our investigation is a broadly empirical one, drawing on empirical linguistics and psychology.

What is the relation between the semantic notion of individuation and the notion of a criterion of identity? It is by means of criteria of identity that semantic individuation is effected. Singling out an object for reference involves being able to distinguish this object from other possible referents with which one is directly presented. The final distinction is between two types of criteria of identity. A one-level criterion of identity says that two objects of some sort F are identical iff they stand in some relation RF:

Fx ∧ Fy → [x = y ↔ RF(x,y)]

Criteria of this form operate at just one level in the sense that the condition for two objects to be identical is given by a relation on these objects themselves. An example is the set-theoretic principle of extensionality.

A two-level criterion of identity relates the identity of objects of one sort to some condition on entities of another sort. The former sort of objects are typically given as functions of items of the latter sort, in which case the criterion takes the following form:

f(α) = f(β) ↔ α ≈ β

where the variables α and β range over the latter sort of item and ≈ is an equivalence relation on such items. An example is Frege’s famous criterion of identity for directions:

d(l1) = d(l2) ↔ l1 || l2

where the variables l1 and l2 range over lines or other directed items. An analogous two-level criterion relates the identity of geometrical shapes to the congruence of things or figures having the shapes in question. The decision to focus on the semantic notion of individuation makes it natural to focus on two-level criteria. For two-level criteria of identity are much more useful than one-level criteria when we are studying how objects are singled out for reference. A one-level criterion provides little assistance in the task of singling out objects for reference. In order to apply a one-level criterion, one must already be capable of referring to objects of the sort in question. By contrast, a two-level criterion promises a way of singling out an object of one sort in terms of an item of another and less problematic sort. For instance, when Frege investigated how directions and other abstract objects “are given to us”, although “we cannot have any ideas or intuitions of them”, he proposed that we relate the identity of two directions to the parallelism of the two lines in terms of which these directions are presented. This would be explanatory progress since reference to lines is less puzzling than reference to directions.
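The contrast between the two kinds of criteria can be made concrete in code. The sketch below is purely illustrative (the function names, and the restriction to lines through the origin, are our own simplifying assumptions): extensionality is a one-level criterion applied directly to the sets themselves, while Frege’s criterion singles out a direction d(l) via an equivalence relation (parallelism) on the less problematic items, the lines.

```python
from fractions import Fraction

# One-level criterion (extensionality): identity is decided by a relation
# on the objects themselves.
a = frozenset({1, 2, 3})
b = frozenset({3, 2, 1})
assert a == b                 # same elements, hence the same set

# Two-level criterion (Frege): d(l1) = d(l2) iff l1 || l2. We model only
# lines through the origin, each represented by a nonzero (x, y) vector.
def parallel(l1, l2):
    (x1, y1), (x2, y2) = l1, l2
    return x1 * y2 - y1 * x2 == 0          # cross product vanishes

def direction(l):
    """A canonical representative of l's equivalence class under ||."""
    x, y = l
    return (1, Fraction(y, x)) if x != 0 else (0, 1)

assert parallel((1, 2), (2, 4))
assert direction((1, 2)) == direction((3, 6))   # parallel lines, one direction
assert direction((1, 2)) != direction((1, 3))
```

The point of the two-level form is visible here: `direction` refers to directions only via lines, just as Frege proposed that reference to directions be mediated by the presentation of lines.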

Rants of the Undead God: Instrumentalism. Thought of the Day 68.1


Hilbert’s program has often been interpreted as an instrumentalist account of mathematics. This reading relies on the distinction Hilbert makes between the finitary part of mathematics and the non-finitary rest which is in need of grounding (via finitary meta-mathematics). The finitary part Hilbert calls “contentual,” i.e., its propositions and proofs have content. The infinitary part, on the other hand, is “not meaningful from a finitary point of view.” This distinction corresponds to a distinction between formulas of the axiomatic systems of mathematics for which consistency proofs are being sought. Some of the formulas correspond to contentual, finitary propositions: they are the “real” formulas. The rest are called “ideal.” They are added to the real part of our mathematical theories in order to preserve classical inferences such as the principle of the excluded middle for infinite totalities, i.e., the principle that either all numbers have a given property or there is a number which does not have it.

It is the extension of the real part of the theory by the ideal, infinitary part that is in need of justification by a consistency proof – for there is a condition, a single but absolutely necessary one, to which the use of the method of ideal elements is subject, and that is the proof of consistency; for, extension by the addition of ideals is legitimate only if no contradiction is thereby brought about in the old, narrower domain, that is, if the relations that result for the old objects whenever the ideal objects are eliminated are valid in the old domain. Weyl described Hilbert’s project as replacing meaningful mathematics by a meaningless game of formulas. He noted that Hilbert wanted to “secure not truth, but the consistency of analysis” and suggested a criticism that echoes an earlier one by Frege – why should we take consistency of a formal system of mathematics as a reason to believe in the truth of the pre-formal mathematics it codifies? Is Hilbert’s meaningless inventory of formulas not just “the bloodless ghost of analysis”? Weyl suggested that if mathematics is to remain a serious cultural concern, then some sense must be attached to Hilbert’s game of formulae. In theoretical physics we have before us the great example of a [kind of] knowledge of completely different character than the common or phenomenal knowledge that expresses purely what is given in intuition. While in this case every judgment has its own sense that is completely realizable within intuition, this is by no means the case for the statements of theoretical physics. Hilbert suggested that consistency is not the only virtue ideal mathematics has – transfinite inference simplifies and abbreviates proofs, and brevity and economy of thought are the raison d’être of existence proofs.

Hilbert’s treatment of philosophical questions is not meant as a kind of instrumentalist agnosticism about existence and truth and so forth. On the contrary, it is meant to provide a non-skeptical and positive solution to such problems, a solution couched in cognitively accessible terms. And, it appears, the same solution holds for both mathematical and physical theories. Once new concepts or “ideal elements” or new theoretical terms have been accepted, then they exist in the sense in which any theoretical entities exist. When Weyl eventually turned away from intuitionism, he emphasized the purpose of Hilbert’s proof theory, not to turn mathematics into a meaningless game of symbols, but to turn it into a theoretical science which codifies scientific (mathematical) practice. The reading of Hilbert as an instrumentalist goes hand in hand with a reading of the proof-theoretic program as a reductionist project. The instrumentalist reading interprets ideal mathematics as a meaningless formalism, which simplifies and “rounds out” mathematical reasoning. But a consistency proof of ideal mathematics by itself does not explain what ideal mathematics is an instrument for.

On this picture, classical mathematics is to be formalized in a system which includes formalizations of all the directly verifiable (by calculation) propositions of contentual finite number theory. The consistency proof should show that all real propositions which can be proved by ideal methods are true, i.e., can be directly verified by finite calculation. Actual proofs such as the ε-substitution procedure are of such a kind: they provide finitary procedures which eliminate transfinite elements from proofs of real statements. In particular, they turn putative ideal derivations of 0 = 1 into derivations in the real part of the theory; the impossibility of such a derivation establishes consistency of the theory. Indeed, Hilbert saw that something stronger is true: not only does a consistency proof establish truth of real formulas provable by ideal methods, but it yields finitary proofs of finitary general propositions if the corresponding free-variable formula is derivable by ideal methods.

Derivability from Relational Logic of Charles Sanders Peirce to Essential Laws of Quantum Mechanics


Charles Sanders Peirce made important contributions in logic, where he invented and elaborated a novel system of logical syntax and fundamental logical concepts. The starting point is the binary relation SiRSj between the two ‘individual terms’ (subjects) Si and Sj. In shorthand notation we represent this relation by Rij. Relations may be composed: whenever we have relations of the form Rij, Rjl, a third, transitive relation Ril emerges following the rule

RijRkl = δjkRil —– (1)

In ordinary logic the individual subject is the starting point and it is defined as a member of a set. Peirce considered the individual as the aggregate of all its relations

Si = ∑j Rij —– (2)

The individual Si thus defined is an eigenstate of the Rii relation

RiiSi = Si —– (3)

The relations Rii are idempotent

RiiRii = Rii —– (4)

and they span the identity

∑i Rii = 1 —– (5)

The Peircean logical structure bears resemblance to category theory. In categories the concept of transformation (transition, map, morphism or arrow) enjoys an autonomous, primary and irreducible role. A category consists of objects A, B, C,… and arrows (morphisms) f, g, h,… . Each arrow f is assigned an object A as domain and an object B as codomain, indicated by writing f : A → B. If g is an arrow g : B → C with domain B, the codomain of f, then f and g can be “composed” to give an arrow gof : A → C. The composition obeys the associative law ho(gof) = (hog)of. For each object A there is an arrow 1A : A → A called the identity arrow of A. The analogy with the relational logic of Peirce is evident, Rij stands as an arrow, the composition rule is manifested in equation (1) and the identity arrow for A ≡ Si is Rii.
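The categorical axioms just listed are easy to render executable. A minimal sketch (the names and the string representation of objects are our own assumptions): arrows carry a domain and codomain, composition gof is defined only when the codomain of f matches the domain of g, and each object has an identity arrow.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Arrow:
    dom: str     # domain object
    cod: str     # codomain object
    name: str

def compose(g, f):
    """g o f, defined only when cod(f) = dom(g)."""
    if f.cod != g.dom:
        raise ValueError("arrows not composable")
    return Arrow(f.dom, g.cod, f"{g.name}o{f.name}")

def identity(A):
    """The identity arrow 1_A : A -> A."""
    return Arrow(A, A, f"1_{A}")

f = Arrow("A", "B", "f")
g = Arrow("B", "C", "g")
assert (compose(g, f).dom, compose(g, f).cod) == ("A", "C")
```

In the Peircean reading, an `Arrow` plays the role of Rij and the composability check is exactly the Kronecker delta in equation (1): composites with mismatched middle indices simply do not exist.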

Rij may receive multiple interpretations: as a transition from the j state to the i state, as a measurement process that rejects all impinging systems except those in the state j and permits only systems in the state i to emerge from the apparatus, as a transformation replacing the j state by the i state. We proceed to a representation of Rij

Rij = |ri⟩⟨rj| —– (6)

where state ⟨ri | is the dual of the state |ri⟩ and they obey the orthonormal condition

⟨ri |rj⟩ = δij —– (7)

It is immediately seen that our representation satisfies the composition rule equation (1). The completeness, equation (5), takes the form

∑i=1n |ri⟩⟨ri| = 1 —– (8)

All relations remain satisfied if we replace the state |ri⟩ by |ξi⟩ where

i⟩ = 1/√N ∑n |ri⟩⟨rn| —– (9)

with N the number of states. Thus we verify Peirce’s suggestion, equation (2), and the state |ri⟩ is derived as the sum of all its interactions with the other states. Rij acts as a projection, transferring from one r state to another r state

Rij |rk⟩ = δjk |ri⟩ —– (10)

We may think also of another property characterizing our states and define a corresponding operator

Qij = |qi⟩⟨qj | —– (11)


Qij |qk⟩ = δjk |qi⟩ —– (12)


∑i=1n |qi⟩⟨qi| = 1 —– (13)

Successive measurements of the q-ness and r-ness of the states are provided by the operator

RijQkl = |ri⟩⟨rj |qk⟩⟨ql | = ⟨rj |qk⟩ Sil —– (14)


Sil = |ri⟩⟨ql | —– (15)

Considering the matrix elements of an operator A as Anm = ⟨rn |A |rm⟩ we find for the trace

Tr(Sil) = ∑n ⟨rn |Sil |rn⟩ = ⟨ql |ri⟩ —– (16)

From the above relation we deduce

Tr(Rij) = δij —– (17)

Any operator can be expressed as a linear superposition of the Rij

A = ∑i,j AijRij —– (18)


Aij =Tr(ARji) —– (19)

The individual states could be redefined

|ri⟩ → eiφi |ri⟩ —– (20)

|qi⟩ → eiθi |qi⟩ —– (21)

without affecting the corresponding composition laws. However the overlap number ⟨ri |qj⟩ changes and therefore we need an invariant formulation for the transition |ri⟩ → |qj⟩. This is provided by the trace of the closed operation RiiQjjRii

Tr(RiiQjjRii) ≡ p(qj, ri) = |⟨ri |qj⟩|2 —– (22)

The completeness relation, equation (13), guarantees that p(qj, ri) may assume the role of a probability since

∑j p(qj, ri) = 1 —– (23)

We discover that starting from the relational logic of Peirce we obtain all the essential laws of Quantum Mechanics. Our derivation underlines the utmost relational nature of Quantum Mechanics and goes in parallel with the analysis of the quantum algebra of microscopic measurement.
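The chain of relations (1)–(23) can be verified numerically. In the sketch below (the basis choices and variable names are our own assumptions), the r-basis is the standard basis of C³ and the q-basis comes from a random unitary; the matrix units Rij = |ri⟩⟨rj| then satisfy the composition rule (1), completeness (8), the trace formula (17), and the probability interpretation (22)–(23).

```python
import numpy as np

N = 3
rng = np.random.default_rng(1)

# r-basis: the standard basis of C^3, so |r_i> is the i-th column of the identity
r = np.eye(N, dtype=complex)
# q-basis: columns of a random unitary matrix (orthonormalised via QR)
q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

R = lambda i, j: np.outer(r[:, i], r[:, j].conj())   # R_ij = |r_i><r_j|, eq. (6)
Q = lambda i, j: np.outer(q[:, i], q[:, j].conj())   # Q_ij = |q_i><q_j|, eq. (11)

# composition rule (1): R_ij R_kl = delta_jk R_il
assert np.allclose(R(0, 1) @ R(1, 2), R(0, 2))
assert np.allclose(R(0, 1) @ R(2, 2), 0)

# completeness (8) and trace formula (17)
assert np.allclose(sum(R(i, i) for i in range(N)), np.eye(N))
assert np.isclose(np.trace(R(0, 1)), 0) and np.isclose(np.trace(R(1, 1)), 1)

# invariant transition probability (22): Tr(R_ii Q_jj R_ii) = |<r_i|q_j>|^2
overlap = abs(r[:, 0].conj() @ q[:, 1]) ** 2
assert np.isclose(np.trace(R(0, 0) @ Q(1, 1) @ R(0, 0)), overlap)

# (23): the probabilities p(q_j, r_i) sum to 1 over j
assert np.isclose(sum(abs(r[:, 0].conj() @ q[:, j]) ** 2 for j in range(N)), 1.0)
```

Note that the phase redefinitions (20)–(21) change the individual overlaps ⟨ri|qj⟩ but leave every trace above, and hence every probability, unchanged, which is exactly why the trace formulation is the invariant one.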

Conjuncted: Operations of Truth. Thought of the Day 47.1



Let us consider only the power set of the set of all natural numbers – the set of all natural numbers being the smallest infinite set, the countable infinity. By a model of set theory we understand a set in which – if we restrict ourselves to its elements only – all axioms of set theory are satisfied. It follows from Gödel’s completeness theorem that as long as set theory is consistent, no statement which is true in some model of set theory can contradict logical consequences of its axioms. If the cardinality of p(N) were such a consequence, there would exist a cardinal number κ such that the sentence “the cardinality of p(N) is κ” would be true in all the models. However, for every cardinal κ the technique of forcing allows for finding a model M where the cardinality of p(N) is not equal to κ. Thus, for no κ is the sentence “the cardinality of p(N) is κ” a consequence of the axioms of set theory, i.e. they do not decide the cardinality of p(N).

The starting point of forcing is a model M of set theory – called the ground model – which is countably infinite and transitive. As a matter of fact, the existence of such a model cannot be proved but it is known that there exists a countable and transitive model for every finite subset of axioms.

A characteristic subtlety can be observed here. From the perspective of an inhabitant of the universe, that is, if all the sets are considered, the model M is only a small part of this universe. It is deficient in almost every respect; for example all of its elements are countable, even though the existence of uncountable sets is a consequence of the axioms of set theory. However, from the point of view of an inhabitant of M, that is, if elements outside of M are disregarded, everything is in order. Some elements of M appear uncountable from within M, because in this model there are no functions establishing a one-to-one correspondence between them and ω0. One could say that M simulates the properties of the whole universe.

The main objective of forcing is to build a new model M[G] based on M, which contains M, and satisfies certain additional properties. The model M[G] is called the generic extension of M. In order to accomplish this goal, a particular set is distinguished in M and its elements are referred to as conditions which will be used to determine basic properties of the generic extension. In case of the forcing that proves the undecidability of the cardinality of p(N), the set of conditions codes finite fragments of a function witnessing the correspondence between p(N) and a fixed cardinal κ.

In the next step, an appropriately chosen set G is added to M as well as other sets that are indispensable in order for M[G] to satisfy the axioms of set theory. This set – called generic – is a subset of the set of conditions that always lies outside of M. The construction of M[G] is exceptional in the sense that its key properties can be described and proved using M only, or just the conditions, thus, without referring to the generic set. This is possible for three reasons. First of all, every element x of M[G] has a name existing already in M (that is, an element in M that codes x in some particular way). Secondly, based on these names, one can design a language called the forcing language or – as Badiou terms it – the subject language that is powerful enough to express every sentence of set theory referring to the generic extension. Finally, it turns out that the validity of sentences of the forcing language in the extension M[G] depends on the set of conditions: the conditions force validity of sentences of the forcing language in a precisely specified sense. As it has already been said, the generic set G consists of some of the conditions, so even though G is outside of M, its elements are in M. Recognizing which of them will end up in G is not possible for an inhabitant of M, however in some cases the following can be proved: provided that the condition p is an element of G, the sentence S is true in the generic extension constructed using this generic set G. We say then that p forces S.

In this way, with the aid of the forcing language, one can prove that every generic set of the Cohen forcing codes a total function defining a one-to-one correspondence between elements of p(N) and a fixed (uncountable) cardinal number – it turns out that all the conditions force the sentence stating this property of G, so regardless of which conditions end up in the generic set, it is always true in the generic extension. On the other hand, the existence of a generic set in the model M cannot follow from axioms of set theory, otherwise they would decide the cardinality of p(N).
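The mechanics of conditions, dense sets, and meeting them can be caricatured in a few lines. The toy sketch below is our own construction, not actual forcing over a countable transitive model: Cohen-style conditions are finite partial functions from N to {0, 1}, and a Rasiowa–Sikorski-style pass through finitely many dense sets builds a chain of conditions whose union approximates a “new” total function N → 2.

```python
# Conditions in miniature: finite partial functions p : N -> {0, 1},
# where a stronger condition extends a weaker one.

def extends(q, p):
    """q is stronger than p: q decides everything p decides, the same way."""
    return all(k in q and q[k] == v for k, v in p.items())

def dense_defined_at(n):
    """D_n = {p : n in dom(p)} is dense: any p can be strengthened into D_n."""
    return lambda p: p if n in p else {**p, n: 0}

def meet(dense_sets):
    """Rasiowa-Sikorski in miniature: strengthen one condition through
    each dense set in turn, producing a chain that meets all of them."""
    p = {}
    for refine in dense_sets:
        p = refine(p)
    return p

g = meet(dense_defined_at(n) for n in range(8))
assert set(g) == set(range(8)) and extends(g, {0: 0})
```

Real forcing differs in the essential respect stressed above: the genuinely generic filter meets *all* dense sets lying in M, and for that very reason cannot itself be an element of M.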

The method of forcing is of fundamental importance for Badiou’s philosophy. The event escapes ontology; it is “that-which-is-not-being-qua-being”, so it has no place in set theory or the forcing construction. However, the post-evental truth that enters, and modifies the situation, is presented by forcing in the form of a generic set leading to an extension of the ground model. In other words, the situation, understood as the ground model M, is transformed by a post-evental truth identified with a generic set G, and becomes the generic model M[G]. Moreover, the knowledge of the situation is interpreted as the language of set theory, serving to discern elements of the situation; and as axioms of set theory, deciding validity of statements about the situation. Knowledge, understood in this way, does not decide the existence of a generic set in the situation nor can it point to its elements. A generic set is always undecidable and indiscernible.

Therefore, from the perspective of knowledge, it is not possible to establish, whether a situation is still the ground-model or it has undergone a generic extension resulting from the occurrence of an event; only the subject can interventionally decide this. And it is only the subject who decides about the belonging of particular elements to the generic set (i.e. the truth). A procedure of truth or procedure of fidelity (Alain Badiou – Being and Event) supported in this way gives rise to the subject language. It consists of sentences of set theory, so in this respect it is a part of knowledge, although the veridicity of the subject language originates from decisions of the faithful subject. Consequently, a procedure of fidelity forces statements about the situation as it is after being extended, and modified by the operation of truth.

Rhizomatic Topology and Global Politics. A Flirtatious Relationship.



Deleuze and Guattari see concepts as rhizomes, biological entities endowed with unique properties. They see concepts as spatially representable, where the representation contains principles of connection and heterogeneity: any point of a rhizome must be connected to any other. Deleuze and Guattari list the possible benefits of spatial representation of concepts, including the ability to represent complex multiplicity, the potential to free a concept from foundationalism, and the ability to show both breadth and depth. In this view, geometric interpretations move away from the insidious understanding of the world in terms of dualisms, dichotomies, and lines, to understand conceptual relations in terms of space and shapes. The ontology of concepts is thus, in their view, appropriately geometric, a multiplicity defined not by its elements, nor by a center of unification and comprehension, and instead measured by its dimensionality and its heterogeneity. The conceptual multiplicity is already composed of heterogeneous terms in symbiosis, and is continually transforming itself such that it is possible to follow, and map, not only the relationships between ideas but how they change over time. In fact, the authors claim that there are further benefits to geometric interpretations of understanding concepts which are unavailable in other frames of reference. They outline the unique contribution of geometric models to the understanding of contingent structure:

Principle of cartography and decalcomania: a rhizome is not amenable to any structural or generative model. It is a stranger to any idea of genetic axis or deep structure. A genetic axis is like an objective pivotal unity upon which successive stages are organized; deep structure is more like a base sequence that can be broken down into immediate constituents, while the unity of the product passes into another, transformational and subjective, dimension. (Deleuze and Guattari)

The word that Deleuze and Guattari use for ‘multiplicities’ can also be translated to the topological term ‘manifold.’ If we thought about their multiplicities as manifolds, there are a virtually unlimited number of things one could come to know, in geometric terms, about (and with) our object of study, abstractly speaking. Among those unlimited things we could learn are properties of groups (homological, cohomological, and homeomorphic), complex directionality (maps, morphisms, isomorphisms, and orientability), dimensionality (codimensionality, structure, embeddedness), partiality (differentiation, commutativity, simultaneity), and shifting representation (factorization, ideal classes, reciprocity). Each of these functions allows for a different, creative, and potentially critical representation of global political concepts, events, groupings, and relationships. This is how concepts are to be looked at: as manifolds. With such a dimensional understanding of concept-formation, it is possible to deal with complex interactions of like entities, and interactions of unlike entities. Critical theorists have emphasized the importance of such complexity in representation a number of times, speaking about it in terms compatible with mathematical methods if not mathematically. For example, Foucault’s declaration that “practicing criticism is a matter of making facile gestures difficult” both reflects and is reflected in many critical theorists’ projects of revealing the complexity in (apparently simple) concepts deployed in global politics. This leads to a shift in the concept of danger as well, where danger is not an objective condition but “an effect of interpretation”. Critical thinking about how-possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses, sending a wave of added complexity through other concepts as well.
This work seeking complexity serves one of the major underlying functions of critical theorizing: finding invisible injustices in (modernist, linear, structuralist) givens in the operation and analysis of global politics.

In a geometric sense, this complexity could be thought about as multidimensional mapping. In theoretical geometry, the process of mapping conceptual spaces is not primarily empirical, but for the purpose of representing and reading the relationships between information, including identification, similarity, differentiation, and distance. The reason for defining topological spaces in math, the essence of the definition, is that there is no absolute scale for describing the distance or relation between certain points, yet it makes sense to say that an (infinite) sequence of points approaches some other point (but again, with no way to describe how quickly or from what direction one might be approaching). This seemingly weak relationship, which is defined purely ‘locally’, i.e., in a small locale around each point, is often surprisingly powerful: using only the relationship of approaching parts, one can distinguish between, say, a balloon, a sheet of paper, a circle, and a dot.

To each delineated concept, one should distinguish and associate a topological space, in a (necessarily) non-explicit yet definite manner. Whenever one has a relationship between concepts (we think of the primary relationship as being that of constitution, but not restrictively), one specifies a function (or inclusion, or relation) between the topological spaces associated to the concepts. In these terms, a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented. Such an approach can be leveraged for thinking about conceptual components, dimensionality, and structure. In these terms, dimensions can be thought of as properties or qualities, each with their own (often-multidimensional) properties or qualities. Since a key goal of the modeling of conceptual space is representation, a key (mathematical and theoretical) goal of concept-space mapping is

associationism, where associations between different kinds of information elements carry the main burden of representation. (Conceptual Spaces as a Framework for Knowledge Representation)

To this end,

objects in conceptual space are represented by points, in each domain, that characterize their dimensional values. (A Concept Geometry for Conceptual Spaces)

These dimensional values can be arranged in relation to each other, as Gardenfors explains that

distances represent degrees of similarity between objects represented in space, and therefore conceptual spaces are “suitable for representing different kinds of similarity relation.”

These similarity relationships can be explored across ideas of a concept and across contexts, but also over time, since “with the aid of a topological structure, we can speak about continuity, e.g., a continuous change,” a possibility which can be found only in treating concepts as topological structures and not in linguistic descriptions or set theoretic representations.
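A minimal sketch of such a conceptual space follows; the coordinates and the exponential-decay similarity measure are our own illustrative assumptions, in the spirit of Gardenfors: concepts are points whose dimensions encode qualities, and similarity decreases with distance in the space.

```python
import numpy as np

# Toy conceptual space: dimensions encode qualities (here: hue, size).
concepts = {
    "cherry":  np.array([0.95, 0.10]),
    "tomato":  np.array([0.90, 0.30]),
    "pumpkin": np.array([0.75, 0.90]),
}

def similarity(a, b):
    """Similarity decays with distance in the space (Gardenfors-style)."""
    return float(np.exp(-np.linalg.norm(concepts[a] - concepts[b])))

# cherry is nearer to tomato than to pumpkin in this space
assert similarity("cherry", "tomato") > similarity("cherry", "pumpkin")
```

Because the space carries a topology, the continuous change mentioned above is directly expressible: moving a concept-point along a path through the space changes its similarity to the others continuously.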

Biogrammatic Vir(Ac)tuality. Note Quote.

In Foucault’s most famous example, the prison acts as the confluence of content (prisoners) and expression (law, penal code) (Gilles Deleuze, Foucault, trans. Sean Hand). Informal diagrams proliferate. As abstract machines they contain the transversal vectors that cut across a panoply of features (such as institutions, classes, persons, economic formation, etc.), mapping from point to relational point the generalized features of power economies. The disciplinary diagram explored by Foucault imposes “a particular conduct upon a particular human multiplicity”. The imposition of force upon force affects and effectuates the felt experience of a life, a living. Deleuze has called the abstract machine “pure matter/function” in which relations between forces are nonetheless very real.

[…] the diagram acts as a non-unifying immanent cause that is co-extensive with the whole social field: the abstract machine is like the cause of the concrete assemblages that execute its relations; and these relations between forces take place ‘not above’ but within the very tissue of the assemblages they produce.

The processual conjunction of content and expression; the cutting edge of deterritorialization:

The relations of power and resistance between theory and practice resonate – becoming-form; diagrammatics as praxis, integrates and differentiates the immanent cause and quasi-cause of the actualized occasions of research/creation. What do we mean by immanent cause? It is a cause which is realized, integrated and distinguished in its effect. Or rather, the immanent cause is realized, integrated and distinguished by its effect. In this way there is a correlation or mutual presupposition between cause and effect, between abstract machine and concrete assemblages

Memory is the real name of the relation to oneself, or the affect of self by self […] Time becomes a subject because it is the folding of the outside…forces every present into forgetting but preserves the whole of the past within memory: forgetting is the impossibility of return and memory is the necessity of renewal.


The figure on the left is Henri Bergson’s diagram of an infinitely contracted past that directly intersects with the body at point S – a mobile, sensorimotor present where memory is closest to action. Plane P represents the actual present, the plane of contact with objects. The AB segments represent repetitive compressions of memory. As memory contracts it gets closer to action; in its more expanded forms it is closer to dreams. The figure on the right extrapolates from Bergson’s memory model to describe the Biogrammatic ontological vector of the Diagram as it moves from abstract (informal) machine in the most expanded form “A” through the cone “tissue” to the phase-shifting (formal), arriving at the Strata of the P plane to become artefact. The ontological vector passes through the stratified, through the interval of difference created in the phase shift (the same phase shift that separates and folds content and expression to move vertically, transversally, back through to the abstract diagram).

A spatio-temporal-material contracting-expanding of the abstract machine is the processual thinking-feeling-articulating of the diagram becoming-cartographic; synaesthetic conceptual mapping. A play of forces, a series of relays, affecting a tendency toward an inflection of the informal diagram becoming-form. The inflected diagram/biogram folds and unfolds perception, appearances; rides in the gap of becoming between content and expression; intuitively transduces the actualizing (thinking, drawing, marking, erasing) of matter-movement, of expressivity-movement. “To follow the flow of matter… is intuition in action.” A processual stage that prehends the process of the virtual actualizing;

the creative construction of a new reality. The biogrammatic stage of the diagrammatic is paradoxically double in that it is both the actualizing of the abstract machine (contraction) and the recursive counter-actualization of the formal diagram (détournement); virtual and actual.

It is the event-dimension of potential – that is the effective dimension of the interrelating of elements, of their belonging to each other. That belonging is a dynamic corporeal “abstraction” – the “drawing off” (transductive conversion) of the corporeal into its dynamism (yielding the event) […] In direct channeling. That is, in a directional channeling: ontological vector. The transductive conversion is an ontological vector that in-gathers a heterogeneity of substantial elements along with the already-constituted abstractions of language (“meaning”) and delivers them together to change. (Brian Massumi Parables for the Virtual Movement, Affect, Sensation)

Skin is the space of the body the BwO that is interior and exterior. Interstitial matter of the space of the body.


The material markings and traces of a diagrammatic process, a ‘capturing’ becoming-form. A diagrammatic capturing involves a transductive process between a biogrammatic form of content and a form of expression. The formal diagram is thus an individuating phase-shift as Simondon would have it, always out-of-phase with itself. A becoming-form that inhabits the gap, the difference, between the wave phase of the biogrammatic that synaesthetically draws off the intermix of substance and language in the event-dimension and the drawing of wave phase in which partial capture is formalized. The phase-shift difference never acquires a vectorial intention. A pre-decisive, pre-emptive drawing of phase-shifting with a “drawing off” of the biogram.


If effects realize something this is because the relations between forces or power relations are merely virtual, potential, unstable, vanishing and molecular, and define only possibilities of interaction so long as they do not enter a macroscopic whole capable of giving form to their fluid matter and diffuse function. But realization is equally an integration, a collection of progressive integrations that are initially local and then become or tend to become global, aligning, homogenizing and summarizing relations between forces: here law is the integration of illegalisms.