# Revisiting Catastrophes. Thought of the Day 134.0

The most explicit influence from mathematics in semiotics is probably René Thom’s controversial theory of catastrophes, with philosophical and semiotic support from Jean Petitot. Catastrophe theory is but one of several formalisms in the broad field of qualitative dynamics (comprising also chaos theory, complexity theory, self-organized criticality, etc.). In all these cases, the theories in question are in a certain sense phenomenological, because the focus is on different types of qualitative behavior of dynamic systems, grasped on a purely formal level while bracketing their causal determination at the deeper level. A widespread tool in these disciplines is phase space – a space defined by the variables governing the development of the system, so that this development may be mapped as a trajectory through phase space, each point on the trajectory mapping one global state of the system. This space may be inhabited by different types of attractors (attracting trajectories), repellors (repelling them), attractor basins around attractors, and borders between such basins characterized by different types of topological saddles which may have a complicated topology.

Catastrophe theory has its basis in differential topology, that is, the branch of topology keeping various differential properties of a function invariant under transformation. It is, more specifically, the so-called Whitney topology, whose invariants are points where the derivative of a function vanishes, graphically corresponding to minima, maxima, turning tangents, and, in higher dimensions, different complicated saddles. Catastrophe theory takes its point of departure in singularity theory, whose object is the shift between types of such functions. It thus erects a distinction between an inner space – where the function varies – and an outer space of control variables charting the variation of that function, including where it changes type – where, e.g., it goes from having one minimum to having two minima, via a singular case with turning tangent. The continuous variation of control parameters thus corresponds to a continuous variation within one subtype of the function, until it reaches a singular point where it discontinuously, ‘catastrophically’, changes subtype. The philosophy-of-science interpretation of this formalism conceives the stable subtype of function as representing the stable state of a system, and the passage of the critical point as the sudden shift to a new stable state. The configuration of control parameters thus provides a sort of map of the shift between continuous development and discontinuous ‘jump’. Thom’s semiotic interpretation of this formalism entails that typical catastrophic trajectories of this kind may be interpreted as stable process types phenomenologically salient for perception and giving rise to basic verbal categories.

One of the simpler catastrophes is the so-called cusp (a). It constitutes a meta-diagram, namely a diagram of the possible type-shifts of a simpler diagram (b), that of the equation ax⁴ + bx² + cx = 0. The upper part of (a) shows the so-called fold, charting the manifold of solutions to the equation in the three dimensions a, b and c. By the projection of the fold onto the a, b-plane, the pointed figure of the cusp (lower a) is obtained. The cusp now charts the type-shift of the function: inside the cusp, the function has two minima, outside it only one minimum. Different paths through the cusp thus correspond to different variations of the equation by the variation of the external variables a and b. One such typical path is the path indicated by the left-right arrow on all four diagrams, which crosses the cusp from inside out, giving rise to a diagram of the further level (c) – depending on the interpretation of the minima as simultaneous states. Here, thus, we find diagram transformations on three different, nested levels.
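The type-shift the cusp charts can be checked numerically. The sketch below is purely illustrative and uses the standard cusp normal form V(x) = x⁴/4 + ax²/2 + bx (an assumption on my part, not the exact equation quoted above); its critical points solve x³ + ax + b = 0, and the bifurcation set – the cusp curve – is 4a³ + 27b² = 0. Counting minima on either side of that curve reproduces the two-minima/one-minimum split described in the text.

```python
import numpy as np

def critical_points(a, b, tol=1e-9):
    """Real critical points of the cusp potential V(x) = x^4/4 + a*x^2/2 + b*x,
    i.e. real roots of V'(x) = x^3 + a*x + b = 0."""
    roots = np.roots([1.0, 0.0, a, b])
    return sorted(r.real for r in roots if abs(r.imag) < tol)

def n_minima(a, b):
    """Count minima of V: critical points where V''(x) = 3x^2 + a > 0."""
    return sum(1 for x in critical_points(a, b) if 3 * x**2 + a > 0)

# Inside the cusp (4a^3 + 27b^2 < 0) there are two minima; outside, one.
print(n_minima(-1.0, 0.0))  # inside  -> 2
print(n_minima(+1.0, 0.0))  # outside -> 1
print(n_minima(-1.0, 1.0))  # crossing out of the cusp at fixed a -> 1
```

Moving b at fixed a = -1 from 0 past the boundary value √(4/27) is exactly the “path through the cusp from inside out”: one of the two minima is annihilated in a fold.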

The concept of transformation plays several roles in this formalism. The most spectacular one refers, of course, to the change in external control variables, determining a trajectory through phase space where the function controlled changes type. This transformation thus probes the possibility of a change in the subtypes of the function in question; that is, it plays the role of eidetic variation, mapping how the function is ‘unfolded’ (the basic theorem of catastrophe theory refers to such unfolding of simple functions). Another transformation finds stable classes of such local trajectory pieces, including such shifts – making possible the recognition of such types of shifts in different empirical phenomena. On the most empirical level, finally, one running of such a trajectory piece provides, in itself, a transformation of one state into another, whereby the two states are rationally interconnected. Generally, it is possible to make a given transformation the object of a higher-order transformation which by abstraction may investigate aspects of the lower one’s type and conditions. Thus, the central unfolding of a function germ in catastrophe theory constitutes a transformation having the character of an eidetic variation, making clear which possibilities lie in the function germ in question. As an abstract formalism, the higher of these transformations may determine the lower one as invariant in a series of empirical cases.

Complexity theory is a broader and more inclusive term covering the general study of the macro-behavior of composite systems, also using phase space representation. The theoretical biologist Stuart Kauffman argues that in a phase space of all possible genotypes, biological evolution must unfold in a rather small and specifically qualified sub-space characterized by many, closely located and stable states (corresponding to the possibility for a species to ‘jump’ to another and better genotype in the face of environmental change) – as opposed to phase space areas with few, very stable states (which will only be optimal in certain, very stable environments and thus fragile when exposed to change), and also opposed, on the other hand, to sub-spaces with a high plurality of only metastable states (here, the species will tend to merge into neighboring species and hence never stabilize). On the basis of this argument, only a small subset of the set of virtual genotypes possesses ‘evolvability’, this special combination of plasticity and stability. The overall argument thus goes that order in biology is not a pure product of evolution; the possibility of order must be present in certain types of organized matter before selection begins – conversely, selection requires already organized material on which to work. The identification of a species with a co-localized group of stable states in genome space thus provides a (local) invariance for the transformation taking a trajectory through space, and larger groups of neighboring stabilities – lineages – again provide invariants defined by various more or less general transformations. Species, in this view, are in a certain limited sense ‘natural kinds’ and thus naturally signifying entities. Kauffman’s speculations over genotypical phase space have a crucial bearing on a transformation concept central to biology, namely mutation.
On this basis, far from all virtual mutations are really possible – even apart from their degree of environmental relevance. A mutation into a stable but remotely placed species in phase space will be impossible (evolution cannot cross the distance in phase space), just as a mutation in an area with many, unstable proto-species will not allow for any stabilization of species at all and will thus fall prey to arbitrarily small environmental variations. Kauffman takes a spontaneous and non-formalized transformation concept (mutation) and attempts a formalization by investigating its condition of possibility as movement between stable genomes in genotype phase space. A series of constraints turn out to determine type formation on a higher level (the three different types of local geography in phase space). If the trajectory of mutations must obey the possibility of walking between stable species, then the space of possibility of trajectories is highly limited. Self-organized criticality as developed by Per Bak (How Nature Works: The Science of Self-Organized Criticality) belongs to the same type of theories. Criticality is here defined as that state of a complicated system where sudden developments of all sizes spontaneously occur.
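Bak’s criticality – sudden developments of all sizes – is standardly illustrated with his sandpile model. The following is a minimal, illustrative sketch (grid size and grain count are arbitrary choices of mine, not anything from the text): grains are dropped at random sites, any site holding four or more grains topples onto its neighbours, and the size of each resulting avalanche cascade is recorded. Driven slowly like this, the pile organizes itself into the critical state, where avalanche sizes span many scales.

```python
import random

def sandpile(n=20, grains=5000, seed=0):
    """Minimal Bak-Tang-Wiesenfeld sandpile: drop grains on an n x n grid,
    topple any site holding >= 4 grains, and record avalanche sizes."""
    random.seed(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = random.randrange(n), random.randrange(n)
        grid[i][j] += 1
        size = 0
        unstable = [(i, j)] if grid[i][j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < 4:
                continue
            grid[x][y] -= 4
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:  # grains fall off the edge
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        unstable.append((nx, ny))
        sizes.append(size)
    return sizes

sizes = sandpile()
print(max(sizes))  # large avalanches coexist with many tiny ones
```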

# Morphed Ideologies. Thought of the Day 105.0

The sense of living in a post-fascist world is not shared by Marxists, of course, who ever since the first appearance of Mussolini’s virulently anti-communist squadrismo have instinctively assumed fascism to be endemic to capitalism. No matter how much it may appear to be an autonomous force, it is for them inextricably bound up with the defensive reaction of bourgeois elites or big business to the attempts by revolutionary socialists to bring about the fundamental changes needed to assure social justice through a radical redistribution of wealth and power. According to which school or current of Marxism is carrying out the analysis, the precise sector or agency within capitalism that is the protagonist or backer of fascism’s elaborate pseudo-revolutionary pre-emptive strike, its degree of independence from the bourgeois elements who benefit from it, and the amount of genuine support it can win within the working class vary appreciably. But for all concerned, fascism is a copious taxonomic pot into which a wide range of movements and regimes is thrown without too much intellectual agonizing over definitional or taxonomic niceties. For them, Brecht’s warning at the end of Arturo Ui has lost none of its topicality: “The womb that produced him is still fertile”.

The fact that two such conflicting perspectives can exist on the same subject can be explained as a consequence of the particular nature of all generic concepts within the human sciences. To go further into this phenomenon means entering a field of studies where philosophy of the social sciences has again proliferated conflicting positions, this time concerning the complex and largely subliminal processes involved in conceptualization and modeling in the pursuit of definite, if not definitive, knowledge. According to Max Weber, terms such as capitalism and socialism are ideal types, heuristic devices created by an act of idealizing abstraction. This cognitive process, which in good social scientific practice is carried out as consciously and scrupulously as possible, extracts a small group of salient features perceived as common to a particular generic phenomenon and assembles them into a definitional minimum which is at bottom a utopia.

The result of idealizing abstraction is a conceptually pure, artificially tidy model which does not correspond exactly to any concrete manifestation of the generic phenomenon being investigated, since in reality these are always inextricably mixed up with features, attributes, and surface details which are not considered definitional or unique to that example of it. The dominant paradigm of the social sciences at any one time, the hegemonic political values and academic tradition prevailing in a particular geography, and the political and moral values of the individual researcher all contribute to determining what common features are regarded as salient or definitional. There is no objective reality or objective definition of any aspect of it, and no simple correspondence between a word and what it means, the signifier and the signified, since it is axiomatic to Weber’s world-view that the human mind attaches significance to an essentially absurd universe and thus literally creates value and meaning, even when attempting to understand the world objectively. The basic question to be asked about any definition of fascism, therefore, is not whether it is true, but whether it is heuristically useful: what can be seen or understood about concrete human phenomena when it is applied that could not otherwise be seen, and what is obscured by it.

In his theory of ideological morphology, the British political scientist Michael Freeden has elaborated a nominalist and hence anti-essentialist approach to the definition of generic ideological terms that is deeply compatible with Weberian heuristics. He distinguishes between the ineliminable attributes or properties with which conventional usage endows them and those adjacent and peripheral to them which vary according to specific national, cultural or historical context. To cite the example he gives, liberalism can be argued to contain axiomatically, and hence at its definitional core, the idea of individual, rationally defensible liberty. However, the precise relationship of such liberty to laissez-faire capitalism, nationalism, the sanctuary, or the right of the state to override individual human rights in the defense of collective liberty or the welfare of the majority is infinitely negotiable and contestable. So are the ideal political institutions and policies that a state should adopt in order to guarantee liberty, which explains why democratic politics can never be fully consensual across a range of issues without there being something seriously wrong. It is the fact that each ideology is a cluster of concepts comprising ineliminable together with eliminable ones that accounts for the way ideologies are able to evolve over time while still remaining recognizably the same, and why so many variants of the same ideology can arise in different societies and historical contexts. It also explains why every concrete permutation of an ideology is simultaneously unique and a manifestation of the generic “ism”, which may assume radical morphological transformations in its outward appearance without losing its definitional ideological core.

# Is General Theory of Relativity a Gauge Theory? Trajectories of Diffeomorphism.

Historically the problem of observables in classical and quantum gravity is closely related to the so-called Einstein hole problem, i.e. to some of the consequences of general covariance in the general theory of relativity (GTR).

The central question is the physical meaning of the points of the event manifold underlying GTR. In contrast to pure mathematics this is a non-trivial point in physics. While in pure differential geometry one simply decrees the existence of, for example, a (pseudo-)Riemannian manifold with a differentiable structure (i.e., an appropriate cover with coordinate patches) plus a (pseudo-)Riemannian metric, g, the relation to physics is not simply one-to-one. In popular textbooks about GTR, it is frequently stated that all diffeomorphic (space-time) manifolds M are physically indistinguishable. Put differently:

S − T = Riem/Diff —– (1)

This becomes particularly virulent in the Einstein hole problem, i.e., assuming that we have a region of space-time, free of matter, we can apply a local diffeomorphism which only acts within this hole, leaving the exterior invariant. We thus get in general two different metric tensors

g(x) , g′(x) := Φ ◦ g(x) —– (2)

in the hole while certain initial conditions lying outside the hole are unchanged, thus yielding two different solutions of the Einstein field equations.
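In coordinates, the statement g′ := Φ ◦ g is (up to the author’s somewhat loose notation) the familiar pullback transformation law of the metric under a diffeomorphism x ↦ x′ = Φ(x):

```latex
g'_{\mu\nu}(x') \;=\; \frac{\partial x^{\alpha}}{\partial x'^{\mu}}\,
                      \frac{\partial x^{\beta}}{\partial x'^{\nu}}\,
                      g_{\alpha\beta}(x),
\qquad x' = \Phi(x).
```

Because the Einstein tensor obeys the same transformation law, g and its pullback Φ*g solve the field equations simultaneously, which is precisely what the hole construction exploits.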

Many physicists consider this to be a violation of determinism (which it is not!) and hence argue that the class of observable quantities has to be drastically reduced in (quantum) gravity theory. They follow the line of reasoning developed by Dirac in the context of gauge theory, thus implying that GTR is essentially also a gauge theory. This then winds up to the conclusion:

Dirac observables in quantum gravity are quantities which are invariant under the diffeomorphism group, Diff, acting from M to M, i.e.

Φ : M → M —– (3)

One should note that with respect to physical observations there is no violation of determinism. An observer can never really observe two different metric fields on one and the same space-time manifold. This can only happen on the mathematical paper. He will use a fixed measurement protocol, using rods and clocks in e.g. a local inertial frame where special relativity locally applies and then extend the results to general coordinate frames.

We get a certain orbit under Diff if we start from a particular manifold M with a metric tensor g and take the orbit

{M, Φ ◦g} —– (4)

In general we have additional fields and matter distributions on M which are transformed accordingly.

Note that not even scalars are invariant in general in the above sense, i.e., not even the Ricci scalar is observable in the Dirac sense:

R(x) ≠ Φ ◦ R(x) —– (5)

in the generic case. Thus, this would imply that the class of admissible observables can be pretty small (even empty!). Furthermore, it follows that points of M are not a priori distinguishable. On the other hand, many consider the Ricci scalar at a point to be an observable quantity.

This winds up to the question whether GTR is a true gauge theory or perhaps only apparently so at a first glance, while on a more fundamental level it is something different. In the words of Kuchar (What is observable..),

Quantities non-invariant under the full diffeomorphism group are observable in gravity.

The reason for these apparently diverging opinions stems from the role reference systems are assumed to play in GTR with some arguing that the gauge property of general coordinate invariance is only of a formal nature.

In the hole argument it is for example argued that it is important to add some particle trajectories which cross each other, thus generating concrete events on M. As these point events transform accordingly under a diffeomorphism, the distance between the corresponding coordinates x, y equals the distance between the transformed points Φ(x), Φ(y), thus being a Dirac observable. On the other hand, the coordinates x or y are not observable.

One should note that this observation is somewhat tautological in the realm of Riemannian geometry, as the metric is an absolute quantity; put differently (and somewhat sloppily), ds² is invariant under passive and, by the same token, active coordinate transformations (diffeomorphisms) because, while conceptually different, the transformation properties under the latter operations are defined as in the passive case. In the case of GTR this absolute quantity enters via the equivalence principle, i.e., distances are measured for example in a local inertial frame (LIF), where special relativity holds, and are then generalized to arbitrary coordinate systems.

# Grothendieck’s Abstract Homotopy Theory

Let E be a Grothendieck topos (think of E as the category, Sh(X), of set valued sheaves on a space X). Within E, we can pick out a subcategory, C, of locally finite, locally constant objects in E. (If X is a space with E = Sh(X), C corresponds to those sheaves whose espace étale is a finite covering space of X.) Picking a base point in X generalises to picking a ‘fibre functor’ F : C → Setsfin, a functor satisfying various conditions implying that it is pro-representable. (If x0 ∈ X is a base point {x0} → X induces a ‘fibre functor’ Sh(X) → Sh{x0} ≅ Sets, by pullback.)

If F is pro-representable by P, then π1(E, F) is defined to be Aut(P), which is a profinite group. Grothendieck proves there is an equivalence of categories C ≃ π1(E)−Setsfin, the category of finite π1(E)-sets. If X is a locally nicely behaved space such as a CW-complex and E = Sh(X), then π1(E) is the profinite completion of π1(X). This profinite completion occurs only because Grothendieck considers locally finite objects. Without this restriction, a covering space Y of X would correspond to a π1(X)-set Y′, but if Y is a finite covering of X then the homomorphism from π1(X) to the finite group of transformations of Y factors through the profinite completion of π1(X). This is defined by: if G is a group, Ĝ = lim←(G/H : H ◁ G, H of finite index) is its profinite completion. This idea of using covering spaces or their analogue in E raises several important points:
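The inverse limit defining Ĝ can be made concrete for G = Z. The toy sketch below (an illustration of mine, not part of Grothendieck’s construction) represents an element of the completion as a family of residues, one in each finite quotient Z/nZ, compatible under the quotient maps Z/mZ → Z/nZ whenever n divides m; every ordinary integer induces such a family, but Ẑ also contains families arising from no single integer.

```python
# An element of Z^ (profinite completion of Z) over a finite set of moduli:
# a residue a_n in Z/nZ for each n, compatible with Z/mZ -> Z/nZ for n | m.

def family(g, moduli):
    """The compatible family induced by an ordinary integer g."""
    return {n: g % n for n in moduli}

def is_compatible(fam):
    """Check compatibility under every quotient map Z/m -> Z/n with n | m."""
    return all(fam[m] % n == fam[n]
               for m in fam for n in fam if m % n == 0)

moduli = [2, 3, 4, 6, 8, 12, 24]
print(is_compatible(family(17, moduli)))  # True: the image of an integer

bad = family(17, moduli)
bad[6] = (bad[6] + 1) % 6                 # tamper with one residue
print(is_compatible(bad))                 # False: breaks Z/12 -> Z/6
```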

a) These are homotopy theoretic results, but no paths are used. The argument involving sheaf theory, the theory of (pro)representable functors, etc., is of a purely categorical nature. This means it is applicable to spaces where the use of paths, and other homotopies is impossible because of bad (or unknown) local properties. Such spaces have been studied within Shape Theory and Strong Shape Theory, although not by using Grothendieck’s fundamental group, nor using sheaf theory.

b) As no paths are used, these methods can also be applied to non-spaces, e.g. locales and possibly to their non-commutative analogues, quantales. For instance, classically one could consider a field k and an algebraic closure K of k and then choose C to be a category of étale algebras over k, in such a way that π1(E) ≅ Gal(K/k), the Galois group of k. It, in fact, leads to a classification theorem for Grothendieck toposes. From this viewpoint, low-dimensional homotopy theory is seen as being part of Galois theory, or vice versa.

c) This underlines the fact that π1(X) classifies covering spaces – but for i > 1, πi(X) does not seem to classify anything other than maps from Si into X!

This is abstract homotopy theory par excellence.

# Derivability from Relational Logic of Charles Sanders Peirce to Essential Laws of Quantum Mechanics

Charles Sanders Peirce made important contributions in logic, where he invented and elaborated a novel system of logical syntax and fundamental logical concepts. The starting point is the binary relation SiRSj between the two ‘individual terms’ (subjects) Sj and Si. In a shorthand notation we represent this relation by Rij. Relations may be composed: whenever we have relations of the form Rij, Rjl, a third, transitive relation Ril emerges, following the rule

RijRkl = δjkRil —– (1)

In ordinary logic the individual subject is the starting point and it is defined as a member of a set. Peirce considered the individual as the aggregate of all its relations

Si = ∑j Rij —– (2)

The individual Si thus defined is an eigenstate of the Rii relation

RiiSi = Si —– (3)

The relations Rii are idempotent

Rii² = Rii —– (4)

and they span the identity

∑i Rii = 1 —– (5)

The Peircean logical structure bears resemblance to category theory. In categories the concept of transformation (transition, map, morphism or arrow) enjoys an autonomous, primary and irreducible role. A category consists of objects A, B, C,… and arrows (morphisms) f, g, h,… . Each arrow f is assigned an object A as domain and an object B as codomain, indicated by writing f : A → B. If g is an arrow g : B → C with domain B, the codomain of f, then f and g can be “composed” to give an arrow g∘f : A → C. The composition obeys the associative law h∘(g∘f) = (h∘g)∘f. For each object A there is an arrow 1A : A → A called the identity arrow of A. The analogy with the relational logic of Peirce is evident: Rij stands as an arrow, the composition rule is manifested in equation (1), and the identity arrow for A ≡ Si is Rii.

Rij may receive multiple interpretations: as a transition from the j state to the i state, as a measurement process that rejects all impinging systems except those in the state j and permits only systems in the state i to emerge from the apparatus, as a transformation replacing the j state by the i state. We proceed to a representation of Rij

Rij = |ri⟩⟨rj| —– (6)

where state ⟨ri | is the dual of the state |ri⟩ and they obey the orthonormal condition

⟨ri |rj⟩ = δij —– (7)

It is immediately seen that our representation satisfies the composition rule equation (1). The completeness, equation (5), takes the form

∑i |ri⟩⟨ri| = 1 —– (8)

All relations remain satisfied if we replace the state |ri⟩ by |ξi⟩ where

|ξi⟩ = 1/√N ∑n |ri⟩⟨rn| —– (9)

with N the number of states. Thus we verify Peirce’s suggestion, equation (2), and the state |ri⟩ is derived as the sum of all its interactions with the other states. Rij acts as a projection, transferring from one r state to another r state

Rij |rk⟩ = δjk |ri⟩ —– (10)
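The correspondence between the relations Rij and outer products can be checked directly. A minimal numerical sketch (assuming, purely for illustration, a three-state system with |ri⟩ the standard basis of C³, so that Rij is the matrix unit Eij) verifies the composition rule (1), idempotence (4), completeness (5), and the projection property (10):

```python
import numpy as np

# Hypothetical 3-state illustration: |r_i> = standard basis vectors of C^3.
n = 3
r = [np.eye(n)[:, i].reshape(n, 1) for i in range(n)]
R = [[r[i] @ r[j].conj().T for j in range(n)] for i in range(n)]

# Composition rule (1): R_ij R_kl = delta_jk R_il
assert np.allclose(R[0][1] @ R[1][2], R[0][2])
assert np.allclose(R[0][1] @ R[2][0], np.zeros((n, n)))

# Idempotence (4) and completeness (5): sum_i R_ii = 1
assert np.allclose(R[1][1] @ R[1][1], R[1][1])
assert np.allclose(sum(R[i][i] for i in range(n)), np.eye(n))

# Projection property (10): R_ij |r_k> = delta_jk |r_i>
assert np.allclose(R[0][2] @ r[2], r[0])
assert np.allclose(R[0][2] @ r[1], np.zeros((n, 1)))
print("relations (1), (4), (5), (10) verified")
```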

We may think also of another property characterizing our states and define a corresponding operator

Qij = |qi⟩⟨qj | —– (11)

with

Qij |qk⟩ = δjk |qi⟩ —– (12)

and

∑i |qi⟩⟨qi| = 1 —– (13)

Successive measurements of the q-ness and r-ness of the states is provided by the operator

RijQkl = |ri⟩⟨rj |qk⟩⟨ql | = ⟨rj |qk⟩ Sil —– (14)

with

Sil = |ri⟩⟨ql | —– (15)

Considering the matrix elements of an operator A as Anm = ⟨rn |A |rm⟩ we find for the trace

Tr(Sil) = ∑n ⟨rn |Sil |rn⟩ = ⟨ql |ri⟩ —– (16)

From the above relation we deduce

Tr(Rij) = δij —– (17)

Any operator can be expressed as a linear superposition of the Rij

A = ∑i,j AijRij —– (18)

with

Aij = Tr(A Rji) —– (19)
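The decomposition (18)–(19) is likewise easy to verify numerically; the sketch below (the same illustrative standard-basis setup as before, with a random operator A standing in for any observable) reconstructs A from its coefficients Aij = Tr(A Rji) and checks the trace rule (17):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
r = [np.eye(n)[:, i].reshape(n, 1) for i in range(n)]
R = [[r[i] @ r[j].conj().T for j in range(n)] for i in range(n)]

# A generic (complex) operator standing in for any observable.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# A_ij = Tr(A R_ji), eq. (19); then A = sum_ij A_ij R_ij, eq. (18).
coeff = [[np.trace(A @ R[j][i]) for j in range(n)] for i in range(n)]
recon = sum(coeff[i][j] * R[i][j] for i in range(n) for j in range(n))
assert np.allclose(A, recon)

# Tr(R_ij) = delta_ij, eq. (17).
assert np.isclose(np.trace(R[0][0]), 1.0)
assert np.isclose(np.trace(R[0][1]), 0.0)
print("decomposition (18)-(19) and trace rule (17) verified")
```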

The individual states could be redefined

|ri⟩ → e^{iφi} |ri⟩ —– (20)

|qi⟩ → e^{iθi} |qi⟩ —– (21)

without affecting the corresponding composition laws. However the overlap number ⟨ri |qj⟩ changes and therefore we need an invariant formulation for the transition |ri⟩ → |qj⟩. This is provided by the trace of the closed operation RiiQjjRii

Tr(RiiQjjRii) ≡ p(qj, ri) = |⟨ri |qj⟩|2 —– (22)

The completeness relation, equation (13), guarantees that p(qj, ri) may assume the role of a probability since

∑j p(qj, ri) = 1 —– (23)
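Taking |ri⟩ to be the standard basis and |qi⟩ a discrete Fourier basis (an arbitrary illustrative choice of a second, complementary property, not something fixed by the text), one can check that Tr(RiiQjjRii) reproduces |⟨ri|qj⟩|² as in equation (22), and that the probabilities sum to 1 as in equation (23):

```python
import numpy as np

n = 4
r = [np.eye(n)[:, i].reshape(n, 1) for i in range(n)]
# A hypothetical second basis: discrete Fourier vectors |q_j>.
F = np.array([[np.exp(2j * np.pi * i * j / n) for j in range(n)]
              for i in range(n)]) / np.sqrt(n)
q = [F[:, j].reshape(n, 1) for j in range(n)]

def R(i, j): return r[i] @ r[j].conj().T   # R_ij = |r_i><r_j|
def Q(i, j): return q[i] @ q[j].conj().T   # Q_ij = |q_i><q_j|

# p(q_j, r_i) = Tr(R_ii Q_jj R_ii) = |<r_i|q_j>|^2, eq. (22)
i = 1
p = [np.trace(R(i, i) @ Q(j, j) @ R(i, i)).real for j in range(n)]
overlap = [abs((r[i].conj().T @ q[j]).item()) ** 2 for j in range(n)]
assert np.allclose(p, overlap)
assert np.isclose(sum(p), 1.0)  # completeness (13) gives eq. (23)
print(p)  # each 1/4 for the Fourier basis: maximal complementarity
```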

We discover that starting from the relational logic of Peirce we obtain all the essential laws of Quantum Mechanics. Our derivation underlines the utmost relational nature of Quantum Mechanics and goes in parallel with the analysis of the quantum algebra of microscopic measurement.

# Conjuncted: Occam’s Razor and Nomological Hypothesis. Thought of the Day 51.1.1


A temporally evolving system must possess a sufficiently rich set of symmetries to allow us to infer general laws from a finite set of empirical observations. But what justifies this hypothesis?

This question is central to the entire scientific enterprise. Why are we justified in assuming that scientific laws are the same in different spatial locations, or that they will be the same from one day to the next? Why should replicability of other scientists’ experimental results be considered the norm, rather than a miraculous exception? Why is it normally safe to assume that the outcomes of experiments will be insensitive to irrelevant details? Why, for that matter, are we justified in the inductive generalizations that are ubiquitous in everyday reasoning?

In effect, we are assuming that the scientific phenomena under investigation are invariant under certain symmetries – both temporal and spatial, including translations, rotations, and so on. But where do we get this assumption from? The answer lies in the principle of Occam’s Razor.

Roughly speaking, this principle says that, if two theories are equally consistent with the empirical data, we should prefer the simpler theory:

Occam’s Razor: Given any body of empirical evidence about a temporally evolving system, always assume that the system has the largest possible set of symmetries consistent with that evidence.

Making it more precise, we begin by explaining what it means for a particular symmetry to be “consistent” with a body of empirical evidence. Formally, our total body of evidence can be represented as a subset E of H, namely the set of all logically possible histories that are not ruled out by that evidence. Note that we cannot assume that our evidence is a subset of Ω; when we scientifically investigate a system, we do not normally know what Ω is. Hence we can only assume that E is a subset of the larger set H of logically possible histories.

Now let ψ be a transformation of H, and suppose that we are testing the hypothesis that ψ is a symmetry of the system. For any positive integer n, let ψn be the transformation obtained by applying ψ repeatedly, n times in a row. For example, if ψ is a rotation about some axis by angle θ, then ψn is the rotation by the angle nθ. For any such transformation ψn, we write ψ–n(E) to denote the inverse image in H of E under ψn. We say that the transformation ψ is consistent with the evidence E if the intersection

E ∩ ψ–1(E) ∩ ψ–2(E) ∩ ψ–3(E) ∩ …

is non-empty. This means that the available evidence (i.e., E) does not falsify the hypothesis that ψ is a symmetry of the system.

For example, suppose we are interested in whether cosmic microwave background radiation is isotropic, i.e., the same in every direction. Suppose we measure a background radiation level of x1 when we point the telescope in direction d1, and a radiation level of x2 when we point it in direction d2. Call these events E1 and E2. Thus, our experimental evidence is summarized by the event E = E1 ∩ E2. Let ψ be a spatial rotation that rotates d1 to d2. Then, focusing for simplicity just on the first two terms of the infinite intersection above,

E ∩ ψ–1(E) = E1 ∩ E2 ∩ ψ–1(E1) ∩ ψ–1(E2).

If x1 = x2, we have E1 = ψ–1(E2), and the expression for E ∩ ψ–1(E) simplifies to E1 ∩ E2 ∩ ψ–1(E1), which has at least a chance of being non-empty, meaning that the evidence has not (yet) falsified isotropy. But if x1 ≠ x2, then E1 and ψ–1(E2) are disjoint. In that case, the intersection E ∩ ψ–1(E) is empty, and the evidence is inconsistent with isotropy. As it happens, we know from recent astronomy that x1 ≠ x2 in some cases, so cosmic microwave background radiation is not isotropic, and ψ is not a symmetry.
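The consistency test E ∩ ψ⁻¹(E) ∩ ψ⁻²(E) ∩ … can be played out on a finite toy model of the isotropy example. All the particulars below are illustrative assumptions of mine (four directions, binary radiation levels, ψ a cyclic rotation of order 4), not the text’s full history space H:

```python
from itertools import product

# Toy history space: a "history" assigns a radiation level (0 or 1)
# to each of four directions of the sky; H is all 16 such assignments.
H = set(product((0, 1), repeat=4))

def psi(h):
    """The candidate symmetry: rotate the sky by one direction."""
    return h[-1:] + h[:-1]

def preimage(E):
    """Inverse image of the event E under psi."""
    return {h for h in H if psi(h) in E}

def consistent(E, order=4):
    """psi is consistent with E iff E ∩ psi^-1(E) ∩ psi^-2(E) ∩ ...
    is non-empty; psi has order 4 here, so four preimages suffice."""
    inter, pre = set(E), set(E)
    for _ in range(order):
        pre = preimage(pre)
        inter &= pre
    return bool(inter)

def evidence(x1, x2):
    """E = E1 ∩ E2: level x1 seen in direction 0, level x2 in direction 1."""
    return {h for h in H if h[0] == x1 and h[1] == x2}

print(consistent(evidence(1, 1)))  # True: x1 == x2, isotropy survives
print(consistent(evidence(1, 0)))  # False: x1 != x2 falsifies isotropy
```

When x1 = x2, the constant history survives every preimage, so the intersection is non-empty; when x1 ≠ x2, already E ∩ ψ⁻¹(E) imposes contradictory levels on direction 0 and is empty, mirroring the argument in the text.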

Our version of Occam’s Razor now says that we should postulate as symmetries of our system a maximal monoid of transformations consistent with our evidence. Formally, a monoid Ψ of transformations (where each ψ in Ψ is a function from H into itself) is consistent with evidence E if the intersection

⋂ψ∈Ψ ψ–1(E)

is non-empty. This is the generalization of the infinite intersection that appeared in our definition of an individual transformation’s consistency with the evidence. Further, a monoid Ψ that is consistent with E is maximal if no proper superset of Ψ forms a monoid that is also consistent with E.

Occam’s Razor (formal): Given any body E of empirical evidence about a temporally evolving system, always assume that the set of symmetries of the system is a maximal monoid Ψ consistent with E.

What is the significance of this principle? We define Γ to be the set of all symmetries of our temporally evolving system. In practice, we do not know Γ. A monoid Ψ that passes the test of Occam’s Razor, however, can be viewed as our best guess as to what Γ is.

Furthermore, if Ψ is this monoid, and E is our body of evidence, the intersection

⋂ψ∈Ψ ψ–1(E)

can be viewed as our best guess as to what the set of nomologically possible histories is. It consists of all those histories among the logically possible ones that are not ruled out by the postulated symmetry monoid Ψ and the observed evidence E. We thus call this intersection our nomological hypothesis and label it Ω(Ψ,E).

To see that this construction is not completely far-fetched, note that, under certain conditions, our nomological hypothesis does indeed reflect the truth about nomological possibility. If the hypothesized symmetry monoid Ψ is a subset of the true symmetry monoid Γ of our temporally evolving system – i.e., we have postulated some of the right symmetries – then the true set Ω of all nomologically possible histories will be a subset of Ω(Ψ,E). So, our nomological hypothesis will be consistent with the truth and will, at most, be logically weaker than the truth.

Given the hypothesized symmetry monoid Ψ, we can then assume provisionally (i) that any empirical observation we make, corresponding to some event D, can be generalized to a Ψ-invariant law and (ii) that unconditional and conditional probabilities can be estimated from empirical frequency data using a suitable version of the Ergodic Theorem.

# Rhizomatic Topology and Global Politics. A Flirtatious Relationship.

Deleuze and Guattari see concepts as rhizomes, biological entities endowed with unique properties. They see concepts as spatially representable, where the representation contains principles of connection and heterogeneity: any point of a rhizome must be connected to any other. Deleuze and Guattari list the possible benefits of spatial representation of concepts, including the ability to represent complex multiplicity, the potential to free a concept from foundationalism, and the ability to show both breadth and depth. In this view, geometric interpretations move away from the insidious understanding of the world in terms of dualisms, dichotomies, and lines, to understand conceptual relations in terms of space and shapes. The ontology of concepts is thus, in their view, appropriately geometric, a multiplicity defined not by its elements, nor by a center of unification and comprehension, but instead measured by its dimensionality and its heterogeneity. The conceptual multiplicity is already composed of heterogeneous terms in symbiosis, and is continually transforming itself such that it is possible to follow, and map, not only the relationships between ideas but also how they change over time. In fact, the authors claim that there are further benefits to geometric interpretations of understanding concepts which are unavailable in other frames of reference. They outline the unique contribution of geometric models to the understanding of contingent structure:

Principle of cartography and decalcomania: a rhizome is not amenable to any structural or generative model. It is a stranger to any idea of genetic axis or deep structure. A genetic axis is like an objective pivotal unity upon which successive stages are organized; deep structure is more like a base sequence that can be broken down into immediate constituents, while the unity of the product passes into another, transformational and subjective, dimension. (Deleuze and Guattari)

The word that Deleuze and Guattari use for ‘multiplicities’ can also be translated to the topological term ‘manifold.’ If we thought about their multiplicities as manifolds, there are a virtually unlimited number of things one could come to know, in geometric terms, about (and with) our object of study, abstractly speaking. Among those unlimited things we could learn are properties of groups (homological, cohomological, and homeomorphic), complex directionality (maps, morphisms, isomorphisms, and orientability), dimensionality (codimensionality, structure, embeddedness), partiality (differentiation, commutativity, simultaneity), and shifting representation (factorization, ideal classes, reciprocity). Each of these functions allows for a different, creative, and potentially critical representation of global political concepts, events, groupings, and relationships. This is how concepts are to be looked at: as manifolds. With such a dimensional understanding of concept-formation, it is possible to deal with complex interactions of like entities, and interactions of unlike entities. Critical theorists have emphasized the importance of such complexity in representation a number of times, speaking about it in terms compatible with mathematical methods if not mathematically. For example, Foucault’s declaration that “practicing criticism is a matter of making facile gestures difficult” both reflects and is reflected in many critical theorists’ projects of revealing the complexity in (apparently simple) concepts deployed in global politics. This leads to a shift in the concept of danger as well, where danger is not an objective condition but “an effect of interpretation”. Critical thinking about how-possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses, sending a wave of added complexity through other concepts as well.
This work seeking complexity serves one of the major underlying functions of critical theorizing: finding invisible injustices in (modernist, linear, structuralist) givens in the operation and analysis of global politics.

In a geometric sense, this complexity could be thought about as multidimensional mapping. In theoretical geometry, the process of mapping conceptual spaces is not primarily empirical, but for the purpose of representing and reading the relationships between information, including identification, similarity, differentiation, and distance. The reason for defining topological spaces in math, the essence of the definition, is that there is no absolute scale for describing the distance or relation between certain points, yet it makes sense to say that an (infinite) sequence of points approaches some other point (but again, with no way to describe how quickly or from what direction one might be approaching). This seemingly weak relationship, which is defined purely ‘locally’, i.e., in a small locale around each point, is often surprisingly powerful: using only the relationship of approaching parts, one can distinguish between, say, a balloon, a sheet of paper, a circle, and a dot.

To each delineated concept, one should distinguish and associate a topological space, in a (necessarily) non-explicit yet definite manner. Whenever one has a relationship between concepts (here we think of the primary relationship as being that of constitution, but not restrictively), we ‘specify’ a function (or inclusion, or relation) between the topological spaces associated to the concepts. In these terms, a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented. Such an approach can be leveraged for thinking about conceptual components, dimensionality, and structure. In these terms, dimensions can be thought of as properties or qualities, each with their own (often-multidimensional) properties or qualities. Since a key goal of the modeling of conceptual space is representation, a key (mathematical and theoretical) goal of concept-space mapping is

associationism, where associations between different kinds of information elements carry the main burden of representation. (Conceptual Spaces as a Framework for Knowledge Representation)

To this end,

objects in conceptual space are represented by points, in each domain, that characterize their dimensional values. (A Concept Geometry for Conceptual Spaces)

These dimensional values can be arranged in relation to each other, as Gardenfors explains that

distances represent degrees of similarity between objects represented in space, and therefore conceptual spaces are “suitable for representing different kinds of similarity relation.”

These similarity relationships can be explored across ideas of a concept and across contexts, but also over time, since “with the aid of a topological structure, we can speak about continuity, e.g., a continuous change,” a possibility which can be found only in treating concepts as topological structures and not in linguistic descriptions or set-theoretic representations.
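Gardenfors’s picture of distances as degrees of similarity can be sketched in a few lines. Everything below (the two quality dimensions, the sample objects, and the particular similarity function) is a hypothetical illustration, not drawn from the text.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical two-dimensional conceptual space whose quality dimensions
# are "hue" and "size" (illustrative values only).
objects = {
    "cherry":     (0.95, 0.1),
    "strawberry": (0.90, 0.2),
    "banana":     (0.15, 0.5),
}

def similarity(a, b):
    """Similarity as a decreasing function of distance in the conceptual space."""
    return 1.0 / (1.0 + dist(objects[a], objects[b]))

# Nearby points in the space represent similar objects.
print(similarity("cherry", "strawberry") > similarity("cherry", "banana"))  # → True
```

Because similarity here varies continuously with position, small movements in the space correspond to small changes in similarity, which is one way to cash out the “continuous change” that the topological reading makes available.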

# Excessive Subjective Transversalities. Thought of the Day 33.0

In other words, object and subject, in their mutual difference and reciprocal trajectories, emerge and re-emerge together, from transformation. The everything that has already happened is emergence, registered after its fact in a subject-object relation. Where there was classically and in modernity an external opposition between object and subject, there is now a double distinction internal to the transformation. 1) After-the-fact: subject-object is to emergence as stoppage is to process. 2) In-fact: “objective” and “subjective” are inseparable, as matter of transformation to manner of transformation… (Brian Massumi Deleuze Guattari and Philosophy of Expression)

Massumi makes the case, after Simondon and Deleuze and Guattari, for a dynamic process of subjectivity in which subject and object are mutually other, but their relation is transformative of their terms. That relation is emergence. In Felix Guattari’s last book, Chaosmosis, he outlines the production of subjectivity as transversal. He states that subjectivity is

the ensemble of conditions which render possible the emergence of individual and/or collective instances as self-referential existential Territories, adjacent, or in a delimiting relation, to an alterity that is itself subjective.

This is the subject in excess (Simondon; Deleuze), overpowering the transcendental. The subject is constituted by all the forces that simultaneously impinge upon it and are in relation to it. Similarly, Simondon characterises this subjectivity as the transindividual, which refers to

a relation to others, which is not determined by a constituted subject position, but by pre-individuated potentials only experienced as affect. (Adrian Mackenzie, Transductions: Bodies and Machines at Speed)

Equating this proposition to technologically enabled relations exerts a strong attraction on the experience of felt presence and interaction in distributed networks. Simondon’s principle of individuation, an ontogenetic process similar to Deleuze’s morphogenetic process, is committed to the guiding principle

of the conservation of being through becoming. This conservation is effected by means of the exchanges made between structure and process… (Simondon).

Or think of this as structure and organisation, which is an autopoietic process; the virtual organisation of the affective interval. These leanings best situate ideas circulating through collectives and their multiple individuations. These approaches reflect one of Bergson’s lasting contributions to philosophical practice: his anti-dialectical methodology that debunks duality and the synthesised composite for a differentiated multiplicity that is also a unified (yet heterogeneous) continuity of duration. Multiplicities replace the transcendental concept of essences.

# Egyptology

The ancient Egyptians conceived man and kosmos to be dual: firstly, the High God or Divine Mind arose out of the Primeval Waters of space at the beginning of manifestation; secondly, the material aspect expressing what is in the Divine Mind must be in a process of ever-becoming. In other words, the kosmos consists of body and soul. Man, emanated in the image of divinity, is similarly dual, and his evolutionary goal is a fully conscious return to the Divine Mind.

Space, symbolized by the Primeval Waters, contains the seeds and possibilities of all living things in their quiescent state. At the right moment for awakenment, all will take up forms in accordance with inherent qualities. Or to express it in another way: the Word uttered by the Divine Mind calls manifested life to begin once more.

Growth is effected through a succession of lives, a concept that is found in texts and implied in symbolism. Herodotus, the Greek historian (5th century B.C.), wrote that

the Egyptians were the first to teach that the human soul is immortal, and at the death of the body enters into some other living thing then coming to birth; and after passing through all creatures of land, sea, and air (which cycle it completes in three thousand years) it enters once more into a human body, at birth.

The theory of reincarnation is often ascribed to Pythagoras, since he spent some time in Egypt studying its philosophy and, according to Herodotus, “adopted this opinion as if it were his own.”

Margaret A. Murray, who worked with Flinders Petrie, illustrates the Egyptian belief by referring to the ka-names of three kings (The ka-name relates to the vital essence of an individual); the first two of the twelfth dynasty: that of Amonemhat I means “He who repeats births,” Senusert I: “He whose births live,” and the ka-name of Setekhy I of the nineteenth dynasty was “Repeater of births.” (The Splendour That Was Egypt)

Reincarnation has been connected with the rites of Osiris, one of the Mysteries or cycles of initiation perpetuated in Egypt. The concept of transformation as recorded in the Egyptian texts has been interpreted in various ways. De Briere expresses it in astronomical terms: “The sensitive soul re-entered by the gate of the gods, or the Capricorn, into the Amenthe, the watery heavens, where it dwelt always with pleasure; until, descending by the gate of men, or the Cancer, it came to animate a new body.” Herodotus writes of transmigration, i.e., that the soul passes through various animals before being reborn in human form. This refers not to the human soul but to the molecules, atoms, and other components that clothe it. They gravitate to vehicles similar in qualities to their former host’s, drawn magnetically to the new milieu by the imprint made by the human soul, whether it be fine or gross. It is quite clear from the Book of the Dead and other texts that the soul itself after death undergoes experiences in the Duat (Dwat) or Underworld, the realm and condition between heaven and earth, or beneath the earth, supposedly traversed by the sun from sunset to sunrise.

The evolution of consciousness is symbolized by the Solar Barque moving through the Duat. In this context the “hours” of travel represent stages of development. Bika Reed states that at a certain “hour” the individual meets the “Rebel in the Soul,” that is, at the “hour of spiritual transformation.” And translating from the scroll Reed gives: “the soul warns, only if a man is allowed to continue evolving, can the intellect reach the heart.”

Not only does the scripture deal with rituals assumed to apply to after-death conditions — in some respects similar to the Book of the Dead — but also it seems quite patently a ritual connected with initiation from one level of self-becoming to another. Nevertheless the picture that emerges is that of the “deceased” or candidate for initiation reaching a fork offering two paths called “The Two Paths of Liberation” and, while each may take the neophyte to the abode of the Akhu (the “Blessed”) — a name for the gods, and also for the successful initiates — they involve different experiences. One path, passing over land and water, is that of Osiris or cyclic nature and involves many incarnations. The other way leads through fire in a direct or shortened passage along the route of Horus who in many texts symbolizes the divine spark in the heart.

In the Corpus Hermeticum, Thoth — Tehuti — was the Mind of the Deity, whom the Alexandrian Greeks identified with Hermes. For example, one of the chief books in the Hermetica is the Poimandres treatise, or Pymander. The early trinity Atum-Ptah-Thoth was rendered into Greek as theos (god) — demiourgos or demiourgos-nous (Demiurge or Demiurgic Mind) — nous and logos (Mind and Word). The text states that Thoth, after planning and engineering the kosmos, unites himself with the Demiurgic Mind. There are other expressions proving that the Poimandres text is a Hellenized version of Egyptian doctrine. An important concept therein is that of “making-new-again.” The treatise claims that all animal and vegetable forms contain in themselves “the seed of again-becoming” — a clear reference to reimbodiment — “every birth of flesh ensouled . . . shall of necessity renew itself.” G. R. S. Mead interprets this as palingenesis or reincarnation — “the renewal on the karmic wheel of birth-and-death.” (Thrice-Greatest Hermes)

The Corpus Hermeticum or Books of Hermes are believed by some scholars to have been borrowed from Christian texts, but their concepts are definitely ancient Egyptian in origin, translated into Alexandrian Greek, and Latin.

Looking at Walter Scott’s translation of Poimandres, it states that “At the dissolution of your material body, you first yield up the body itself to be changed,” and it will be absorbed by nature. The rest of the individual’s components return to “their own sources, becoming parts of the universe, and entering into fresh combinations to do other work.” After this, the real or inner man “mounts upward through the structure of the heavens,” leaving off in each of the seven zones certain energies and related substances. The first zone is that of the Moon; the second, the planet Mercury; the third, Venus; fourth, the Sun; fifth, Mars; sixth, Jupiter; and seventh, Saturn. “Having been stripped of all that was wrought upon him” in his previous descent into incarnation on Earth, he ascends to the highest sphere, “being now possessed of his own proper power.” Finally, he enters into divinity. “This is the Good; this is the consummation, for those who have got gnosis.” (According to Scott, gnosis in this context means not only knowledge of divinity but also the relationship between man’s real self and the godhead.)

Further on, the Poimandres explains that the mind and soul can be conjoined only by means of an earth-body, because the mind by itself cannot do so, and an earthly body would not be able to endure

the presence of that mighty and immortal being, nor could so great a power submit to contact with a body defiled by passion. And so the mind takes to itself the soul for a wrap

In Hermetica, Isis to Horus, there is the statement:

. . . . For there are [in the world above, two gods] who are attendants of the Providence that governs all. One of them is Keeper of souls; the other is Conductor of souls. The Keeper is he that has in his charge the unembodied souls; the Conductor is he that sends down to earth the souls that are from time to time embodied, and assigns to them their several places. And both he that keeps watch over the souls, and he that sends them forth, act in accordance with God’s will.

There are many texts using the term “transformations” and a good commentary on the concept by R. T. Rundle Clark follows:

In order to reach the heights of the sky the soul had to undergo those transformations which the High God had gone through as he developed from a spirit in the Primeval Waters to his final position as Sun God . . . (Myth and Symbol in Ancient Egypt)

This would appear to mean that in entering upon physical manifestation human souls follow the path of the divine and spiritual artificers of the universe.

There is reason to believe that the after-death adventures met with by the soul through the Duat or Underworld were also undergone by a neophyte during initiation. If the trial ends in success, the awakened human being thereafter speaks with the authority of direct experience. In the most ancient days of Egypt, such an initiate was called a “Son of the Sun” for he embodied the solar splendour. For the rest of mankind, the way is slower, progressing certainly, but more gradually, through many lives. The ultimate achievement is the same: to radiate the highest qualities of the spiritual element locked within the aspiring soul.