Stochasticities. Lévy processes. Part 2.

Define the characteristic function of Xt:

Φ_t(z) ≡ Φ_{X_t}(z) ≡ E[e^{iz·X_t}], z ∈ R^d

For s, t ≥ 0, by writing X_{t+s} = X_s + (X_{t+s} − X_s) and using the facts that X_{t+s} − X_s is independent of X_s and, by stationarity of increments, has the same law as X_t, we obtain that t ↦ Φ_t(z) is a multiplicative function.

Φ_{t+s}(z) = Φ_{X_{t+s}}(z) = Φ_{X_s}(z) Φ_{X_{t+s} − X_s}(z) = Φ_{X_s}(z) Φ_{X_t}(z) = Φ_s(z) Φ_t(z)

The stochastic continuity of t ↦ X_t implies in particular that X_s → X_t in distribution when s → t. Therefore, Φ_{X_s}(z) → Φ_{X_t}(z) when s → t, so t ↦ Φ_t(z) is a continuous function of t. Together with the multiplicative property Φ_{s+t}(z) = Φ_s(z)·Φ_t(z), this implies that t ↦ Φ_t(z) is an exponential function.

Let (X_t)_{t≥0} be a Lévy process on R^d. There exists a continuous function ψ : R^d → C, called the characteristic exponent of X, such that:

E[e^{iz·X_t}] = e^{tψ(z)}, z ∈ R^d

ψ is the cumulant generating function of X_1: ψ = Ψ_{X_1}, and the cumulant generating function of X_t varies linearly in t: Ψ_{X_t} = tΨ_{X_1} = tψ. The law of X_t is therefore determined by the knowledge of the law of X_1: the only degree of freedom we have in specifying a Lévy process is to specify the distribution of X_t for a single time (say, t = 1).
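As a numerical illustration of the multiplicative property and of E[e^{iz·X_t}] = e^{tψ(z)}, here is a minimal Monte Carlo sketch for the simplest Lévy process, a Brownian motion with drift; the parameters μ, σ and the test points z, s, t are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.3, 1.0          # drift and volatility of X_t = mu*t + sigma*W_t

def char_fn(z, t, n=200_000):
    """Monte Carlo estimate of Phi_t(z) = E[exp(i z X_t)]."""
    x = mu * t + sigma * np.sqrt(t) * rng.standard_normal(n)
    return np.exp(1j * z * x).mean()

z, s, t = 0.7, 0.4, 1.1
lhs = char_fn(z, s + t)                       # Phi_{s+t}(z)
rhs = char_fn(z, s) * char_fn(z, t)           # Phi_s(z) * Phi_t(z)
exact = np.exp((s + t) * (1j * mu * z - 0.5 * (sigma * z) ** 2))  # e^{t psi(z)}
print(lhs, rhs, exact)
```

For this process the characteristic exponent is ψ(z) = iμz − σ²z²/2; all three printed values should agree up to Monte Carlo error.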


Capital As Power.


One has the Erich Fromm angle of consciousness as linear and directly proportional to exploitation as one of the strands of Marxian thinking; the non-linearity creeps up from epistemology on the technological side, with something like, say, Moore's Law, where the ascension of conscious thought is, or could be, likened to exponentials. Now, these exponentials are potent in ridding us of the pronouns, as in the "I" having a compossibility with the "We", for if these aren't gotten rid of, there is asphyxiation in continuing with them, an effort, an energy expended into the vestiges of waste, before Capitalism comes sweeping in over such deliberately pronounced islands of pronouns. This is where the sweep is of the "IT".

And this is emancipation of the highest order, where teleology would be replaced by eschatology, and alienation would be replaced with emancipation. Teleology is alienating, whereas eschatology is emancipating. Agency would become un-agency. An emancipation from alienation, from being, into the arms of becoming, for the former is a mere snapshot of the illusory order, whereas the latter is a continuum of fluidity, the fluid dynamics of the deracinated from the illusory order.

The "IT" is pure and brute materialism, the cosmic unfoldings beyond our understanding and, importantly, mirrored on the terrestrial. "IT" is not to be realized. "IT" is what engulfs us, kills us, and in the process emancipates us from alienation. "IT" is "Realism", a philosophy without "we", Capitalism's excessive power. "IT" enslaves "us" to the point of our losing any identification. In a nutshell, the theory of capital is a catalogue of heresies to be welcomed, from the vantage of an intention to emancipate economic thought from the etherealized spheres of choice and behavior, and from the paradigm of disembodied minds.

Jonathan Nitzan and Shimshon Bichler's Capital as Power: A Study of Order and Creorder

Econophysics: Financial White Noise Switch. Thought of the Day 115.0


What is the cause of large market fluctuations? Some economists blame irrationality for the fat-tail distribution. Others have observed that social psychology may create market fads and panics, which can be modeled as collective behavior in statistical mechanics. For example, a bi-modular distribution was discovered in empirical data on option prices. One possible mechanism of polarized behavior is collective action, studied in physics and social psychology. A sudden regime switch, or phase transition, may occur between the uni-modular and bi-modular distributions when a field parameter changes across some threshold. The Ising model of equilibrium statistical mechanics was borrowed to study social psychology: its phase transition from uni-modular to bi-modular distribution describes the statistical features of a stable society turning into a divided one. The problem with the Ising model is that its key parameter, the social temperature, has no operational definition in a social system. A better alternative parameter is the intensity of social interaction in collective action.
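A minimal simulation sketch of this regime switch, using heat-bath (Glauber) dynamics for a mean-field Ising model in which the interaction intensity k absorbs the social temperature; the agent count, sweep count and values of k are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_opinion_samples(k, n_agents=100, sweeps=3000):
    """Heat-bath dynamics for a mean-field Ising model of collective behavior.
    k is the interaction intensity; returns post-burn-in samples of the
    mean opinion m = average of the agents' +/-1 choices."""
    s = rng.choice([-1, 1], size=n_agents)
    samples = []
    for sweep in range(sweeps):
        for _ in range(n_agents):
            i = rng.integers(n_agents)
            m = s.mean()
            p_up = 1.0 / (1.0 + np.exp(-2.0 * k * m))  # P(s_i = +1 | mean field m)
            s[i] = 1 if rng.random() < p_up else -1
        if sweep >= sweeps // 2:                        # discard burn-in
            samples.append(s.mean())
    return np.array(samples)

for k in (0.5, 1.5):  # below and above the mean-field threshold k = 1
    m = mean_opinion_samples(k)
    print(f"k = {k}: mean(m) = {m.mean():+.2f}, mean(|m|) = {np.abs(m).mean():.2f}")

# Below the threshold the stationary distribution of m is uni-modular around 0;
# above it, m settles near one of the two symmetric roots of m = tanh(k*m),
# so over many runs the distribution is bi-modular: the society polarizes.
```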

A difficult issue in business cycle theory is how to explain the recurrent features of business cycles that are widely observed in macro and financial indexes. The problem is that business cycles are not strictly periodic and not truly random: their correlations are not short, as in a random walk, and they have multiple frequencies that change over time. Therefore, all kinds of mathematical models have been tried in business cycle theory, including deterministic, stochastic, linear and nonlinear models. We outline economic models in terms of their base functions, including white noise with short correlations, persistent cycles with long correlations, and the color chaos model with erratic amplitude and a narrow frequency band, like a biological clock.

 


The steady state of the probability distribution function in the Ising model of collective behavior with h = 0 (no central propaganda field). a. Uni-modular distribution at low social stress (k = 0): moderate, stable behavior with weak interaction and high social temperature. b. Marginal distribution at the phase transition, at medium social stress (k = 2): a behavioral phase transition between stable and unstable society, induced by collective behavior. c. Bi-modular distribution at high social stress (k = 2.5): the society splits into two opposing groups under low social temperature and strong social interaction in an unstable society.

Deterministic models are used by Keynesian economists for the endogenous mechanism of business cycles, as in the accelerator-multiplier model. Stochastic models go back to the Frisch model of noise-driven cycles, which attributes business fluctuations to external shocks as the driving force. Since the 1980s, the discovery of economic chaos and the application of statistical mechanics have provided more advanced models for describing business cycles. Graphically,


The steady state of the probability distribution function in the socio-psychological model of collective choice. Here, a is the independence parameter and b the interaction parameter. a. Centered distribution with b < a (short-dashed curve): independent decisions rooted in individualistic orientation overcome social pressure exerted through mutual communication. b. Horizontal flat distribution with b = a (long-dashed line): the marginal case in which individualistic orientation balances social pressure. c. Polarized distribution with b > a (solid line): social pressure through mutual communication is stronger than independent judgment.


Numerical autocorrelations from time series generated by random noise and a harmonic wave. The solid line is white noise; the broken line is a sine wave with period P = 1.

Linear harmonic cycles with a unique frequency were introduced in business cycle theory. The autocorrelations of a harmonic cycle and of white noise are shown in the figure above. The autocorrelation function of a harmonic cycle is a cosine wave, its amplitude slightly decayed because of the limited number of data points in the numerical experiment. The autocorrelations of a random series form an erratic series with rapid decay from one to residual fluctuations in numerical calculation. The auto-regressive (AR) model in discrete time combines lagged terms with a white-noise term to simulate the short-term autocorrelations of empirical data.
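The contrast in the figure is easy to reproduce; a minimal sketch (series length, period and lag range are illustrative assumptions):

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation r(lag) = corr(x_t, x_{t+lag})."""
    x = x - x.mean()
    var = (x * x).mean()
    return np.array([(x[:len(x) - lag] * x[lag:]).mean() / var
                     for lag in range(max_lag)])

rng = np.random.default_rng(2)
t = np.arange(2000)
noise = rng.standard_normal(t.size)
wave = np.sin(2 * np.pi * t / 50)        # harmonic cycle with period 50 steps

r_noise = autocorr(noise, 200)  # drops from 1 to small residual fluctuations
r_wave = autocorr(wave, 200)    # a cosine wave, slightly damped by finite data
print(r_noise[:3], r_wave[:3])
```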

The deterministic models of chaos can be classified into white chaos and color chaos. White chaos is generated by nonlinear difference equations in discrete time, such as the one-dimensional logistic map and the two-dimensional Hénon map. Its autocorrelations and power spectra look like white noise, and its correlation dimension can be less than one. The white-chaos model is simple for mathematical analysis but rarely used in empirical analysis, since it needs an intrinsic time unit.

Color chaos is generated by nonlinear differential equations in continuous time, such as the three-dimensional Lorenz model and one-dimensional delay-differential models in biology and economics. Its autocorrelations look like a decayed cosine wave, and its power spectra look like a combination of harmonic cycles and white noise. The correlation dimension is between one and two for 3D differential equations, and varies for delay-differential equations.
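A minimal sketch of that "decayed cosine" signature, using the Lorenz model named above (the Euler scheme, step size and parameters are standard textbook choices, assumed here for illustration):

```python
import numpy as np

def lorenz_series(n_steps=40_000, dt=0.01, sigma=10.0, rho=28.0, beta=8 / 3):
    """Integrate the Lorenz system with a simple Euler scheme; return x(t)."""
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

xs = lorenz_series()[5000:]              # discard the transient
xs = xs - xs.mean()
r = np.array([(xs[:-lag] * xs[lag:]).mean() for lag in range(1, 300)]) / xs.var()
print(r[:5])

# r oscillates inside a decaying envelope: the "decayed cosine wave" of color
# chaos, unlike the abrupt decay of white noise or white chaos.
```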


History shows the remarkable resilience of a market that has experienced a series of wars and crises. The related issue is why the economy can recover from severe damage and from far out of equilibrium. Mathematically speaking, we may examine regime stability under parameter change. One major weakness of the linear oscillator model is that its regime of periodic cycles is fragile, or only marginally stable, under changing parameters; only a nonlinear oscillator model is capable of generating resilient cycles within a finite area under changing parameters. The typical example of a linear model is the Samuelson model of the multiplier-accelerator. Linear stochastic models have a similar problem to linear deterministic models: for example, the so-called unit-root solution occurs only at the borderline of the unit root, and if a small parameter change crosses the unit circle, the stochastic solution falls into a damped (inside the unit circle) or explosive (outside the unit circle) solution.
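The fragility at the unit root is easy to see numerically; a minimal AR(1) sketch (parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1_path(a, n=300, sigma=1.0):
    """AR(1): x_{t+1} = a * x_t + noise. |a| < 1 damped, a = 1 unit root,
    |a| > 1 explosive."""
    x = np.zeros(n)
    for t in range(n - 1):
        x[t + 1] = a * x[t] + sigma * rng.standard_normal()
    return x

for a in (0.95, 1.0, 1.05):
    print(f"a = {a}: final |x| = {abs(ar1_path(a)[-1]):.1f}")

# A tiny change in a moves the solution across the unit circle: from damped
# fluctuations, to a random walk, to explosive growth. Periodic behavior is
# only the marginal case, which is the fragility of linear models noted above.
```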

Intuition


During his attempt to axiomatize the category of all categories, Lawvere says

Our intuition tells us that whenever two categories exist in our world, then so does the corresponding category of all natural transformations between the functors from the first category to the second (The Category of Categories as a Foundation).

However, if one tries to reduce categorial constructions to set theory, one faces some serious problems in the case of a category of functors. Lawvere (who, according to his aim of axiomatization, is not concerned with such a reduction) relies here on "intuition" to stress that those working with categorial concepts, despite these problems, have the feeling that the envisaged construction is clear, meaningful and legitimate. It is not reducibility to set theory but an "intuition" yet to be specified that answers for the clarity, meaningfulness and legitimacy of a construction emerging in a mathematical working situation. In particular, Lawvere relies on a collective intuition, a common sense, for he explicitly says "our intuition". Further, one obviously has to deal here with common sense on a technical level, for the "we" can only extend to a community used to working with the concepts concerned.

In the tradition of philosophy, “intuition” means immediate, i.e., not conceptually mediated cognition. The use of the term in the context of validity (immediate insight in the truth of a proposition) is to be thoroughly distinguished from its use in the sensual context (the German Anschauung). Now, language is a manner of representation, too, but contrary to language, in the context of images the concept of validity is meaningless.

Obviously, the cognition-guiding aspect of intuition is touched on here. Sensual intuition especially can take on this guiding (or heuristic) function. There have been many working situations in the history of mathematics in which making the objects of investigation accessible to a sensual intuition (by providing a Veranschaulichung) yielded considerable progress in the development of the knowledge concerning these objects. As an example, take the following account by Emil Artin of Emmy Noether's contribution to the theory of algebras:

Emmy Noether introduced the concept of representation space – a vector space upon which the elements of the algebra operate as linear transformations, the composition of the linear transformation reflecting the multiplication in the algebra. By doing so she enables us to use our geometric intuition.

Similarly, Fréchet claims to have really "powered" research in the theory of functions and functionals by the introduction of a "geometrical" terminology:

One can [ …] consider the numbers of the sequence [of coefficients of a Taylor series] as coordinates of a point in a space [ …] of infinitely many dimensions. There are several advantages to proceeding thus, for instance the advantage which is always present when geometrical language is employed, since this language is so appropriate to intuition due to the analogies it gives birth to.

Mathematical terminology often stems from a current language usage whose (intuitive, sensual) connotation is welcomed and serves to give the user an “intuition” of what is intended. While Category Theory is often classified as a highly abstract matter quite remote from intuition, in reality it yields, together with its applications, a multitude of examples for the role of current language in mathematical conceptualization.

This notwithstanding, there is naturally also a tendency in contemporary mathematics to eliminate as much as possible commitments to (sensual) intuition in the erection of a theory. It seems that algebraic geometry fulfills only in the language of schemes that essential requirement of all contemporary mathematics: to state its definitions and theorems in their natural abstract and formal setting in which they can be considered independent of geometric intuition (Mumford D., Fogarty J. Geometric Invariant Theory).

In the pragmatist approach, intuition is seen as a relation. This means: one uses a piece of language in an intuitive manner (or not); intuitive use depends on the situation of utterance, and it can be learned and transformed. The reason for this relational point of view consists in the pragmatist conviction that each cognition of an object depends on the means of cognition employed; this means that for pragmatism there is no intuitive (in the sense of "immediate") cognition, and the term "intuitive" has to be given a new meaning.

What does it mean to use something intuitively? Heinzmann makes the following proposal: one uses language intuitively if one does not even have the idea to question its validity. Hence, the term intuition in the Heinzmannian reading of pragmatism takes on a different meaning; it no longer signifies an immediate grasp. However, it is yet to be explained what it means for objects in general (and not only for propositions) to "question the validity of a use". One uses an object intuitively if one is not concerned with how the rules of constitution of the object have been arrived at, if one does not focus on the materialization of these rules but only on the benefits of an application of the object in the present context. "In principle", the cognition of an object is determined by another cognition, and this determination finds its expression in the "rules of constitution"; one uses the object intuitively (one does not bother about its cognition being determined) if one does not question those rules of constitution (does not focus on the cognition which determines it).

This is precisely what one does when using an object as a tool, because in doing so one does not (yet) ask which cognition determines the object. When something is used as a tool, this constitutes an intuitive use, whereas the use of something as an object does not (this defines tool and object). Here, each concept can in principle play both roles; of two concepts, one may happen to be used intuitively before, and the other after, the progress of insight. Note that with respect to a given cognition, Peirce, when saying "the cognition which determines it", always thinks of a previous cognition, because he thinks of a determination of a cognition in our thought by previous thoughts. In the conceptual history of mathematics, however, one most often introduced an object first as a tool, and only after having done so did it come to one's mind to ask for "the cognition which determines the cognition of this object" (that is, to ask how the use of this object can be legitimized).

The idea that it could depend on the situation whether validity is questioned or not had formerly been overlooked, perhaps because one always looked for a reductionist epistemology in which the capacity called intuition is used exclusively at the last level of regression; in a pragmatist epistemology, to the contrary, intuition is used at every level, in the form of the not-yet-thematized tools. In classical systems, intuition was not simply conceived as a capacity; it was conceived as a capacity common to all human beings. "But the power of intuitively distinguishing intuitions from other cognitions has not prevented men from disputing very warmly as to which cognitions are intuitive". Moreover, Peirce strongly criticises Cartesian individualism, which has it that the individual has the capacity to find the truth. We could sum up this philosophy thus: we cannot reach definitive truth, only provisional truth; significant progress is made not individually but only collectively; one cannot pretend that the history of thought did not take place and start from scratch, for every cognition is determined by a previous cognition (maybe of other individuals); one cannot uncover the ultimate foundation of our cognitions; rather, the fact that we sometimes reach a new level of insight, "deeper" than those thought of as fundamental before, merely indicates that there is no "deepest" level. The feeling that something is "intuitive" indicates a prejudice which can be philosophically criticised (even if this does not occur to us at the beginning).

In our approach, intuitive use is collectively determined: it depends on the particular usage of the community of users whether validity criteria are or are not questioned in a given situation of language use. However, it is acknowledged that for example scientific communities develop usages making them communities of language users on their own. Hence, situations of language use are not only partitioned into those where it comes to the users’ mind to question validity criteria and those where it does not, but moreover this partition is specific to a particular community (actually, the community of language users is established partly through a peculiar partition; this is a definition of the term “community of language users”). The existence of different communities with different common senses can lead to the following situation: something is used intuitively by one group, not intuitively by another. In this case, discussions inside the discipline occur; one has to cope with competing common senses (which are therefore not really “common”). This constitutes a task for the historian.

Universal Turing Machine: Algorithmic Halting


A natural number x will be identified with the x-th binary string in lexicographic order (Λ, 0, 1, 00, 01, 10, 11, 000, …), and a set X of natural numbers will be identified with its characteristic sequence, and with the real number between 0 and 1 having that sequence as its dyadic expansion. The length of a string x will be denoted |x|, the n-th bit of an infinite sequence X will be denoted X(n), and the initial n bits of X will be denoted X_n. Concatenation of strings p and q will be denoted pq.
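A minimal sketch of this identification between natural numbers and binary strings (the function names are mine, and the empty string Λ is rendered as ''):

```python
def nat_to_string(x: int) -> str:
    """x-th binary string in lexicographic order: 0 -> '' (Lambda), 1 -> '0',
    2 -> '1', 3 -> '00', ... This is the binary expansion of x + 1 with the
    leading 1 dropped."""
    return bin(x + 1)[3:]

def string_to_nat(s: str) -> int:
    """Inverse mapping: prepend the leading 1 and subtract 1."""
    return int('1' + s, 2) - 1

assert [nat_to_string(x) for x in range(7)] == ['', '0', '1', '00', '01', '10', '11']
assert all(string_to_nat(nat_to_string(x)) == x for x in range(100))
```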

We now define the information content (and later the depth) of finite strings using a universal Turing machine U. A universal Turing machine may be viewed as a partial recursive function of two arguments. It is universal in the sense that, by varying one argument ("program"), any partial recursive function of the other argument ("data") can be obtained. In the usual machine formats, program, data and output are all finite strings, or, equivalently, natural numbers. However, it is not possible to take a uniformly weighted average over a countably infinite set of programs, which is why a self-delimiting program format is needed to define algorithmic probability. Chaitin's universal machine has two tapes: a read-only one-way tape containing the infinite program, and an ordinary two-way read/write tape, which is used for data input, intermediate work, and output, all of which are finite strings. Our machine differs from Chaitin's in having some additional auxiliary storage (e.g. another read/write tape) which is needed only to improve the time efficiency of simulations.

We consider only terminating computations, during which, of course, only a finite portion of the program tape can be read. Therefore, the machine's behavior can still be described by a partial recursive function of two string arguments U(p, w), if we use the first argument to represent that portion of the program that is actually read in the course of a particular computation. The expression U(p, w) = x will be used to indicate that the U machine, started with any infinite sequence beginning with p on its program tape and the finite string w on its data tape, performs a halting computation which reads exactly the initial portion p of the program, and leaves output data x on the data tape at the end of the computation. In all other cases (reading less than p, more than p, or failing to halt), the function U(p, w) is undefined. Wherever U(p, w) is defined, we say that p is a self-delimiting program to compute x from w, and we use T(p, w) to represent the time (machine cycles) of the computation. Often we will consider computations without input data; in that case we abbreviate U(p, Λ) and T(p, Λ) as U(p) and T(p) respectively.

The self-delimiting convention for the program tape forces the domain of U and T, for each data input w, to be a prefix set, that is, a set of strings no member of which is the extension of any other member. Any prefix set S obeys the Kraft inequality

∑_{p∈S} 2^{−|p|} ≤ 1
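A tiny self-contained check of the prefix property and the Kraft inequality (the example code words are arbitrary):

```python
def is_prefix_set(codes):
    """True if no code word is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

def kraft_sum(codes):
    """Sum of 2^{-|p|} over the code words p."""
    return sum(2.0 ** -len(c) for c in codes)

codes = ['0', '10', '110', '111']        # a prefix set
assert is_prefix_set(codes)
assert kraft_sum(codes) <= 1             # here it equals exactly 1.0
```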

Besides being self-delimiting with regard to its program tape, the U machine must be efficiently universal in the sense of being able to simulate any other machine of its kind (Turing machines with a self-delimiting program tape) with at most an additive constant increase in program size and a linear increase in execution time.

Without loss of generality we assume that there exists for the U machine a constant prefix r which has the effect of stacking an instruction to restart the computation when it would otherwise end. This gives the machine the ability to concatenate programs to run consecutively: if U(p, w) = x and U(q, x) = y, then U(rpq, w) = y. Moreover, this concatenation should be efficient in the sense that T(rpq, w) should exceed T(p, w) + T(q, x) by at most O(1). This efficiency of running concatenated programs can be realized with the help of the auxiliary storage to stack the restart instructions.

Sometimes we will generalize U to have access to an “oracle” A, i.e. an infinite look-up table which the machine can consult in the course of its computation. The oracle may be thought of as an arbitrary 0/1-valued function A(x) which the machine can cause to be evaluated by writing the argument x on a special tape and entering a special state of the finite control unit. In the next machine cycle the oracle responds by sending back the value A(x). The time required to evaluate the function is thus linear in the length of its argument. In particular we consider the case in which the information in the oracle is random, each location of the look-up table having been filled by an independent coin toss. Such a random oracle is a function whose values are reproducible, but otherwise unpredictable and uncorrelated.

Let {φ^A_i(p, w) : i = 0, 1, 2, …} be an acceptable Gödel numbering of the A-partial recursive functions of two arguments, and {Φ^A_i(p, w)} an associated Blum complexity measure, henceforth referred to as time. An index j is called self-delimiting iff, for all oracles A and all values w of the second argument, the set {x : φ^A_j(x, w) is defined} is a prefix set. A self-delimiting index has efficient concatenation if there exists a string r such that for all oracles A and all strings w, x, y, p, and q: if φ^A_j(p, w) = x and φ^A_j(q, x) = y, then φ^A_j(rpq, w) = y and Φ^A_j(rpq, w) ≤ Φ^A_j(p, w) + Φ^A_j(q, x) + O(1). A self-delimiting index u with efficient concatenation is called efficiently universal iff, for every self-delimiting index j with efficient concatenation, there exists a simulation program s and a linear polynomial L such that, for all oracles A and all strings p and w,

φ^A_u(sp, w) = φ^A_j(p, w)

and

Φ^A_u(sp, w) ≤ L(Φ^A_j(p, w))

The functions U^A(p, w) and T^A(p, w) are defined respectively as φ^A_u(p, w) and Φ^A_u(p, w), where u is an efficiently universal index.

For any string x, the minimal program, denoted x∗, is min{p : U(p) = x}, the least self-delimiting program to compute x. For any two strings x and w, the minimal program of x relative to w, denoted (x/w)∗, is defined similarly as min{p : U(p,w) = x}.

By contrast to its minimal program, any string x also has a print program, of length |x| + O(log|x|), which simply transcribes the string x from a verbatim description of x contained within the program. The print program is logarithmically longer than x because, being self-delimiting, it must indicate the length as well as the contents of x. Because it makes no effort to exploit redundancies to achieve efficient coding, the print program can be made to run quickly (e.g. linear time in |x|, in the present formalism). Extra information w may help, but cannot significantly hinder, the computation of x, since a finite subprogram would suffice to tell U to simply erase w before proceeding. Therefore, a relative minimal program (x/w)∗ may be much shorter than the corresponding absolute minimal program x∗, but can only be longer by O(1), independent of x and w.
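A toy sketch of such a print program: the header encodes |x| by doubling each bit of its binary form and closing with the pair '01', so the program is self-delimiting and |x| + O(log |x|) bits long. This particular encoding is an illustrative assumption, not the paper's construction:

```python
def print_program(x: str) -> str:
    """Self-delimiting 'print program' for a bit string x: each bit of |x|
    in binary is doubled, the pair '01' marks the end of the header, then
    x follows verbatim. Total length is |x| + O(log |x|)."""
    length = bin(len(x))[2:]
    header = ''.join(b + b for b in length) + '01'
    return header + x

def run_print_program(p: str) -> str:
    """Decode: read doubled bits until the '01' delimiter, recover |x|,
    then read exactly |x| more bits. No external delimiter is needed."""
    i, length_bits = 0, ''
    while p[i] == p[i + 1]:              # '00' or '11': a doubled length bit
        length_bits += p[i]
        i += 2
    assert p[i:i + 2] == '01'            # end-of-header marker
    i += 2
    n = int(length_bits, 2)
    return p[i:i + n]

x = '101101110'
assert run_print_program(print_program(x)) == x
```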

A string is compressible by s bits if its minimal program is shorter by at least s bits than the string itself, i.e. if |x∗| ≤ |x| − s. Similarly, a string x is said to be compressible by s bits relative to a string w if |(x/w)∗| ≤ |x| − s. Regardless of how compressible a string x may be, its minimal program x∗ is compressible by at most an additive constant depending on the universal computer but independent of x. [If (x∗)∗ were much smaller than x∗, then the role of x∗ as minimal program for x would be undercut by a program of the form “execute the result of executing (x∗)∗.”] Similarly, a relative minimal program (x/w)∗ is compressible relative to w by at most a constant number of bits independent of x or w.

The algorithmic probability of a string x, denoted P(x), is defined as ∑ {2^{−|p|} : U(p) = x}. This is the probability that the U machine, with a random program chosen by coin tossing and an initially blank data tape, will halt with output x. The time-bounded algorithmic probability, P_t(x), is defined similarly, except that the sum is taken only over programs which halt within time t. We use P(x/w) and P_t(x/w) to denote the analogous algorithmic probabilities of one string x relative to another w, i.e. for computations that begin with w on the data tape and halt with x on the data tape.

The algorithmic entropy H(x) is defined as the least integer greater than −log_2 P(x), and the conditional entropy H(x/w) is defined similarly as the least integer greater than −log_2 P(x/w). Among the most important properties of the algorithmic entropy is its equality, to within O(1), with the size of the minimal program:

∃c ∀x ∀w : H(x/w) ≤ |(x/w)∗| ≤ H(x/w) + c

The first part of the relation, viz. that algorithmic entropy should be no greater than minimal program size, is obvious, because of the minimal program’s own contribution to the algorithmic probability. The second half of the relation is less obvious. The approximate equality of algorithmic entropy and minimal program size means that there are few near-minimal programs for any given input/output pair (x/w), and that every string gets an O(1) fraction of its algorithmic probability from its minimal program.

Finite strings, such as minimal programs, which are incompressible or nearly so are called algorithmically random. The definition of randomness for finite strings is necessarily a little vague because of the ±O(1) machine-dependence of H(x) and, in the case of strings other than self-delimiting programs, because of the question of how to count the information encoded in the string’s length, as opposed to its bit sequence. Roughly speaking, an n-bit self-delimiting program is considered random (and therefore not ad-hoc as a hypothesis) iff its information content is about n bits, i.e. iff it is incompressible; while an externally delimited n-bit string is considered random iff its information content is about n + H(n) bits, enough to specify both its length and its contents.

For infinite binary sequences (which may be viewed also as real numbers in the unit interval, or as characteristic sequences of sets of natural numbers) randomness can be defined sharply: a sequence X is incompressible, or algorithmically random, if there is an O(1) bound to the compressibility of its initial segments Xn. Intuitively, an infinite sequence is random if it is typical in every way of sequences that might be produced by tossing a fair coin; in other words, if it belongs to no informally definable set of measure zero. Algorithmically random sequences constitute a larger class, including sequences such as Ω which can be specified by ineffective definitions.

The busy beaver function B(n) is the greatest number computable by a self-delimiting program of n bits or fewer. The halting set K is {x : φ_x(x) converges}. This is the standard representation of the halting problem.

The self-delimiting halting set K_0 is the (prefix) set of all self-delimiting programs for the U machine that halt: {p : U(p) converges}.

K and K_0 are readily computed from one another (e.g. by regarding the self-delimiting programs as a subset of ordinary programs, the first 2^n bits of K_0 can be recovered from the first 2^{n+O(1)} bits of K; by encoding each n-bit ordinary program as a self-delimiting program of length n + O(log n), the first 2^n bits of K can be recovered from the first 2^{n+O(log n)} bits of K_0).

The halting probability Ω is defined as ∑ {2^{−|p|} : U(p) converges}, the probability that the U machine would halt on an infinite input supplied by coin tossing. Ω is thus a real number between 0 and 1.

The first 2^n bits of K_0 can be computed from the first n bits of Ω, by enumerating halting programs until enough have halted to account for all but 2^{−n} of the total halting probability. The time required for this decoding (between B(n − O(1)) and B(n + H(n) + O(1))) grows faster than any computable function of n. Although K_0 is only slowly computable from Ω, the first n bits of Ω can be rapidly computed from the first 2^{n+H(n)+O(1)} bits of K_0, by asking about the halting of programs of the form "enumerate halting programs until (if ever) their cumulative weight exceeds q, then halt", where q is an n-bit rational number…

Rhizomatic Topology and Global Politics. A Flirtatious Relationship.

 


Deleuze and Guattari see concepts as rhizomes, biological entities endowed with unique properties. They see concepts as spatially representable, where the representation contains principles of connection and heterogeneity: any point of a rhizome must be connected to any other. Deleuze and Guattari list the possible benefits of the spatial representation of concepts, including the ability to represent complex multiplicity, the potential to free a concept from foundationalism, and the ability to show both breadth and depth. In this view, geometric interpretations move away from the insidious understanding of the world in terms of dualisms, dichotomies, and lines, to understand conceptual relations in terms of space and shapes. The ontology of concepts is thus, in their view, appropriately geometric: a multiplicity defined not by its elements, nor by a center of unification and comprehension, but measured by its dimensionality and its heterogeneity. The conceptual multiplicity is already composed of heterogeneous terms in symbiosis, and is continually transforming itself, such that it is possible to follow, and map, not only the relationships between ideas but how they change over time. In fact, the authors claim that there are further benefits to geometric interpretations of concepts which are unavailable in other frames of reference. They outline the unique contribution of geometric models to the understanding of contingent structure:

Principle of cartography and decalcomania: a rhizome is not amenable to any structural or generative model. It is a stranger to any idea of genetic axis or deep structure. A genetic axis is like an objective pivotal unity upon which successive stages are organized; deep structure is more like a base sequence that can be broken down into immediate constituents, while the unity of the product passes into another, transformational and subjective, dimension. (Deleuze and Guattari)

The word that Deleuze and Guattari use for 'multiplicities' can also be translated as the topological term 'manifold.' If we think about their multiplicities as manifolds, there are a virtually unlimited number of things one could come to know, in geometric terms, about (and with) our object of study, abstractly speaking. Among those unlimited things we could learn are properties of groups (homological, cohomological, and homeomorphic), complex directionality (maps, morphisms, isomorphisms, and orientability), dimensionality (codimensionality, structure, embeddedness), partiality (differentiation, commutativity, simultaneity), and shifting representation (factorization, ideal classes, reciprocity). Each of these functions allows for a different, creative, and potentially critical representation of global political concepts, events, groupings, and relationships. This is how concepts are to be looked at: as manifolds. With such a dimensional understanding of concept-formation, it is possible to deal with complex interactions of like entities, and interactions of unlike entities. Critical theorists have emphasized the importance of such complexity in representation a number of times, speaking about it in terms compatible with mathematical methods, if not mathematically. For example, Foucault's declaration that practicing criticism is a matter of "making facile gestures difficult" both reflects and is reflected in many critical theorists' projects of revealing the complexity in (apparently simple) concepts deployed in global politics. This leads to a shift in the concept of danger as well, where danger is not an objective condition but "an effect of interpretation". Critical thinking about how-possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses, sending a wave of added complexity through other concepts as well. This work of seeking complexity serves one of the major underlying functions of critical theorizing: finding invisible injustices in the (modernist, linear, structuralist) givens of the operation and analysis of global politics.

In a geometric sense, this complexity could be thought of as multidimensional mapping. In theoretical geometry, the process of mapping conceptual spaces is not primarily empirical; it serves to represent and read the relationships between pieces of information, including identification, similarity, differentiation, and distance. The reason for defining topological spaces in mathematics, the essence of the definition, is that there is no absolute scale for describing the distance or relation between certain points, yet it still makes sense to say that an (infinite) sequence of points approaches some other point (though again there is no way to describe how quickly or from what direction it approaches). This seemingly weak relationship, which is defined purely 'locally', i.e., in a small locale around each point, is often surprisingly powerful: using only this relationship of approach, one can distinguish between, say, a balloon, a sheet of paper, a circle, and a dot.

To each delineated concept, one should distinguish and associate a topological space, in a (necessarily) non-explicit yet definite manner. Whenever one has a relationship between concepts (here we think of the primary relationship as being that of constitution, but not restrictively), one 'specifies' a function (or inclusion, or relation) between the topological spaces associated to the concepts. In these terms, a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented. Such an approach can be leveraged for thinking about conceptual components, dimensionality, and structure. In these terms, dimensions can be thought of as properties or qualities, each with their own (often multidimensional) properties or qualities. Since a key goal of the modeling of conceptual space is representation, a key (mathematical and theoretical) goal of concept-space mapping is

associationism, where associations between different kinds of information elements carry the main burden of representation. (Conceptual Spaces as a Framework for Knowledge Representation)

To this end,

objects in conceptual space are represented by points, in each domain, that characterize their dimensional values. (A Concept Geometry for Conceptual Spaces)

These dimensional values can be arranged in relation to each other, as Gärdenfors explains:

distances represent degrees of similarity between objects represented in space, and therefore conceptual spaces are "suitable for representing different kinds of similarity relation". (Concept…)

These similarity relationships can be explored across ideas of a concept and across contexts, but also over time, since "with the aid of a topological structure, we can speak about continuity, e.g., a continuous change", a possibility which can be found only in treating concepts as topological structures, and not in linguistic descriptions or set-theoretic representations.

Sustainability of Debt


For economies with fractional-reserve-generated fiat money, balancing the budget is characterized by an exponential growth D(t) ≈ D_0 (1 + r)^t of any initial debt D_0 subject to an interest rate r as a function of time t, due to compound interest; a fact known since antiquity. At the same time, besides default, this increasing debt can only be reduced by the following five, mostly linear, measures:

(i) more income or revenue I (in the case of sovereign debt: higher taxation or higher tax base);

(ii) less spending S;

(iii) increase of borrowing L;

(iv) acquisition of external resources, and

(v) inflation; that is, devaluation of money.

Whereas (i), (ii) and (iv) without inflation are essentially measures contributing linearly (or polynomially) to the acquisition or compensation of debt, inflation grows exponentially with time t at some (supposedly constant) rate f ≥ 1; that is, the value of an initial debt D_0, without interest (r = 0), in terms of the initial values, gets reduced to F(t) = D_0/f^t. Conversely, the capacity of an economy to compensate debt will increase with compound inflation: for instance, the initial income or revenue I will, through adaptations, usually increase exponentially with time in an inflationary regime, as I f^t.

Because these are the only possibilities, we can consider such economies as closed systems (with respect to money flows), characterized by the (continuity) equation

I f^t + S + L ≈ D_0 (1 + r)^t, or

L ≈ D_0 (1 + r)^t − I f^t − S.
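A minimal numerical sketch of this budget identity (all parameter values are illustrative assumptions, not from the text):

```python
import numpy as np

# Illustrative parameters: initial debt D0, interest rate r, income I,
# spending S; f is the (constant) inflation factor per unit time.
D0, r, I, S = 100.0, 0.05, 10.0, 8.0
t = np.arange(101)

for f in (1.0, 1.08):                    # no inflation vs. strong inflation
    L = D0 * (1 + r) ** t - I * f ** t - S
    print(f"f = {f}: borrowing needed at t = 100 is {L[-1]:.0f}")

# With f = 1, the required borrowing L is dominated by the compound-interest
# term D0*(1+r)^t and grows exponentially. Only when f exceeds 1 + r does the
# exponentially growing income term I*f^t eventually dominate, so that L
# stops growing and turns negative (a surplus) at long times.
```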

Let us concentrate on sovereign debt and briefly discuss the fiscal, social and political options. With regard to the five ways to compensate debt, the following assumptions will be made. First, in non-despotic forms of government (e.g., representative democracies and constitutional monarchies), increases in taxation, related to (i), as well as spending cuts, related to (ii), are very unpopular and can thus be enforced only in very limited, that is polynomial, forms.

Second, the acquisition of external resources, related to (iv), is often blocked for various obvious reasons, including military-strategic limitations and lack of opportunities. We shall therefore disregard the acquisition of external resources entirely and set A = 0.

As a consequence, without inflation (i.e., for f = 1), the increase of debt

L ≈ D_0 (1 + r)^t − I − S

grows exponentially. This is only "felt" after passing beyond a quasi-linear region in which, by a Taylor expansion around t = 0, D(t) = D_0 (1 + r)^t ≈ D_0 + D_0 r t.

So, under the political and social assumptions made, compound debt without inflation is unsustainable. Furthermore, inflation, with all its inconvenient consequences and re-appropriation, seems inevitable for the continued existence of economies based on fractional-reserve-generated fiat money, at least in the long run.

Weyl’s Lagrange Density of General Relativistic Maxwell Theory

Weyl pondered the reasons why the structure group of the physical automorphisms still contained the "Euclidean rotation group" (respectively the Lorentz group) in such a prominent role:

The Euclidean group of rotations has survived even such radical changes of our concepts of the physical world as general relativity and quantum theory. What then are the peculiar merits of this group to which it owes its elevation to the basic group pattern of the universe? For what 'sufficient reasons' did the Creator choose this group and no other?

He recalled that Helmholtz had characterized ∆_o ≅ SO(3, ℝ) by the "fact that it gives to a rotating solid body what we may call its just degrees of freedom"; but this method "breaks down for the Lorentz group that in the four-dimensional world takes the place of the orthogonal group in 3-space". In the early 1920s he himself had given another characterization, living up to the new demands of the theories of relativity, in his mathematical analysis of the problem of space.

He mentioned the idea that the Lorentz group might play its prominent role for the physical automorphisms because it expresses deep lying matter structures; but he strongly qualified the idea immediately after having stated it:

Since we have the dualism of invariance with respect to two groups and Ω certainly refers to the manifold of space points, it is a tempting idea to ascribe ∆_o to matter and see in it a characteristic of the localizable elementary particles of matter. I leave it undecided whether this idea, the very meaning of which is somewhat vague, has any real merits.

… But instead of analysing the structure of the orthogonal group of transformations ∆_o, it may be wiser to look for a characterization of the group ∆_o as an abstract group. Here we know that the homogeneous n-dimensional orthogonal groups form one of 3 great classes of simple Lie groups. This is at least a partial solution of the problem.

He left it open why it ought to be "wiser" to look for abstract structure properties in order to answer a natural philosophical question. Could it be that he wanted to indicate an open-mindedness toward the more structuralist perspective on automorphism groups preferred by the young algebraists around him at Princeton in the 1930s and 40s? Today the classification of simple Lie groups distinguishes four series, A_k, B_k, C_k, D_k. Weyl apparently counted the two orthogonal series B_k and D_k as one. The special orthogonal groups in even complex space dimension form the series of simple Lie groups of type D_k, with complex form SO(2k, ℂ) and compact real form SO(2k, ℝ). The special orthogonal groups in odd space dimension form the series of type B_k, with complex form SO(2k + 1, ℂ) and compact real form SO(2k + 1, ℝ).

But even if one accepted such a general structuralist view as a starting point there remained a question for the specification of the space dimension of the group inside the series.

But the number of the dimensions of the world is 4 and not an indeterminate n. It is a fact that the structure of ∆_o is quite different for the various dimensionalities n. Hence the group may serve as a clue by which to discover some cogent reason for the dimensionality 4 of the world. What must be brought to light is the distinctive character of one definite group, the four-dimensional Lorentz group, either as a group of linear transformations, or as an abstract group.

The remark that the "structure of ∆_o is quite different for the various dimensionalities n" with regard to even or odd complex space dimensions (type D_k, resp. B_k) strongly qualifies the import of the general structuralist characterization. But already in the 1920s Weyl had used the fact that for the (real) space dimension n = 4 the universal covering of the unity component of the Lorentz group SO(1, 3)_o is the realification of SL(2, ℂ). The latter belongs to the first of the A_k series (with complex form SL(k + 1, ℂ)). Because of the isomorphism of the initial terms of the series, A_1 ≅ B_1, this does not imply an exception to Weyl's general statement. We may even tend to interpret Weyl's otherwise cryptic remark that the structuralist perspective gives "at least a partial solution of the problem" by the observation that the Lorentz group in dimension n = 4 is, in a rather specific way, the realification of the complex form of one of the three most elementary non-commutative simple Lie groups, of type A_1 ≅ B_1. Its compact real form is SO(3, ℝ), or rather the latter's universal cover SU(2, ℂ).

Weyl stated clearly that the answer cannot be expected from structural considerations alone. The problem is only "partly one of pure mathematics"; the other part is "empirical". But the question itself appeared of utmost importance to him:

We can not claim to have understood Nature unless we can establish the uniqueness of the four-dimensional Lorentz group in this sense. It is a fact that many of the known laws of nature can at once be generalized to n dimensions. We must dig deep enough until we hit a layer where this is no longer the case.

In 1918 he had given an argument why, in the framework of his new scale gauge geometry, the "world" had to be of dimension 4. His argument used the construction of the Lagrange density of general relativistic Maxwell theory, L_f = f_{μν} f^{μν} √(|det g|), with f_{μν} the components of the curvature of his newly introduced scale/length connection, physically interpreted by him as the electromagnetic field. L_f is scale invariant only in spacetime dimension n = 4. The shift from scale gauge to phase gauge undermined the importance of this argument. Although it remained correct mathematically, it lost its convincing power once the scale gauge transformations were relegated from physics to the mathematical automorphism group of the theory only.
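To spell out the dimension count behind that claim (a standard computation, sketched here under the usual convention that the length connection, and hence f_{μν}, carries no scale weight while the metric does):

Under a rescaling g_{μν} ↦ λ² g_{μν} in n spacetime dimensions, det g picks up a factor λ^{2n}, so √(|det g|) ↦ λ^n √(|det g|); and since each inverse metric contributes a factor λ^{−2}, f^{μν} f_{μν} = g^{μα} g^{νβ} f_{αβ} f_{μν} ↦ λ^{−4} f^{μν} f_{μν}. Hence L_f ↦ λ^{n−4} L_f, which is scale invariant precisely when n = 4.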

Weyl said:

Our question has this in common with most questions of philosophical nature: it depends on the vague distinction between essential and non-essential. Several competing solutions are thinkable; but it may also happen that, once a good solution of the problem is found, it will be of such cogency as to command general recognition.

Manifold(s) of Deleuzean/De Landian Intensity(ies): The Liquid Flowing Chaos of Complexity


The potential for emergence is pregnant with that which emerges from it, even if as pure potential, lest emergence be just a gobbledygook of abstraction and obscurity. Some aspects of emergence harness more potential, or even more intensity, in emergence. What would intensity mean here? Emergence in its most abstract form is described by differentiation, which is the perseverance of differing by extending itself into the world. Thus intensity or potentiality would be proportional to the intensity/quality and the degree/quantity of differentiation. The obvious question is the origin of this differentiation. It comes through what has already been actualized, thus putting forth a twist. The twist is in potential being not just abstract, but also relative: abstract, because potential can come to mean anything other than what it has a potential for; and relative, since it is dependent upon the intertwining within which it can unfold. So, even if the intensity of the potential of emergence is proportional to differentiation, an added dimension of meta-differentiation is introduced that deals not only with the intensity of the potential emergence it actualizes, but also, in turn, with the potential which its actualization gives rise to. This complexification of emergence is termed complexity.

Complexity is that by which emergence intertwines itself with intensity, and is thus laden with potentiality itself. This, in a way, could mean that complexity is a sort of meta-emergence, in that it contains the potential for the emergence of greater intensity of emergence. This implies that complexity and emergence require each other's presence to co-exist and co-evolve. If emergence is that by which complexity manifests itself in actuality in the world, then complexity is that by which emergence surfaces as potential through intertwining. Where would Deleuze and Guattari fit in here? This is crucial, since complexity for these thinkers is different from the way it has been analyzed above; let us note where the difference rests. To better cope with the ideas of Deleuze and Guattari, it is mandatory to invite Manuel De Landa, with his intense reading of the thinkers in question. The point is proved in these words of John Protevi:

According to Manuel DeLanda, in the late 60s, Gilles Deleuze began to formulate some of the philosophical significance of what is now sometimes referred to as "chaos/complexity theory," the study of "open" matter/energy systems which move from simple to complex patterning and from complex to simple patterning. Though not a term used by contemporary scientists in everyday work ("non-linear dynamics" is preferred), it can be a useful term for a collection of studies of phenomena whose complexity is such that Laplacean determinism no longer holds beyond a limited time and space scale. Thus the formula of chaos/complexity might be "short-term predictability, long-term unpredictability."

Here, potentiality is seen as creative for philosophy within materialism. An expansion on the notion of unity through assemblages of multiple singularities is on the cards, one that facilitates the dislodging of anthropocentric viewpoints, since such views are at best limited, with their over-insistence on the rationale of the world as a stable and solid structure. The solidity of structures is to be rethought in terms that open vistas for potential creation. The only way to accomplish this is in terms of liquid structures that are always vulnerable to chaos and disorder, considered a sine qua non for this creative potential to emerge. In this liquidity, De Landa witnesses the power to self-organize and, further, the ability to form an ethics of sorts, one untouched by human static control, which allows an existence at the edge of creative, flowing chaos. Such a position is tangible in history as a confluence of infinite variations, a rationale that doesn't exist when processes are dynamic, thus wanting history to be rooted in a materialism of a revived form. Such a history is one of flowing articulations determined not by linear and static constructions but by infinite bifurcations of the liquid unfolding, thus exposing a collective identity from a myriad of points and perspectives. This is complexity for Deleuze and Guattari, which enables a re-look at material systems through their powers of immanent autopoiesis, or self-organization.