Conjuncted: Integer Pivoting as a Polynomial-Time Algorithm


The Lemke-Howson Algorithm follows the edges of a polyhedron, which is implemented algebraically by pivoting, as used by the simplex algorithm for solving a linear program. Let us see whether there is an efficient implementation that avoids numerical errors by storing integers of arbitrary precision. The constraints defining the polyhedron are thereby represented as linear equations with nonnegative slack variables. For the polytopes P and Q in

P = {x ∈ RM | x ≥ 0, Bx ≤ 1},

Q = {y ∈ RN | Ay ≤ 1, y ≥ 0}

these slack variables are nonnegative vectors s ∈ RN and r ∈ RM so that x ∈ P and y ∈ Q iff

Bx + s = 1, r + Ay = 1 —– (1)

and

x ≥ 0, s ≥ 0, r ≥ 0, y ≥ 0 —– (2)

A binding inequality corresponds to a zero slack variable. The pair (x, y) is completely labeled iff xiri = 0 ∀ i ∈ M and yjsj = 0 ∀ j ∈ N, which by (2) can be written as the orthogonality condition

xTr = 0, yTs = 0

A basic solution to (1) is given by n basic (linearly independent) columns of Bx + s = 1 and m basic columns of r + Ay = 1, where the nonbasic variables that correspond to the m respectively n other (nonbasic) columns are set to zero, so that the basic variables are uniquely determined. A basic feasible solution also fulfills (2), and defines a vertex x of P and y of Q. The labels of such a vertex are given by the respective nonbasic columns.

Pivoting is a change of the basis where a nonbasic variable enters and a basic variable leaves the set of basic variables, while preserving feasibility (2).

Integer pivoting always maintains an integer matrix (or “tableau”) of coefficients of a system of linear equations that is equivalent to the original system Bx + s = 1, in the form

CBx + Cs = C1 —– (3)

In (3), C is the inverse of the basis matrix given by the basic columns of the original system, multiplied by the determinant of the basis matrix. The matrix C is given by the (integer) cofactors of the basis matrix; the cofactor of a matrix entry is the determinant of the matrix obtained when the row and column of that entry are deleted. Because each entry has a bounded number of digits (larger by at most a factor of about n log n than the original matrix entries), integer pivoting is a polynomial-time algorithm. It is also superior to using fractions of integers (rational numbers), because their cancellation requires greatest-common-divisor computations that take up the bulk of computation time.
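To make the pivoting rule concrete, here is a minimal Python sketch of a single integer (fraction-free) pivot step; the tableau layout and the tiny example system are illustrative assumptions, not taken from any particular implementation, and the minimum-ratio test that preserves feasibility is omitted:

```python
def integer_pivot(T, d, r, c):
    """One integer (fraction-free) pivot step on an integer tableau T.

    T : list of rows (lists of ints) representing the current system
    d : int, the previous pivot element (the determinant of the old
        basis matrix; initially 1)
    r, c : pivot row and pivot column

    Every division below is exact, so the tableau stays integral;
    the new divisor returned is the pivot element T[r][c].
    """
    p = T[r][c]
    assert p != 0, "pivot element must be nonzero"
    new_T = []
    for i, row in enumerate(T):
        if i == r:
            new_T.append(row[:])                     # pivot row kept as-is
        else:
            # row_i <- (p * row_i - T[i][c] * row_r) / d   (exact division)
            new_T.append([(p * row[j] - row[c] * T[r][j]) // d
                          for j in range(len(row))])
    return new_T, p

# Illustrative system Bx + s = 1 with B = [[1, 2], [3, 1]];
# columns: (x1, x2, s1, s2, rhs).
T = [[1, 2, 1, 0, 1],
     [3, 1, 0, 1, 1]]
d = 1
T, d = integer_pivot(T, d, 0, 0)   # x1 enters the basis, s1 leaves
T, d = integer_pivot(T, d, 1, 1)   # x2 enters the basis, s2 leaves
print(T, d)                        # entries stay integral; d = det(B) = -5
# Each basic variable's value is its rhs entry divided by d:
# x1 = (-1)/(-5) = 1/5, x2 = (-2)/(-5) = 2/5, and indeed Bx = 1.
```

Since Python integers have arbitrary precision, no rounding error can occur; the exactness of the divisions is precisely the property that makes integer pivoting work.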

Super Lie Algebra


A super Lie algebra L is an object in the category of super vector spaces together with a morphism [ , ] : L ⊗ L → L, often called the super bracket or, simply, the bracket, which satisfies the following conditions:

Anti-symmetry,

[ , ] + [ , ] ○ cL,L = 0

which is the same as

[x, y] + (-1)|x||y|[y, x] = 0 for homogeneous x, y ∈ L.

Jacobi identity,

[ , [ , ]] + [ , [ , ]] ○ σ + [ , [ , ]] ○ σ² = 0,

where σ ∈ S3 is a three-cycle, i.e. taking the first entry of [ , [ , ]] to the second, the second to the third, and the third to the first. So, for homogeneous x, y, z ∈ L, this reads

[x, [y, z]] + (-1)|x||y|+|x||z|[y, [z, x]] + (-1)|y||z|+|x||z|[z, [x, y]] = 0

It is important to note that in the super category, these conditions are modifications of the properties of the bracket in a Lie algebra, designed to accommodate the odd variables. We can immediately extend this definition to the case where L is an A-module for A a commutative superalgebra, thus defining a Lie superalgebra in the category of A-modules. In fact, we can make any associative superalgebra A into a Lie superalgebra by taking the bracket to be

[a, b] = ab – (-1)|a||b|ba,

i.e., we take the bracket to be the difference τ – τ ○ cA,A, where τ is the multiplication morphism on A.
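As a quick concrete check of these identities, here is a small Python sketch (an illustrative toy, not part of the text above): homogeneous elements of the associative superalgebra End(k1|1) are modeled as (matrix, parity) pairs, with block-diagonal matrices even and block-off-diagonal matrices odd, and the supercommutator is verified to satisfy the anti-symmetry and Jacobi conditions stated above.

```python
import numpy as np

def super_bracket(a, b):
    """Supercommutator [a, b] = ab - (-1)^{|a||b|} ba for homogeneous
    elements given as (matrix, parity) pairs, parity in {0, 1}."""
    (A, pa), (B, pb) = a, b
    return (A @ B - (-1) ** (pa * pb) * B @ A, (pa + pb) % 2)

# Homogeneous elements of End(k^{1|1}): even = block-diagonal,
# odd = block-off-diagonal (here p = q = 1).
x = (np.array([[2.0, 0.0], [0.0, -1.0]]), 0)   # even
y = (np.array([[0.0, 1.0], [3.0, 0.0]]), 1)    # odd
z = (np.array([[0.0, -2.0], [1.0, 0.0]]), 1)   # odd

# Anti-symmetry: [a, b] + (-1)^{|a||b|} [b, a] = 0
for a, b in [(x, x), (x, y), (y, z)]:
    lhs = super_bracket(a, b)[0] + (-1) ** (a[1] * b[1]) * super_bracket(b, a)[0]
    assert np.allclose(lhs, 0)

# Jacobi identity, in the form written above:
# [x,[y,z]] + (-1)^{|x||y|+|x||z|}[y,[z,x]] + (-1)^{|y||z|+|x||z|}[z,[x,y]] = 0
px, py, pz = x[1], y[1], z[1]
t1 = super_bracket(x, super_bracket(y, z))[0]
t2 = (-1) ** (px * py + px * pz) * super_bracket(y, super_bracket(z, x))[0]
t3 = (-1) ** (py * pz + px * pz) * super_bracket(z, super_bracket(x, y))[0]
assert np.allclose(t1 + t2 + t3, 0)
```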

A left A-module is a super vector space M with a morphism A ⊗ M → M, a ⊗ m ↦ am, of super vector spaces obeying the usual identities; that is, ∀ a, b ∈ A and x, y ∈ M, we have

a (x + y) = ax + ay

(a + b)x = ax + bx

(ab)x  = a(bx)

1x = x

A right A-module is defined similarly. Note that if A is commutative, a left A-module is also a right A-module if we define (the sign rule)

m . a = (-1)|m||a|a . m

for m ∈ M and a ∈ A. Morphisms of A-modules are defined in the obvious manner: super vector space morphisms φ: M → N such that φ(am) = aφ(m) ∀ a ∈ A and m ∈ M. So, we have the category of A-modules. For A commutative, the category of A-modules admits tensor products: for M1, M2 A-modules, M1 ⊗ M2 is taken as the tensor product of M1 as a right module with M2 as a left module.

Turning our attention to free A-modules, we have the notion of the super vector space kp|q over k, and so we define Ap|q := A ⊗ kp|q, where

(Ap|q)0 = A0 ⊗ (kp|q)0 ⊕ A1 ⊗ (kp|q)1

(Ap|q)1 = A1 ⊗ (kp|q)0 ⊕ A0 ⊗ (kp|q)1

We say that an A-module M is free if it is isomorphic (in the category of A-modules) to Ap|q for some (p, q). This is equivalent to saying that M contains p even elements {e1, …, ep} and q odd elements {ε1, …, εq} such that

M0 = spanA0{e1, …, ep} ⊕ spanA1{ε1, …, εq}

M1 = spanA1{e1, …, ep} ⊕ spanA0{ε1, …, εq}

We shall also say that M is the free module generated over A by the even elements e1, …, ep and the odd elements ε1, …, εq.

Let T: Ap|q → Ar|s be a morphism of free A-modules, and write ep+1, …, ep+q for the odd basis elements ε1, …, εq. Then T is defined on the basis elements {e1, …, ep+q} by

T(ej) = ∑i=1r+s ei tij

Hence T can be represented as a matrix of size (r + s) × (p + q),

T = ⎛T1 T2⎞
    ⎝T3 T4⎠

where T1 is an r × p matrix consisting of even elements of A, T2 is an r × q matrix of odd elements, T3 is an s × p matrix of odd elements, and T4 is an s × q matrix of even elements. Since a morphism T of super A-modules must preserve parity, the parity of the blocks is thereby determined: T1 and T4 are even, while T2 and T3 are odd. When we define T on the basis elements, the basis elements precede the coordinates tij. This is important for keeping the signs in order, and it comes naturally from composing morphisms. In other words, if the module is written as a right module with T acting from the left, composition becomes matrix product in the usual manner:

(S ∘ T)(ej) = S(∑i ei tij) = ∑i,k ek sik tij

Hence, for any x ∈ Ap|q, we can express x as the column vector x = ∑i ei xi, and so T(x) is given by the matrix product Tx.
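The parity bookkeeping of the four blocks can be checked mechanically. The following Python sketch (illustrative only; it tracks parities rather than actual odd coefficients of A) verifies that the product of two parity-respecting block matrices is again parity-respecting, so that composition really is the ordinary matrix product:

```python
import numpy as np

def parity_pattern(rows_even, cols_even):
    """Parity of the (i, j) entry of a morphism matrix: 0 where the row
    and column basis vectors have equal parity (blocks T1, T4), 1 where
    they differ (blocks T2, T3)."""
    re_ = np.array(rows_even)[:, None]   # 1 if the i-th target basis vector is even
    ce_ = np.array(cols_even)[None, :]   # 1 if the j-th source basis vector is even
    return (re_ != ce_).astype(int)

# T : A^{2|1} -> A^{1|2}  and  S : A^{1|2} -> A^{2|1}
pT = parity_pattern([1, 0, 0], [1, 1, 0])
pS = parity_pattern([1, 1, 0], [1, 0, 0])

# Every term s_ik t_kj of (S.T)_ij has parity pS[i,k] + pT[k,j] (mod 2),
# and this parity is independent of k: the product is again a
# parity-respecting block matrix for S o T : A^{2|1} -> A^{2|1}.
for i in range(pS.shape[0]):
    for j in range(pT.shape[1]):
        terms = {(pS[i, k] + pT[k, j]) % 2 for k in range(pT.shape[0])}
        assert len(terms) == 1
```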

Derivability from Relational Logic of Charles Sanders Peirce to Essential Laws of Quantum Mechanics


Charles Sanders Peirce made important contributions to logic, where he invented and elaborated a novel system of logical syntax and fundamental logical concepts. The starting point is the binary relation SiRSj between the two ‘individual terms’ (subjects) Si and Sj. In shorthand notation we represent this relation by Rij. Relations may be composed: whenever we have relations of the form Rij, Rjl, a third, transitive relation Ril emerges, following the rule

RijRkl = δjkRil —– (1)

In ordinary logic the individual subject is the starting point and it is defined as a member of a set. Peirce considered the individual as the aggregate of all its relations

Si = ∑j Rij —– (2)

The individual Si thus defined is an eigenstate of the Rii relation

RiiSi = Si —– (3)

The relations Rii are idempotent

Rii² = Rii —– (4)

and they span the identity

i Rii = 1 —– (5)

The Peircean logical structure bears resemblance to category theory. In categories the concept of transformation (transition, map, morphism or arrow) enjoys an autonomous, primary and irreducible role. A category consists of objects A, B, C, … and arrows (morphisms) f, g, h, … . Each arrow f is assigned an object A as domain and an object B as codomain, indicated by writing f : A → B. If g is an arrow g : B → C with domain B, the codomain of f, then f and g can be “composed” to give an arrow g∘f : A → C. The composition obeys the associative law h∘(g∘f) = (h∘g)∘f. For each object A there is an arrow 1A : A → A, called the identity arrow of A. The analogy with the relational logic of Peirce is evident: Rij stands as an arrow, the composition rule is manifested in equation (1), and the identity arrow for A ≡ Si is Rii.

Rij may receive multiple interpretations: as a transition from the j state to the i state, as a measurement process that rejects all impinging systems except those in the state j and permits only systems in the state i to emerge from the apparatus, as a transformation replacing the j state by the i state. We proceed to a representation of Rij

Rij = |ri⟩⟨rj| —– (6)

where the state ⟨ri| is the dual of the state |ri⟩ and they obey the orthonormality condition

⟨ri |rj⟩ = δij —– (7)

It is immediately seen that our representation satisfies the composition rule equation (1). The completeness, equation (5), takes the form

∑i=1n |ri⟩⟨ri| = 1 —– (8)

All relations remain satisfied if we replace the state |ri⟩ by |ξi⟩ where

|ξi⟩ = 1/√N ∑n |ri⟩⟨rn| —– (9)

with N the number of states. Thus we verify Peirce’s suggestion, equation (2), and the state |ri⟩ is derived as the sum of all its interactions with the other states. Rij acts as a projection, transferring from one r state to another r state

Rij |rk⟩ = δjk |ri⟩ —– (10)
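These relations are easy to verify numerically. A minimal Python sketch (the dimension N = 4 is an arbitrary illustrative choice) represents Rij = |ri⟩⟨rj| as outer products of an orthonormal basis and checks the composition rule (1), completeness (8) and the projection property (10):

```python
import numpy as np

N = 4
basis = np.eye(N)                        # orthonormal states |r_1>, ..., |r_N>
R = [[np.outer(basis[i], basis[j])       # R_ij = |r_i><r_j|
      for j in range(N)] for i in range(N)]

# Composition rule (1): R_ij R_kl = delta_jk R_il
for i in range(N):
    for j in range(N):
        for k in range(N):
            for l in range(N):
                assert np.allclose(R[i][j] @ R[k][l], (j == k) * R[i][l])

# Completeness (8): sum_i |r_i><r_i| = 1
assert np.allclose(sum(R[i][i] for i in range(N)), np.eye(N))

# Projection property (10): R_ij |r_k> = delta_jk |r_i>
assert np.allclose(R[0][1] @ basis[1], basis[0])
assert np.allclose(R[0][1] @ basis[2], np.zeros(N))
```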

We may think also of another property characterizing our states and define a corresponding operator

Qij = |qi⟩⟨qj | —– (11)

with

Qij |qk⟩ = δjk |qi⟩ —– (12)

and

∑i=1n |qi⟩⟨qi| = 1 —– (13)

Successive measurements of the q-ness and r-ness of the states are provided by the operator

RijQkl = |ri⟩⟨rj |qk⟩⟨ql | = ⟨rj |qk⟩ Sil —– (14)

with

Sil = |ri⟩⟨ql | —– (15)

Considering the matrix elements of an operator A as Anm = ⟨rn |A |rm⟩ we find for the trace

Tr(Sil) = ∑n ⟨rn |Sil |rn⟩ = ⟨ql |ri⟩ —– (16)

From the above relation we deduce

Tr(Rij) = δij —– (17)

Any operator can be expressed as a linear superposition of the Rij

A = ∑i,j AijRij —– (18)

with

Aij =Tr(ARji) —– (19)

The individual states could be redefined

|ri⟩ → e^(iφi) |ri⟩ —– (20)

|qi⟩ → e^(iθi) |qi⟩ —– (21)

without affecting the corresponding composition laws. However the overlap number ⟨ri |qj⟩ changes and therefore we need an invariant formulation for the transition |ri⟩ → |qj⟩. This is provided by the trace of the closed operation RiiQjjRii

Tr(RiiQjjRii) ≡ p(qj, ri) = |⟨ri |qj⟩|2 —– (22)

The completeness relation, equation (13), guarantees that p(qj, ri) may assume the role of a probability since

j p(qj, ri) = 1 —– (23)
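Again this can be checked numerically. In the sketch below (with the illustrative assumption that the q-basis is obtained from the r-basis by a random unitary), p(qj, ri) = |⟨ri|qj⟩|², computed via the trace of RiiQjjRii, sums to 1 over j, exactly as in (22) and (23):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
U, _ = np.linalg.qr(G)          # unitary; its columns are the q-states |q_j>
r = np.eye(N)                   # the r-states |r_i> as the standard basis

p = np.abs(r.conj().T @ U) ** 2          # p[i, j] = |<r_i|q_j>|^2
assert np.allclose(p.sum(axis=1), 1.0)   # eq. (23), via completeness (13)

# Tr(R_ii Q_jj R_ii) reproduces the same number, eq. (22)
i, j = 1, 2
Rii = np.outer(r[:, i], r[:, i].conj())
Qjj = np.outer(U[:, j], U[:, j].conj())
assert np.isclose(np.trace(Rii @ Qjj @ Rii).real, p[i, j])
```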

We discover that starting from the relational logic of Peirce we obtain all the essential laws of Quantum Mechanics. Our derivation underlines the utmost relational nature of Quantum Mechanics and goes in parallel with the analysis of the quantum algebra of microscopic measurement.

US Stock Market Interaction Network as Learned by the Boltzmann Machine


Price formation on a financial market is a complex problem: it reflects the opinions of investors about the true value of the asset in question, the policies of the producers, external regulation and many other factors. Given the large number of factors influencing price, many of them unknown to us, describing price formation essentially requires probabilistic approaches. In the last decades, the synergy of methods from various scientific areas has opened new horizons in understanding the mechanisms that underlie related problems. One popular approach is to consider a financial market as a complex system, where not only the great number of constituents plays a crucial role but also the non-trivial properties of the interactions between them. For example, related interdisciplinary studies of complex financial systems have revealed their enhanced sensitivity to fluctuations and external factors near critical events, accompanied by an overall change of internal structure. This can be complemented by research devoted to equilibrium and non-equilibrium phase transitions.

In general, statistical modeling of the state space of a complex system requires writing down the probability distribution over this space using real data. In a simple version of modeling, the probability of an observable configuration (state of a system) described by a vector of variables s can be given in the exponential form

p(s) = Z−1 exp {−βH(s)} —– (1)

where H is the Hamiltonian of the system, β is the inverse temperature (β ≡ 1 is assumed in what follows) and Z is the statistical sum (partition function). The physical meaning of the model’s components depends on the context; in the case of financial systems, for instance, s can represent a vector of stock returns and H can be interpreted as the inverse utility function. Generally, H has parameters defined by its series expansion in s. Based on the maximum entropy principle, the expansion up to quadratic terms is usually used, leading to pairwise interaction models. In the equilibrium case, the Hamiltonian has the form

H(s) = −hTs − sTJs —– (2)

where h is a vector of size N of external fields and J is a symmetric N × N matrix of couplings (T denotes transpose). The energy-based models represented by (1) play an essential role not only in statistical physics but also in neuroscience (models of neural networks) and machine learning (generative models, also known as Boltzmann machines). Given the topological similarities between neural and financial networks, these systems can be considered examples of complex adaptive systems, which are characterized by the ability to adapt to a changing environment while trying to stay in equilibrium with it. From this point of view, market structural properties, e.g. clustering and networks, play an important role in modeling the distribution of stock prices. Adaptation (or learning) in these systems implies a change of the parameters of H as financial and economic systems evolve. Using statistical inference for the model’s parameters, the main goal is to have a model capable of reproducing the same statistical observables, given time series for a particular historical period. In the pairwise case, the objective is to have

⟨si⟩data = ⟨si⟩model —– (3a)

⟨sisj⟩data = ⟨sisj⟩model —– (3b)

where the angular brackets denote statistical averaging over time. Having specified the general mathematical model, one can also discuss similarities between financial and infinite-range magnetic systems in terms of related phenomena, e.g. extensivity, order parameters, phase transitions, etc. These features can be captured even in the simplified case where si is a binary variable taking only two discrete values. Consider the effect of mapping to a binarized system, in which the values si = +1 and si = −1 correspond to profit and loss respectively. In this case, the diagonal elements of the coupling matrix, Jii, are zero, because the si² = 1 terms do not contribute to the Hamiltonian…
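As a concrete illustration of the inference problem (3a)–(3b), here is a hedged Python sketch: synthetic binarized “returns” are generated from a common factor, and h and J are learned by plain Boltzmann learning with exact enumeration of states (feasible only for small N; all names, sizes and the learning rate are illustrative assumptions):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, T_obs = 5, 4000

# Synthetic binarized returns: +/-1 spins correlated through a common factor.
factor = rng.standard_normal(T_obs)
data = np.sign(factor[:, None] + 0.8 * rng.standard_normal((T_obs, N)))

m_data = data.mean(axis=0)            # <s_i>_data
C_data = data.T @ data / T_obs        # <s_i s_j>_data

states = np.array(list(itertools.product([-1, 1], repeat=N)))  # all 2^N states

def model_moments(h, J):
    """Exact <s_i> and <s_i s_j> under p(s) ~ exp{h.s + s.J.s} (beta = 1)."""
    E = states @ h + np.einsum('ti,ij,tj->t', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states, states.T @ (states * p[:, None])

# Boltzmann learning: ascend the likelihood until the model moments
# match the data moments, i.e. until (3a) and (3b) hold.
h, J, lr = np.zeros(N), np.zeros((N, N)), 0.1
for _ in range(2000):
    m, C = model_moments(h, J)
    h += lr * (m_data - m)
    dJ = lr * (C_data - C)
    np.fill_diagonal(dJ, 0.0)         # J_ii = 0, since s_i^2 = 1
    J += dJ

print(np.abs(m_data - m).max(), np.abs(C_data - C).max())  # residuals shrink toward 0
```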


The Locus of Renormalization. Note Quote.


Since symmetries and the related conservation properties have a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken or new ones are obtained after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is the passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to the sudden common orientation of spins, up to the completely ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is, the iteration of a statistically invariant motif. Similarly for the formation of a snowflake… In all these circumstances, a “new physical object of observation” is formed. Most of the current analyses deal with transitions at equilibrium; the less studied and more challenging case of far-from-equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions; that is, mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. relevant infinity or divergence) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interactions of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.

In this situation, the intelligibility still has an “upward” flavor, since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories, the parts are not really separable from the whole (this would mean separating an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts, because the parts cannot simply be summed over (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained by the scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking at molecules as the only pertinent observable of life is worse than reductionist; it goes against the history of physics and its audacious unifications and inventions of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system, a cell’s genetic regulatory network, brains and brain slices … are “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done for the profound mathematical frame for physics. This does not forbid a scientific analysis of life. This may just be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as in physics. The point is that these are better understood and correlated, since the work of Noether and Weyl in the last century, as symmetries in the intended equations, where they express the underlying invariants and invariant preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expression, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections… And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable, also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions in an organism, in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure; that is, it changes the internal symmetries: the fish with the new bladder will swim differently, its heart–vascular system will relevantly change…

Organisms transform the ecosystem while transforming themselves, and they can withstand this because they have an internal preserved universe. Its stability is maintained also by slightly, yet constantly, changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into the permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to the biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human. It has some 150 different microbial species in the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not understand yet, the “community” in the intestines co-creates their worlds together, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained even if it may change by immigration of new microbial species into the community and extinction of old species in the community. With such community membership turnover, or community assembly, the phase space of the system is undergoing continual and open ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new unprestatable functions, where functions and their names make sense only in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One finds out a feature of a niche because one observes that, if it goes away, the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing these spaces and their internal relations over time. Mathematically, no symmetries (groups) nor (partial) orders (semigroups) organize the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical nor logical problems in contemporary logic (this is so also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own borders, presents technical difficulties for a careful equational description of its dynamics, but no mathematical nor logical impossibility: one has to optimize a highly non-linear and large action/reaction system, yielding a dynamically constructed geodetic, the river path, in perfectly known phase spaces (momentum and space, or energy and time, say, as the pertinent observables and variables).

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so that the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not just within prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but in the ever-changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.

Of Phenomenology, Noumenology and Appearances. Note Quote.

Heidegger’s project in Being and Time does not itself completely escape the problematic of transcendental reflection. The idea of fundamental ontology and its foundation in Dasein, which is concerned “with being”, and the analysis of Dasein at first seemed simply to mark a new dimension within transcendental phenomenology. But under the title of a hermeneutics of facticity, Heidegger objected to Husserl’s eidetic phenomenology that a hermeneutic phenomenology must also contain the theory of facticity, which is not in itself an eidos. Husserl’s phenomenology, which consistently holds to the central idea of the proto-I, cannot be accepted without reservation in interpretation theory, in particular because this eidos belongs only to the eidetic sphere of universal essences. Phenomenology should be based ontologically on the facticity of Dasein, and this existence cannot be derived from anything else.

Nevertheless, Heidegger’s complete reversal of reflection and its redirection of it toward “Being”, i.e, the turn or kehre, still is not so much an alteration of his point of view as the indirect result of his critique of Husserl’s concept of transcendental reflection, which had not yet become fully effective in Being and Time. Gadamer, however, would incorporate Husserl’s ideal of an eidetic ontology somewhat “alongside” transcendental constitutional research. Here, the philosophical justification lies ultimately in the completion of the transcendental reduction, which can come only at a higher level of direct access of the individual to the object. Thus there is a question of how our awareness of essences remains subordinated to transcendental phenomenology, but this does not rule out the possibility of turning transcendental phenomenology into an essence-oriented mundane science.

Heidegger does not follow Husserl from eidetic to transcendental phenomenology, but stays with the interpretation of phenomena in relation to their essences. As ‘hermeneutic’, his phenomenology still proceeds from a given Dasein in order to determine the meaning of existence, but now it takes the form of a fundamental ontology. By his careful discussion of the etymology of the words “phenomenon” and “Logos” he shows that “phenomenology” must be taken as letting that which shows itself be seen from itself, and in the very way in which it shows itself from itself. The more genuinely a methodological concept is worked out and the more comprehensively it determines the principles on which a science is to be conducted, the more deeply and primordially it is rooted in terms of the things themselves; whereas if understanding is restricted to the things themselves only so far as they correspond to those judgments considered “first in themselves”, then the things themselves cannot be addressed beyond particular judgments regarding events.

The doctrine of the thing-in-itself entails the possibility of a continuous transition from one aspect of a thing to another, which alone makes possible a unified matrix of experience. Husserl’s idea of the thing-in-itself, as Gadamer introduces it, must be understood in terms of the hermeneutic progress of our knowledge. In other words, in the hermeneutical context the maxim to the thing itself signifies to the text itself. Phenomenology here means grasping the text in such a way that every interpretation about the text must be considered first as directly exhibiting the text and then as demonstrating it with regard to other texts.

Heidegger called this “descriptive phenomenology”, which is fundamentally tautological. He explains that phenomenon in Greek first signifies that which looks like something, or secondly that which is semblant or a semblance (das Scheinbare, der Schein). He sees both these expressions as structurally interconnected, and as having nothing to do with what is called an “appearance” or mere “appearance”. Based on the ordinary conception of phenomenon, the definition of “appearance” as referring to something can be regarded also as characterizing the phenomenological concern for the text in itself and for itself. Only through referring to the text in itself can we have a real phenomenology based on appearance. This theory, however, requires a broad meaning of appearance, including what does the referring as well as the noumenon.

Heidegger explains that what does the referring must show itself in itself. Further, the appearance “of something” does not mean showing-itself, but that the thing itself announces itself through something which does show itself. Thus, Heidegger urges that what appears does not show itself, and anything which fails to show itself can never seem. On the other hand, while appearing is never a showing-itself in the sense of phenomenon, it is preconditioned by something showing-itself (through which the thing announces itself). This showing-itself is not appearing itself, but makes the appearing possible. Appearing then is an announcing-itself (das Sich-melden) through something that shows itself.

Also, a phenomenon cannot be represented by the word “appearance” if it alludes to that wherein something appears without itself being an appearance. That wherein something appears means that wherein something announces itself without showing itself, in other words without being itself an “appearance” (appearance signifying the showing itself which belongs essentially to that “wherein” something announces itself). Based upon this argument, phenomena are never appearances. This, however, does not deny the fact that every appearance is dependent on phenomena.

The Eclectics on Hyperstition. Collation Archives.


As Nick Land explains in the Catacomic, a hyperstition has four characteristics: They function as (1) an “element of effective culture that makes itself real,” (2) as a “fictional quality functional as a time-travelling device,” (3) as “coincidence intensifiers,” and (4) as a “call to the Old Ones”. The first three characteristics describe how hyperstitions like the ‘ideology of progress’ or the religious conception of apocalypse enact their subversive influences in the cultural arena, becoming transmuted into perceived ‘truths’ that influence the outcome of history. Finally, as Land indicates, a hyperstition signals the return of the irrational or the monstrous ‘other’ into the cultural arena. From the perspective of hyperstition, history is presided over by Cthonic ‘polytendriled abominations’ – the “Unuttera” that await us at history’s closure. The tendrils of these hyperstitional abominations reach back through time into the present, manifesting as the ‘dark will’ of progress that rips up political cultures, deletes traditions, dissolves subjectivities. “The [hu]man,” from the perspective of the Unuttera, “is something for it to overcome: a problem, drag,” writes Land in Meltdown.

Exulting in capitalism’s permanent ‘crisis mode,’ hyperstition accelerates the tendencies towards chaos and dissolution by invoking irrational and monstrous forces – the Cthonic Old Ones. As Land explains, these forces move through history, planting the seeds of hyperstition:

John Carpenter’s In the Mouth of Madness includes the (approximate) line: “I thought I was making it up, but all the time they were telling me what to write.” ‘They’ are the Old Ones (explicitly), and this line operates at an extraordinary pitch of hyperstitional intensity. From the side of the human subject, ‘beliefs’ hyperstitionally condense into realities, but from the side of the hyperstitional object (the Old Ones), human intelligences are mere incubators through which intrusions are directed against the order of historical time. The archaic hint or suggestion is a germ or catalyst, retro-deposited out of the future along a path that historical consciousness perceives as technological progress.

The ‘Old Ones’ can either be read as (hyper)real Lovecraftian entities – as myth made flesh – or as monstrous avatars representing that which is most uncontainable and unfathomable; the inevitable annihilation that awaits all things when (their) historical time runs out. “Just as particular species or ecosystems flourish and die, so do human cultures,” explains Simon Reynolds. “What feels from any everyday human perspective like catastrophic change is really anastrophe: not the past coming apart, but the future coming together”.

Whatever its specific variants, the practice of hyperstition necessarily involves three irreducible ingredients, interlocked in a productive circuit of simultaneous, mutually stimulating tasks.

1. N u m o g r a m 
Rigorous systematic unfolding of the Decimal Labyrinth and all its implexes (Zones, Currents, Gates, Lemurs, Pandemonium Matrix, Book of Paths …) and echoes (Atlantean Cross, Decadology …). 

The methodical excavation of the occult abstract cartography intrinsic to decimal numeracy (and thus globally ‘oecumenic’) constitutes the first great task of hyperstition.

2. M y t h o s
Comprehensive attribution of all signal (discoveries, theories, problems and approaches) to artificial agencies, allegiances, cultures and continentities. 

The proliferation of ‘carriers’ (“Who says this?”) – multiplying perspectives and narrative fragments – produces a coherent but inherently disintegrated hyperstitional mythos while effecting a positive destruction of identity, authority and credibility. 

3. U n b e l i e f 
Pragmatic skepticism or constructive escape from integrated thinking and all its forms of imposed unity (religious dogma, political ideology, scientific law, common sense …). 
Each vortical sub-cycle of hyperstitional production announces itself through a communion with ‘the Thing’ coinciding with a “mystical consummation of uncertainty” or “attainment of positive unbelief.”

Typicality. Cosmological Constant and Boltzmann Brains. Note Quote.


In a multiverse we would expect there to be relatively many universe domains with large values of the cosmological constant, but none of these allow gravitationally bound structures (such as our galaxy) to occur, so the likelihood of observing ourselves to be in one is essentially zero.

The cosmological constant has negative pressure, but positive energy. The negative pressure ensures that as the volume expands, matter loses energy (photons get redshifted, particles slow down); this loss of energy by matter causes the expansion to slow down – but the increase in energy of the increased volume is more important. The increase of energy associated with the extra space the cosmological constant fills has to be balanced by a decrease in the gravitational energy of the expansion – and this expansion energy is negative, allowing the universe to carry on expanding. If you put all the terms on one side in the Friedmann equation – which is just an energy-balancing equation – with the other side equal to zero, you will see that the expansion energy is negative, whereas the cosmological constant and matter (including dark matter) all have positive energy.
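Schematically, and only as a sketch (flat spatial sections assumed, with all matter lumped into a single density ρ), the balance just described can be written by moving everything in the Friedmann equation to one side:

```latex
\[
  \underbrace{\frac{8\pi G}{3}\,\rho}_{\text{matter}\;(+)}
  \;+\;
  \underbrace{\frac{\Lambda c^{2}}{3}}_{\text{cosmological constant}\;(+)}
  \;-\;
  \underbrace{\left(\frac{\dot a}{a}\right)^{2}}_{\text{expansion}\;(-)}
  \;=\; 0
\]
```

The first two terms are the positive energies of matter and the cosmological constant; the last, negative, term is the expansion energy, which grows in magnitude to keep the sum at zero as the universe expands.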


However, as the cosmological constant is decreased, we eventually reach a transition point where it becomes just small enough for gravitational structures to occur. Reduce it a bit further still, and you now get universes resembling ours. Given the increased likelihood of observing such a universe, the chances of our universe being one of these will be near its peak. Theoretical physicist Steven Weinberg used this reasoning to correctly predict the order of magnitude of the cosmological constant well before the acceleration of our universe was even measured.

Unfortunately this argument runs into conceptually murky water. The multiverse is infinite, and it is not clear whether we can calculate the odds for anything to happen in an infinite volume of space-time. All we have is the single case of our apparently small but positive value of the cosmological constant, so it’s hard to see how we could ever test whether or not Weinberg’s prediction was a lucky coincidence. Such questions concerning infinity, and what one can reasonably infer from a single data point, are just the tip of the philosophical iceberg that cosmologists face.

Another conundrum is where the laws of physics come from. Even if these laws vary across the multiverse, there must be, so it seems, meta-laws that dictate the manner in which they are distributed. How can we, inhabitants on a planet in a solar system in a galaxy, meaningfully debate the origin of the laws of physics as well as the origins of something, the very universe, that we are part of? What about the parts of space-time we can never see? These regions could infinitely outnumber our visible patch. The laws of physics could differ there, for all we know.

We cannot settle any of these questions by experiment, and this is where philosophers enter the debate. Central to this is the so-called observational-selection effect, whereby an observation is influenced by the observer’s “telescope”, whatever form that may take. But what exactly is it to be an observer, or more specifically a “typical” observer, in a system where every possible sort of observer will come about infinitely many times? The same basic question, centred on the role of observers, is as fundamental to the science of the indefinitely large (cosmology) as it is to that of the infinitesimally small (quantum theory).

This key issue of typicality also confronted Austrian physicist and philosopher Ludwig Boltzmann. In 1897 he posited an infinite space-time as a means to explain how extraordinarily well-ordered the universe is compared with the state of high entropy (or disorder) predicted by thermodynamics. Given such an arena, where every conceivable combination of particle position and momenta would exist somewhere, he suggested that the orderliness around us might be that of an incredibly rare fluctuation within an infinite space-time.

But Boltzmann’s reasoning was undermined by another, more absurd, conclusion. Rare fluctuations could also give rise to single momentary brains – self-aware entities that spontaneously arise through random collisions of particles. Such “Boltzmann brains”, the argument goes, are far more likely to arise than the entire visible universe or even the solar system. Boltzmann reasoned that brains and other complex, orderly objects on Earth were the result of random fluctuations. But why, then, do we see billions of other complex, orderly objects all around us? Why aren’t we like the lone being in the sea of nonsense? Boltzmann theorized that if random fluctuations create brains like ours, there should be Boltzmann brains floating around in space or sitting alone on uninhabited planets untold light-years away. And in fact, those Boltzmann brains should be incredibly more common than the herds of complex, orderly objects we see here on Earth.

So we have another paradox. If the only requirement of consciousness is a brain like the one in your head, why aren’t you a Boltzmann brain? If you were assigned a random consciousness, you should almost certainly find yourself alone in the depths of space rather than surrounded by similar consciousnesses. The easy answers all seem to require a touch of magic. Perhaps consciousness doesn’t arise naturally from a brain like yours but requires some metaphysical endowment. Or maybe we’re not random fluctuations in a thermodynamic soup, and we were put here by an intelligent being. An infinity of space would therefore contain an infinitude of such disembodied brains, which would then be the “typical observer”, not us.

Or, starting at the very beginning: entropy must always stay the same or increase over time, according to the second law of thermodynamics. However, Boltzmann (the Ludwig one, not the brain one) formulated a version of the law of entropy that was statistical. What this means here is that while entropy almost always increases or stays the same, over billions of billions of billions of billions of billions … you get the idea … years, entropy might go down a bit. This is called a fluctuation. Backing up a tad: if entropy always increases or stays the same, what is surprising for cosmologists is that the universe started in such a low-entropy state. To (try to) explain this, Boltzmann said: hey, what if there’s a bigger universe that our universe is in, and it is in a state of the most possible entropy, or thermal equilibrium. Then, say it exists for a long, long time – those billions we talked about earlier. There’ll be statistical fluctuations, right? And those statistical fluctuations might be represented by the birth of universes. Ahem, our universe is one of them.

So now we get to the brains. Our universe must be a HUGE statistical fluctuation compared with other fluctuations. I mean, think about it: if it is so nuts for entropy to decrease by just a tiny bit, how nuts would it be for it to decrease enough for the birth of a universe to happen!? So the question is, why aren’t we just brains? That is, why aren’t we a statistical fluctuation just big enough for intelligent life to develop, look around, see it exists, and melt back into goop? And it is this goopy, not-long-existing intelligent life that is a Boltzmann brain. This is a huge challenge to the Boltzmann (Ludwig) theory.

Can this bizarre vision possibly be real, or does it indicate something fundamentally wrong with our notion of “typicality”? Or is our notion of “the observer” flawed – can thermodynamic fluctuations that give rise to Boltzmann brains really suffice? Or could a futuristic supercomputer even play the Matrix-like role of a multitude of observers?

Archivals. NRx Corporate Serfs.


Even if ‘The Road to Serfdom‘ by Friedrich von Hayek was a warning at one point, it has now become a fact of history. Take the example of the US, whose citizens are becoming more and more corporate serfs, since the US is nothing but a big corporation: a formal structure by which a group of individuals agrees to act collectively to meet some desired result. In the words of Mencius Moldbug,

It is not a mystic trust consigned to us by the generations. It is not the repository of our hopes and fears, the voice of conscience and the avenging sword of justice. It is just a big old company that holds a huge pile of assets, has no clear idea of what it’s trying to do with them, and is thrashing around like a ten-gallon shark in a five-gallon bucket, red ink spouting from each of its bazillion gills.

So what is needed is a reactionary or a radical to bring about social justice and keep us from becoming corporate serfs. Well, neither gets us any closer to achieving social justice, since we might all be equal and yet some still more equal than others; the catch is that we are not out to design any abstract utopia, but to make head or tail of a world that is screwed up. Can this be done via Formalism, which draws out a matrix of who has what, rather than who should have what, since the ‘ought’ alluded to in the latter is a simple recipe for violence? The matrix could at least draw attention to identifying the real shareholders and stakeholders (the ‘We’, the 99%, or what have you), and in the process help reproduce the distribution as closely as possible, to reach autonomous public ownership and eventually mitigate the risk of political violence imagined through either reactionary or radical means. Libertarianism it is.

Purely Random Correlations of the Matrix, or Studying Noise in Neural Networks


Expressed in the most general form, in essentially all the cases of practical interest, the n × n matrices W used to describe the complex system are by construction designed as

W = XYT —– (1)

where X and Y denote rectangular n × m matrices. Such, for instance, are the correlation matrices, whose standard form corresponds to Y = X. In this case one thinks of n observations or cases, each represented by an m-dimensional row vector xi (yi), (i = 1, …, n), and typically m is larger than n. In the limit of purely random correlations the matrix W is then said to be a Wishart matrix. The resulting density ρW(λ) of eigenvalues is in this case known analytically, with the limits (λmin ≤ λ ≤ λmax) prescribed by

λmax,min = 1 + 1/Q ± 2√(1/Q), with Q = m/n ≥ 1.

The variance of the elements of xi is here assumed unity.
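A quick numerical sanity check of these limits against a randomly generated Wishart matrix (a sketch; the sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 400, 1600                      # Q = m/n = 4
X = rng.standard_normal((n, m))       # unit-variance entries, as assumed above
W = X @ X.T / m                       # Wishart-type correlation matrix (Y = X)
lam = np.linalg.eigvalsh(W)

Q = m / n
lam_min = 1 + 1/Q - 2*np.sqrt(1/Q)    # = (1 - sqrt(1/Q))^2
lam_max = 1 + 1/Q + 2*np.sqrt(1/Q)    # = (1 + sqrt(1/Q))^2
print(lam.min(), lam_min)             # close for large n
print(lam.max(), lam_max)
```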

The more general case, of X and Y different, results in asymmetric correlation matrices with complex eigenvalues λ. In this more general case a limiting distribution corresponding to purely random correlations does not yet seem to be known analytically as a function of m/n. It indicates however that in the case of no correlations, quite generically, one may expect a largely uniform distribution of λ bound in an ellipse on the complex plane.

Further examples of matrices of similar structure, of great interest from the point of view of complexity, include the Hamiltonian matrices of strongly interacting quantum many-body systems such as atomic nuclei. This holds true on the level of bound states, where the problem is described by Hermitian matrices, as well as for excitations embedded in the continuum. This latter case can be formulated in terms of an open quantum system, which is represented by a complex non-Hermitian Hamiltonian matrix. Several neural network models also belong to this category of matrix structure. In this domain the reference is provided by the Gaussian (orthogonal, unitary, symplectic) ensembles of random matrices, with the semicircle law for the eigenvalue distribution. For irreversible processes there exists their complex version, with a special case, the so-called scattering ensemble, which accounts for S-matrix unitarity.

As has already been expressed above, several variants of ensembles of random matrices provide an appropriate and natural reference for quantifying various characteristics of complexity. The bulk of such characteristics is expected to be consistent with Random Matrix Theory (RMT), and in fact there exists strong evidence that it is. Once this is established, however, even more interesting are the deviations, especially those signaling the emergence of synchronous or coherent patterns, i.e., the effects connected with the reduction of dimensionality. In the matrix terminology such patterns can thus be associated with a significantly reduced rank k (thus k ≪ n) of a leading component of W. A satisfactory structure of the matrix that would allow some coexistence of chaos or noise and of collectivity thus reads:

W = Wr + Wc —– (2)

Of course, in the absence of Wr, the second term (Wc) of W generates k nonzero eigenvalues, and all the remaining ones (n − k) constitute the zero modes. When Wr enters as a noise (random-like matrix) correction, a trace of the above effect is expected to remain, i.e., k large eigenvalues and a bulk composed of n − k small eigenvalues whose distribution and fluctuations are consistent with an appropriate version of a random matrix ensemble. One likely mechanism that may lead to such a segregation of eigenspectra is that m in eq. (1) is significantly smaller than n, or that the number of large components makes it effectively small on the level of the large entries w of W. Such an effective reduction of m (M = meff) is then expressed by the following distribution P(w) of the large off-diagonal matrix elements, in the case that they are still generated by random-like processes:

P(w) = (|w|^((M−1)/2) K_((M−1)/2)(|w|)) / (2^((M−1)/2) Γ(M/2) √π) —– (3)

where K stands for the modified Bessel function. Asymptotically, for large w, this leads to P(w) ∼ e^(−|w|) |w|^(M/2−1), and thus reflects an enhanced probability of the appearance of a few large off-diagonal matrix elements, as compared to a Gaussian distribution. Consistent with the central limit theorem, the distribution quickly converges to a Gaussian with increasing M.
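The eigenvalue segregation expected from eq. (2) is easy to see in a toy experiment. In the Python sketch below (all sizes and scales are illustrative assumptions), a rank-k “collective” component Wc is added to a symmetric random matrix Wr: k large eigenvalues split off from the noise bulk, which stays O(1):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 300, 3

# W_c: rank-k collective component; W_r: random (noise-like) component.
V = rng.standard_normal((n, k))
Wc = 10.0 * (V @ V.T) / n             # k nonzero modes, n - k zero modes
G = rng.standard_normal((n, n))
Wr = (G + G.T) / (2.0 * np.sqrt(n))   # symmetric, semicircle-type bulk

lam = np.linalg.eigvalsh(Wr + Wc)     # ascending order
print(np.sort(lam)[-k:])              # k large eigenvalues, near 10
print(lam[:-k].min(), lam[:-k].max()) # the remaining bulk stays O(1)
```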

Based on several examples of natural complex dynamical systems, like the strongly interacting Fermi systems, the human brain and the financial markets, one could systematize evidence that such effects are indeed common to all the phenomena that intuitively can be qualified as complex.