The Locus of Renormalization. Note Quote.

Since symmetries and the related conservation properties play a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken, or new ones are obtained, after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is a passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to a sudden common orientation of spins, up to the completely ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is, the iteration of a statistically invariant motif. Similarly for the formation of a snowflake . . . . In all these circumstances, a “new physical object of observation” is formed. Most current analyses deal with transitions at equilibrium; the less studied and more challenging case of far-from-equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions: mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. a relevant infinity or divergence) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interactions of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.
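
To make “analyzing parts of the interactions within arbitrary scale intervals” concrete, here is a minimal sketch of our own (not from the text) of the simplest exact renormalization step: decimating every other spin of a one-dimensional Ising chain. Summing out the even spins maps the dimensionless coupling K onto a new coupling K′ with tanh K′ = tanh² K, and iterating this map exhibits the scale symmetry as a flow in the space of couplings.

```python
import math

def decimate(K: float) -> float:
    """One decimation (block-spin) step for the 1D Ising chain:
    summing out every other spin renormalizes the coupling via
    tanh(K') = tanh(K)**2, i.e. K' = atanh(tanh(K)**2)."""
    return math.atanh(math.tanh(K) ** 2)

# Follow the renormalization flow from a strong initial coupling.
K = 2.0
for step in range(8):
    K = decimate(K)
    print(f"after step {step + 1}: K = {K:.6f}")

# The flow has only the trivial fixed points K = 0 (disorder) and
# K -> infinity (order): the 1D chain has no finite-temperature
# critical transition.
```

The interest of the sketch is purely structural: each step produces a new object (a chain with half the spins and a renormalized coupling), while the observables and the conservation structure are unchanged, exactly the situation described above.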

In this situation, the intelligibility still has an “upward” flavor, since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories, the parts are not really separable from the whole (this would mean separating an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts, because the parts cannot simply be summed (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained through scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: taking molecules as the only pertinent observable of life is worse than reductionist, it goes against the history of physics and its audacious unifications, its invention of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or, as researchers nicely claim in specific analyses, a biological system (a cell’s genetic regulatory network, the brain and brain slices . . .) is “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done within the profound mathematical frame of physics. This does not forbid a scientific analysis of life; such an analysis may just have to be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as in physics. The point is that entailment and causality are better understood and correlated, since the work of Noether and Weyl in the last century, as symmetries in the intended equations, where they express the underlying invariants and invariant-preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces, at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and in cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expression, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections . . . . And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions in an organism and in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure; that is, it changes the internal symmetries: the fish with the new bladder will swim differently, its heart-vascular system will relevantly change . . . .

Organisms transform the ecosystem while transforming themselves, and they can sustain this because they have an internal, preserved universe. Its stability is maintained also by slightly yet constantly changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into the permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to the biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human: some 150 different microbial species inhabit the intestinal tract. Each person’s ecosystem is unique and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not yet understand, the “community” in the intestines co-creates its world, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained, even if it may change through the immigration of new microbial species into the community and the extinction of old ones. With such community membership turnover, or community assembly, the phase space of the system undergoes continual and open-ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing, as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new, unprestatable functions, where functions and their names only make sense in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One finds out a feature of a niche by observing that, if it goes away, the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing these spaces and their internal relations over time. Mathematically, neither symmetries (groups) nor (partial) orders (semigroups) organize the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical or logical problem in contemporary logic (this holds also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own border, presents technical difficulties for a careful equational description of its dynamics, but no mathematical or logical impossibility: one has to optimize a highly non-linear and large action/reaction system, yielding a dynamically constructed geodesic, the river path, in perfectly known phase spaces (momentum and space, or energy and time, say, as pertinent observables and variables).
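
As a trivial illustration of why circularity “in the definition” is mathematically harmless, consider a numerical function defined by recursion (a minimal sketch of our own, in Python): the definition refers to itself, yet it is well-founded, since each self-reference is applied to a strictly smaller argument.

```python
def factorial(n: int) -> int:
    """Defined in terms of itself: a circularity 'in the definition'.
    It is unproblematic because the recursion is well-founded:
    every recursive call strictly decreases n towards the base case."""
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))  # 120
```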

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so that the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not just a matter of prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but of the ever-changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.

Production Function as a Growth Model

Any science is tempted by the naive attitude of describing its object of enquiry by means of input-output representations, regardless of state. Typically, microeconomics describes the behavior of firms by means of a production function:

y = f(x) —– (1)

where x ∈ R^p is a p × 1 vector of production factors (the input) and y ∈ R^q is a q × 1 vector of products (the output).

Both y and x are flows expressed in terms of physical magnitudes per unit time. Thus, they may refer to both goods and services.
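
For concreteness, here is a minimal single-output sketch of (1) (so q = 1), using the familiar Cobb-Douglas form; the functional form and the parameter values are illustrative assumptions, since (1) leaves f unspecified.

```python
import numpy as np

def cobb_douglas(x: np.ndarray, A: float, alpha: np.ndarray) -> float:
    """Single-output instance of (1): y = A * prod_i x_i ** alpha_i,
    where x is the p-vector of input flows and alpha the elasticities."""
    return A * np.prod(x ** alpha)

x = np.array([4.0, 9.0])       # two production factors (flows per unit time)
alpha = np.array([0.3, 0.7])   # illustrative output elasticities
print(cobb_douglas(x, A=1.0, alpha=alpha))  # the output flow y
```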

Clearly, (1) is independent of state. Economics knows state variables in the form of capital, which may be financial capital (the financial assets owned by a firm), physical capital (the machinery owned by a firm) or human capital (the skills of its employees). These variables should appear as arguments in (1).

This is done in the Georgescu-Roegen production function, which may be expressed as follows:

y = f(k,x) —– (2)

where k ∈ R^m is an m × 1 vector of capital endowments, measured in physical magnitudes. Without loss of generality, we may assume that the first m_p elements represent physical capital, the subsequent m_h elements represent human capital and the last m_f elements represent financial capital, with m_p + m_h + m_f = m.

Unlike input and output flows, capital is a stock. Physical capital is measured by physical magnitudes such as the number of machines of a given type. Human capital is generally proxied by educational degrees. Financial capital is measured in monetary terms.

Georgescu-Roegen called the stocks of capital funds, to be contrasted with the flows of products and production factors. Thus, Georgescu-Roegen’s production function is also known as the flows-funds model.
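
A minimal sketch of (2) in the same style; the multiplicative form in which the funds k scale the productivity of the flows x is purely an illustrative assumption, since the flows-funds model itself does not prescribe a functional form.

```python
import numpy as np

def flows_funds(k: np.ndarray, x: np.ndarray, A: float,
                beta: np.ndarray, alpha: np.ndarray) -> float:
    """Illustrative instance of (2), y = f(k, x): a Cobb-Douglas form
    extended with the vector k of capital endowments (the funds)."""
    return A * np.prod(k ** beta) * np.prod(x ** alpha)

k = np.array([10.0, 2.0, 5.0])      # physical, human, financial capital (m = 3)
x = np.array([4.0, 9.0])            # input flows, as in (1)
beta = np.array([0.2, 0.1, 0.05])   # illustrative fund elasticities
alpha = np.array([0.3, 0.7])        # illustrative flow elasticities
print(flows_funds(k, x, A=1.0, beta=beta, alpha=alpha))
```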

Georgescu-Roegen’s production function is little known and seldom used, but macroeconomics often employs aggregate production functions of the following form:

Y = f(K,L) —– (3)

where Y ∈ R is aggregate income, K ∈ R is aggregate capital and L ∈ R is aggregate labor. Though this connection is never made, (3) is a special case of (2).

The examination of (3) highlighted a fundamental difficulty. In fact, general equilibrium theory requires that the remunerations of production factors are proportional to the corresponding partial derivatives of the production function. In particular, the wage must be proportional to ∂f/∂L and the interest rate must be proportional to ∂f/∂K. These partial derivatives are uniquely determined if df is an exact differential.

If the production function is (1), this translates into requiring that:

∂²f/∂xᵢ∂xⱼ = ∂²f/∂xⱼ∂xᵢ ∀ i, j —– (4)

which are surely satisfied because all xᵢ are flows, so they can be easily reverted. If the production function is expressed by (2) but m = 1, the following conditions must be added to (4):

∂²f/∂k∂xᵢ = ∂²f/∂xᵢ∂k ∀ i —– (5)

Conditions (5) are still surely satisfied because there is only one capital good. However, if m > 1 the following conditions must be added to conditions (4):

∂²f/∂kᵢ∂xⱼ = ∂²f/∂xⱼ∂kᵢ ∀ i, j —– (6)

∂²f/∂kᵢ∂kⱼ = ∂²f/∂kⱼ∂kᵢ ∀ i, j —– (7)

Conditions (6) and (7) are not necessarily satisfied because each derivative depends on all stocks of capital kᵢ. In particular, conditions (6) and (7) do not hold if, after capital kᵢ has been accumulated in order to use technique i, capital kⱼ is accumulated in order to use technique j but, subsequently, production reverts to technique i. This possibility, known as the reswitching of techniques, undermines the validity of general equilibrium theory.
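
A quick numerical illustration of what conditions (4)–(7) demand, as a hedged sketch: for any twice continuously differentiable f given in closed form, the mixed partials commute by Schwarz’s theorem, so the check below necessarily passes; the economic difficulty arises precisely when, because of reswitching, output is not such a well-defined smooth function of the accumulated stocks. The function chosen here is an arbitrary smooth example.

```python
import numpy as np

def mixed_partial(f, z: np.ndarray, i: int, j: int, h: float = 1e-4) -> float:
    """Central finite-difference estimate of d2f / dz_i dz_j at z."""
    def at(di: float, dj: float) -> float:
        w = z.copy()
        w[i] += di
        w[j] += dj
        return f(w)
    return (at(h, h) - at(h, -h) - at(-h, h) + at(-h, -h)) / (4 * h * h)

# z packs two capital stocks and one flow: z = (k1, k2, x1).
f = lambda z: z[0] ** 0.3 * z[1] ** 0.2 * z[2] ** 0.5  # smooth example
z = np.array([2.0, 3.0, 4.0])

# Condition (6): d2f/dk1 dx1 == d2f/dx1 dk1 (indices 0 and 2).
print(mixed_partial(f, z, 0, 2), mixed_partial(f, z, 2, 0))
# Condition (7): d2f/dk1 dk2 == d2f/dk2 dk1 (indices 0 and 1).
print(mixed_partial(f, z, 0, 1), mixed_partial(f, z, 1, 0))
```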

For many years, the reswitching of techniques was regarded as a theoretical curiosum. However, the recent resurgence of coal as a source of energy may be regarded as an instance of reswitching.

Finally, it should be noted that, like any input-state-output representation, (2) must be complemented by the dynamics of the state variables:

k̇ = g(k, x, y) —– (8)

which updates the vector k in (2), making it dependent on time. In the case of the aggregate production function (3), (8) combines with (3) to constitute a growth model.
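
As a hedged sketch of how (3) and (8) combine into a growth model, here is the textbook Solow dynamics in per-capita form, k̇ = s·f(k) − δ·k with f(k) = k^α; the functional form, the parameter values and the forward-Euler integration are our illustrative assumptions.

```python
# Solow growth model: (3) as f(k) = k**alpha, (8) as dk/dt = s*f(k) - delta*k.
alpha, s, delta, dt = 0.33, 0.2, 0.05, 0.1   # illustrative parameters

def f(k: float) -> float:
    """Per-capita aggregate production, an instance of (3)."""
    return k ** alpha

k = 1.0                                # initial capital stock
for _ in range(5000):                  # integrate 500 time units
    k += dt * (s * f(k) - delta * k)   # state update, an instance of (8)

# The dynamics converge to the steady state k* = (s/delta)**(1/(1-alpha)).
print(k, (s / delta) ** (1 / (1 - alpha)))
```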

Algorithmic Randomness and Complexity

How random is a real? Given two reals, which is more random? How should we even try to quantify these questions, and how do the various choices of measurement relate? Once we have reasonable measuring devices and, using these devices, divide the reals into equivalence classes of the same “degree of randomness”, what do the resulting structures look like? Once we measure the level of randomness, how does it relate to classical measures of complexity, such as Turing degrees of unsolvability? Should high levels of randomness mean high levels of complexity in terms of computational power, or low levels of complexity? Conversely, should the structures of computability, such as the degrees and the computably enumerable sets, have anything to say about randomness for reals?

Universal Inclusion of the Void. Thought of the Day 38.0

The universal inclusion of the void means that the intersection between any two sets whatsoever is comparable with the void set. That is to say, there is no multiple that does not include within it some part of the “inconsistency” that it structures. The diversity of multiplicity can exhibit multiple modes of articulation, but, as multiples, they have nothing to do with one another; they are two absolutely heterogeneous presentations, and this is why this relation (of non-relation) can only be thought under the signifier of being (of the void), which indicates that the multiples in question have nothing in common apart from being multiples. The universal inclusion of the void thus guarantees the consistency of the infinite multiplicities immanent to its presentation. That is to say, it underlines the universal distribution of the ontological structure seized at the point of the axiom of the void set. The void does not merely constitute a consistency at a local point but also organises, from this point of difference, a universal structure that legislates on the structure of all sets, the universe of consistent multiplicity.
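
The elementary set-theoretic fact behind this claim can be stated exactly; here is a minimal formal rendering (our gloss) of the universal inclusion of the void:

```latex
\forall A \,\bigl(\varnothing \subseteq A\bigr),
\qquad \text{since} \qquad
\varnothing \subseteq A
\;\Longleftrightarrow\;
\forall x \,\bigl(x \in \varnothing \rightarrow x \in A\bigr)
```

and the antecedent x ∈ ∅ is never satisfied, so the implication holds vacuously; in particular, for any sets A and B, ∅ ⊆ A ∩ B.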

This final step, the carrying over of the void seized as a local point of the presentation of the unpresentable, to a global field of sets provides us with the universal point of difference, applicable equally to any number of sets, that guarantees the universal consistency of ontological presentation. In one sense, the universal inclusion of the void demonstrates that, as a unit of presentation, the void anchors the set theoretical universe by its universal inclusion. As such, every presentation in ontological thought is situated in this elementary seizure of ontological difference. The void is that which “fills” ontological or set theoretical presentation. It is what makes common the universe of sets. It is in this sense that the “substance” or constitution of ontology is the void. At the same stroke, however, the universal inclusion of the void also concerns the consistency of set theory in a logical sense.

The universal inclusion of the void provides an important synthesis of the consistency of presentation. What is presented is necessarily consistent, but its consistency gives way to two distinct senses. Consistency can refer to its own “substance”, its immanent presentation. Distinct presentations constitute different presentations principally because “what” they present differs. Ontology’s particularity is its presentation of the void. On the other hand, a political site might present certain elements, just as a scientific procedure might present yet others. The other sense of consistency is tied to presentation as such, the consistency of presentation in its generality. When one speaks loosely about the “world” being consistent, where natural laws are verifiable against a background of regularity, it is this consistency that is invoked, and not the elements that constitute the particularity of their presentation. This sense of consistency, occurring across presentations, would certainly take us beyond the particularity of ontology. That is to say, ontological presentation presents a species of this consistency. However, the possibility of multiple approaches does not exclude an ontological treatment of this consistency.