The Canonical of a priori and a posteriori Variational Calculus as Phenomenologically Driven. Note Quote.


The expression variational calculus usually identifies two different but related branches of Mathematics. The first aims to produce theorems on the existence of solutions of (partial or ordinary) differential equations generated by a variational principle; it is a branch of local analysis (usually in Rn). The second uses techniques of differential geometry to deal with the so-called variational calculus on manifolds.

The local-analytic paradigm is often aimed at particular situations, in which it is necessary to pay attention to the exact definition of the functional space that needs to be considered; that functional space is very sensitive to boundary conditions. Moreover, minimal requirements on the data are investigated in order to allow the existence of (weak) solutions of the equations.

On the contrary, the global-geometric paradigm investigates the minimal structures which allow one to pose variational problems on manifolds, extending what is done in Rn but usually being quite generous about regularity hypotheses (e.g. one hardly ever considers less than C∞ objects). Since, even on manifolds, the search for solutions starts with a local problem (for which one can use local analysis), the global-geometric paradigm hardly ever deals with exact solutions, unless the global geometric structure of the manifold strongly constrains their existence.


A further, a priori different, approach is the one of Physics. In Physics one usually has field equations which are locally given on a portion of an unknown manifold. One then starts solving the field equations locally in order to find a local solution, and only afterwards tries to find the maximal analytical extension (if any) of that local solution. The maximal extension can be regarded as a global solution on a suitable manifold M, in the sense that the extension defines M as well. In fact, one first proceeds to solve the field equations in a coordinate neighbourhood; afterwards, one changes coordinates and tries to extend the solution found outside the original patch for as long as possible. The coordinate changes are the cocycle of transition functions with respect to the atlas, and they define the base manifold M. This approach is essential in physical applications in which the base manifold is a priori unknown, as in General Relativity, and has to be determined by physical inputs.

Luckily enough, that approach does not disagree with the standard variational calculus approach, in which the base manifold M is instead fixed from the very beginning: one can regard the variational problem as the search for a solution on that particular base manifold, while global solutions on other manifolds may be found using other variational principles on different base manifolds. Partly for this reason, the variational principle should be universal, i.e. one defines a family of variational principles: one for each base manifold, or at least one for any base manifold in a “reasonably” wide class of manifolds. This strong requirement is physically motivated by the belief that Physics should work more or less in the same way regardless of the particular spacetime which is actually realized in Nature. Of course, a scenario would be conceivable in which everything works because of the particular (topological, differentiable, etc.) structure of the spacetime. This position, however, is not desirable from a physical viewpoint since, in this case, one would have to explain why that particular spacetime is realized (a priori or a posteriori).

In spite of the aforementioned strong regularity requirements, the spectrum of situations one can encounter is unexpectedly wide, covering the whole of fundamental physics. Moreover, it is surprising how effective the geometric formalism is in identifying the basic structures of field theories. In fact, merely requiring the theory to be globally well-defined and to depend only on physical data often constrains very strongly the choice of the local theories to be globalized. These constraints are one of the strongest motivations for choosing a variational approach in physical applications. Another motivation is a well-formulated framework for conserved quantities; a global-geometric framework is a priori necessary to deal with conserved quantities, which are generally non-local.

In the modern perspective of Quantum Field Theory (QFT) the basic object encoding the properties of any quantum system is the action functional. From a quantum viewpoint the action functional is more fundamental than the field equations, which are obtained only in the classical limit. The geometric framework provides drastic simplifications of some key issues, such as the definition of the variation operator. The variation is deeply geometric, though in practice it coincides with the definition given in the local-analytic paradigm. In the latter case, the functional derivative is usually the directional derivative of the action functional, which is a function on the infinite-dimensional space of fields defined on a region D together with some boundary conditions on the boundary ∂D. To be able to define it one should first define the functional space, then define some notion of deformation which preserves the boundary conditions (or, equivalently, topologize the functional space), define a variation operator on the chosen space, and, finally, prove the most commonly used properties of derivatives. Once this is done, one finds in principle the same results that would be found when using the geometric definition of variation (for which no infinite-dimensional space is needed). In fact, in any case of interest for fundamental physics, the functional derivative is simply defined by means of the derivative of a real function of one real variable. The Lagrangian formalism is a shortcut which translates the variation of (infinite-dimensional) action functionals into the variation of the (finite-dimensional) Lagrangian structure.
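
To see the last remark concretely, here is a minimal numerical sketch (not the author's formalism; the toy action, potential, field configuration and deformation are all assumptions chosen for illustration) in which the variation of a discretized one-dimensional action is computed literally as the derivative of a real function of one real variable, ε ↦ S[φ + ε δφ], and then compared with the corresponding Euler-Lagrange expression.

```python
import numpy as np

# Toy 1D action S[phi] = ∫ (phi'^2/2 - V(phi)) dx on [0, 1]; the variation is the
# ordinary derivative of eps -> S[phi + eps*delta] at eps = 0, with a deformation
# delta vanishing at the boundary so that the boundary conditions are preserved.

x  = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

def integrate(f):                 # trapezoidal rule on the grid x
    return dx * (np.sum(f) - 0.5 * (f[0] + f[-1]))

def V(phi):                       # illustrative potential (an assumption)
    return 0.5 * phi**2

def action(phi):                  # S[phi] = ∫ (phi'^2/2 - V(phi)) dx
    dphi = np.gradient(phi, dx)
    return integrate(0.5 * dphi**2 - V(phi))

phi   = np.sin(np.pi * x)         # a field configuration with phi(0) = phi(1) = 0
delta = x * (1.0 - x)             # a deformation preserving the boundary conditions

eps = 1e-6                        # the variation as a derivative in one real variable
dS = (action(phi + eps * delta) - action(phi - eps * delta)) / (2 * eps)

# the same quantity from the Euler-Lagrange expression ∫ (-phi'' - V'(phi)) * delta dx
d2phi = np.gradient(np.gradient(phi, dx), dx)
dS_EL = integrate((-d2phi - phi) * delta)

print(dS, dS_EL)                  # the two numbers agree to good numerical accuracy
```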

Another feature of the geometric framework is the possibility of dealing with non-local properties of field theories. There are, in fact, phenomena, such as monopoles or instantons, which are described by means of non-trivial bundles. Their properties are tightly related to the non-triviality of the configuration bundle, and they are relatively obscure when regarded from any local paradigm. In some sense, a local paradigm hides global properties in the boundary conditions and in the symmetries of the field equations, which are in turn reflected in the functional space we choose and about which, being infinite-dimensional, we know almost nothing a priori. We could say that the existence of these phenomena is a further hint that field theories have to be stated on bundles rather than on Cartesian products. This statement, if anything, is phenomenologically driven.

When a non-trivial bundle is involved in a field theory, from a physical viewpoint it has to be regarded as an unknown object. As for the base manifold, it then has to be constructed out of physical inputs. One can do that in (at least) two ways, both of which are actually used in applications. First, one can assume the bundle to be a natural bundle, which is thence canonically constructed out of its base manifold; since the base manifold is identified by the (maximal) extension of the local solutions, the bundle itself is identified too. This approach is the one used in General Relativity. Second, in gauge theories the bundles are gauge natural, and they are therefore constructed out of a structure bundle P which usually contains extra information that is not directly encoded in the spacetime manifold. In physical applications the structure bundle P also has to be constructed out of physical observables. This can be achieved by using the gauge invariance of the field equations: two local solutions differing by a (pure) gauge transformation describe the same physical system. Then, while extending from one patch to another, we feel free both to change coordinates on M and to perform a (pure) gauge transformation before gluing two local solutions. The coordinate changes define the base manifold M, while the (pure) gauge transformations form a cocycle (valued in the gauge group) which defines, in fact, the structure bundle P. Once again, solutions with different structure bundles can be found from different variational principles. Accordingly, the variational principle should be universal with respect to the structure bundle.

Local results are by no means less important. They are often the foundations on which the geometric framework is based. More explicitly, Variational Calculus is perhaps the branch of mathematics that enables the strongest interaction between Analysis and Geometry.

Embedding Branes in Minkowski Space-Time Dimensions To Decipher Them As Particles Or Otherwise


The physics treatment of Dirichlet branes in terms of boundary conditions is very analogous to that of the “bulk” quantum field theory, and the next step is again to study the renormalization group. This leads to equations of motion for the fields which arise from the open string, namely the data (M, E, ∇). In the supergravity limit, these equations are solved by taking the submanifold M to be volume minimizing in the metric on X, and the connection ∇ to satisfy the Yang-Mills equations.

Like the Einstein equations, the equations governing a submanifold of minimal volume are highly nonlinear, and their general theory is difficult. This is one motivation to look for special classes of solutions; the physical arguments favoring supersymmetry are another. Just as supersymmetric compactification manifolds correspond to a special class of Ricci-flat manifolds, those admitting a covariantly constant spinor, supersymmetry for a Dirichlet brane will correspond to embedding it into a special class of minimal volume submanifolds. Since the physical analysis is based on a covariantly constant spinor, this special class should be defined using the spinor, or else the covariantly constant forms which are bilinear in the spinor.

The standard physical arguments leading to this class are based on the kappa symmetry of the Green-Schwarz world-volume action, in which one finds that the subset of supersymmetry parameters ε which preserve supersymmetry, both of the metric and of the brane, must satisfy

φ ≡ Re εt Γε|M = Vol|M —– (1)

In words, the real part of one of the covariantly constant forms on M must equal the volume form when restricted to the brane.

Clearly dφ = 0, since it is covariantly constant. Thus,

Z(M) ≡ ∫M φ —– (2)

depends only on the homology class of M. Thus, it is what physicists would call a “topological charge”, or a “central charge”.

If in addition the p-form φ is dominated by the volume form Vol upon restriction to any p-dimensional subspace V ⊂ Tx X, i.e.,

φ|V ≤ Vol|V —– (3)

then φ will be a calibration in the sense of implying the global statement

∫M φ ≤ ∫M Vol —– (4)

for any submanifold M. Thus, the central charge |Z(M)| is an absolute lower bound for Vol(M).

A calibrated submanifold M is now one satisfying (1), thereby attaining the lower bound and thus of minimal volume. Physically these are usually called “BPS branes,” after a prototypical argument of this type, due to Bogomol’nyi, Prasad and Sommerfield, for magnetic monopole solutions in nonabelian gauge theory.

For a Calabi-Yau X, all of the forms ωp can be calibrations, and the corresponding calibrated submanifolds are p-dimensional holomorphic submanifolds. Furthermore, the n-form Re eiθΩ for any choice of real parameter θ is a calibration, and the corresponding calibrated submanifolds are called special Lagrangian.
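
A minimal numerical illustration of the calibration statement just made, under the simplifying (and entirely illustrative) assumption X = C2 = R4 with the Kähler form ω = dx1∧dy1 + dx2∧dy2 playing the role of φ: on a holomorphic graph the pullback of ω equals the induced area element (Wirtinger's equality, i.e. (1)), while on a non-holomorphic graph it is strictly smaller (the inequality (3)).

```python
import numpy as np

# Pointwise check of phi|_V <= Vol|_V for graphs (z, w(z)) in C^2 ~ R^4,
# with phi = dx1^dy1 + dx2^dy2 the Kaehler form of flat C^2.

def pullbacks(w, x, y, h=1e-5):
    """Return (phi pulled back, induced area element) at the graph point over (x, y)."""
    wx = (w(x + h, y) - w(x - h, y)) / (2 * h)   # numerical dw/dx
    wy = (w(x, y + h) - w(x, y - h)) / (2 * h)   # numerical dw/dy
    ux, vx = wx.real, wx.imag
    uy, vy = wy.real, wy.imag
    e1 = np.array([1.0, 0.0, ux, vx])            # tangent vectors of the graph
    e2 = np.array([0.0, 1.0, uy, vy])            # in coordinates (x1, y1, x2, y2)
    phi = 1.0 + (ux * vy - uy * vx)              # (dx1^dy1 + dx2^dy2)(e1, e2)
    gram = np.array([[e1 @ e1, e1 @ e2], [e2 @ e1, e2 @ e2]])
    vol = np.sqrt(np.linalg.det(gram))           # induced volume (area) element
    return phi, vol

holomorphic     = lambda x, y: (x + 1j * y) ** 2   # w = z^2: a holomorphic graph
non_holomorphic = lambda x, y: (x - 1j * y) ** 2   # w = zbar^2: not holomorphic

for name, w in (("holomorphic", holomorphic), ("non-holomorphic", non_holomorphic)):
    phi, vol = pullbacks(w, 0.7, -0.3)
    print(f"{name:16s} phi = {phi:+.6f}  vol = {vol:.6f}  phi <= vol: {phi <= vol + 1e-9}")
```

For the holomorphic graph the two numbers coincide (the graph is calibrated by ω); for the non-holomorphic one the pullback of ω falls strictly below the volume element.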

This generalizes to the presence of a general connection on M, and leads to the following two types of BPS branes for a Calabi-Yau X. Let n = dimR M, and let F be the (End(E)-valued) curvature two-form of ∇.

The first kind of BPS D-brane, based on the ωp calibrations, is (for historical reasons) called a “B-type brane”. Here the BPS constraint is equivalent to the following requirements:

  1. M is a p-dimensional complex submanifold of X.
  2. The 2-form F is of type (1, 1), i.e., (E, ∇) is a holomorphic vector bundle on M.
  3. In the supergravity limit, F satisfies the Hermitian Yang-Mills equation: ωp−1|M ∧ F = c · ωp|M for some real constant c.
  4. F satisfies Im eiφ(ω|M + i ls2 F)p = 0 for some real constant φ, where ls is the string length.

The second kind of BPS D-brane, based on the Re eiθΩ calibration, is called an “A-type” brane. The simplest examples of A-branes are the so-called special Lagrangian submanifolds (SLAGs), satisfying

(1) M is a Lagrangian submanifold of X with respect to ω.

(2) F = 0, i.e., the vector bundle E is flat.

(3) Im eiα Ω|M = 0 for some real constant α.

More generally, one also has the “coisotropic branes”. In the case when E is a line bundle, such A-branes satisfy the following four requirements:

(1)  M is a coisotropic submanifold of X with respect to ω, i.e., for any x ∈ M the skew-orthogonal complement of TxM ⊂ TxX is contained in TxM. Equivalently, one requires ker ωM to be an integrable distribution on M.

(2)  The 2-form F annihilates ker ωM.

(3)  Let FM be the vector bundle TM/ker ωM. It follows from the first two conditions that ωM and F descend to a pair of skew-symmetric forms on FM, denoted by σ and f. Clearly, σ is nondegenerate. One requires the endomorphism σ−1f : FM → FM to be a complex structure on FM.

(4)  Let r be the complex dimension of FM. One can show that r is even and that r + n = dimR M. Let Ω be the holomorphic trivialization of KX. One requires that Im eiαΩ|M ∧ Fr/2 = 0 for some real constant α.

Coisotropic A-branes carrying vector bundles of higher rank are still not fully understood. Physically, one must also specify the embedding of the Dirichlet brane in the remaining (Minkowski) dimensions of space-time. The simplest possibility is to take this to be a time-like geodesic, so that the brane appears as a particle in the visible four dimensions. This is possible only for a subset of the branes, which depends on which string theory one is considering. Somewhat confusingly, in the type IIA theory, the B-branes are BPS particles, while in IIB theory, the A-branes are BPS particles.

Is There a Philosophy of Bundles and Fields? Drunken Risibility.

The bundle formulation of field theory is not at all motivated by just seeking full mathematical generality; on the contrary, it is just an empirical consequence of physical situations that concretely happen in Nature. One among the simplest of these situations may be that of a particle constrained to move on a sphere, denoted by S2; the physical state of such a dynamical system is described by providing both the position of the particle and its momentum, which is a tangent vector to the sphere. In other words, the state of this system is described by a point of the so-called tangent bundle TS2 of the sphere, which is non-trivial, i.e. it has a global topology which differs from the (trivial) product topology of S2 × R2. When one seeks solutions of the relevant equations of motion, some local coordinates have to be chosen on the sphere, e.g. stereographic coordinates covering the whole sphere but a point (let us say the north pole). On such a coordinate neighbourhood (which, being a diffeomorphic copy of R2, is contractible to a point) there exists a trivialization of the corresponding portion of the tangent bundle of the sphere, so that the relevant equations of motion can be locally written in R2 × R2. At the global level, however, together with the equations, one should give some boundary conditions which will ensure regularity at the north pole. As is well known, different inequivalent choices are possible; these boundary conditions may be considered as what is left in the local theory of the non-triviality of the configuration bundle TS2.
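
As a small concrete illustration of such a trivialization and of the cocycle it produces, the following sketch (the chart conventions are an assumption; the computation itself is standard) uses sympy to compute the GL(2, R)-valued transition function of TS2 between the two stereographic charts: it is simply the Jacobian of the chart change (u, v) ↦ (u, v)/(u2 + v2), defined and invertible on the overlap, i.e. the sphere minus the two poles.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
r2 = u**2 + v**2

# chart change between the two stereographic charts of S^2 (projection from the
# north pole and from the south pole): (u, v) -> (u, v)/(u^2 + v^2)
U, Vc = u / r2, v / r2

J = sp.Matrix([[sp.diff(U, u), sp.diff(U, v)],
               [sp.diff(Vc, u), sp.diff(Vc, v)]]).applyfunc(sp.simplify)

print(J)                        # the GL(2, R)-valued transition function of TS^2
print(sp.simplify(J.det()))     # det = -1/(u^2 + v^2)^2, never zero on the overlap
```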

Moreover, much before modern gauge theories or even more complicated new field theories, the theory of General Relativity is the ultimate proof of the need for a bundle framework to describe physical situations. Among other things, in fact, General Relativity assumes that spacetime is not the “simple” Minkowski space introduced for Special Relativity, which has the topology of R4. In general it is a Lorentzian four-dimensional manifold possibly endowed with a complicated global topology. On such a manifold, the choice of a trivial bundle M × F as the configuration bundle for a field theory is mathematically unjustified as well as physically wrong in general. In fact, as long as spacetime is a contractible manifold, as Minkowski space is, all bundles on it are forced to be trivial; however, if spacetime is allowed to be topologically non-trivial, then trivial bundles on it are just a small subclass of all possible bundles among which the configuration bundle can be chosen. Again, given the base M and the fiber F, the non-unique choice of the topology of the configuration bundle corresponds to different global requirements.

A simple purely geometrical example can be considered to sustain this claim. Let us consider M = S1 and F = (-1, 1), an interval of the real line R; then there exist (at least) countably many “inequivalent” bundles other than the trivial one Mö0 = S1 × F, i.e. the cylinder, as shown in the figure below.

[Figure: the trivial bundle Mö0 (cylinder) and the twisted bundles Mön over S1]

Furthermore the word “inequivalent” can be endowed with different meanings. The bundles shown in the figure are all inequivalent as embedded bundles (i.e. there is no diffeomorphism of the ambient space transforming one into the other) but the even ones (as well as the odd ones) are all equivalent among each other as abstract (i.e. not embedded) bundles (since they have the same transition functions).

The bundles Mön (n being any positive integer) can be obtained from the trivial bundle Mö0 by cutting it along a fiber, twisting it n times and then gluing it back together. The bundle Mö1 is called the Moebius band (or strip). All bundles Mön are canonically fibered on S1, but just Mö0 is trivial. Differences among such bundles are global properties, which for example imply that the even ones Mö2k admit nowhere-vanishing sections (i.e. field configurations) while the odd ones Mö2k+1 do not; a quick sketch of this parity argument is given below.
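
Here is that sketch, under the usual (assumed) convention that a section of Mön is represented by a continuous map s : [0, 1] → (-1, 1) with the gluing condition s(1) = (−1)n s(0): for odd n a candidate section takes opposite signs at the two ends of the interval and must therefore vanish somewhere.

```python
import numpy as np

def check_section(section, n, samples=10001):
    """Numerically check a candidate section of Mö_n.

    The section is given by its values on [0, 1]; the gluing identifies s(1)
    with (-1)**n * s(0).  Returns (gluing satisfied, section crosses zero).
    """
    t = np.linspace(0.0, 1.0, samples)
    s = section(t)
    glued = np.isclose(s[-1], (-1) ** n * s[0])
    crosses_zero = np.any(s == 0.0) or np.any(np.sign(s[:-1]) != np.sign(s[1:]))
    return glued, crosses_zero

# even twist: the constant section 1/2 is global and nowhere vanishing
print(check_section(lambda t: 0.5 * np.ones_like(t), n=2))      # (True, False)

# odd twist: any continuous candidate with s(1) = -s(0) must cross zero
print(check_section(lambda t: 0.5 * np.cos(np.pi * t), n=1))    # (True, True)
```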

Philosophy of Dimensions: M-Theory. Thought of the Day 85.0


Superstrings provided a perturbatively finite theory of gravity which, after compactification down to 3+1 dimensions, seemed potentially capable of explaining the strong, weak and electromagnetic forces of the Standard Model, including the required chiral representations of quarks and leptons. However, there appeared to be not one but five seemingly different but mathematically consistent superstring theories: the E8 × E8 heterotic string, the SO(32) heterotic string, the SO(32) Type I string, and Types IIA and IIB strings. Each of these theories corresponded to a different way in which fermionic degrees of freedom could be added to the string worldsheet.

Supersymmetry constrains the upper limit on the number of spacetime dimensions to be eleven. Why, then, do superstring theories stop at ten? In fact, before the “first string revolution” of the mid-1980’s, many physicists sought superunification in eleven-dimensional supergravity. Solutions to this most primitive supergravity theory include the elementary supermembrane and its dual partner, the solitonic superfivebrane. These are supersymmetric objects extended over two and five spatial dimensions, respectively. This brings to mind another question: why do superstring theories generalize zero-dimensional point particles only to one-dimensional strings, rather than p-dimensional objects?

During the “second superstring revolution” of the mid-nineties it was found that, in addition to the 1+1-dimensional string solutions, string theory contains soliton-like Dirichlet branes. These Dp-branes have p + 1-dimensional worldvolumes, which are hyperplanes in 9 + 1-dimensional spacetime on which strings are allowed to end. If a closed string collides with a D-brane, it can turn into an open string whose ends move along the D-brane. The end points of such an open string satisfy conventional free boundary conditions along the worldvolume of the D-brane, and fixed (Dirichlet) boundary conditions are obeyed in the 9 − p dimensions transverse to the D-brane.

D-branes make it possible to probe string theories non-perturbatively, i.e., when the interactions are no longer assumed to be weak. This more complete picture makes it evident that the different string theories are actually related via a network of “dualities.” T-dualities relate two different string theories by interchanging winding modes and Kaluza-Klein states, via R → α′/R. For example, Type IIA string theory compactified on a circle of radius R is equivalent to Type IIB string theory compactified on a circle of radius 1/R. We have a similar relation between E8 × E8 and SO(32) heterotic string theories. While T-dualities remain manifest at weak-coupling, S-dualities are less well-established strong/weak-coupling relationships. For example, the SO(32) heterotic string is believed to be S-dual to the SO(32) Type I string, while the Type IIB string is self-S-dual. There is a duality of dualities, in which the T-dual of one theory is the S-dual of another. Compactification on various manifolds often leads to dualities. The heterotic string compactified on a six-dimensional torus T6 is believed to be self-S-dual. Also, the heterotic string on T4 is dual to the type II string on four-dimensional K3. The heterotic string on T6 is dual to the Type II string on a Calabi-Yau manifold. The Type IIA string on a Calabi-Yau manifold is dual to the Type IIB string on the mirror Calabi-Yau manifold.
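
A small sketch of what the first of these statements means in practice (the mass formula is the standard textbook one for a closed string on a circle and is an outside assumption here, used purely as an illustration): exchanging Kaluza-Klein and winding numbers while sending R → α′/R leaves the spectrum unchanged.

```python
from itertools import product

alpha_prime = 1.0
R = 2.7                                     # an arbitrary compactification radius

def m2(n, w, R):
    # compactification part of the closed-string mass: momentum plus winding
    return (n / R) ** 2 + (w * R / alpha_prime) ** 2

charges  = list(product(range(-3, 4), repeat=2))
original = sorted(m2(n, w, R) for n, w in charges)
dual     = sorted(m2(w, n, alpha_prime / R) for n, w in charges)   # n <-> w, R -> alpha'/R

print(all(abs(a - b) < 1e-12 for a, b in zip(original, dual)))     # True: spectra coincide
```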

This led to the discovery that all five string theories are actually different sectors of an eleven-dimensional non-perturbative theory, known as M-theory. When M-theory is compactified on a circle S1 of radius R11, it leads to the Type IIA string, with string coupling constant gs = R11^(3/2). Thus, the illusion that this string theory is ten-dimensional is a remnant of weak-coupling perturbative methods. Similarly, if M-theory is compactified on a line segment S1/Z2, then the E8 × E8 heterotic string is recovered.

Just as a given string theory has a corresponding supergravity in its low-energy limit, eleven-dimensional supergravity is the low-energy limit of M-theory. Since we do not yet know what the full M-theory actually is, many different names have been attributed to the “M,” including Magical, Mystery, Matrix, and Membrane! Whenever we refer to “M-theory,” we mean the theory which subsumes all five string theories and whose low-energy limit is eleven-dimensional supergravity. We now have an adequate framework with which to understand a wealth of non-perturbative phenomena. For example, electric-magnetic duality in D = 4 is a consequence of string-string duality in D = 6, which in turn is the result of membrane-fivebrane duality in D = 11. Furthermore, the exact electric-magnetic duality has been extended to an effective duality of non-conformal N = 2 Seiberg-Witten theory, which can be derived from M-theory. In fact, it seems that all supersymmetric quantum field theories with any gauge group could have a geometrical interpretation through M-theory, as worldvolume fields propagating on a common intersection of stacks of p-branes wrapped around various cycles of compactified manifolds.

In addition, while perturbative string theory has vacuum degeneracy problems due to the billions of Calabi-Yau vacua, the non-perturbative effects of M-theory lead to smooth transitions from one Calabi-Yau manifold to another. Now the question to ask is not why do we live in one topology but rather why do we live in a particular corner of the unique topology. M-theory might offer a dynamical explanation of this. While supersymmetry ensures that the high-energy values of the Standard Model coupling constants meet at a common value, which is consistent with the idea of grand unification, the gravitational coupling constant just misses this meeting point. In fact, M-theory may resolve long-standing cosmological and quantum gravitational problems. For example, M-theory accounts for a microscopic description of black holes by supplying the necessary non-perturbative components, namely p-branes. This solves the problem of counting black hole entropy by internal degrees of freedom.

Pareto Optimality

There are some solutions. (“If you don’t give a solution, you are part of the problem”). Most important: human wealth should be set as the only goal in society and economy. Liberalism is ruinous for humans, while it may be optimal for fitter entities. Nobody is out there to take away the money of others without working for it, in a spirit of ‘revenge’ or ‘envy’ (basically justifying laziness) confiscating the hard-won earnings of others. No way. Nobody wants that. Nor the mindset that yours is the only way a rational person can think, and that anybody not ‘winning’ the game is a ‘loser’. Some of us, actually, do not even want to enter the game.

Yet – the big dilemma – that money-grabbing mentality is essential for the economy. Without it we would be equally doomed. But, what we will see now is that you will lose every last penny either way, even without divine intervention.

Having said that, the solution is to take away the money. Seeing that the system is not stable and accumulates the capital on a big pile, disconnected from humans, mathematically there are two solutions:

1) Put all the capital in the hands of the people. If a profit M’-M is made, this profit falls into the hands of the people that caused it. This seems fair, and mathematically stable. However, how is the wealth then to be distributed? That would be the task of politicians, and history has shown that they are a worse pest than capital. Politicians, actually, always wind up representing the capital. No country in the world ever managed to avoid it.

2) Let the system be as it is, which is great for giving people incentives to work and develop things, but at the end of the year, redistribute the wealth to follow an ideal curve that optimizes both wealth and increments of wealth.

The latter is an interesting idea. Also since it does not need rigorous restructuring of society, something that would only be possible after a total collapse of civilization. While unavoidable in the system we have, it would be better to act pro-actively and do something before it happens. Moreover, since money is air – or worse, vacuum – there is actually nothing that is ‘taken away’. Money is just a right to consume and can thus be redistributed at will if there is a just cause to do so. In normal cases this euphemistic word ‘redistribution’ amounts to theft and undermines incentives for work and production and thus causes poverty. Yet, if it can be shown to actually increase incentives to work, and thus increase overall wealth, it would need no further justification.

We set out to calculate this idea. However, it turned out to give quite remarkable results. Basically, the optimal distribution is slavery. Let us present them here. Let’s look at the distribution of wealth. The figure below shows a curve of wealth per person, with the richest conventionally placed at the right and the poor on the left, to result in what is in mathematics called a monotonically-increasing function. This virtual country has 10 million inhabitants and a certain wealth that ranges from nearly nothing to millions, but it can easily be mapped to any country.


Figure 1: Absolute wealth distribution function

As the overall wealth increases, it condenses over time at the right side of the curve. Left unchecked, the curve would become ever more skewed, ending eventually in a straight horizontal line at zero up to the last uttermost right point, where it shoots up to an astronomical value. The integral of the curve (total wealth/capital M) always increases, but it eventually goes to one person. Here it is intrinsically assumed that wealth is actually still connected to people and has not, as in fact happens, become independent of people, become ‘capital’ autonomous by itself. If it is independent of people, this wealth can anyway be confiscated and redistributed without any form of remorse whatsoever. Ergo, only the system in which all the wealth is owned by people needs to be studied.

A more interesting figure is the fractional distribution of wealth, with the normalized wealth w(x) plotted as a function of normalized population x (that thus runs from 0 to 1). Once again with the richest plotted on the right. See Figure below.


Figure 2: Relative wealth distribution functions: ‘ideal communist’ (dotted line, constant distribution), ‘ideal capitalist’ (one person owns all, dashed line) and ‘ideal’ functions (work-incentive optimized, solid line).

Every person x in this figure feels an incentive to work harder, because he or she wants to overtake the right-side neighbor and move to the right on the curve. We can define an incentive i(x) for work for person x as the derivative of the curve, divided by the curve itself (a person will work harder in proportion to the relative increase in wealth)

i(x) = (dw(x)/dx)/w(x) —– (1)

A ‘communistic’ (in the negative connotation) distribution is one in which everybody earns equally; that means that w(x) is constant, with the constant being one

‘ideal’ communist: w(x) = 1.

and nobody has an incentive to work, i(x) = 0 ∀ x. However, in a utopic capitalist world, as shown, the distribution is ‘all on a big pile’. This is what mathematicians call a delta-function

‘ideal’ capitalist: w(x) = δ(x − 1),

and once again, the incentive is zero for all people, i(x) = 0. If you work, or don’t work, you get nothing. Except one person who, working or not, gets everything.

Thus, there is somewhere an ‘ideal curve’ w(x) that optimizes the sum of incentives I defined as the integral of i(x) over x.

I = ∫01 i(x)dx = ∫01 (dw(x)/dx)/w(x) dx = ∫x=0x=1 dw(x)/w(x) = ln[w(x)]|x=0x=1 = ln[w(1)/w(0)] —– (2)
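
A quick numerical sanity check of (2), with an arbitrary strictly positive trial distribution (the particular curve below is an assumption, used only for the check): the integral of the incentive collapses to ln[w(1)/w(0)], whatever the interior shape of w.

```python
import numpy as np

x  = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
w  = 0.2 + 3.0 * x**2                      # any smooth, strictly positive w(x)

g = np.gradient(w, x) / w                  # the incentive i(x) = (dw/dx)/w
I_numeric = dx * (np.sum(g) - 0.5 * (g[0] + g[-1]))   # trapezoidal rule for ∫ i(x) dx
I_closed  = np.log(w[-1] / w[0])                      # ln[w(1)/w(0)]

print(I_numeric, I_closed)                 # both ≈ ln(3.2/0.2) ≈ 2.7726
```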

Which function w is that? Boundary conditions are

1. The total wealth is normalized: The integral of w(x) over x from 0 to 1 is unity.

01w(x)dx = 1 —– (3)

2. Everybody has at least a minimal income, defined as the survival minimum. (A concept that actually many societies implement). We can call this w0, defined as a percentage of the total wealth, to make the calculation easy (every year this parameter can be reevaluated, for instance when the total wealth has increased, but not the minimum wealth needed to survive). Thus, w(0) = w0.

The curve also has an intrinsic parameter wmax. This represents the scale of the figure, and is the result of the other boundary conditions and therefore not really a parameter as such. The function basically has two parameters, minimal subsistence level w0 and skewness b.

As an example, we can try an exponentially-rising function with offset that starts by being forced to pass through the points (0, w0) and (1, wmax):

w(x) = w0 + (wmax − w0)(ebx −1)/(eb − 1) —– (4)

An example of such a function is given in the above Figure. To analytically determine which function is ideal is very complicated, but it can easily be simulated in a genetic algorithm way. In this, we start with a given distribution and make random mutations to it. If the total incentive for work goes up, we keep that new distribution. If not, we go back to the previous distribution.
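
Here is a minimal sketch of such a hill-climbing (‘genetic’) search, using the 30-person population and survival minimum w0 = 1/300 quoted below. The discretization of the incentive (a forward difference of w divided by w), the mutation rule and the projection that enforces the constraints are all assumptions of this sketch, so the optima and numerical values it reaches need not coincide with those quoted below; the qualitative outcome, with almost everyone pushed to the survival minimum while the wealth piles up on very few, is of the same kind.

```python
import numpy as np

rng = np.random.default_rng(0)

N  = 30                    # population size
w0 = 1.0 / 300             # survival minimum: 10% of the average wealth 1/N

def total_incentive(w):
    # discrete stand-in for i(x) = (dw/dx)/w: relative gain of overtaking the neighbour
    return np.sum((w[1:] - w[:-1]) / w[:-1])

def project(w):
    # enforce the boundary conditions: everybody keeps at least w0, total wealth = 1
    surplus = np.maximum(w - w0, 0.0)
    if surplus.sum() == 0.0:
        return np.full_like(w, 1.0 / len(w))
    return w0 + surplus / surplus.sum() * (1.0 - len(w) * w0)

w = project(np.full(N, 1.0 / N))            # start from the 'communist' distribution
best = total_incentive(w)

for _ in range(200000):
    trial = w.copy()
    k = rng.integers(N)
    trial[k] *= np.exp(0.1 * rng.normal())  # random mutation of one person's wealth
    trial = project(trial)
    gain = total_incentive(trial)
    if gain > best:                         # keep the mutation only if incentive rises
        w, best = trial, gain

print(best)
print(np.round(w / w0, 1))                  # final wealth in units of the survival minimum
```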

The results are shown in figure 3 below for a 30-person population, with w0 = 10% of the average (w0 = 1/300 = 0.33%).


Figure 3: Genetic algorithm results for the distribution of wealth (w) and incentive to work (i) in a liberal system where everybody only has money (wealth) as incentive. 

Depending on the starting distribution, the system winds up in different optima. If we start with the communistic distribution of figure 2, we wind up with a situation in which the distribution stays homogeneous, ‘everybody equal’, with the exception of two people: a ‘slave’, who earns the minimum wage and does nearly all the work, and a ‘party official’, who does not do much but gets a large part of the wealth. Everybody else is equally poor (total incentive/production equal to 21), w = 1/30 = 10w0, with most people doing nothing, nor being encouraged to do anything. The other situation is found when we start with a random distribution or a linearly increasing distribution. The final situation is shown in situation 2 of figure 3. It is equal to everybody getting the minimum wealth, w0, except the ‘banker’ who gets 90% (270 times more than the minimum), while nobody is doing anything, except, curiously, the penultimate person, whom we can call the ‘wheedler’, for cajoling the banker into giving him money. The total wealth is higher (156), but the average person gets less, w0.

Note that this isn’t necessarily an evolution of the distribution of wealth over time. Instead, it is a final, stable, distribution calculated with an evolutionary (‘genetic’) algorithm. Moreover, this analysis can be made within a country, analyzing the distribution of wealth between people of the same country, as well as between countries.

We thus find that a liberal system, moreover one in which people are motivated by the relative wealth increase they might attain, winds up with most of the wealth accumulated by one person who does not necessarily do any work. This is then consistent with the tendency of liberal capitalist societies to have the capital and wealth indeed accumulate at a single point, and consistent with Marx’s theories that predict it as well. A singularity of the distribution of wealth is what you get in a liberal capitalist society where personal wealth is the only driving force of people. Which is ironic, in a way, because by going only for personal wealth, nobody gets any of it, except the big leader. It is a form of Prisoner’s Dilemma.

Liberalism.


In a humanistic society, boundary conditions (‘laws’) are set which are designed to make the lives of human beings optimal. The laws are made by government. Yet, the skimming of surplus labor by the capital is only overshadowed by the skimming by politicians. Politicians are often ‘auto-invited’ (by colleagues) onto the boards of directors of companies (the capital), further enabling them to amass buying power. This shows that, in most countries, the differences between the capital and the political class are flimsy if not non-existent. As an example, all communist countries, in fact, were pure capitalist implementations, with the distinction that a greater share of the skimming was done by politicians compared to more conventional capitalist societies.

One form of a humanistic (!!!!!????) government is socialism, which has set as its goal the welfare of humans. One can argue whether socialism is a good form with which to achieve a humanistic society. Maybe it is not efficient in reaching this goal, whatever ‘efficient’ may mean, given the difficulty of defining that concept.

Another form of government is liberalism. Before we continue, it is remarkable to observe that in practical ‘liberal’ societies, everything is free and allowed, except the creation of banks and doing banking. By definition, a ‘liberal government’ is a contradiction in terms. A real liberal government would be called ‘anarchy’. ‘Liberal’ is a name given by politicians to make people think they are free, while in fact it is the most binding and oppressing form of government.

Liberalism, by definition, has set no boundary conditions. A liberal society has at its core the absence of goals. Everything is left free; “Let a Darwinistic survival-of-the-fittest mechanism decide which things are ‘best'”. Best are, by definition, those things that survive. That means that it might be the case that humans are a nuisance. Inefficient monsters. Does this idea look far-fetched? May it be so that in a liberal society, humans will disappear and only capital (the money and the means of production) will survive in a Darwinistic way? Mathematically it is possible. Let me show you.

Trade unions are organizations that represent the humans in this cycle and they are the ways to break the cycle and guarantee minimization of the skimming of laborers. If you are human, you should like trade unions. (If you are a bank manager, you can – and should – organize yourself in a bank-managers trade union). If you are capital, you do not like them. (And there are many spokesmen of the capital in the world, paid to propagate this dislike). Capital, however, in itself cannot ‘think’, it is not human, nor has it a brain, or a way to communicate. It is just a ‘concept’, an ‘idea’ of a ‘system’. It does not ‘like’ or ‘dislike’ anything. You are not capital, even if you are paid by it. Even if you are paid handsomely by it. Even if you are paid astronomically by it. (In the latter case you are probably just an asocial asshole!!!!). We can thus morally confiscate as much from the capital we wish, without feeling any remorse whatsoever. As long as it does not destroy the game; destroying the game would put human happiness at risk by undermining the incentives for production and reduce the access to consumption.

On the other hand, the spokesmen of the capital will always talk about labor cost containment, because that will increase the marginal profit M’-M. Remember this, next time somebody talks in the media. Who is paying their salary? To give an idea how much you are being fleeced, compare your salary to that of difficult-to-skim, strike-prone, trade-union-bastion professions, like train drivers. The companies still hire them, implying that they still bring a net profit to the companies, in spite of their astronomical salaries. You deserve the same salary.

Continuing. For the capital, there is no ‘special place’ for human labor power LP. If the Marxist equation can be replaced by

M – C{MoP} – P – C’ – M’

i.e., without LP, capital would do just that, if that is optimizing M’-M. Mathematically, there is no difference whatsoever between MoP and LP. The only thing a liberal system seeks is optimization. It does not care at all, in no way whatsoever, how this is achieved. The more liberal the better. Less restrictions, more possibilities for optimizing marginal profit M’-M. If it means destruction of the human race, who cares? Collateral damage.

To make my point: Would you care if you had to pay (feed) monkeys one-cent peanuts to find you kilo-sized gold nuggets? Do you care if no human LP is involved in your business scheme? I guess you just care about maximizing your skimming of the labor power involved, be they human, animal or mechanic. Who cares?

There is only one problem. Somebody should consume the products made (no monkey cares about your gold nuggets). That is why the French economist Jean-Baptiste Say said “Every product creates its own demand”. If nobody can pay for the products made (because no LP is paid for the work done), the products cannot be sold, and the cycle stops at the step C’-M’, the M’ becoming zero (not sold), the profit M’-M reduced to a loss M and the company goes bankrupt.

However, individual companies can sell products, as long as there are other companies in the world still paying LP somewhere. Companies everywhere in the world thus still have a tendency to robotize their production. Companies exist in the world that are nearly fully robotized. The profit, now effectively skimming of the surplus of MoP-power instead of labor power, fully goes to the capital, since MoP has no way of organizing itself in trade unions and demanding more ‘payment’. Or, and be careful with this step here – a step Marx could never have imagined – what if the MoP start consuming as well? Imagine that a factory robot needs parts. New robot arms, electricity, water, cleaning, etc. Factories will start making these products. There is a market for them. Hail the market! Now we come to the conclusion that the ‘system’, when liberalized, will optimize the production (it is the only intrinsic goal).

Preindustrial (without tools): M – C{LP} – P – C’ – M’

Marxian: M – C{MoP, LP} – P – C’ – M’

Post-modern: M – C{MoP} – P – C’ – M’

If the latter is most efficient, in a completely liberalized system, it will be implemented.

This means

1) No (human) LP will be used in production

2) No humans will be paid for work of producing

3) No human consumption is possible

4) Humans will die from lack of consumption

In a Darwinistic way humanity will die to be substituted by something else; we are too inefficient to survive. We are not fit for this planet. We will be substituted by the exact things we created. There is nowhere a rule written “liberalism, with the condition that it favors humans”. No, liberalism is liberalism. It favors the fittest.

It went well so far. As long as we had exponential growth, even if the growth rate for MoP was far larger than the growth rate for rewards for LP, LP too was rewarded increasingly. When the exponential growth stops, when the system reaches saturation as it seems to do now, only the strongest survive. That is not necessarily mankind. Mathematically it can be either one or the other, without preference; the Marxian equation is symmetrical. The future will tell. Maybe the MoP (they will probably also acquire intelligence and reason at some point) will later discuss how they won the race, the same way we, Homo Sapiens, currently talk about “those backward unfit Neanderthals”.

Your ideal dream job would be to manage the peanut bank, monopolizing the peanut supply, while the peanut eaters build for you palaces in the Italian Riviera and feed you grapes while you enjoy the scenery. Even if you were one of the few remaining humans. A world in which humans are extinct is not a far-fetched world. It might be the result of a Darwinian selection of the fittest.

The Locus of Renormalization. Note Quote.


Since symmetries and the related conservation properties have a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken or new ones are obtained after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is the passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to a sudden common orientation of spins, up to the completely ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is the iteration of a statistically invariant motif. Similarly for the formation of a snowflake . . . . In all these circumstances, a “new physical object of observation” is formed. Most of the current analyses deal with transitions at equilibrium; the less studied and more challenging case of far-from-equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions, that is, mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. relevant infinity or divergences) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interactions of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.
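
A minimal sketch of this "scale interval by scale interval" analysis in the simplest textbook setting (the one-dimensional Ising chain, an example not discussed above): decimating every second spin replaces the coupling K by K′ = ½ ln cosh 2K between the remaining spins, and iterating the map shows the coupling flowing to zero, i.e. no finite-temperature critical point in one dimension.

```python
import numpy as np

# Real-space renormalization (decimation) for the 1D Ising chain: summing over
# every second spin in exp(K * s_i * s_{i+1}) yields an effective coupling
# K' = 0.5 * ln(cosh(2K)) between the surviving spins.

def decimate(K):
    return 0.5 * np.log(np.cosh(2.0 * K))

K = 2.0                        # start at strong coupling (low temperature)
for step in range(8):
    print(step, round(K, 6))
    K = decimate(K)            # each step removes half of the degrees of freedom
# K decreases monotonically toward the trivial fixed point K* = 0: the 1D chain
# has no critical transition at any finite temperature.
```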

In this situation, the intelligibility still has an “upward” flavor since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories, the parts are not really separable from the whole (this would mean to separate an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot be summed over (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained by the scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking only at molecules as the only pertinent observable of life is worse than reductionist; it goes against the history of physics and its audacious unifications and invention of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system, a cell’s genetic regulatory networks, the brain and brain slices … are “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done for the profound mathematical frame for physics. This does not forbid a scientific analysis of life. This may just be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as there is in physics. The point is that, since the work of Noether and Weyl in the last century, these are better understood and correlated as symmetries in the intended equations, where they express the underlying invariants and invariant-preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and in cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expression, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections . . . . And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable, also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions in an organism, in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure, that is it changes the internal symmetries: the fish with the new bladder will swim differently, its heart-vascular system will relevantly change ….

Organisms transform the ecosystem while transforming themselves, and they can withstand and do this because they have an internal preserved universe. Its stability is maintained also by slightly, yet constantly, changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into the permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to the biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human. It has some 150 different microbial species in the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not understand yet, the “community” in the intestines co-creates their worlds together, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained even if it may change by immigration of new microbial species into the community and extinction of old species in the community. With such community membership turnover, or community assembly, the phase space of the system is undergoing continual and open ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new unprestatable functions, where functions and their names only make sense in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One finds out a feature in a niche because one observes that if it goes away the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing over time these spaces and their internal relations. Mathematically, no symmetries (groups) nor (partial) orders (semigroups) organize the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical nor logical problems in contemporary logic (this is so also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own border, presents technical difficulties for a careful equational description of its dynamics, but no mathematical nor logical impossibility: one has to optimize a highly non-linear and large action/reaction system, yielding a dynamically constructed geodesic, the river path, in perfectly known phase spaces (momentum and space, or energy and time, say, as pertinent observables and variables).

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not just within prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but in the ever changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.

Whitehead’s Non-Anthropocentric Quantum Field Ontology. Note Quote.


Whitehead builds also upon James’s claim that “The thought is itself the thinker”.

Either your experience is of no content, of no change, or it is of a perceptible amount of content or change. Your acquaintance with reality grows literally by buds or drops of perception. Intellectually and on reflection you can divide them into components, but as immediately given they come totally or not at all. — William James.

If the quantum vacuum displays features that make it resemble a material, albeit a really special one, we can immediately ask: then what is this material made of? Is it a continuum, or are there “atoms” of vacuum? Is vacuum the primordial substance of which everything is made? Let us start by decoupling the concept of vacuum from that of spacetime. The concept of vacuum as accepted and used in standard quantum field theory is tied with that of spacetime. This is important for the theory of quantum fields, because it leads to observable effects. It is the variation of geometry, either as a change in boundary conditions or as a change in the speed of light (and therefore the metric) which is responsible for the creation of particles. Now, one can legitimately go further and ask: which one is the fundamental “substance”, the space-time or the vacuum? Is the geometry fundamental in any way, or is it just a property of the empty space emerging from a deeper structure? That geometry and substance can be separated is of course not anything new for philosophers. Aristotle’s distinction between form and matter is one example. For Aristotle the “essence” becomes a true reality only when embodied in a form. Otherwise it is just a substratum of potentialities, somewhat similar to what quantum physics suggests. Immanuel Kant was even more radical: the forms, or in general the structures that we think of as either existing in or as being abstracted from the realm of noumena are actually innate categories of the mind, preconditions that make possible our experience of reality as phenomena. Structures such as space and time, causality, etc. are a priori forms of intuition – thus by nature very different from anything from the outside reality, and they are used to formulate synthetic a priori judgments. But almost everything that was discovered in modern physics is at odds with Kant’s view. In modern philosophy perhaps Whitehead’s process metaphysics provides the closest framework for formulating these problems. For Whitehead, potentialities are continuous, while the actualizations are discrete, much like in the quantum theory the unitary evolution is continuous, while the measurement is non-unitary and in some sense “discrete”. An important concept is the “extensive continuum”, defined as a “relational complex” containing all the possibilities of objectification. This continuum also contains the potentiality for division; this potentiality is effected in what Whitehead calls “actual entities (occasions)” – the basic blocks of his cosmology. The core issue for both Whiteheadian Process and Quantum Process is the emergence of the discrete from the continuous. But what fixes, or determines, the partitioning of the continuous whole into the discrete set of subsets? The orthodox answer is this: it is an intentional action of an experimenter that determines the partitioning! But, in Whiteheadian process the world of fixed and settled facts grows via a sequence of actual occasions. The past actualities are the causal and structural inputs for the next actual occasion, which specifies a new space-time standpoint (region) from which the potentialities created by the past actualities will be prehended (grasped) by the current occasion. This basic autogenetic process creates the new actual entity, which, upon becoming actual, contributes to the potentialities for the succeeding actual occasions.
For the pragmatic physicist, since the extensive continuum provides the space of possibilities from which the actual entities arise, it is tempting to identify it with the quantum vacuum. The actual entities are then assimilated to events in spacetime, as resulting from a quantum measurement, or simply to particles. One caveat is due, however: Whitehead’s extensive continuum is also devoid of geometrical content, while the quantum vacuum normally carries information about the geometry, be it flat or curved.

Objective/absolute actuality consists of a sequence of psycho-physical quantum reduction events, identified as Whiteheadian actual entities/occasions. These happenings combine to create a growing “past” of fixed and settled “facts”. Each “fact” is specified by an actual occasion/entity that has a physical aspect (pole) and a region in space-time from which it views reality. The physical input is precisely the aspect of the physical state of the universe that is localized along the part of the contemporary space-like surface σ that constitutes the front of the standpoint region associated with the actual occasion. The physical output is the reduced state ψ(σ) on this space-like surface σ. The mental pole likewise consists of an input and an output. The mental inputs and outputs have the ontological character of thoughts, ideas, or feelings, and they play an essential dynamical role in unifying, evaluating, and selecting discrete classically conceivable activities from among the continuous range of potentialities offered by the operation of the physically describable laws. The paradigmatic example of an actual occasion is an event whose mental pole is experienced by a human being as an addition to his or her stream of conscious events, and whose output physical pole is the neural correlate of that experiential event. Such events are “high-grade” actual occasions. But the Whitehead/Quantum ontology postulates that simpler organisms will have fundamentally similar but lower-grade actual occasions, and that there can be actual occasions associated with any physical system that possesses a physical structure able to support physically effective mental interventions of the kind described above. Thus the Whitehead/Quantum ontology is essentially an ontologicalization of the structure of orthodox relativistic quantum field theory, stripped of its anthropocentric trappings. It identifies the essential physical and psychological aspects of contemporary orthodox relativistic quantum field theory, and lets them be essential features of a general non-anthropocentric ontology.
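To make concrete the remark above that it is the variation of geometry (moving boundaries, a changing effective metric) that creates particles out of the vacuum, the standard textbook argument runs through Bogoliubov coefficients. The following is only a minimal sketch in a generic mode-function notation, not tied to any particular field, spacetime, or to the Whiteheadian reading discussed here. One expands the same field in the “in” modes u_k and the “out” modes v_k:

\[
  \phi \;=\; \sum_k \bigl( a_k\, u_k + a_k^{\dagger}\, u_k^{*} \bigr)
        \;=\; \sum_k \bigl( b_k\, v_k + b_k^{\dagger}\, v_k^{*} \bigr),
  \qquad
  b_k \;=\; \alpha_k\, a_k + \beta_k^{*}\, a_{-k}^{\dagger}.
\]

% If the geometry changes, the two mode bases differ, so beta_k is nonzero
% and the "in" vacuum is populated with "out" quanta:
\[
  \langle 0_{\mathrm{in}} \,|\, N_k^{\mathrm{out}} \,|\, 0_{\mathrm{in}} \rangle
  \;=\; |\beta_k|^{2}.
\]

The non-vanishing of β_k is exactly the statement that the “in” vacuum is not the “out” vacuum; this is the mechanism usually invoked for the dynamical Casimir effect and for particle creation in time-dependent geometries.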

It is reasonable to expect that the continuous differentiable manifold that we use as spacetime in physics (and experience in our daily life) is a coarse-grained manifestation of a deeper reality, perhaps also of quantum (probabilistic) nature. This search for the underlying structure of spacetime is part of the wider effort of bringing together quantum physics and the theory of gravitation under the same conceptual umbrella. From various theoretical considerations, it is inferred that this unification should account for physics at the incredibly small scale set by the Planck length, about 10⁻³⁵ m, where the effects of gravitation and quantum physics would be comparable. What happens below this scale, and which concepts will survive in the new description of the world, is not known. An important point is that, in order to incorporate the main conceptual innovation of general relativity, the theory should be background-independent. This contrasts with the case of the other fields (electromagnetic, Dirac, etc.), which live in the classical background provided by gravitation. The problem with quantizing gravitation is that – if we believe that the general theory of relativity holds in the regime where quantum effects of gravitation would appear, that is, beyond the Planck scale – there is no underlying background on which the gravitational field lives. There are several suggestions and models for a “pre-geometry” (a term introduced by Wheeler) that are currently under active investigation; this remains a question of ongoing debate, and several research programs in quantum gravity (loops, spinfoams, noncommutative geometry, dynamical triangulations, etc.) have proposed different lines of attack. Spacetime would then be an emergent entity, an approximation valid only at scales much larger than the Planck length.

Incidentally, nothing guarantees that background-independence itself is a fundamental concept that will survive in the new theory. For example, string theory is an approach to unifying the Standard Model of particle physics with gravitation that uses quantization in a fixed (non-dynamic) background. In string theory, gravitation is just another force, with the graviton (zero mass and spin 2) obtained as one of the string modes in the perturbative expansion. A background-independent formulation of string theory would be a great achievement, but so far it is not known whether it can be achieved.
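As a back-of-the-envelope illustration of why the Planck length marks the scale at which gravitational and quantum effects become comparable, one can ask for the mass at which an object’s Compton wavelength equals its Schwarzschild radius. This is only the usual dimensional heuristic (numerical factors of order one are dropped), not a result of any of the quantum-gravity programs mentioned above:

% Compton wavelength ~ Schwarzschild radius fixes the Planck mass;
% the corresponding Compton wavelength is the Planck length.
\[
  \frac{\hbar}{m c} \;\sim\; \frac{2 G m}{c^{2}}
  \;\;\Longrightarrow\;\;
  m_P \sim \sqrt{\frac{\hbar c}{G}} \approx 2\times 10^{-8}\,\mathrm{kg},
  \qquad
  \ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6\times 10^{-35}\,\mathrm{m}.
\]

Below ℓ_P a localized quantum of energy would, on this naive estimate, already curve spacetime strongly enough to hide itself behind a horizon, which is why the classical picture of a smooth background is not expected to survive there.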