The Natural Theoretic of Electromagnetism. Thought of the Day 147.0

In Maxwell’s theory, the field strength F = 1/2 Fμν dxμ ∧ dxν is a real 2-form on spacetime, and hence a natural object. The homogeneous Maxwell equation dF = 0 is an equation involving forms and it has a well-known local solution F = dA’, i.e. there exists a local spacetime 1-form A’ which is a potential for the field strength F. Of course, if spacetime is contractible, as e.g. for Minkowski space, the solution is also a global one. As is well known, in the non-commutative Yang-Mills case the field strength F = 1/2 FAμν TA ⊗ dxμ ∧ dxν is no longer a spacetime form. This is a somewhat trivial remark, since the transformation laws of such a field strength are obtained as the transformation laws of the curvature of a principal connection with values in the Lie algebra of some (semisimple) non-Abelian Lie group G (e.g. G = SU(n), n ≥ 2). However, the common belief that electromagnetism is to be intended as the particular case (for G = U(1)) of a non-commutative theory is not really physically evident. Even if we subscribe to this common belief, which is motivated also by the tremendous success of the quantized theory, let us for a while discuss electromagnetism as a standalone theory.
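The fact that a field strength derived from a potential automatically satisfies the homogeneous equation can be illustrated numerically in three-dimensional vector-calculus language, where F = dA becomes B = ∇ × A and dF = 0 becomes ∇ · B = 0. A minimal sketch (the potential A is an arbitrary illustrative choice):

```python
import numpy as np

n = 32
h = 1.0 / n
x, y, z = np.meshgrid(*(np.linspace(0.0, 1.0, n, endpoint=False),) * 3, indexing="ij")

# An arbitrary smooth "potential" A (purely illustrative choice)
A = np.stack([np.sin(2 * np.pi * y), np.cos(2 * np.pi * z), x * y])

def curl(A, h):
    """B = curl A via central differences (np.gradient)."""
    dAz_dy = np.gradient(A[2], h, axis=1)
    dAy_dz = np.gradient(A[1], h, axis=2)
    dAx_dz = np.gradient(A[0], h, axis=2)
    dAz_dx = np.gradient(A[2], h, axis=0)
    dAy_dx = np.gradient(A[1], h, axis=0)
    dAx_dy = np.gradient(A[0], h, axis=1)
    return np.stack([dAz_dy - dAy_dz, dAx_dz - dAz_dx, dAy_dx - dAx_dy])

B = curl(A, h)
div_B = sum(np.gradient(B[i], h, axis=i) for i in range(3))

# In the interior (away from one-sided boundary stencils) div B vanishes
print(np.abs(div_B[2:-2, 2:-2, 2:-2]).max())
```

Away from the grid boundary the central-difference mixed partials commute exactly, so the divergence vanishes up to floating-point rounding — the discrete shadow of d² = 0.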

From a mathematical viewpoint this is a (different) approach to electromagnetism, and the choice between the two can be settled on physical grounds only. Of course the 1-form A’ is defined modulo a closed form, i.e. locally A” = A’ + dα is another solution.

How can one decide whether the potential of electromagnetism should be considered as a 1-form or rather as a principal connection on a U(1)-bundle? First of all we notice that, by a standard hole argument (one can easily define compactly supported closed 1-forms, e.g. as the differentials of compactly supported functions, which always exist on a paracompact manifold), the potentials A’ and A” represent the same physical situation. On the other hand, from a mathematical viewpoint we would like the dynamical field, i.e. the potential A’, to be a global section of some suitable configuration bundle. This requirement is a mathematical one, motivated by the wish for a well-defined geometrical perspective based on global Variational Calculus.

The first mathematical way out is to restrict attention to contractible spacetimes, where A’ may always be chosen to be global. Then one can require the gauge transformations A” = A’ + dα to be Lagrangian symmetries. In this way, field equations select a whole equivalence class of gauge-equivalent potentials, a procedure which solves the hole argument problem. In this picture the potential A’ is really a 1-form, which can be dragged along spacetime diffeomorphisms and which admits the ordinary Lie derivatives of 1-forms. Unfortunately, the restriction to contractible spacetimes is physically unmotivated and probably wrong.

Alternatively, one can restrict electromagnetic fields F, deciding that only exact 2-forms F are allowed. That actually restricts the observable physical situations, by changing the homogeneous Maxwell equations (i.e. Bianchi identities) by requiring that F is not only closed but exact. One should in principle be able to empirically reject this option.

On non-contractible spacetimes, one is necessarily forced to resort to a more “democratic” attitude. The spacetime is covered by a number of patches Uα. On each patch Uα one defines a potential A(α). In the intersection of two patches the two potentials A(α) and A(β) may not agree. In each patch, in fact, the observer chooses his own conventions and finds a different representative of the electromagnetic potential, which is related by a gauge transformation to the representatives chosen in the neighbouring patch(es). Thence we have a family of gauge transformations, one in each intersection Uαβ, which obey cocycle identities. If one recognizes in them the action of U(1), then one can build a principal bundle P = (P, M, π; U(1)) and interpret the ensuing potential as a connection on P. This opens the way to the gauge natural formalism.
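The cocycle identity mentioned above can be made concrete: if each observer’s representative differs from his neighbour’s by a U(1) gauge transformation, the transition functions multiply to one on triple overlaps. A toy sketch (the gauge parameters are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=5)  # sample points in a triple overlap U_a ∩ U_b ∩ U_c

# Hypothetical local gauge parameters, one per patch (illustrative choices)
alphas = [lambda t, c=c: np.sin(c * t) for c in (1.0, 2.0, 3.0)]

def g(i, j, t):
    """U(1) transition function relating the representatives A_(i) and A_(j)."""
    return np.exp(1j * (alphas[i](t) - alphas[j](t)))

# Cocycle identity on the triple overlap: g_ab · g_bc · g_ca = 1
prod = g(0, 1, x) * g(1, 2, x) * g(2, 0, x)
print(np.allclose(prod, 1.0))
```

Here the cocycle is trivial by construction (each transition function comes from patchwise gauge parameters); a genuinely non-trivial cocycle is exactly what distinguishes a non-trivial U(1)-bundle.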

Anyway this does not close the matter. One can investigate if and when the principal bundle P, in addition to the obvious principal structure, can also be endowed with a natural structure. If that were possible, then the bundle of connections Cp (which is associated to P) would also be natural. The problem of deciding whether a given gauge natural bundle can be endowed with a natural structure is quite difficult in general, and no full theory is yet completely developed in mathematical terms. That is to say, there is no complete classification of the topological and differential-geometric conditions which a principal bundle P has to satisfy in order to ensure that, among the principal trivializations which determine its gauge natural structure, one can choose a sub-class of trivializations which induce a purely natural bundle structure. Nor is it clear how many inequivalent natural structures a good principal bundle may support. There are, though, important examples of bundles which support at the same time a natural and a gauge natural structure. Actually any natural bundle is associated to some frame bundle L(M), which is principal; thence each natural bundle is also gauge natural in a trivial way. Since on any paracompact manifold one can choose a global Riemannian metric g, the corresponding tangent bundle T(M) can be associated to the orthonormal frame bundle O(M, g) besides being obviously associated to L(M). Thence the natural bundle T(M) may also be endowed with a gauge natural bundle structure with structure group O(m). And if M is orientable, the structure can be further reduced to a gauge natural bundle with structure group SO(m).

Roughly speaking, the task is achieved by imposing restrictions on the cocycles which generate T(M), i.e. by selecting a privileged class of changes of local laboratories and sets of measures: one requires the cocycle ψ(αβ) to take its values in O(m) rather than in the larger group GL(m). Inequivalent gauge natural structures are in one-to-one correspondence with (non-isometric) Riemannian metrics on M. Actually, whenever there is a Lie group homomorphism ρ : GL(m) → G onto some given Lie group G, we can build a natural G-principal bundle on M. In fact, let (Uα, ψ(α)) be an atlas of the given manifold M, ψ(αβ) its transition functions and jψ(αβ) the induced transition functions of L(M). Then we can define a G-valued cocycle on M by setting ρ(jψ(αβ)) and thence a (unique up to fibered isomorphisms) G-principal bundle P(M) = (P(M), M, π; G). The bundle P(M), as well as any gauge natural bundle associated to it, is natural by construction. We can now define a whole family of natural U(1)-bundles Pq(M) by using the bundle homomorphisms

ρq: GL(m) → U(1): J ↦ exp(iq ln det|J|) —– (1)

where q is any real number and ln denotes the natural logarithm. In the case q = 0 the image of ρ0 is the trivial group {I}, and all the induced bundles are trivial, i.e. P = M × U(1).
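One can check numerically that (1) really is a group homomorphism, since det is multiplicative and |·| and ln turn the product into a sum. A small sketch (q and the matrices are arbitrary choices):

```python
import numpy as np

q = 0.7  # any real number, as in (1)

def rho(J):
    """rho_q(J) = exp(i q ln|det J|), J ∈ GL(m)."""
    return np.exp(1j * q * np.log(abs(np.linalg.det(J))))

rng = np.random.default_rng(1)
J1, J2 = rng.normal(size=(2, 3, 3))  # two generic GL(3) elements

# Homomorphism property: follows from det(J1 J2) = det(J1) det(J2)
print(np.isclose(rho(J1 @ J2), rho(J1) * rho(J2)))
```

The same multiplicativity of the Jacobian determinant under composition is what makes the lift (2) below functorial.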

The natural lift φ’ of a diffeomorphism φ: M → M is given by

φ'[x, e]α = [φ(x), exp(iq ln det|J|) · e]α —– (2)

where J is the Jacobian of the morphism φ. The bundles Pq(M) are all trivial since they admit a global section. In fact, on any manifold M one can define a global Riemannian metric g, whose associated local sections glue together.

Since the bundles Pq(M) are all trivial, they are all isomorphic to M x U(1) as principal U(1)-bundles, though in a non-canonical way unless q = 0. Any two of the bundles Pq1(M) and Pq2(M) for two different values of q are isomorphic as principal bundles but the isomorphism obtained is not the lift of a spacetime diffeomorphism because of the two different values of q. Thence they are not isomorphic as natural bundles. We are thence facing a very interesting situation: a gauge natural bundle C associated to the trivial principal bundle P can be endowed with an infinite family of natural structures, one for each q ∈ R; each of these natural structures can be used to regard principal connections on P as natural objects on M and thence one can regard electromagnetism as a natural theory.

Now that the mathematical situation has been a little bit clarified, it is again a matter of physical interpretation. One can in fact restrict to electromagnetic potentials which are a priori connections on a trivial structure bundle P ≅ M x U(1) or to accept that more complicated situations may occur in Nature. But, non-trivial situations are still empirically unsupported, at least at a fundamental level.

The Canonical of a priori and a posteriori Variational Calculus as Phenomenologically Driven. Note Quote.

The expression variational calculus usually identifies two different but related branches of Mathematics. The first aims to produce theorems on the existence of solutions of (partial or ordinary) differential equations generated by a variational principle, and is a branch of local analysis (usually in Rn); the second uses techniques of differential geometry to deal with the so-called variational calculus on manifolds.

The local-analytic paradigm is often aimed at dealing with particular situations, when it is necessary to pay attention to the exact definition of the functional space which needs to be considered. That functional space is very sensitive to boundary conditions. Moreover, minimal requirements on the data are investigated in order to allow the existence of (weak) solutions of the equations.

On the contrary, the global-geometric paradigm investigates the minimal structures which allow one to pose variational problems on manifolds, extending what is done in Rn but usually being quite generous about regularity hypotheses (e.g. hardly ever does one consider less than C∞-objects). Since, even on manifolds, the search for solutions starts with a local problem (for which one can use local analysis), the global-geometric paradigm hardly ever deals with exact solutions, unless the global geometric structure of the manifold strongly constrains the existence of solutions.

A further, a priori different, approach is that of Physics. In Physics one usually has field equations which are locally given on a portion of an unknown manifold. One thence starts to solve field equations locally in order to find a local solution, and only afterwards one tries to find the maximal analytical extension (if any) of that local solution. The maximal extension can be regarded as a global solution on a suitable manifold M, in the sense that the extension defines M as well. In fact, one first proceeds to solve field equations in a coordinate neighbourhood; afterwards, one changes coordinates and tries to extend the found solution beyond the original patches as long as possible. The coordinate changes are the cocycle of transition functions with respect to the atlas, and they define the base manifold M. This approach is essential in physical applications when the base manifold is a priori unknown, as in General Relativity, and has to be determined by physical inputs.

Luckily enough, this approach does not disagree with the standard variational calculus approach, in which the base manifold M is instead fixed from the very beginning. One can regard the variational problem as the search for a solution on that particular base manifold. Global solutions on other manifolds may be found using other variational principles on different base manifolds. Even for this reason, the variational principle should be universal, i.e. one defines a family of variational principles: one for each base manifold, or at least one for any base manifold in a “reasonably” wide class of manifolds. This strong requirement is physically motivated by the belief that Physics should work more or less in the same way regardless of the particular spacetime which is actually realized in Nature. Of course, a scenario would be conceivable in which everything works because of the particular (topological, differentiable, etc.) structure of the spacetime. This position, however, is not desirable from a physical viewpoint since, in this case, one has to explain why that particular spacetime is realized (a priori or a posteriori).

In spite of the aforementioned strong regularity requirements, the spectrum of situations one can encounter is unexpectedly wide, covering the whole of fundamental physics. Moreover, it is surprising how effectual the geometric formalism is for identifying the basic structures of field theories. In fact, just requiring the theory to be globally well-defined and to depend on physical data only often constrains very strongly the choice of the local theories to be globalized. These constraints are one of the strongest motivations for choosing a variational approach in physical applications. Another motivation is a well-formulated framework for conserved quantities. A global-geometric framework is a priori necessary to deal with conserved quantities, which are non-local.

In the modern perspective of Quantum Field Theory (QFT), the basic object encoding the properties of any quantum system is the action functional. From a quantum viewpoint the action functional is more fundamental than the field equations, which are obtained in the classical limit. The geometric framework provides drastic simplifications of some key issues, such as the definition of the variation operator. The variation is deeply geometric though, in practice, it coincides with the definition given in the local-analytic paradigm. In the latter case, the functional derivative is usually the directional derivative of the action functional, which is a function on the infinite-dimensional space of fields defined on a region D together with some boundary conditions on the boundary ∂D. To define it one should first define the functional space, then define some notion of deformation which preserves the boundary conditions (or, equivalently, topologize the functional space), define a variation operator on the chosen space, and, finally, prove the most commonly used properties of derivatives. Once one has done this, one finds in principle the same results that would be found using the geometric definition of variation (for which no infinite-dimensional space is needed). In fact, in any case of interest for fundamental physics, the functional derivative is simply defined by means of the derivative of a real function of one real variable. The Lagrangian formalism is a shortcut which translates the variation of (infinite-dimensional) action functionals into the variation of the (finite-dimensional) Lagrangian structure.
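The remark that the functional derivative reduces to the derivative of a real function of one real variable can be illustrated on a discretized one-dimensional action. In this sketch (the field, the deformation and the Lagrangian are arbitrary illustrative choices), the ε-derivative of S[φ + εη] is compared with the Euler–Lagrange pairing:

```python
import numpy as np

# Discretized action S[phi] = ∫ (phi'^2/2 + phi^2/2) dx on [0, 1] (toy Lagrangian)
n = 400
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

def integ(f):  # trapezoidal rule
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

def S(phi):
    return integ(0.5 * np.gradient(phi, h) ** 2 + 0.5 * phi**2)

phi = np.sin(np.pi * x)            # arbitrary field configuration
eta = x * (1 - x) * np.cos(3 * x)  # deformation vanishing on the boundary

# The variation as the derivative of the real function eps -> S[phi + eps*eta]
eps = 1e-6
dS = (S(phi + eps * eta) - S(phi - eps * eta)) / (2 * eps)

# Compare with the Euler-Lagrange pairing ∫ eta · (-phi'' + phi) dx
phi_xx = np.gradient(np.gradient(phi, h), h)
dS_el = integ(eta * (-phi_xx + phi))
print(abs(dS - dS_el))  # small: the two notions of variation agree
```

No infinite-dimensional functional space is needed here: the whole variation lives in the single real parameter ε.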

Another feature of the geometric framework is the possibility of dealing with non-local properties of field theories. There are, in fact, phenomena, such as monopoles or instantons, which are described by means of non-trivial bundles. Their properties are tightly related to the non-triviality of the configuration bundle, and they are relatively obscure when regarded from any local paradigm. In some sense, a local paradigm hides global properties in the boundary conditions and in the symmetries of the field equations, which are in turn reflected in the functional space we choose and about which, it being infinite-dimensional, we know almost nothing a priori. We could say that the existence of these phenomena is a further hint that field theories have to be stated on bundles rather than on Cartesian products. This statement, if anything, is phenomenologically driven.

When a non-trivial bundle is involved in a field theory, from a physical viewpoint it has to be regarded as an unknown object. As for the base manifold, it has then to be constructed out of physical inputs. One can do that in (at least) two ways, which are both actually used in applications. First of all, one can assume the bundle to be a natural bundle, which is thence canonically constructed out of its base manifold. Since the base manifold is identified by the (maximal) extension of the local solutions, the bundle itself is identified too. This approach is the one used in General Relativity. In other applications, bundles are gauge natural, and they are therefore constructed out of a structure bundle P which usually contains extra information not directly encoded in the spacetime manifold. In physical applications the structure bundle P has also to be constructed out of physical observables. This can be achieved by using the gauge invariance of field equations. In fact, two local solutions differing by a (pure) gauge transformation describe the same physical system. Then, while extending from one patch to another, we feel free both to change coordinates on M and to perform a (pure) gauge transformation before gluing two local solutions. The coordinate changes define the base manifold M, while the (pure) gauge transformations form a cocycle (valued in the gauge group) which defines, in fact, the structure bundle P. Once again, solutions with different structure bundles can be found from different variational principles. Accordingly, the variational principle should be universal with respect to the structure bundle.

Local results are by no means less important. They are often the foundations on which the geometric framework is based. More explicitly, Variational Calculus is perhaps the branch of mathematics that enables the strongest interaction between Analysis and Geometry.

The Affinity of Mirror Symmetry to Algebraic Geometry: Going Beyond Formalism

Even though the formalism of homological mirror symmetry is by now well established, what of other explanations of mirror symmetry which lie closer to classical differential and algebraic geometry? One way to tackle this is the so-called Strominger–Yau–Zaslow proposal, or SYZ for short.

The central physical ingredient in this proposal is T-duality. To explain this, let us consider a superconformal sigma model with target space (M, g), and denote it (defined as a geometric functor, or as a set of correlation functions) as

CFT(M, g)

In physics, a duality is an equivalence

CFT(M, g) ≅ CFT(M′, g′)

which holds despite the fact that the underlying geometries (M,g) and (M′, g′) are not classically diffeomorphic.

T-duality is a duality which relates two CFT’s with toroidal target space, M ≅ M′ ≅ Td, but different metrics. In rough terms, the duality relates a “small” target space, with noncontractible cycles of length L < ls, with a “large” target space in which all such cycles have length L > ls.

This sort of relation is generic to dualities and follows from the following logic. If all length scales (lengths of cycles, curvature lengths, etc.) are greater than ls, string theory reduces to conventional geometry. Now, in conventional geometry, we know what it means for (M, g) and (M′, g′) to be non-isomorphic. Any modification to this notion must be associated with a breakdown of conventional geometry, which requires some length scale to be “sub-stringy,” with L < ls. To state T-duality precisely, let us first consider M = M′ = S1. We parameterise this with a coordinate X ∈ R making the identification X ∼ X + 2π. Consider a Euclidean metric gR given by ds2 = R2dX2. The real parameter R is usually called the “radius” from the obvious embedding in R2. This manifold is Ricci-flat and thus the sigma model with this target space is a conformal field theory, the “c = 1 boson.” Let us furthermore set the string scale ls = 1. With this, we attain a complete physical equivalence.

CFT(S1, gR) ≅ CFT(S1, g1/R)

Thus these two target spaces are indistinguishable from the point of view of string theory.

Just to give a physical picture for what this means, suppose for sake of discussion that superstring theory describes our universe, and thus that in some sense there must be six extra spatial dimensions. Suppose further that we had evidence that the extra dimensions factorized topologically and metrically as K5 × S1; then it would make sense to ask: What is the radius R of this S1 in our universe? In principle this could be measured by producing sufficiently energetic particles (so-called “Kaluza-Klein modes”), or perhaps measuring deviations from Newton’s inverse square law of gravity at distances L ∼ R. In string theory, T-duality implies that R ≥ ls, because any theory with R < ls is equivalent to another theory with R > ls. Thus we have a nontrivial relation between two (in principle) observable quantities, R and ls, which one might imagine testing experimentally. Let us now consider the theory CFT(Td, g), where Td is the d-dimensional torus, with coordinates Xi parameterising Rd/2πZd, and a constant metric tensor gij. Then there is a complete physical equivalence

CFT(Td, g) ≅ CFT(Td, g−1)

In fact this is just one element of a discrete group of T-duality symmetries, generated by T-dualities along one-cycles, and large diffeomorphisms (those not continuously connected to the identity). The complete group is isomorphic to SO(d, d; Z).

While very different from conventional geometry, T-duality has a simple intuitive explanation. This starts with the observation that the possible embeddings of a string into the target space can be classified by the fundamental group π1. Strings representing non-trivial homotopy classes are usually referred to as “winding states.” Furthermore, since strings interact by interconnecting at points, the group structure on π1 provided by concatenation of based loops is meaningful and is respected by interactions in the string theory. Now π1(Td) ≅ Zd, as an abelian group, referred to as the group of “winding numbers”.

Of course, there is another Zd we could bring into the discussion, the Pontryagin dual of the U(1)d of which Td is an affinization. An element of this group is referred to physically as a “momentum,” as it is the eigenvalue of a translation operator on Td. Again, this group structure is respected by the interactions. These two group structures, momentum and winding, can be summarized in the statement that the full closed string algebra contains the group algebra C[Zd] ⊕ C[Zd].

In essence, the point of T-duality is that if we quantize the string on a sufficiently small target space, the roles of momentum and winding will be interchanged. But the main point can be seen by bringing in some elementary spectral geometry. Besides the algebra structure, another invariant of a conformal field theory is the spectrum of its Hamiltonian H (technically, the Virasoro operator L0 + L̄0). This Hamiltonian can be thought of as an analog of the standard Laplacian ∆g on functions on the target space, and its spectrum on Td with metric g is

Spec ∆g = {∑i,j=1…d g^ij pipj ; p ∈ Zd}

On the other hand, the energy of a winding string is (intuitively) a function of its length. On our torus, a geodesic with winding number w ∈ Zd has length squared

L2 = ∑i,j=1…d g_ij wiwj

Now, the only string theory input we need to bring in is that the total Hamiltonian contains both terms,

H = ∆g + L2 + · · ·

where the extra terms … express the energy of excited (or “oscillator”) modes of the string. Then, the inversion g → g−1, combined with the interchange p ↔ w, leaves the spectrum of H invariant. This is T-duality.
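This spectral statement is easy to verify on a truncated lattice of momenta and windings: the multiset of values g^ij pipj + g_ij wiwj is unchanged under g → g−1 together with p ↔ w. A toy check for d = 2 (the metric is an arbitrary choice):

```python
import numpy as np
from itertools import product

# Arbitrary constant metric on T^2 (illustrative choice)
g = np.array([[2.0, 0.3], [0.3, 1.5]])

def spectrum(metric, cutoff=2):
    """Truncated spectrum g^{ij} p_i p_j + g_{ij} w^i w^j over |p|, |w| <= cutoff."""
    inv = np.linalg.inv(metric)
    rng = range(-cutoff, cutoff + 1)
    levels = []
    for p in product(rng, repeat=2):
        for w in product(rng, repeat=2):
            pa, wa = np.array(p), np.array(w)
            levels.append(pa @ inv @ pa + wa @ metric @ wa)
    return np.sort(np.array(levels))

# T-duality: g -> g^{-1} together with p <-> w leaves the spectrum invariant
print(np.allclose(spectrum(g), spectrum(np.linalg.inv(g))))
```

The oscillator contributions omitted here are insensitive to g → g−1, so the truncated momentum-plus-winding spectrum already captures the invariance.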

There is a simple generalization of the above to the case with a non-zero B-field on the torus satisfying dB = 0. In this case, since B is a constant antisymmetric tensor, we can label CFT’s by the matrix g + B. Now, the basic T-duality relation becomes

CFT(Td, g + B) ≅ CFT(Td, (g + B)−1)

Another generalization, which is considerably more subtle, is to do T-duality in families, or fiberwise T-duality. The same arguments can be made, and would become precise in the limit that the metric on the fibers varies on length scales far greater than ls, and has curvature lengths far greater than ls. This is sometimes called the “adiabatic limit” in physics. While this is a very restrictive assumption, there are more heuristic physical arguments that T-duality should hold more generally, with corrections to the relations proportional to curvatures ls²R and derivatives ls∂ of the fiber metric, both in perturbation theory and from world-sheet instantons.

Revisiting Catastrophes. Thought of the Day 134.0

The most explicit influence from mathematics in semiotics is probably René Thom’s controversial theory of catastrophes, with philosophical and semiotic support from Jean Petitot. Catastrophe theory is but one of several formalisms in the broad field of qualitative dynamics (comprising also chaos theory, complexity theory, self-organized criticality, etc.). In all these cases, the theories in question are in a certain sense phenomenological, because the focus is on different types of qualitative behavior of dynamic systems, grasped on a purely formal level, bracketing their causal determination on the deeper level. A widespread tool in these disciplines is phase space – a space defined by the variables governing the development of the system, so that this development may be mapped as a trajectory through phase space, each point on the trajectory mapping one global state of the system. This space may be inhabited by different types of attractors (attracting trajectories), repellors (repelling them), attractor basins around attractors, and borders between such basins characterized by different types of topological saddles which may have a complicated topology.

Catastrophe theory has its basis in differential topology, that is, the branch of topology keeping various differential properties of a function invariant under transformation. It is, more specifically, the so-called Whitney topology whose invariants are points where the nth derivative of a function takes the value 0, graphically corresponding to minima, maxima, turning tangents, and, in higher dimensions, different complicated saddles. Catastrophe theory takes its point of departure in singularity theory, whose object is the shift between types of such functions. It thus erects a distinction between an inner space – where the function varies – and an outer space of control variables charting the variation of that function, including where it changes type – where, e.g., it goes from having one minimum to having two minima, via a singular case with turning tangent. The continuous variation of control parameters thus corresponds to a continuous variation within one subtype of the function, until it reaches a singular point where it discontinuously, ‘catastrophically’, changes subtype. The philosophy-of-science interpretation of this formalism now conceives the stable subtype of function as representing the stable state of a system, and the passage of the critical point as the sudden shift to a new stable state. The configuration of control parameters thus provides a sort of map of the shift between continuous development and discontinuous ‘jump’. Thom’s semiotic interpretation of this formalism entails that typical catastrophic trajectories of this kind may be interpreted as stable process types phenomenologically salient for perception and giving rise to basic verbal categories.

One of the simpler catastrophes is the so-called cusp (a). It constitutes a meta-diagram, namely a diagram of the possible type-shifts of a simpler diagram (b), that of the equation ax⁴ + bx² + cx = 0. The upper part of (a) shows the so-called fold, charting the manifold of solutions to the equation in the three dimensions a, b and c. By projecting the fold onto the a, b-plane, the pointed figure of the cusp (lower a) is obtained. The cusp now charts the type-shift of the function: inside the cusp, the function has two minima, outside it only one minimum. Different paths through the cusp thus correspond to different variations of the equation under the variation of the external variables a and b. One such typical path is the one indicated by the left-right arrow on all four diagrams, which crosses the cusp from inside out, giving rise to a diagram of the further level (c) – depending on the interpretation of the minima as simultaneous states. Here, thus, we find diagram transformations on three different, nested levels.
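The count of minima inside and outside the cusp can be reproduced directly from a potential of this family. A sketch for the toy potential V(x) = x⁴ + ax² + bx (a rescaled cousin of the equation above; the sample points (a, b) are arbitrary):

```python
import numpy as np

def n_minima(a, b):
    """Number of minima of the toy cusp potential V(x) = x^4 + a*x^2 + b*x."""
    crit = np.roots([4.0, 0.0, 2.0 * a, b])             # roots of V'(x)
    real = crit[np.abs(crit.imag) < 1e-7].real           # keep the real critical points
    return sum(12.0 * x**2 + 2.0 * a > 0 for x in real)  # V''(x) > 0 -> minimum

print(n_minima(-2.0, 0.0))  # inside the cusp: 2 minima
print(n_minima(1.0, 0.0))   # outside the cusp: 1 minimum
print(n_minima(-2.0, 5.0))  # after crossing the cusp boundary: 1 minimum again
```

Sweeping b at fixed negative a traces exactly the left-right path across the cusp described above: two minima merge discontinuously into one.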

The concept of transformation plays several roles in this formalism. The most spectacular one refers, of course, to the change in external control variables, determining a trajectory through phase space where the function controlled changes type. This transformation thus searches the possibility for a change of the subtypes of the function in question, that is, it plays the role of eidetic variation mapping how the function is ‘unfolded’ (the basic theorem of catastrophe theory refers to such unfolding of simple functions). Another transformation finds stable classes of such local trajectory pieces including such shifts – making possible the recognition of such types of shifts in different empirical phenomena. On the most empirical level, finally, one running of such a trajectory piece provides, in itself, a transformation of one state into another, whereby the two states are rationally interconnected. Generally, it is possible to make a given transformation the object of a higher order transformation which by abstraction may investigate aspects of the lower one’s type and conditions. Thus, the central unfolding of a function germ in Catastrophe Theory constitutes a transformation having the character of an eidetic variation making clear which possibilities lie in the function germ in question. As an abstract formalism, the higher of these transformations may determine the lower one as invariant in a series of empirical cases.

Complexity theory is a broader and more inclusive term covering the general study of the macro-behavior of composite systems, also using phase space representation. The theoretical biologist Stuart Kauffman argues that in a phase space of all possible genotypes, biological evolution must unfold in a rather small and specifically qualified sub-space characterized by many, closely located and stable states (corresponding to the possibility of a species to ‘jump’ to another and better genotype in the face of environmental change) – as opposed to phase space areas with few, very stable states (which will only be optimal in certain, very stable environments and thus fragile when exposed to change), and also opposed, on the other hand, to sub-spaces with a high plurality of only metastable states (here, the species will tend to merge into neighboring species and hence never stabilize). On the basis of this argument, only a small subset of the set of virtual genotypes possesses ‘evolvability’ as this special combination between plasticity and stability. The overall argument thus goes that order in biology is not a pure product of evolution; the possibility of order must be present in certain types of organized matter before selection begins – conversely, selection requires already organized material on which to work. The identification of a species with a co-localized group of stable states in genome space thus provides a (local) invariance for the transformation taking a trajectory through space, and larger groups of neighboring stabilities – lineages – again provide invariants defined by various more or less general transformations. Species, in this view, are in a certain limited sense ‘natural kinds’ and thus naturally signifying entities. Kauffman’s speculations over genotypical phase space have a crucial bearing on a transformation concept central to biology, namely mutation.
On this basis, far from all virtual mutations are really possible – even apart from their degree of environmental relevance. A mutation into a stable but remotely placed species in phase space will be impossible (evolution cannot cross the distance in phase space), just like a mutation in an area with many unstable proto-species will not allow for any stabilization of species at all, and will thus fall prey to arbitrarily small environment variations. Kauffman takes a spontaneous and non-formalized transformation concept (mutation) and attempts a formalization by investigating its condition of possibility as movement between stable genomes in genotype phase space. A series of constraints turn out to determine type formation on a higher level (the three different types of local geography in phase space). If the trajectory of mutations must obey the possibility of walking between stable species, then the space of possible trajectories is highly limited. Self-organized criticality as developed by Per Bak (How Nature Works: The Science of Self-Organized Criticality) belongs to the same type of theories. Criticality is here defined as that state of a complicated system where sudden developments of all sizes spontaneously occur.
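Kauffman’s contrast between rugged landscapes with many metastable states and smooth ones with a single very stable state can be caricatured on a binary genotype space, counting states that are stable against every one-bit mutation. A deliberately crude sketch (the fitness assignments are arbitrary illustrative choices, not Kauffman’s actual NK model):

```python
import numpy as np
from itertools import product

n = 10
genotypes = list(product([0, 1], repeat=n))  # all binary genotypes of length n

def local_optima(fitness):
    """Count genotypes fitter than every one-bit mutant (locally stable states)."""
    count = 0
    for g in genotypes:
        f = fitness[g]
        if all(f > fitness[g[:i] + (1 - g[i],) + g[i + 1:]] for i in range(n)):
            count += 1
    return count

rng = np.random.default_rng(2)
rugged = {g: rng.uniform() for g in genotypes}  # uncorrelated fitness: rugged landscape
smooth = {g: sum(g) for g in genotypes}         # perfectly correlated: smooth landscape

print(local_optima(smooth))  # 1: a single, very stable state
print(local_optima(rugged))  # many coexisting metastable states
```

The two extremes bracket the intermediate regime Kauffman singles out, in which stable states are numerous yet close enough for mutational walks between them.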

Philosophizing Loops – Why Spin Foam Constraints to 3D Dynamics Evolution?


The philosophy of loops is canonical, i.e., an analysis of the evolution of variables defined classically through a foliation of spacetime by a family of space-like three-surfaces Σ_t. The standard choice is the three-dimensional metric g_ij and its canonical conjugate, related to the extrinsic curvature. If the system is reparametrization invariant, the total Hamiltonian vanishes, and this Hamiltonian constraint is usually called the Wheeler-DeWitt equation. Choosing the canonical variables is fundamental, to say the least.

Abhay Ashtekar’s insight stems from the definition of an original set of variables derived from the Einstein-Hilbert Lagrangian written in the form,

S = ∫ e^a ∧ e^b ∧ R^{cd} ε_{abcd} —– (1)

where the e^a are the one-forms associated to the tetrad,

e^a ≡ e^a_μ dx^μ —– (2)

The associated SO(1, 3) connection one-form ϖ^{ab} is called the spin connection. Its field strength is the curvature, expressed as a two-form:

R^{ab} ≡ dϖ^{ab} + ϖ^a_c ∧ ϖ^{cb} —– (3)

Ashtekar’s variables are actually based on the SU(2) self-dual connection

A = ϖ − i ∗ ϖ —– (4)

Its field strength is

F ≡ dA + A ∧ A —– (5)

The dynamical variables are then (A_i, E^i ≡ F^{0i}). The main virtue of these variables is that the constraints become polynomial. One of them is exactly analogous to Gauss’ law:

D_i E^i = 0 —– (6)

There is another one related to invariance under three-dimensional diffeomorphisms,

tr F_{ij} E^i = 0 —– (7)

and, finally, there is the Hamiltonian constraint,

tr F_{ij} E^i E^j = 0 —– (8)

On a purely mathematical basis, there is no doubt that Ashtekar’s variables are of great ingenuity. As a physical tool to describe the metric of space, however, they are not real in general. This forces a reality condition to be imposed, which is awkward. For this reason it is usually preferred to use the Barbero-Immirzi formalism, in which the connection depends on a free parameter γ:

A^i_a = ϖ^i_a + γ K^i_a —– (9)

ϖ being the spin connection and K the extrinsic curvature. When γ = i, Ashtekar’s formalism is recovered; for other values of γ, the explicit form of the constraints is more complicated. Even if there is a Hamiltonian constraint that seems promising, what isn’t particularly clear is whether the quantum constraint algebra is isomorphic to the classical algebra.

Some states which satisfy the Ashtekar constraints are given by the loop representation, which can be introduced from the Wilson loop construct (depending both on the gauge field A and on a parametrized loop γ)

W(γ, A) ≡ tr P e^{∮_γ A} —– (10)

and a functional transform mapping functionals of the gauge field ψ(A) into functionals of loops, ψ(γ):

ψ(γ) ≡ ∫ DA W(γ, A) ψ(A) —– (11)

When one divides by diffeomorphisms, it is found that functions of knot classes (diffeomorphism classes of smooth, non-self-intersecting loops) satisfy all the constraints. Some particular states sought to reproduce smooth spaces at coarse graining are the weaves. It is not clear to what extent they also approximate the conjugate variables (that is, the extrinsic curvature).
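The path-ordered exponential in (10) can be approximated numerically as an ordered product of segment holonomies along a discretized loop. The sketch below illustrates this for SU(2); the loop, its segment values, and the ordering convention are illustrative assumptions, not from the text.

```python
import numpy as np

# Pauli matrices; i times their real span gives the Lie algebra su(2)
SIGMA = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def su2(v):
    """Anti-Hermitian su(2) element built from a real 3-vector v."""
    return 1j * sum(c * s for c, s in zip(v, SIGMA))

def expm(X, terms=30):
    """Matrix exponential by truncated power series (fine for small 2x2 X)."""
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for n in range(1, terms):
        term = term @ X / n
        out = out + term
    return out

def wilson_loop(segments):
    """Discretized W(gamma, A) = tr P exp(loop integral of A): the
    path-ordered exponential becomes an ordered product of the holonomies
    exp(A_mu dx^mu) of each small segment of the loop."""
    U = np.eye(2, dtype=complex)
    for a in segments:           # a = su(2)-valued A_mu dx^mu on one segment
        U = expm(su2(a)) @ U     # later segments multiply on the left
    return np.trace(U).real      # the trace of an SU(2) matrix is real

# toy loop: two segments whose contributions cancel -> trivial holonomy
segs = [(0.1, 0.0, 0.0), (-0.1, 0.0, 0.0)]
print(abs(wilson_loop(segs) - 2.0) < 1e-10)  # True: tr(identity) = 2
```

The trace makes W gauge invariant, which is why such loop functionals are natural candidates for states satisfying the Gauss constraint.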

In the presence of a cosmological constant the Hamiltonian constraint reads:

ε^{ijk} E^a_i E^b_j (F^k_{ab} + (λ/3) ε_{abc} E^{ck}) = 0 —– (12)

A particular class of solutions expounded by Lee Smolin of the constraint are self-dual solutions of the form

F^i_{ab} = −(λ/3) ε_{abc} E^{ci} —– (13)
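That these configurations solve the constraint is a one-line substitution of (13) into (12):

```latex
\varepsilon^{ijk} E^a_i E^b_j \left( F^k_{ab} + \tfrac{\lambda}{3}\,\varepsilon_{abc} E^{ck} \right)
= \varepsilon^{ijk} E^a_i E^b_j \left( -\tfrac{\lambda}{3}\,\varepsilon_{abc} E^{ck}
  + \tfrac{\lambda}{3}\,\varepsilon_{abc} E^{ck} \right) = 0 .
```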

Loop states in general (suitably symmetrized) can be represented as spin network states: colored lines (carrying some SU(2) representation) meeting at nodes where intertwining SU(2) operators act. There is also a path integral representation, known as spin foam, a topological theory of colored surfaces representing the evolution of a spin network. Spin foams can also be considered as an independent approach to the quantization of the gravitational field. In addition to its specific problems, the Hamiltonian constraint does not say in what sense (with respect to what) the three-dimensional dynamics evolves.

Fictionalism. Drunken Risibility.


Applied mathematics is often used as a source of support for platonism. How else but by becoming platonists can we make sense of the success of applied mathematics in science? As an answer to this question, the fictionalist empiricist will note that it’s not the case that applied mathematics always works. In several cases, it doesn’t work as initially intended, and it works only when accompanied by suitable empirical interpretations of the mathematical formalism. For example, when Dirac found negative energy solutions to the equation that now bears his name, he tried to devise physically meaningful interpretations of these solutions. His first inclination was to ignore these negative energy solutions as not being physically significant, and he took the solutions to be just an artifact of the mathematics – as is commonly done in similar cases in classical mechanics. Later, however, he identified a physically meaningful interpretation of these negative energy solutions in terms of “holes” in a sea of electrons. But the resulting interpretation was empirically inadequate, since it entailed that protons and electrons had the same mass. Given this difficulty, Dirac rejected that interpretation and formulated another. He interpreted the negative energy solutions in terms of a new particle that had the same mass as the electron but opposite charge. A couple of years after Dirac’s final interpretation was published, Carl Anderson detected something that could be interpreted as the particle that Dirac posited. Asked whether he was aware of Dirac’s papers, Anderson replied that he knew of the work, but he was so busy with his instruments that, as far as he was concerned, the discovery of the positron was entirely accidental.

The application of mathematics is ultimately a matter of using the vocabulary of mathematical theories to express relations among physical entities. Given that, for the fictionalist empiricist, the truth of the various theories involved – mathematical, physical, biological, and whatnot – is never asserted, no commitment to the existence of the entities that are posited by such theories is forthcoming. But if the theories in question – and, in particular, the mathematical theories – are not taken to be true, how can they be successfully applied? There is no mystery here. First, even in science, false theories can have true consequences. The situation here is analogous to what happens in fiction. Novels can, and often do, provide insightful, illuminating descriptions of phenomena of various kinds – for example, psychological or historical events – that help us understand the events in question in new, unexpected ways, despite the fact that the novels in question are not true. Second, given that mathematical entities are not subject to spatial-temporal constraints, it’s not surprising that they have no active role in applied contexts. Mathematical theories need only provide a framework that, suitably interpreted, can be used to describe the behavior of various types of phenomena – whether the latter are physical, chemical, biological, or whatnot. Having such a descriptive function is clearly compatible with the (interpreted) mathematical framework not being true, as Dirac’s case illustrates so powerfully. After all, as was just noted, one of the interpretations of the mathematical formalism was empirically inadequate.

On the fictionalist empiricist account, mathematical discourse is clearly taken on a par with scientific discourse. There is no change in the semantics. Mathematical and scientific statements are treated in exactly the same way. Both sorts of statements are truth-apt, and are taken as describing (correctly or not) the objects and relations they are about. The only shift here is on the aim of the research. After all, on the fictionalist empiricist proposal, the goal is not truth, but something weaker: empirical adequacy – or truth only with respect to the observable phenomena. However, once again, this goal matters to both science and (applied) mathematics, and the semantic uniformity between the two fields is still preserved. According to the fictionalist empiricist, mathematical discourse is also taken literally. If a mathematical theory states that “There are differentiable functions such that…”, the theory is not going to be reformulated in any way to avoid reference to these functions. The truth of the theory, however, is never asserted. There’s no need for that, given that only the empirical adequacy of the overall theoretical package is required.

Quantum Informational Biochemistry. Thought of the Day 71.0


A natural extension of the information-theoretic Darwinian approach for biological systems is obtained by taking into account that biological systems are constituted at their fundamental level by physical systems. Therefore it is through the interaction among elementary physical systems that the biological level is reached, after the size of the system increases by several orders of magnitude and only for certain associations of molecules – biochemistry.

In particular, this viewpoint lies at the foundation of the “quantum brain” project established by Hameroff and Penrose (Shadows of the Mind). They tried to lift quantum physical processes associated with microsystems composing the brain to the level of consciousness. Microtubules were considered as the basic quantum information processors. This project, as well as the general project of reducing biology to quantum physics, has its strong and weak sides. One of the main problems is that decoherence should quickly wash out quantum features such as superposition and entanglement. (Hameroff and Penrose would disagree with this statement. They try to develop models of the hot and macroscopic brain preserving quantum features of its elementary micro-components.)

However, even if we assume that microscopic quantum physical behavior disappears with increasing size and number of atoms due to decoherence, it seems that the basic quantum features of information processing can survive in macroscopic biological systems (operating on temporal and spatial scales which are essentially different from the scales of the quantum micro-world). The associated information processor for the mesoscopic or macroscopic biological system would be a network of increasing complexity formed by the elementary probabilistic classical Turing machines of the constituents. Such a composed network of processors can exhibit special behavioral signatures which are similar to quantum ones. We call such biological systems quantum-like. In a series of works, Asano and others (Quantum Adaptivity in Biology: From Genetics to Cognition) developed an advanced formalism for modeling the behavior of quantum-like systems, based on the theory of open quantum systems and the more general theory of adaptive quantum systems. This formalism is known as quantum bioinformatics.

The present quantum-like model of biological behavior is of the operational type (as well as the standard quantum mechanical model endowed with the Copenhagen interpretation). It cannot explain physical and biological processes behind the quantum-like information processing. Clarification of the origin of quantum-like biological behavior is related, in particular, to understanding of the nature of entanglement and its role in the process of interaction and cooperation in physical and biological systems. Qualitatively the information-theoretic Darwinian approach supplies an interesting possibility of explaining the generation of quantum-like information processors in biological systems. Hence, it can serve as the bio-physical background for quantum bioinformatics. There is an intriguing point in the fact that if the information-theoretic Darwinian approach is right, then it would be possible to produce quantum information from optimal flows of past, present and anticipated classical information in any classical information processor endowed with a complex enough program. Thus the unified evolutionary theory would supply a physical basis to Quantum Information Biology.

Rants of the Undead God: Instrumentalism. Thought of the Day 68.1


Hilbert’s program has often been interpreted as an instrumentalist account of mathematics. This reading relies on the distinction Hilbert makes between the finitary part of mathematics and the non-finitary rest which is in need of grounding (via finitary meta-mathematics). The finitary part Hilbert calls “contentual,” i.e., its propositions and proofs have content. The infinitary part, on the other hand, is “not meaningful from a finitary point of view.” This distinction corresponds to a distinction between formulas of the axiomatic systems of mathematics for which consistency proofs are being sought. Some of the formulas correspond to contentual, finitary propositions: they are the “real” formulas. The rest are called “ideal.” They are added to the real part of our mathematical theories in order to preserve classical inferences such as the principle of the excluded middle for infinite totalities, i.e., the principle that either all numbers have a given property or there is a number which does not have it.

It is the extension of the real part of the theory by the ideal, infinitary part that is in need of justification by a consistency proof – for there is a condition, a single but absolutely necessary one, to which the use of the method of ideal elements is subject, and that is the proof of consistency; for, extension by the addition of ideals is legitimate only if no contradiction is thereby brought about in the old, narrower domain, that is, if the relations that result for the old objects whenever the ideal objects are eliminated are valid in the old domain. Weyl described Hilbert’s project as replacing meaningful mathematics by a meaningless game of formulas. He noted that Hilbert wanted to “secure not truth, but the consistency of analysis” and suggested a criticism that echoes an earlier one by Frege – why should we take consistency of a formal system of mathematics as a reason to believe in the truth of the pre-formal mathematics it codifies? Is Hilbert’s meaningless inventory of formulas not just “the bloodless ghost of analysis”? Weyl suggested that if mathematics is to remain a serious cultural concern, then some sense must be attached to Hilbert’s game of formulae. In theoretical physics we have before us the great example of a [kind of] knowledge of completely different character than the common or phenomenal knowledge that expresses purely what is given in intuition. While in this case every judgment has its own sense that is completely realizable within intuition, this is by no means the case for the statements of theoretical physics. Hilbert suggested that consistency is not the only virtue ideal mathematics has – transfinite inference simplifies and abbreviates proofs; brevity and economy of thought are the raison d’être of existence proofs.

Hilbert’s treatment of philosophical questions is not meant as a kind of instrumentalist agnosticism about existence and truth and so forth. On the contrary, it is meant to provide a non-skeptical and positive solution to such problems, a solution couched in cognitively accessible terms. And, it appears, the same solution holds for both mathematical and physical theories. Once new concepts or “ideal elements” or new theoretical terms have been accepted, then they exist in the sense in which any theoretical entities exist. When Weyl eventually turned away from intuitionism, he emphasized the purpose of Hilbert’s proof theory, not to turn mathematics into a meaningless game of symbols, but to turn it into a theoretical science which codifies scientific (mathematical) practice. The reading of Hilbert as an instrumentalist goes hand in hand with a reading of the proof-theoretic program as a reductionist project. The instrumentalist reading interprets ideal mathematics as a meaningless formalism, which simplifies and “rounds out” mathematical reasoning. But a consistency proof of ideal mathematics by itself does not explain what ideal mathematics is an instrument for.

On this picture, classical mathematics is to be formalized in a system which includes formalizations of all the directly verifiable (by calculation) propositions of contentual finite number theory. The consistency proof should show that all real propositions which can be proved by ideal methods are true, i.e., can be directly verified by finite calculation. Actual proofs such as the ε-substitution procedure are of such a kind: they provide finitary procedures which eliminate transfinite elements from proofs of real statements. In particular, they turn putative ideal derivations of 0 = 1 into derivations in the real part of the theory; the impossibility of such a derivation establishes consistency of the theory. Indeed, Hilbert saw that something stronger is true: not only does a consistency proof establish truth of real formulas provable by ideal methods, but it yields finitary proofs of finitary general propositions if the corresponding free-variable formula is derivable by ideal methods.

ε-calculus and Hilbert’s Contentual Number Theory: Proselytizing Intuitionism. Thought of the Day 67.0


Hilbert came to reject Russell’s logicist solution to the consistency problem for arithmetic, mainly for the reason that the axiom of reducibility cannot be accepted as a purely logical axiom. He concluded that the aim of reducing set theory, and with it the usual methods of analysis, to logic, has not been achieved today and maybe cannot be achieved at all. At the same time, Brouwer’s intuitionist mathematics gained currency. In particular, Hilbert’s former student Hermann Weyl converted to intuitionism.

According to Hilbert, there is a privileged part of mathematics, contentual elementary number theory, which relies only on a “purely intuitive basis of concrete signs.” Whereas the operating with abstract concepts was considered “inadequate and uncertain,” there is a realm of extra-logical discrete objects, which exist intuitively as immediate experience before all thought. If logical inference is to be certain, then these objects must be capable of being completely surveyed in all their parts, and their presentation, their difference, their succession (like the objects themselves) must exist for us immediately, intuitively, as something which cannot be reduced to something else.

The objects in question are signs, both numerals and the signs that make up formulas and formal proofs. The domain of contentual number theory consists in the finitary numerals, i.e., sequences of strokes. These have no meaning, i.e., they do not stand for abstract objects, but they can be operated on (e.g., concatenated) and compared. Knowledge of their properties and relations is intuitive and unmediated by logical inference. Contentual number theory developed this way is secure, according to Hilbert: no contradictions can arise simply because there is no logical structure in the propositions of contentual number theory. The intuitive-contentual operations with signs form the basis of Hilbert’s meta-mathematics. Just as contentual number theory operates with sequences of strokes, so meta-mathematics operates with sequences of symbols (formulas, proofs). Formulas and proofs can be syntactically manipulated, and the properties and relationships of formulas and proofs are similarly based in a logic-free intuitive capacity which guarantees certainty of knowledge about formulas and proofs arrived at by such syntactic operations. Mathematics itself, however, operates with abstract concepts, e.g., quantifiers, sets, functions, and uses logical inference based on principles such as mathematical induction or the principle of the excluded middle. These “concept-formations” and modes of reasoning had been criticized by Brouwer and others on grounds that they presuppose infinite totalities as given, or that they involve impredicative definitions. Hilbert’s aim was to justify their use. To this end, he pointed out that they can be formalized in axiomatic systems (such as that of Principia or those developed by Hilbert himself), and mathematical propositions and proofs thus turn into formulas and derivations from axioms according to strictly circumscribed rules of derivation.
Mathematics, to Hilbert, “becomes an inventory of provable formulas.” In this way the proofs of mathematics are subject to metamathematical, contentual investigation. The goal of Hilbert is then to give a contentual, meta-mathematical proof that there can be no derivation of a contradiction, i.e., no formal derivation of a formula A and of its negation ¬A.

Hilbert and Bernays developed the ε-calculus as their definitive formalism for axiom systems for arithmetic and analysis, and the so-called ε-substitution method as the preferred approach to giving consistency proofs. Briefly, the ε-calculus is a formalism that includes ε as a term-forming operator. If A(x) is a formula, then εxA(x) is a term, which intuitively stands for a witness for A(x). In a logical formalism containing the ε-operator, the quantifiers can be defined by: ∃x A(x) ≡ A(εxA(x)) and ∀x A(x) ≡ A(εx¬A(x)). The only additional axiom necessary is the so-called “transfinite axiom,” A(t) → A(εxA(x)). Based on this idea, Hilbert and his collaborators developed axiomatizations of number theory and analysis. Consistency proofs for these systems were then given using the ε-substitution method. The idea of this method is, roughly, that the ε-terms εxA(x) occurring in a formal proof are replaced by actual numerals, resulting in a quantifier-free proof. Suppose we had a (suitably normalized) derivation of 0 = 1 that contains only one ε-term εxA(x). Replace all occurrences of εxA(x) by 0. The instances of the transfinite axiom then are all of the form A(t) → A(0). Since no other ε-terms occur in the proof, A(t) and A(0) are basic numerical formulas without quantifiers and, we may assume, also without free variables. So they can be evaluated by finitary calculation. If all such instances turn out to be true numerical formulas, we are done. If not, this must be because A(t) is true for some t, and A(0) is false. Then replace εxA(x) instead by n, where n is the numerical value of the term t. The resulting proof is then seen to be a derivation of 0 = 1 from true, purely numerical formulas using only modus ponens, and this is impossible. 
Indeed, the procedure works with only slight modifications even in the presence of the induction axiom, which in the ε-calculus takes the form of a least number principle: A(t) → εxA(x) ≤ t, which intuitively requires εxA(x) to be the least witness for A(x).
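The substitution procedure just described can be sketched for the case of a single ε-term. This is a toy illustration, assuming a decidable quantifier-free A and a fixed finite set of terms occurring in the (imagined) proof; the predicate and term values are made up for the example.

```python
def epsilon_substitution(A, terms):
    """Toy version of Hilbert's epsilon-substitution method for a single
    epsilon-term eps_x A(x). Start with the candidate witness 0; if some
    critical formula A(t) -> A(candidate) evaluates to false (A(t) true
    but A(candidate) false), replace the candidate by the value of t and
    retry. With one epsilon-term this terminates: once the candidate is a
    genuine witness, no critical formula can fail again."""
    candidate = 0
    while True:
        for t in terms:
            if A(t) and not A(candidate):
                candidate = t      # found a genuine witness; substitute it
                break
        else:
            return candidate       # every instance A(t) -> A(candidate) is true

# A(x): "x is even and positive"; terms: numerical values of the terms
# appearing in the toy proof
A = lambda x: x > 0 and x % 2 == 0
witness = epsilon_substitution(A, [3, 7, 4, 9])
print(witness)  # 4: every critical formula A(t) -> A(4) now holds
```

With nested ε-terms and the induction (least number) axiom, termination of the real procedure is far subtler; proving it was a central difficulty of the consistency program.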

Abstract Expressions of Time’s Modalities. Thought of the Day 21.0


According to Gregory Bateson,

What we mean by information — the elementary unit of information — is a difference which makes a difference, and it is able to make a difference because the neural pathways along which it travels and is continually transformed are themselves provided with energy. The pathways are ready to be triggered. We may even say that the question is already implicit in them.

In other words, we always need to know some second order logic, and presuppose a second order of “order” (cybernetics) usually shared within a distinct community, to realize what a certain claim, hypothesis or theory means. In Koichiro Matsuno’s opinion Bateson’s phrase

must be a prototypical example of second-order logic in that the difference appearing both in the subject and predicate can accept quantification. Most statements framed in second-order logic are not decidable. In order to make them decidable or meaningful, some qualifier needs to be used. A popular example of such a qualifier is a subjective observer. However, the point is that the subjective observer is not limited to Alice or Bob in the QBist parlance.

This is what is necessitated in order to understand the different viewpoints in logic of mathematicians, physicists and philosophers in the dispute about the existence of time. An essential aspect of David Bohm’s “implicate order” can be seen in the grammatical formulation of theses such as the law of motion:

While it is legitimate in its own light, the physical law of motion alone framed in eternal time referable in the present tense, whether in classical or quantum mechanics, is not competent enough to address how the now could be experienced. … Measurement differs from the physical law of motion as much as the now in experience differs from the present tense in description. The watershed separating between measurement and the law of motion is in the distinction between the now and the present tense. Measurement is thus subjective and agential in making a punctuation at the moment of now. (Matsuno)

The distinction between experiencing time and capturing the experience of time in terms of language is made explicit in Heidegger’s Being and Time:

… by passing away constantly, time remains as time. To remain means: not to disappear, thus, to presence. Thus time is determined by a kind of Being. How, then, is Being supposed to be determined by time?

Koichiro Matsuno’s comment on this is:

Time passing away is an abstraction from accepting the distinction of the grammatical tenses, while time remaining as time refers to the temporality of the durable now prior to the abstraction of the tenses.

Therefore, when trying to understand the “local logics/phenomenologies” of the individual disciplines (mathematics, physics, philosophy, etc., including their fields), one should be aware of the fact that the capabilities of our scientific language are not limitless:

…the now of the present moment is movable and dynamic in updating the present perfect tense in the present progressive tense. That is to say, the now is prior and all of the grammatical tenses including the ubiquitous present tense are the abstract derivatives from the durable now. (Matsuno)

This presupposes the adequacy of mathematical abstractions specifically invented or adopted and elaborated for the expression of more sophisticated modalities of time’s now than those currently used in such formalisms as temporal logic.