# The Canonical of a priori and a posteriori Variational Calculus as Phenomenologically Driven. Note Quote.

The expression variational calculus usually identifies two different but related branches of Mathematics. The first aims to produce theorems on the existence of solutions of (partial or ordinary) differential equations generated by a variational principle, and it is a branch of local analysis (usually in R^n); the second uses techniques of differential geometry to deal with the so-called variational calculus on manifolds.

The local-analytic paradigm often deals with particular situations, in which it is necessary to pay attention to the exact definition of the functional space to be considered. That functional space is very sensitive to boundary conditions. Moreover, minimal requirements on the data are investigated in order to allow the existence of (weak) solutions of the equations.

On the contrary, the global-geometric paradigm investigates the minimal structures which allow one to pose the variational problems on manifolds, extending what is done in R^n but usually being quite generous about regularity hypotheses (e.g. one hardly ever considers anything less regular than C^∞ objects). Since, even on manifolds, the search for solutions starts with a local problem (for which one can use local analysis), the global-geometric paradigm hardly ever deals with exact solutions, unless the global geometric structure of the manifold strongly constrains the existence of solutions.

A further, a priori different, approach is that of Physics. In Physics one usually has field equations which are locally given on a portion of an unknown manifold. One thence starts to solve the field equations locally in order to find a local solution, and only afterwards tries to find the maximal analytical extension (if any) of that local solution. The maximal extension can be regarded as a global solution on a suitable manifold M, in the sense that the extension defines M as well. In fact, one first proceeds to solve the field equations in a coordinate neighbourhood; afterwards, one changes coordinates and tries to extend the solution found out of the patches for as long as possible. The coordinate changes form the cocycle of transition functions with respect to the atlas, and they define the base manifold M. This approach is essential in physical applications where the base manifold is a priori unknown, as in General Relativity, and has to be determined by physical inputs.

Luckily enough, this approach does not disagree with the standard variational calculus approach, in which the base manifold M is instead fixed from the very beginning. One can regard the variational problem as the search for a solution on that particular base manifold. Global solutions on other manifolds may be found using other variational principles on different base manifolds. Even for this reason, the variational principle should be universal, i.e. one defines a family of variational principles: one for each base manifold, or at least one for any base manifold in a “reasonably” wide class of manifolds. This strong requirement is physically motivated by the belief that Physics should work more or less in the same way regardless of the particular spacetime which is actually realized in Nature. Of course, a scenario would be conceivable in which everything works because of the particular (topological, differentiable, etc.) structure of the spacetime. This position, however, is not desirable from a physical viewpoint since, in this case, one has to explain why that particular spacetime is realized (a priori or a posteriori).

In spite of the aforementioned strong regularity requirements, the spectrum of situations one can encounter is unexpectedly wide, covering the whole of fundamental physics. Moreover, it is surprising how effective the geometric formalism is in identifying the basic structures of field theories. In fact, merely requiring the theory to be globally well-defined and to depend on physical data only often constrains very strongly the choice of the local theories to be globalized. These constraints are one of the strongest motivations for choosing a variational approach in physical applications. Another motivation is a well-formulated framework for conserved quantities. A global-geometric framework is a priori necessary to deal with conserved quantities, which are non-local.

In the modern perspective of Quantum Field Theory (QFT) the basic object encoding the properties of any quantum system is the action functional. From a quantum viewpoint the action functional is more fundamental than the field equations, which are obtained in the classical limit. The geometric framework provides drastic simplifications of some key issues, such as the definition of the variation operator. The variation is deeply geometric, though in practice it coincides with the definition given in the local-analytic paradigm. In the latter case, the functional derivative is usually the directional derivative of the action functional, which is a function on the infinite-dimensional space of fields defined on a region D together with some boundary conditions on the boundary ∂D. To be able to define it one should first define the functional space, then define some notion of deformation which preserves the boundary conditions (or, equivalently, topologize the functional space), define a variation operator on the chosen space, and, finally, prove the most commonly used properties of derivatives. Once one has done all this, one finds in principle the same results that would be found using the geometric definition of variation (for which no infinite-dimensional space is needed). In fact, in any case of interest for fundamental physics, the functional derivative is simply defined by means of the derivative of a real function of one real variable. The Lagrangian formalism is a shortcut which translates the variation of (infinite-dimensional) action functionals into the variation of the (finite-dimensional) Lagrangian structure.

Another feature of the geometric framework is the possibility of dealing with non-local properties of field theories. There are, in fact, phenomena, such as monopoles or instantons, which are described by means of non-trivial bundles. Their properties are tightly related to the non-triviality of the configuration bundle, and they are relatively obscure when regarded from any local paradigm. In some sense, a local paradigm hides global properties in the boundary conditions and in the symmetries of the field equations, which are in turn reflected in the functional space we choose, about which, it being infinite-dimensional, we know almost nothing a priori. We could say that the existence of these phenomena is a further hint that field theories have to be stated on bundles rather than on Cartesian products. This statement, if anything, is phenomenologically driven.

When a non-trivial bundle is involved in a field theory, from a physical viewpoint it has to be regarded as an unknown object. As with the base manifold, it then has to be constructed out of physical inputs. One can do that in (at least) two ways, both of which are actually used in applications. First, one can assume the bundle to be a natural bundle, which is thence canonically constructed out of its base manifold. Since the base manifold is identified by the (maximal) extension of the local solutions, the bundle itself is identified too. This approach is the one used in General Relativity. Second, as in gauge theories, bundles are gauge natural and are therefore constructed out of a structure bundle P, which usually contains extra information not directly encoded in the spacetime manifold. In physical applications the structure bundle P also has to be constructed out of physical observables. This can be achieved by using the gauge invariance of the field equations. In fact, two local solutions differing by a (pure) gauge transformation describe the same physical system. Thus, while extending from one patch to another, we are free both to change coordinates on M and to perform a (pure) gauge transformation before gluing two local solutions. The coordinate changes then define the base manifold M, while the (pure) gauge transformations form a cocycle (valued in the gauge group) which defines, in fact, the structure bundle P. Once again, solutions with different structure bundles can be found by means of different variational principles. Accordingly, the variational principle should be universal with respect to the structure bundle as well.

Local results are by no means less important. They are often the foundations on which the geometric framework is built. More explicitly, Variational Calculus is perhaps the branch of mathematics that enables the strongest interaction between Analysis and Geometry.

# Coarse Philosophies of Coarse Embeddabilities: Metric Space Conjectures Act Algorithmically On Manifolds – Thought of the Day 145.0

A coarse structure on a set X is defined to be a collection of subsets of X × X, called the controlled sets or entourages for the coarse structure, which satisfy some simple axioms. The most important of these states that if E and F are controlled then so is

E ◦ F := {(x, z) : ∃y, (x, y) ∈ E, (y, z) ∈ F}

The other axioms require that the diagonal should be a controlled set, and that subsets, transposes, and (finite) unions of controlled sets should be controlled. Consider the metric spaces Z^n and R^n. Their small-scale structure, that is their topology, is entirely different, but on the large scale they resemble each other closely: any geometric configuration in R^n can be approximated by one in Z^n, to within a uniformly bounded error. We think of such spaces as “coarsely equivalent”. It is more accurate to say that a coarse structure is the large-scale counterpart of a uniformity than of a topology.
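The composition axiom can be made concrete in a minimal finite sketch (my own toy example, not from the text): take X = {0, 1, 2, 3}, represent entourages as sets of pairs, and check that the bounded-distance entourages E_r compose the way metric bounds add.

```python
# Entourages over X = {0, 1, 2, 3} as sets of ordered pairs.

def compose(E, F):
    """E ∘ F = {(x, z) : ∃y with (x, y) ∈ E and (y, z) ∈ F}."""
    return {(x, z) for (x, y1) in E for (y2, z) in F if y1 == y2}

def transpose(E):
    return {(y, x) for (x, y) in E}

X = {0, 1, 2, 3}
diagonal = {(x, x) for x in X}

def E(r):
    """The bounded-distance entourage E_r = {(x, y) : |x - y| <= r}."""
    return {(x, y) for x in X for y in X if abs(x - y) <= r}

# Composing bounded-distance entourages adds the bounds: E_1 ∘ E_1 = E_2 here.
assert compose(E(1), E(1)) == E(2)
# The diagonal acts as a unit for composition.
assert compose(diagonal, E(1)) == E(1)
# E_r is symmetric, so it equals its own transpose.
assert transpose(E(1)) == E(1)
```

On an infinite space the composite E_1 ∘ E_1 would only be contained in E_2, but the axiom asks no more than that the composite be controlled.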

Coarse structures and coarse spaces enjoy a philosophical advantage over metric spaces in that all left-invariant bounded geometry metrics on a countable group induce the same metric coarse structure, which is therefore transparently uniquely determined by the group. On the other hand, the absence of a natural gauge complicates the notion of a coarse family: while it is natural to speak of sets of uniform size in different metric spaces, it is not possible to do so in different coarse spaces without imposing additional structure.

Mikhail Leonidovich Gromov introduced the notion of coarse embedding for metric spaces. Let X and Y be metric spaces.

A map f : X → Y is said to be a coarse embedding if ∃ nondecreasing functions ρ1 and ρ2 from R+ = [0, ∞) to R such that

• ρ1(d(x,y)) ≤ d(f(x),f(y)) ≤ ρ2(d(x,y)) ∀ x, y ∈ X.
• limr→∞ ρi(r) = +∞ (i=1, 2).
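The two conditions can be checked numerically on a concrete map. In the hedged sketch below, the map f and the functions ρ_i are my own choices, not from the text: f(x) = x + sin(x) is a coarse embedding of Z into R, with ρ1(r) = max(r − 2, 0) and ρ2(r) = r + 2, simply because sin is bounded by 1.

```python
import math

def f(x):
    """Candidate coarse embedding of Z into R (illustrative choice)."""
    return x + math.sin(x)

def rho1(r):
    return max(r - 2.0, 0.0)

def rho2(r):
    return r + 2.0

# Verify rho1(d(x,y)) <= d(f(x),f(y)) <= rho2(d(x,y)) on a sample window of Z.
points = range(-50, 51)
for x in points:
    for y in points:
        r = abs(x - y)            # d(x, y) in Z
        s = abs(f(x) - f(y))      # d(f(x), f(y)) in R
        assert rho1(r) <= s <= rho2(r)

# Both rho_i are nondecreasing and tend to +infinity, as the definition demands.
assert rho1(10) < rho1(100) and rho2(10) < rho2(100)
```

The picture of Z drawn inside R by f wobbles by at most 1 at each point, so large-scale distances are distorted by a bounded amount only, which is exactly what the two ρ-conditions encode.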

Intuitively, coarse embeddability of a metric space X into Y means that we can draw a picture of X in Y which reflects the large scale geometry of X. In the early 1990s, Gromov suggested that coarse embeddability of a discrete group into Hilbert space or some Banach spaces should be relevant to solving the Novikov conjecture. The connection between large scale geometry and differential topology and differential geometry, such as the Novikov conjecture, is built by index theory. Recall that an elliptic differential operator D on a compact manifold M is Fredholm in the sense that the kernel and cokernel of D are finite dimensional. The Fredholm index of D, which is defined by

index(D) = dim(ker D) − dim(coker D),

has the following fundamental properties:

(1) it is an obstruction to invertibility of D;

(2) it is invariant under homotopy equivalence.
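A finite-dimensional toy analogue may clarify both properties (this illustrates the definition of the index, not the elliptic theory itself, and the matrices are my own examples): for a linear map A : R^n → R^m, index(A) = dim ker A − dim coker A = (n − rank A) − (m − rank A) = n − m. The index therefore depends only on the "shape" of A, not on its entries, which is a baby version of homotopy invariance.

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a matrix (exact Fraction arithmetic) and count pivot rows."""
    M = [list(map(Fraction, r)) for r in rows]
    rk, col, nrows = 0, 0, len(M)
    ncols = len(M[0]) if M else 0
    while rk < nrows and col < ncols:
        pivot = next((i for i in range(rk, nrows) if M[i][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        M[rk], M[pivot] = M[pivot], M[rk]
        for i in range(rk + 1, nrows):
            factor = M[i][col] / M[rk][col]
            M[i] = [a - factor * b for a, b in zip(M[i], M[rk])]
        rk += 1
        col += 1
    return rk

def index(rows):
    m, n = len(rows), len(rows[0])    # A maps R^n to R^m
    r = rank(rows)
    return (n - r) - (m - r)          # dim ker - dim coker

A = [[1, 2, 3], [4, 5, 6]]            # rank 2, maps R^3 -> R^2
B = [[0, 0, 0], [0, 0, 0]]            # rank 0, same shape
assert index(A) == index(B) == 1      # index = n - m = 3 - 2 for both
```

Property (1) also shows up here: a map with nonzero index can never be invertible, since invertibility forces n = m and trivial kernel and cokernel.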

The celebrated Atiyah-Singer index theorem computes the Fredholm index of elliptic differential operators on compact manifolds and has important applications. However, an elliptic differential operator on a noncompact manifold is in general not Fredholm in the usual sense, but Fredholm in a generalized sense. The generalized Fredholm index for such an operator is called the higher index. In particular, on a general noncompact complete Riemannian manifold M, John Roe (Coarse Cohomology and Index Theory on Complete Riemannian Manifolds) introduced a higher index theory for elliptic differential operators on M.

The coarse Baum-Connes conjecture provides an algorithm to compute the higher index of an elliptic differential operator on a noncompact complete Riemannian manifold. By the descent principle, the coarse Baum-Connes conjecture implies the Novikov higher signature conjecture. Guoliang Yu has proved the coarse Baum-Connes conjecture for bounded geometry metric spaces which are coarsely embeddable into Hilbert space. The metric spaces which admit coarse embeddings into Hilbert space form a large class, including e.g. all amenable groups and hyperbolic groups. In general, however, there are counterexamples to the coarse Baum-Connes conjecture; a notorious class is provided by expander graphs. On the other hand, the coarse Novikov conjecture (i.e. the injectivity part of the coarse Baum-Connes conjecture) is an algorithm for determining the non-vanishing of the higher index. Kasparov-Yu have proved the coarse Novikov conjecture for spaces which admit coarse embeddings into a uniformly convex Banach space.

# Is General Theory of Relativity a Gauge Theory? Trajectories of Diffeomorphism.

Historically the problem of observables in classical and quantum gravity is closely related to the so-called Einstein hole problem, i.e. to some of the consequences of general covariance in the general theory of relativity (GTR).

The central question is the physical meaning of the points of the event manifold underlying GTR. In contrast to pure mathematics this is a non-trivial point in physics. While in pure differential geometry one simply decrees the existence of, for example, a (pseudo-) Riemannian manifold with a differentiable structure (i.e., an appropriate cover with coordinate patches) plus a (pseudo-) Riemannian metric, g, the relation to physics is not simply one-to-one. In popular textbooks on GTR, it is frequently stated that all diffeomorphic (space-time) manifolds M are physically indistinguishable. Put differently:

S − T = Riem/Diff —– (1)

This becomes particularly virulent in the Einstein hole problem: assuming that we have a region of space-time free of matter, we can apply a local diffeomorphism which acts only within this hole, leaving the exterior invariant. We thus get, in general, two different metric tensors

g(x) , g′(x) := Φ ◦ g(x) —– (2)

in the hole, while certain initial conditions lying outside the hole are unchanged, thus yielding two different solutions of the Einstein field equations.

Many physicists consider this to be a violation of determinism (which it is not!) and hence argue that the class of observable quantities has to be drastically reduced in (quantum) gravity theory. They follow the line of reasoning developed by Dirac in the context of gauge theory, thus implying that GTR is essentially also a gauge theory. This leads to the conclusion:

Dirac observables in quantum gravity are quantities which are invariant under the diffeomorphism group Diff acting from M to M, i.e.

Φ : M → M —– (3)

One should note that with respect to physical observations there is no violation of determinism. An observer can never actually observe two different metric fields on one and the same space-time manifold; this can only happen on paper. The observer will use a fixed measurement protocol, using rods and clocks in e.g. a local inertial frame where special relativity locally applies, and then extend the results to general coordinate frames.

We get a certain orbit under Diff if we start from a particular manifold M with a metric tensor g and take the orbit

{M, Φ ◦ g} —– (4)

In general we have additional fields and matter distributions on M which are transformed accordingly.

Note that not even scalars are invariant in general in the above sense, i.e., not even the Ricci scalar is observable in the Dirac sense:

R(x) ≠ Φ ◦ R(x) —– (5)

in the generic case. Thus, this would imply that the class of admissible observables can be pretty small (even empty!). Furthermore, it follows that points of M are not a priori distinguishable. On the other hand, many consider the Ricci scalar at a point to be an observable quantity.

This leads to the question whether GTR is a true gauge theory, or perhaps only apparently so at first glance, while on a more fundamental level it is something different. In the words of Kuchar (What is observable..),

Quantities non-invariant under the full diffeomorphism group are observable in gravity.

The reason for these apparently diverging opinions stems from the role reference systems are assumed to play in GTR with some arguing that the gauge property of general coordinate invariance is only of a formal nature.

In the hole argument it is for example argued that it is important to add some particle trajectories which cross each other, thus generating concrete events on M. As these point events transform accordingly under a diffeomorphism, the distance between the corresponding coordinates x, y equals the distance between the transformed points Φ(x), Φ(y), thus being a Dirac observable. On the other hand, the coordinates x or y are not observable.

One should note that this observation is somewhat tautological in the realm of Riemannian geometry, as the metric is an absolute quantity; put differently (and somewhat sloppily), ds² is invariant under passive and, by the same token, active coordinate transformations (diffeomorphisms) because, while conceptually different, the transformation properties under the latter operations are defined as in the passive case. In the case of GTR this absolute quantity enters via the equivalence principle: distances are measured, for example, in a local inertial frame (LIF), where special relativity holds, and are then generalized to arbitrary coordinate systems.
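The invariance of ds² under a diffeomorphism can be checked numerically in the simplest possible setting. The sketch below is my own one-dimensional toy model (flat metric g(x) = 1 on R, diffeomorphism φ(x) = x + 0.5 sin x), not anything from the text: the pulled-back metric g′ = (φ′)² differs from g pointwise, yet assigns the same length to corresponding intervals.

```python
import math

def phi(x):
    """A diffeomorphism of R: phi'(x) = 1 + 0.5*cos(x) > 0 everywhere."""
    return x + 0.5 * math.sin(x)

def phi_prime(x):
    return 1.0 + 0.5 * math.cos(x)

def length(metric_sqrt, a, b, steps=100000):
    """Numerically integrate sqrt(g) dx from a to b (midpoint rule)."""
    h = (b - a) / steps
    return sum(metric_sqrt(a + (i + 0.5) * h) for i in range(steps)) * h

a, b = 0.0, 2.0
# Length of [a, b] in the pulled-back metric: sqrt(g') = |phi'|.
len_pullback = length(phi_prime, a, b)
# Length of the image interval [phi(a), phi(b)] in the flat metric g = 1.
len_image = phi(b) - phi(a)
assert abs(len_pullback - len_image) < 1e-6

# Pointwise, though, the two metric tensors disagree inside the "hole".
assert abs(phi_prime(1.0) ** 2 - 1.0) > 0.05
```

This is the 1D shadow of the hole argument above: g and g′ are different tensor fields on the same coordinate patch, yet every distance between corresponding events agrees, so no measurement protocol can tell the two solutions apart.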

# Abelian Categories, or Injective Resolutions are Diagrammatic. Note Quote.

Jean-Pierre Serre gave a more thoroughly cohomological turn to the conjectures than Weil had. Grothendieck says

Anyway Serre explained the Weil conjectures to me in cohomological terms around 1955 – and it was only in these terms that they could possibly ‘hook’ me …I am not sure anyone but Serre and I, not even Weil if that is possible, was deeply convinced such [a cohomology] must exist.

Specifically Serre approached the problem through sheaves, a new method in topology that he and others were exploring. Grothendieck would later describe each sheaf on a space T as a “meter stick” measuring T. The cohomology of a given sheaf gives a very coarse summary of the information in it – and in the best case it highlights just the information you want. Certain sheaves on T produced the Betti numbers. If you could put such “meter sticks” on Weil’s arithmetic spaces, and prove standard topological theorems in this form, the conjectures would follow.

By the nuts and bolts definition, a sheaf F on a topological space T is an assignment of Abelian groups to open subsets of T, plus group homomorphisms among them, all meeting a certain covering condition. Precisely these nuts and bolts were unavailable for the Weil conjectures because the arithmetic spaces had no useful topology in the then-existing sense.

At the École Normale Supérieure, Henri Cartan’s seminar spent 1948-49 and 1950-51 focussing on sheaf cohomology. As one motive, there was already de Rham cohomology on differentiable manifolds, which not only described their topology but also described differential analysis on manifolds. And during the time of the seminar Cartan saw how to modify sheaf cohomology as a tool in complex analysis. Given a complex analytic variety V, Cartan could define sheaves that reflected not only the topology of V but also complex analysis on V.

These were promising for the Weil conjectures, since Weil cohomology would need sheaves reflecting algebra on those spaces. To be clear, though, this differential analysis and complex analysis used sheaves and cohomology in the usual topological sense. The innovation was to find particular new sheaves which capture analytic or algebraic information that a pure topologist might not focus on.

The greater challenge to the Séminaire Cartan was that, along with the cohomology of topological spaces, the seminar looked at the cohomology of groups. Here sheaves are replaced by G-modules. This was formally quite different from topology, yet it had grown from topology and was tightly tied to it. Indeed Eilenberg and Mac Lane created category theory in large part to explain both kinds of cohomology by clarifying the links between them. The seminar aimed to find what was common to the two kinds of cohomology, and they found it in a pattern of functors.

The cohomology of a topological space X assigns to each sheaf F on X a series of Abelian groups H^nF and to each sheaf map f : F → F′ a series of group homomorphisms H^nf : H^nF → H^nF′. The definition requires that each H^n is a functor, from sheaves on X to Abelian groups. A crucial property of these functors is:

H^nF = 0 for n > 0

for any fine sheaf F, where a sheaf is fine if it meets a certain condition borrowed from differential geometry by way of Cartan’s complex analytic geometry.

The cohomology of a group G assigns to each G-module M a series of Abelian groups H^nM and to each homomorphism f : M → M′ a series of homomorphisms H^nf : H^nM → H^nM′. Each H^n is a functor, from G-modules to Abelian groups. These functors have the same properties as topological cohomology except that:

H^nM = 0 for n > 0

for any injective module M. A G-module I is injective if: for every G-module inclusion N ⊆ M and homomorphism f : N → I there is at least one g : M → I making the evident triangle commute, i.e. g agrees with f on N.
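For Z-modules (Abelian groups) this abstract condition has a concrete arithmetic face: by Baer's criterion, an Abelian group is injective iff it is divisible. The sketch below is my own illustration of that special case: Q is divisible and hence injective, Z is not, and a homomorphism from the submodule 3Z into Q extends to all of Z exactly as the definition demands.

```python
from fractions import Fraction

def divide_in_Q(q, n):
    """In Q, the equation n*x = q always has a solution (Q is divisible)."""
    return Fraction(q) / n

def divide_in_Z(q, n):
    """In Z, n*x = q is solvable only when n divides q (Z is not divisible)."""
    return q // n if q % n == 0 else None

# Q is divisible: 2*x = 1 has the solution 1/2.
assert 2 * divide_in_Q(1, 2) == 1
# Z is not: 2*x = 1 has no integer solution, so Z fails Baer's criterion.
assert divide_in_Z(1, 2) is None

# Extending f : 3Z -> Q with f(3) = 1/2 to g : Z -> Q: set g(1) = f(3)/3.
g_of_1 = divide_in_Q(Fraction(1, 2), 3)
assert 3 * g_of_1 == Fraction(1, 2)   # g restricted to 3Z agrees with f
```

Divisibility is exactly what lets the extension g be chosen: each generator of the larger module can be "divided into" the target, which is the lifting property in the diagram.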

Cartan could treat the cohomology of several different algebraic structures: groups, Lie groups, associative algebras. These all rest on injective resolutions. But he could not include topological spaces, the source of the whole subject, and still one of the main motives for pursuing the other cohomologies. Topological cohomology rested on the completely different apparatus of fine resolutions. As to the search for a Weil cohomology, this left two questions: What would Weil cohomology use in place of topological sheaves or G-modules? And what resolutions would give their cohomology? Specifically, Cartan & Eilenberg define group cohomology (like several other constructions) as a derived functor, which in turn is defined using injective resolutions. So the cohomology of a topological space was not a derived functor in their technical sense. But a looser sense was apparently current.

I have realized that by formulating the theory of derived functors for categories more general than modules, one gets the cohomology of spaces at the same time at small cost. The existence follows from a general criterion, and fine sheaves will play the role of injective modules. One gets the fundamental spectral sequences as special cases of delectable and useful general spectral sequences. But I am not yet sure if it all works as well for non-separated spaces and I recall your doubts on the existence of an exact sequence in cohomology for dimensions ≥ 2. Besides this is probably all more or less explicit in Cartan-Eilenberg’s book which I have not yet had the pleasure to see.

Here he lays out the whole paper, commonly cited as Tôhoku for the journal that published it. There are several issues. For one thing, fine resolutions do not work for all topological spaces but only for the paracompact – that is, Hausdorff spaces where every open cover has a locally finite refinement. The Séminaire Cartan called these separated spaces. The limitation was no problem for differential geometry. All differential manifolds are paracompact. Nor was it a problem for most of analysis. But it was discouraging from the viewpoint of the Weil conjectures since non-trivial algebraic varieties are never Hausdorff.

The fact that sheaf cohomology is a special case of derived functors (at least for the paracompact case) is not in Cartan-Sammy. Cartan was aware of it and told [David] Buchsbaum to work on it, but he seems not to have done it. The interest of it would be to show just which properties of fine sheaves we need to use; and so one might be able to figure out whether or not there are enough fine sheaves in the non-separated case (I think the answer is no but I am not at all sure!).

So Grothendieck began rewriting Cartan-Eilenberg before he had seen it. Among other things he preempted the question of resolutions for Weil cohomology. Before anyone knew what “sheaves” it would use, Grothendieck knew it would use injective resolutions. He did this by asking not what sheaves “are” but how they relate to one another. As he later put it, he set out to:

consider the set of all sheaves on a given topological space or, if you like, the prodigious arsenal of all the “meter sticks” that measure it. We consider this “set” or “arsenal” as equipped with its most evident structure, the way it appears so to speak “right in front of your nose”; that is what we call the structure of a “category”…From here on, this kind of “measuring superstructure” called the “category of sheaves” will be taken as “incarnating” what is most essential to that space.

The Séminaire Cartan had shown this structure in front of your nose suffices for much of cohomology. Definitions and proofs can be given in terms of commutative diagrams and exact sequences without asking, most of the time, what these are diagrams of. Grothendieck went farther than anyone else, insisting that the “formal analogy” between sheaf cohomology and group cohomology should become “a common framework including these theories and others”. To start with, injectives have a nice categorical sense: an object I in any category is injective if, for every monic N → M and arrow f : N → I, there is at least one g : M → I making the evident triangle commute.

Fine sheaves are not so diagrammatic.

Grothendieck saw that Reinhold Baer’s original proof that modules have injective resolutions was largely diagrammatic itself. So Grothendieck gave diagrammatic axioms for the basic properties used in cohomology, and called any category that satisfies them an Abelian category. He gave further diagrammatic axioms tailored to Baer’s proof: every category satisfying these axioms has injective resolutions. Such a category is called an AB5 category, and sometimes, around the 1960s, a Grothendieck category, though that term has been used in several senses.

So sheaves on any topological space have injective resolutions and thus have derived functor cohomology in the strict sense. For paracompact spaces this agrees with cohomology from fine, flabby, or soft resolutions. So you can still use those, if you want them, and you will. But Grothendieck treats paracompactness as a “restrictive condition”, well removed from the basic theory, and he specifically mentions the Weil conjectures.

Beyond that, Grothendieck’s approach works for topology the same way it does for all cohomology. And, much further, the axioms apply to many categories other than categories of sheaves on topological spaces or categories of modules. They go far beyond topological and group cohomology, in principle, though in fact there were few if any known examples outside that framework when they were given.

# Emancipating Microlinearity from within a Well-adapted Model of Synthetic Differential Geometry towards an Adequately Restricted Cartesian Closed Category of Frölicher Spaces. Thought of the Day 15.0

Differential geometry of finite-dimensional smooth manifolds has been generalized by many authors to the infinite-dimensional case by replacing finite-dimensional vector spaces by Hilbert spaces, Banach spaces, Fréchet spaces or, more generally, convenient vector spaces as the local prototype. We know well that the category of smooth manifolds of any kind, whether finite-dimensional or infinite-dimensional, is not cartesian closed, while Frölicher spaces, introduced by Frölicher, do form a cartesian closed category. It seems that Frölicher and his followers do not know what kind of Frölicher space, besides convenient vector spaces, should become the basic object of research for infinite-dimensional differential geometry. The category of Frölicher spaces and smooth mappings should be restricted adequately to a cartesian closed subcategory.

Synthetic differential geometry is differential geometry with a cornucopia of nilpotent infinitesimals. Roughly speaking, a space of nilpotent infinitesimals of some kind, which exists only within an imaginary world, corresponds to a Weil algebra, which is an entity of the real world. The central object of study in synthetic differential geometry is microlinear spaces. Although the notion of a manifold (=a pasting of copies of a certain linear space) is defined on the local level, the notion of microlinearity is defined absolutely on the genuinely infinitesimal level. What we should do so as to get an adequately restricted cartesian closed category of Frölicher spaces is to emancipate microlinearity from within a well-adapted model of synthetic differential geometry.

Although nilpotent infinitesimals exist only within a well-adapted model of synthetic differential geometry, the notion of Weil functor was formulated for finite-dimensional manifolds and for infinite-dimensional manifolds. This is the first step towards microlinearity for Frölicher spaces. Therein all Frölicher spaces which believe in fantasy that all Weil functors are really exponentiations by some adequate infinitesimal objects in imagination form a cartesian closed category. This is the second step towards microlinearity for Frölicher spaces. Introducing the notion of “transversal limit diagram of Frölicher spaces” after the manner of that of “transversal pullback” is the third and final step towards microlinearity for Frölicher spaces. Just as microlinearity is closed under arbitrary limits within a well-adapted model of synthetic differential geometry, microlinearity for Frölicher spaces is closed under arbitrary transversal limits.

# Geometric Structure, Causation, and Instrumental Rip-Offs, or, How Does a Physicist Read Off the Physical Ontology From the Mathematical Apparatus?

The benefit of the various structuralist approaches in the philosophy of mathematics is that they allow both the mathematical realist and anti-realist to use mathematical structures without being committed to a Platonism about mathematical objects, such as numbers – one can simply accept that, say, numbers exist as places in a larger structure, like the natural number system, rather than as some sort of independently existing, transcendent entities. Accordingly, a variation on a well-known mathematical structure, such as exchanging the natural numbers “3” and “7”, does not create a new structure, but merely gives the same structure “relabeled” (with “7” now playing the role of “3”, and vice versa). This structuralist tactic is familiar to spacetime theorists, for not only has it been adopted by substantivalists to undermine an ontological commitment to the independent existence of the manifold points of M, but it is tacitly contained in all relational theories, since they would count the initial embeddings of all material objects and their relations in a spacetime as isomorphic.

A critical question remains, however: since spacetime structure is geometric structure, how does the Structural Realism (SR) approach to spacetime differ in general from mathematical structuralism? Is the theory just mathematical structuralism as it pertains to geometry (or, more accurately, differential geometry), rather than arithmetic or the natural number series? While it may sound counter-intuitive, the SR theorist should answer this question in the affirmative – the reason being, quite simply, that the puzzle of how mathematical spacetime structures apply to reality, or are exemplified in the real world, is identical to the problem of how all mathematical structures are exemplified in the real world. Philosophical theories of mathematics, especially nominalist theories, commonly take as their starting point the fact that certain mathematical structures are exemplified in our common experience, while others are excluded. To take a simple example, a large collection of coins can exemplify the standard algebraic structure that includes commutative multiplication (e.g., 2 x 3 = 3 x 2), but not the more limited structure associated with, say, Hamilton’s quaternion algebra (where multiplication is non-commutative in general; e.g., ij = k but ji = −k). In short, not all mathematical structures find real-world exemplars (although, for the minimal nominalists, these structures can be given a modal construction). The same holds for spacetime theories: empirical evidence currently favors the mathematical structures utilized in the General Theory of Relativity, such that the physical world exemplifies, say, g, but a host of other geometric structures, such as the flat Newtonian metric, h, are not exemplified.
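The contrast can be exhibited directly. A small sketch of my own: quaternions represented as 4-tuples (a, b, c, d) = a + bi + cj + dk under the Hamilton product fail to commute on the imaginary units, while real scalars embedded among the quaternions still commute, so the non-commutativity only shows up beyond the reals.

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

# i*j = k but j*i = -k: quaternion multiplication is non-commutative.
assert qmul(i, j) == k
assert qmul(j, i) == (0, 0, 0, -1)

# Embedded real scalars still commute: 2 x 3 = 3 x 2 even inside H.
two, three = (2, 0, 0, 0), (3, 0, 0, 0)
assert qmul(two, three) == qmul(three, two) == (6, 0, 0, 0)
```

A pile of coins exemplifies the commutative structure but has no natural candidate for the i, j, k roles, which is the sense in which the quaternionic structure finds no exemplar in that experience.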

The critic will likely respond that there is a substantial difference between the mathematical structures that appear in physical theories and the mathematics relevant to everyday experience. For the former, and not the latter, the mathematical structures vary along with the postulated physical forces and laws; and this explains why there are a number of competing spacetime theories, and thus different mathematical structures, compatible with the same evidence: in Poincaré fashion, Newtonian rivals to GTR can still employ h as long as special distorting forces are introduced. Yet underdetermination can plague even simple arithmetical experience, a fact well known in the philosophy of mathematics and in measurement theory. For example, Charles Chihara’s assessment of the empiricist interpretation of mathematics prompts the following conclusion: “the fact that adding 5 gallons of alcohol to 2 gallons of water does not yield 7 gallons of liquid does not refute any law of logic or arithmetic [“5+2=7”] but only a mistaken physical assumption about the conservation of liquids when mixed”. While obviously true, Chihara could also have mentioned that, in order to capture our common-sense intuitions about mathematics, the application of the mathematical structure in such cases requires coordination with a physical measuring convention that preserves the identity of each individual entity, or unit, both before and after the mixing. In the mixing experiment, perhaps atoms should serve as the objects coordinated to the natural number series, since the stability of individual atoms would prevent the sort of blurring together of individuals (“gallons of liquid”) that led to the arithmetically deviant result. By choosing a different coordination, the mixing experiment can thus be judged to uphold, or exemplify, the statement “5+2=7”.
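The coordination point can be put schematically. In the toy calculation below, the mixture volume is a hypothetical figure chosen only to exhibit the contraction effect; whether “5+2=7” is exemplified depends on which physical units the numbers are coordinated to:

```python
# Coordinate numbers to gallons of liquid: mixture volume is not additive.
alcohol_gal, water_gal = 5.0, 2.0
measured_mixture_gal = 6.8  # hypothetical measured volume; mixing contracts it

assert alcohol_gal + water_gal != measured_mixture_gal  # arithmetic seems to "fail"

# Coordinate numbers to stable, conserved units instead (e.g. moles of molecules):
alcohol_units, water_units = 5, 2
assert alcohol_units + water_units == 7  # "5+2=7" is upheld
print("5+2=7 exemplified under the unit-preserving coordination")
```

The deviant result is thus diagnosed as a feature of the chosen coordination, not of the arithmetic.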
What all of this helps to show is that mathematics, for both complex geometrical spacetime structures and simple non-geometrical structures, cannot be empirically applied without stipulating physical hypotheses and/or conventions about the objects that model the mathematics. Consequently, as regards real world applications, there is no difference in kind between the mathematical structures that are exemplified in spacetime physics and in everyday observation; rather, they only differ in their degree of abstractness and the sophistication of the physical hypotheses or conventions required for their application. Both in the simple mathematical case and in the spacetime case, moreover, the decision to adopt a particular convention or hypothesis is normally based on a judgment of its overall viability and consistency with our total scientific view (a.k.a., the scientific method): we do not countenance a world where macroscopic objects can, against the known laws of physics, lose their identity by blending into one another (as in the addition example), nor do we sanction otherwise undetectable universal forces simply for the sake of saving a cherished metric.

Another significant shared feature of spacetime and mathematical structure is the apparent absence of causal powers or effects, even though the relevant structures seem to play some sort of “explanatory role” in the physical phenomena. To be more precise, consider the example of an “arithmetically-challenged” consumer who lacks an adequate grasp of addition: if he were to ask for an explanation of the event of adding five coins to another seven, and why it resulted in twelve, one could simply respond by stating, “5+7=12”, which is an “explanation” of sorts, although not in the scientific sense. On the whole, philosophers since Plato have found it difficult to offer a satisfactory account of the relationship between general mathematical structures (arithmetic/”5+7=12”) and the physical manifestations of those structures (the outcome of the coin adding). As succinctly put by Michael Liston:

Why should appeals to mathematical objects [numbers, etc.] whose very nature is non-physical make any contribution to sound inferences whose conclusions apply to physical objects?

One response to the question can be comfortably dismissed, nevertheless: mathematical structures did not cause the outcome of the coin adding, for this would seem to imply that numbers (or “5+7=12”) somehow had a mysterious, platonic influence over the course of material affairs.

In the context of the spacetime ontology debate, there has been a corresponding reluctance on the part of both sophisticated substantivalists and (R2, the rejection of substantivalism) relationists to explain how space and time differentiate the inertial and non-inertial motions of bodies; and, in particular, what role spacetime plays in the origins of non-inertial force effects. Returning once more to our universe with a single rotating body, and assuming that no other forces or causes are present, it would be somewhat peculiar to claim that the causal agent responsible for the observed force effects of the motion is either substantival spacetime or the relative motions of bodies (or, more accurately, the motion of bodies relative to a privileged reference frame, or possible trajectories, etc.). Yet, since it is the motion of the body relative to either substantival space, other bodies/fields, privileged frames, possible trajectories, etc., that explains (or identifies, defines) the presence of the non-inertial force effects of the acceleration of the lone rotating body, both theories are in serious need of an explanation of the relationship between space and these force effects. The strict (R1) relationists face a different, if no less daunting, task; for they must reinterpret the standard formulations of, say, Newtonian theory in such a way that the rotation of our lone body in empty space, or the rotation of the entire universe, is not possible. To accomplish this goal, the (R1) relationist must draw upon different mathematical resources and adopt various physical assumptions that may, or may not, ultimately conflict with empirical evidence: for example, they must stipulate that the angular momentum of the entire universe is 0.

All participants in the spacetime ontology debate are confronted with the nagging puzzle of understanding the relationship between, on the one hand, the empirical behavior of bodies, especially the non-inertial forces, and, on the other hand, the apparently non-empirical, mathematical properties of the spacetime structure that are somehow inextricably involved in any adequate explanation of those non-inertial forces – namely, for the substantivalists and (R2) relationists, the affine structure ∇, which lays down the geodesic paths of inertially moving bodies. The task of explaining this connection between the empirical and abstract mathematical or quantitative aspects of spacetime theories is thus identical to elucidating the mathematical problem of how numbers relate to experience (e.g., how “5+7=12” figures in our experience of adding coins). Likewise, there exists a parallel in the fact that most substantivalists and (R2) relationists seem to shy away from positing a direct causal connection between material bodies and space (or privileged frames, possible trajectories, etc.) in order to account for non-inertial force effects, just as a mathematical realist would recoil from ascribing causal powers to numbers so as to explain our common experience of adding and subtracting.

An insight into the non-causal, mathematical role of spacetime structures can also be of use to the (R2) relationist in defending against the charge of instrumentalism, as, for instance, in deflecting Earman’s criticisms of Sklar’s “absolute acceleration” concept. Conceived as a monadic property of bodies, Sklar’s absolute acceleration rejects the common understanding of acceleration as a species of relative motion, whether that motion is relative to substantival space, other bodies, or privileged reference frames. Earman’s objection to this strategy centers upon the use of spacetime structures in describing the primitive acceleration property: “it remains magic that the representative [of Sklar’s absolute acceleration] is neo-Newtonian acceleration

\frac{d^2 x^i}{dt^2} + \Gamma^i_{jk} \frac{dx^j}{dt} \frac{dx^k}{dt} —– (1)

[i.e., the covariant derivative, or ∇ in coordinate form]”. Ultimately, Earman’s critique of Sklar’s (R2) relationism would seem to cut against all sophisticated (R2) hypotheses, for he seems to regard the exercise of these richer spacetime structures, like ∇, as tacitly endorsing the absolute/substantivalist side of the dispute:

…the Newtonian apparatus can be used to make the predictions and afterwards discarded as a convenient fiction, but this ploy is hardly distinguishable from instrumentalism, which, taken to its logical conclusion, trivializes the absolute-relationist debate.

The weakness of Earman’s argument should be readily apparent: to put it bluntly, does the equivalent use of mathematical statements, such as “5+7=12”, likewise oblige the mathematician to accept a realist conception of numbers (such that they exist independently of all exemplifying systems)? Yet, if the straightforward employment of mathematics does not entail either a realist or a nominalist theory of mathematics (as most mathematicians would likely agree), then why must the equivalent use of the geometric structures of spacetime physics, e.g., ∇, require a substantivalist conception of ∇ as opposed to an (R2) relationist conception of ∇? Put differently, does a substantivalist commitment to ∇, whose overall function is to determine the straight-line trajectories of neo-Newtonian spacetime, also necessitate a substantivalist commitment to its components, such as the vector d/dt, along with its limiting process and mapping into ℝ? In short, how does a physicist read off the physical ontology from the mathematical apparatus? A non-instrumental interpretation of some component of the theory’s quantitative structure is often justified if that component can be given a plausible causal role (as in subatomic physics) – but, as noted above, ∇ does not appear to cause anything in spacetime theories. All told, Earman’s argument may prove too much, for if we accept his reasoning at face value, then the introduction of any mathematical or quantitative device that is useful in describing or measuring physical events would saddle the ontology with a bizarre type of entity (e.g., gross national product, average household family, etc.). A nice example of a geometric structure that provides a similarly useful explanatory function, but whose substantive existence we would be inclined to reject as well, is provided by Dieks’ example of a three-dimensional colour solid:

Different colours and their shades can be represented in various ways; one way is as points on a 3-dimensional colour solid. But the proposal to regard this ‘colour space’ as something substantive, needed to ground the concept of colour, would be absurd.
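As a side note on the machinery Earman appeals to, evaluating the neo-Newtonian acceleration (1) is a purely mechanical computation once a connection is given, and nothing in the computation itself settles the ontological question. A minimal numerical sketch, with a made-up Γ that is not derived from any physical model:

```python
import numpy as np

n = 3
Gamma = np.zeros((n, n, n))      # Gamma[i, j, k] stands for Γ^i_jk
Gamma[0, 1, 1] = 1.0             # a single illustrative nonzero symbol

xddot = np.array([0.0, 0.0, 0.0])   # coordinate acceleration d²x^i/dt²
v = np.array([0.0, 2.0, 0.0])       # coordinate velocity dx^i/dt

# Equation (1): a^i = d²x^i/dt² + Γ^i_jk (dx^j/dt)(dx^k/dt)
a_abs = xddot + np.einsum('ijk,j,k->i', Gamma, v, v)
print(a_abs)  # [4. 0. 0.]
```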

# Gauge Geometry and Philosophical Dynamism

Weyl was dissatisfied with his own theory of the predicative construction of the arithmetically definable subset of the classical real continuum by the time he had finished Das Kontinuum, comparing it with Husserl’s continuum of time, which possessed a “non-atomistic” character in contradistinction to his own theory. No determined point of time can be exhibited; only approximate fixing is possible, just as in the case of the “continuum of spatial intuition”. He accepted that the mathematical concept of the continuum, the continuous manifold, should not be characterized in terms of modern set theory enriched by topological axioms, because this would contradict the essence of the continuum. Weyl says:

It seems that set theory violates against the essence of continuum, which, by its very nature, cannot at all be battered into a single set of elements. Not the relationship of an element to a set, but that of a part to the whole ought to be taken as a basis for the analysis of the continuum.

For Weyl, single points of the continuum were empty abstractions, and this drove him onto difficult terrain, as no mathematical conceptual frame was in sight which could satisfy his methodological postulate in a sufficiently elaborated manner. For some years he sympathized with Brouwer’s idea of characterizing points in the intuitionistic one-dimensional continuum by “free choice sequences” of nested intervals, and even tried to extend the idea to higher dimensions, exploring the possibility of a purely combinatorial approach to the concept of a manifold, in which point-like localizations were given only by infinite sequences of nested star neighborhoods in barycentric subdivisions of a combinatorially defined “manifold”. There arose, however, the problem of how to characterize the local manifold property in purely combinatorial terms.

Weyl was much more successful on another level: rebuilding differential geometry on manifolds from a “purely infinitesimal” point of view. He generalized Riemann’s proposal for a differential geometric metric

ds^2(x) = \sum_{i,j=1}^{n} g_{ij}(x)\, dx^i\, dx^j

From his purely infinitesimal point of view, it seemed a strange effect that the lengths of two vectors ξ(x) and η(x′) given at different points x and x′ could be immediately and objectively compared in this framework by calculating

|\xi(x)|^2 = \sum_{i,j=1}^{n} g_{ij}(x)\, \xi^i \xi^j,

|\eta(x')|^2 = \sum_{i,j=1}^{n} g_{ij}(x')\, \eta^i \eta^j

In this context it was comparatively easy for Weyl to give a perfectly infinitesimal characterization of metrical concepts. He started from the well-known structure of a conformal metric, i.e. an equivalence class [g] of semi-Riemannian metrics g = g_{ij}(x) and g′ = g′_{ij}(x) which are equal up to a point-dependent positive factor λ(x) > 0, g′ = λg. Comparison of length then made immediate sense only for vectors attached to the same point x, independently of the gauge of the metric, i.e. the choice of the representative in the conformal class. To achieve comparability of lengths of vectors inside each infinitesimal neighborhood, Weyl introduced the conception of a length connection, formed in analogy to the affine connection Γ just distilled from the classical Christoffel symbols Γ^k_{ij} of Riemannian geometry by Levi-Civita. The localization inside such an infinitesimal neighborhood was given, as the mathematicians of the past would already have done, by coordinate parameters x and x′ = x + dx for some infinitesimal displacement dx. Weyl’s length connection consisted, then, in an equivalence class of differential 1-forms [Ψ], Ψ ≡ \sum_{i=1}^{n} Ψ_i dx^i, where an equivalent representative of the form is given by Ψ′ ≡ Ψ − d log λ, corresponding to a change of gauge of the conformal metric by the factor λ. Weyl called this transformation, which he recognized as necessary for the consistency of his extended symbol system, the gauge transformation of the length connection.

Weyl established a purely infinitesimal gauge geometry, where lengths of vectors (or derived metrical concepts in tensor fields) were immediately comparable only in the infinitesimal neighborhood of one point, and for points at finite distance only after an integration procedure. This integration turned out to be, in general, path-dependent. Independence of the choice of path between two points x and x′ holds if and only if the length curvature vanishes. The concept of curvature was built in direct analogy to the curvature of the affine connection and turned out to be, in this case, just the exterior derivative of the length connection, f ≡ dΨ. This led Weyl to a coherent and conceptually pleasing realization of a metrical differential geometry built upon purely infinitesimal principles. Moreover, Weyl was convinced of important consequences of his new gauge geometry for physics. The infinitesimal neighborhoods, understood as spheres of activity, as Fichte might have said, suggested looking for interpretations of the length connection as a field representing physically active quantities. In fact, building on the mathematically obvious observation df ≡ 0, which is formally identical to the second system of the generally covariant Maxwell equations, Weyl immediately drew the conclusion that the length curvature f ought to be identified with the electromagnetic field.
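Both claims can be verified in two lines, since the exterior derivative satisfies d ∘ d = 0. The length curvature is invariant under the gauge transformation Ψ′ ≡ Ψ − d log λ,

f' = d\Psi' = d(\Psi - d\log\lambda) = d\Psi - d(d\log\lambda) = d\Psi = f,

and the identity Weyl matched with the second system of Maxwell equations holds automatically:

df = d(d\Psi) = 0.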

He gave up, however, the belief in the ontological correctness of the purely field-theoretic approach to matter, in which the Mie–Hilbert theory of a combined Lagrange function L(g, Ψ) for the action of the gravitational field (g) and electromagnetism (Ψ) was further geometrized and technically enriched by the principle of gauge invariance of L, substituting in its place a philosophically motivated a priori argument for the conceptual superiority of his gauge geometry. The goal of a unified description of gravitation and electromagnetism, and of the derivation of matter structures from it, was nothing specific to Weyl. In his theory, however, the purely infinitesimal approach to manifolds and the ensuing possibility of geometrically unifying the two known interaction fields, gravitation and electromagnetism, took on a dense and conceptually sophisticated form.

# Algebraic Representation of Space-Time as Esoteric?

If the philosophical analysis of the singular feature of space-time is able to shed some new light on the possible nature of space-time, one should not lose sight of the fact that, although connected to fundamental issues in cosmology, like the ‘initial’ state of our universe, space-time singularities involve unphysical behaviour (like, for instance, the very geodesic incompleteness implied by the singularity theorems, or possible infinite values for physical quantities) and therefore constitute a physical problem that should be overcome. We now consider some recent theoretical developments that directly address this problem by drawing out some possible physical (and mathematical) consequences of the above considerations.

Indeed, according to the algebraic approaches to space-time, the singular feature of space-time is an indicator of the fundamental non-local character of space-time: it is actually conceived as a very important part of General Relativity, one that reveals the fundamental pointless structure of space-time. The latter cannot be described by the usual mathematical tools, like standard differential geometry, since, as we have seen above, such a description presupposes some “amount of locality” and is inherently point-like. The mathematical roots of such considerations are to be found in the full equivalence of, on the one hand, the usual (geometric) definition of a differentiable manifold M in terms of a set of points with a topology and a differential structure (compatible atlases) with, on the other hand, the definition using only the algebraic structure of the (commutative) ring C(M) of the smooth real functions on M (under pointwise addition and multiplication; indeed, C(M) is a (concrete) algebra). For instance, the existence of points of M is equivalent to the existence of maximal ideals of C(M). Indeed, all the differential geometric properties of the space-time Lorentz manifold (M, g) are encoded in the (concrete) algebra C(M). Moreover, the Einstein field equations and their solutions (which represent the various space-times) can be constructed purely in terms of the algebra C(M). Now, the algebraic structure of C(M) can be considered as primary (in exactly the same way in which space-time points or regions, represented by manifold points or sets of manifold points, may be considered as primary) and the manifold M as derived from this algebraic structure. Indeed, one can define the Einstein field equations from the very beginning in abstract algebraic terms, without any reference to the manifold M, as well as the abstract algebras, called ‘Einstein algebras’, satisfying these equations.
The standard geometric description of space-time in terms of a Lorentz manifold (M, g) can then be considered as inducing a mathematical representation of an Einstein algebra. Without entering into too many technical details, the important point for our discussion is that Einstein algebras and sheaf-theoretic generalizations thereof reveal the above-discussed non-local feature of (essential) space-time singularities from a different point of view. In the framework of the b-boundary construction M̄ = M ∪ ∂M, the (generalized) algebraic structure C corresponding to M can be prolonged to the (generalized) algebraic structure C̄ corresponding to the b-completed M̄, such that C̄|M = C, where C̄|M is the restriction of C̄ to M; in the singular cases, only constant functions (and therefore only zero vector fields) can be so prolonged. This underlines the non-local feature of the singular behaviour of space-time, since constant functions are non-local in the sense that they do not distinguish points. This fundamental non-local feature suggests non-commutative generalizations of the Einstein-algebra formulation of General Relativity, since non-commutative spaces are highly non-local. In general, non-commutative algebras need not have maximal ideals, so that the very concept of a point may have no counterpart within this non-commutative framework. Therefore, according to this line of thought, space-time at the fundamental level is completely non-local. It then seems that the very distinction between singular and non-singular is no longer meaningful at the fundamental level; within this framework, space-time singularities are ‘produced’ at a less fundamental level, together with standard physics and its standard differential (commutative) geometric representation of space-time.
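The pivot from points to algebras can be caricatured in a few lines of code. The toy below is my own illustration, not the actual C(M) or sheaf-theoretic construction: it treats functions as primary, with pointwise operations, and recovers a “point” as the evaluation homomorphism f ↦ f(p), whose kernel is the corresponding maximal ideal of functions vanishing at p.

```python
# Functions first: a "manifold point" is recovered as an evaluation map.
def add(f, g):
    return lambda x: f(x) + g(x)

def mul(f, g):
    return lambda x: f(x) * g(x)

def evaluation(p):
    """The point p seen algebraically: the homomorphism f -> f(p)."""
    return lambda f: f(p)

f = lambda x: x + 1
g = lambda x: 2 * x

ev = evaluation(3.0)
# Evaluation respects the algebra operations (it is a ring homomorphism):
assert ev(add(f, g)) == ev(f) + ev(g)   # 10.0 == 4.0 + 6.0
assert ev(mul(f, g)) == ev(f) * ev(g)   # 24.0 == 4.0 * 6.0

# The maximal ideal attached to p consists of the functions vanishing there:
vanishing_at_p = lambda x: x - 3.0
assert ev(vanishing_at_p) == 0.0
print("point 3.0 recovered as the kernel of its evaluation map")
```

Nothing in the pointwise operations mentions points until an evaluation map is chosen, which is the sense in which the algebra can be taken as primary and the points as derived.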

Although these theoretical developments are rather speculative, it must be emphasized that the algebraic representation of space-time itself is “by no means esoteric”. Starting from an algebraic formulation of the theory, which is completely equivalent to the standard geometric one, it provides another point of view on space-time and its singular behaviour that should not be dismissed too quickly. At the very least it underlines the fact that our interpretative framework for space-time should not depend on the traditional atomistic and local (point-like) conception of space-time (induced by the standard differential geometric formulation). Indeed, this misleading dependence on the standard differential geometric formulation seems to be at work in some influential arguments in contemporary philosophy of space-time, as in the field argument. According to the field argument, field properties occur at space-time points or regions, which must therefore be presupposed. Such an argument seems to fall prey to the standard differential geometric representation of space-time and fields, since within the algebraic formalism of General Relativity, (scalar) fields – elements of the algebra C – can be interpreted as primary and the manifold (points) as a secondary, derived notion.