Platonist Assertory Mathematics. Thought of the Day 88.0


Traditional Platonism, according to which our mathematical theories are bodies of truths about a realm of mathematical objects, assumes that only some amongst consistent theory candidates succeed in correctly describing the mathematical realm. For Platonists, while mathematicians may contemplate alternative consistent extensions of the axioms for ZF (Zermelo–Fraenkel) set theory, for example, at most one such extension can correctly describe how things really are with the universe of sets. Thus, according to Platonists such as Kurt Gödel, intuition together with quasi-empirical methods (such as the justification of axioms by appeal to their intuitively acceptable consequences) can guide us in discovering which amongst alternative axiom candidates for set theory has things right about set-theoretic reality. Alternatively, according to empiricists such as Quine, who hold that our belief in the truth of mathematical theories is justified by their role in empirical science, empirical evidence can decide between alternative consistent set theories. In Quine’s view, we are justified in believing the truth of the minimal amount of set theory required by our most attractive scientific account of the world.

Despite their differences at the level of detail, both of these versions of Platonism share the assumption that mere consistency is not enough for a mathematical theory: for such a theory to be true, it must correctly describe a realm of objects, where the existence of these objects is not guaranteed by consistency alone. Such a view of mathematical theories requires that we have some grasp of the intended interpretation of an axiomatic theory that is independent of our axiomatization – otherwise inquiry into whether our axioms “get things right” about this intended interpretation would be futile. Hence, it is natural to see these Platonist views of mathematics as following Frege in holding that axioms

. . . must not contain a word or sign whose sense and meaning, or whose contribution to the expression of a thought, was not already completely laid down, so that there is no doubt about the sense of the proposition and the thought it expresses. The only question can be whether this thought is true and what its truth rests on. (Frege to Hilbert, in Gottlob Frege, The Philosophical and Mathematical Correspondence)

On such an account, our mathematical axioms express genuine assertions (thoughts), which may or may not succeed in asserting truths about their subject matter. These Platonist views are “assertory” views of mathematics. Assertory views of mathematics make room for a gap between our mathematical theories and their intended subject matter, and the possibility of such a gap leads to at least two difficulties for traditional Platonism. These difficulties are articulated by Paul Benacerraf in his two classic papers, “What Numbers Could Not Be” and “Mathematical Truth”. The first difficulty comes from the realization that our mathematical theories, even when axioms are supplemented with less formal characterizations of their subject matter, may be insufficient to choose between alternative interpretations. For example, assertory views hold that the Peano axioms for arithmetic aim to assert truths about the natural numbers. But there are many candidate interpretations of these axioms, and nothing in the axioms, or in our wider mathematical practices, seems to suffice to pin down one interpretation over any other as the correct one. The view of mathematical theories as assertions about a specific realm of objects seems to force there to be facts about the correct interpretation of our theories even if, so far as our mathematical practice goes (for example, in the case of arithmetic), any ω-sequence would do.

Benacerraf’s second worry is perhaps even more pressing for assertory views. The possibility of a gap between our mathematical theories and their intended subject matter raises the question, “How do we know that our mathematical theories have things right about their subject matter?”. To answer this, we need to consider the nature of the purported objects about which our theories are supposed to assert truths. It seems that our best characterization of mathematical objects is negative: to account for the extent of our mathematical theories, and the timelessness of mathematical truths, it seems reasonable to suppose that mathematical objects are non-physical, non-spatiotemporal (and, it is sometimes added, mind- and language-independent) objects – in short, mathematical objects are abstract. But this negative characterization makes it difficult to say anything positive about how we could know anything about how things are with these objects. Assertory, Platonist views of mathematics are thus challenged to explain just how we are meant to evaluate our mathematical assertions – just how do the kinds of evidence these Platonists present in support of their theories succeed in ensuring that these theories track the truth?


Appropriation of (Ir)reversibility of Noise Fluctuations to (Un)Facilitate Complexity

Logical depth is a suitable measure of subjective complexity for physical as well as mathematical objects. This can be seen by considering the effect of irreversibility, noise, and spatial symmetries of the equations of motion and initial conditions on the asymptotic depth-generating abilities of model systems.
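
Bennett’s logical depth is, roughly, the execution time a universal machine needs to run the shortest (or a near-shortest) program generating an object; it is uncomputable in general. As a rough illustration only, one can play with a compression-based stand-in, using decompression time as a proxy for the cost of regenerating an object from a near-minimal description; the proxy, like the inputs below, is an assumption for illustration, not Bennett’s definition.

```python
import os
import time
import zlib

def depth_proxy(data: bytes, level: int = 9) -> float:
    """Crude stand-in for logical depth: the time needed to regenerate
    `data` from a compressed (near-minimal) description of it.
    A heuristic proxy only; true logical depth is uncomputable."""
    description = zlib.compress(data, level)
    start = time.perf_counter()
    zlib.decompress(description)
    return time.perf_counter() - start

# Both a random string and a trivially ordered one count as shallow in
# Bennett's sense; deep objects are cheap to describe but expensive to
# regenerate, which a zlib-style proxy captures only suggestively.
print(depth_proxy(os.urandom(10**6)))  # incompressible: description ~ object
print(depth_proxy(b"0" * 10**6))       # trivially compressible
```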

“Self-organization” suggests a spontaneous increase of complexity occurring in a system with simple, generic (e.g. spatially homogeneous) initial conditions. The increase of complexity attending a computation, by contrast, is less remarkable because it occurs in response to special initial conditions. An important question, which would have interested Turing, is whether self-organization is an asymptotically qualitative phenomenon like phase transitions. In other words, are there physically reasonable models in which complexity, appropriately defined, not only increases, but increases without bound in the limit of infinite space and time? A positive answer to this question would not explain the natural history of our particular finite world, but would suggest that its quantitative complexity can legitimately be viewed as an approximation to a well-defined qualitative property of infinite systems. On the other hand, a negative answer would suggest that our world should be compared to chemical reaction-diffusion systems (e.g. Belousov-Zhabotinsky), which self-organize on a macroscopic, but still finite scale, or to hydrodynamic systems which self-organize on a scale determined by their boundary conditions.

The suitability of logical depth as a measure of physical complexity depends on the assumed ability (“physical Church’s thesis”) of Turing machines to simulate physical processes, and to do so with reasonable efficiency. Digital machines cannot of course integrate a continuous system’s equations of motion exactly, and even the notion of computability is not very robust in continuous systems, but for realistic physical systems, subject throughout their time development to finite perturbations (e.g. electromagnetic and gravitational) from an uncontrolled environment, it is plausible that a finite-precision digital calculation can approximate the motion to within the errors induced by these perturbations. Empirically, many systems have been found amenable to “master equation” treatments in which the dynamics is approximated as a sequence of stochastic transitions among coarse-grained microstates.
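
As a minimal sketch of such a master-equation treatment, with coarse-grained states and transition rates invented purely for illustration, the dynamics can be simulated as a continuous-time Markov chain in the standard Gillespie style:

```python
import random

# Hypothetical coarse-grained microstates and transition rates (per unit time).
rates = {
    "A": {"B": 1.0},
    "B": {"A": 0.5, "C": 0.2},
    "C": {"B": 0.1},
}

def gillespie(state: str, t_max: float):
    """Simulate a sequence of stochastic transitions among coarse-grained
    states: the essence of a master-equation treatment."""
    t, history = 0.0, [(0.0, state)]
    while t < t_max:
        out = rates[state]
        total = sum(out.values())
        t += random.expovariate(total)      # waiting time ~ Exp(total rate)
        r, acc = random.uniform(0, total), 0.0
        for nxt, k in out.items():          # choose next state w.p. k/total
            acc += k
            if r <= acc:
                state = nxt
                break
        history.append((t, state))
    return history

print(gillespie("A", 20.0)[:5])
```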

We concentrate arbitrarily on cellular automata, in the broad sense of discrete lattice models with finitely many states per site, which evolve according to a spatially homogeneous local transition rule that may be deterministic or stochastic, reversible or irreversible, and synchronous (discrete time) or asynchronous (continuous time, master equation). Such models cover the range from evidently computer-like (e.g. deterministic cellular automata) to evidently material-like (e.g. Ising models) with many gradations in between.
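
A minimal sketch of a model in this broad class, with invented parameters: a one-dimensional lattice, two states per site, and a spatially homogeneous, stochastic, synchronous local rule (here a noisy majority vote over nearest neighbours):

```python
import random

def step(config, noise=0.05):
    """One synchronous update of a stochastic CA: each site takes the
    majority of its three-site neighbourhood, flipped with prob. `noise`."""
    n = len(config)
    new = []
    for i in range(n):
        votes = config[i - 1] + config[i] + config[(i + 1) % n]
        s = 1 if votes >= 2 else 0          # deterministic local majority rule
        if random.random() < noise:         # stochastic component
            s = 1 - s
        new.append(s)
    return new

config = [random.randint(0, 1) for _ in range(80)]
for _ in range(20):
    config = step(config)
    print("".join(".#"[s] for s in config))
```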

Several favorable properties must be invoked together to obtain “self-organization,” i.e. nontrivial computation from a spatially homogeneous initial condition. A rather artificial system (a cellular automaton which is stochastic but noiseless, in the sense that it has the power to make purely deterministic as well as random decisions) undergoes this sort of self-organization. It does so by allowing the nucleation and growth of domains, within each of which a depth-producing computation begins. When two domains collide, one conquers the other, and uses the conquered territory to continue its own depth-producing computation (a computation constrained to finite space, of course, cannot continue for more than exponential time without repeating itself). To achieve the same sort of self-organization in a truly noisy system appears more difficult, partly because of the conflict between the need to encourage fluctuations that break the system’s translational symmetry and the need to suppress fluctuations that introduce errors in the computation.

Irreversibility seems to facilitate complex behavior by giving noisy systems the generic ability to correct errors. Only a limited sort of error-correction is possible in microscopically reversible systems such as the canonical kinetic Ising model. Minority fluctuations in a low-temperature ferromagnetic Ising phase in zero field may be viewed as errors, and they are corrected spontaneously because of their potential energy cost. This error correcting ability would be lost in nonzero field, which breaks the symmetry between the two ferromagnetic phases, and even in zero field it gives the Ising system the ability to remember only one bit of information. This limitation of reversible systems is recognized in the Gibbs phase rule, which implies that under generic conditions of the external fields, a thermodynamic system will have a unique stable phase, all others being metastable. Even in reversible systems, it is not clear why the Gibbs phase rule enforces as much simplicity as it does, since one can design discrete Ising-type systems whose stable phase (ground state) at zero temperature simulates an aperiodic tiling of the plane, and can even get the aperiodic ground state to incorporate (at low density) the space-time history of a Turing machine computation. Even more remarkably, one can get the structure of the ground state to diagonalize away from all recursive sequences.
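
To make the error-correcting picture concrete, here is a minimal Metropolis-dynamics sketch of a kinetic Ising model at low temperature and zero field (lattice size, temperature, and droplet are all invented for illustration); the planted minority droplet, read as an “error”, tends to shrink because of its energy cost:

```python
import math
import random

L, T, J = 32, 1.0, 1.0   # lattice size, temperature (< T_c ~ 2.27 J), coupling
spins = [[1] * L for _ in range(L)]
for i in range(12, 20):              # plant an 8x8 minority droplet: the "error"
    for j in range(12, 20):
        spins[i][j] = -1

def sweep():
    """One Metropolis sweep: energy-raising flips are accepted only with
    Boltzmann probability, so minority islands tend to shrink."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * J * spins[i][j] * nn
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1

for t in range(50):
    sweep()
minority = sum(row.count(-1) for row in spins)
print(f"minority spins after 50 sweeps: {minority} (started at 64)")
```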

Relationist and Substantivalist meet by the Isometric Cut in the Hole Argument


To begin, the models of relativity theory are relativistic spacetimes, which are pairs $(M, g_{ab})$ consisting of a 4-manifold $M$ and a smooth, Lorentz-signature metric $g_{ab}$. The metric represents geometrical facts about spacetime, such as the spatiotemporal distance along a curve, the volume of regions of spacetime, and the angles between vectors at a point. It also characterizes the motion of matter: the metric $g_{ab}$ determines a unique torsion-free derivative operator $\nabla$, which provides the standard of constancy in the equations of motion for matter. Meanwhile, geodesics of this derivative operator whose tangent vectors $\xi^a$ satisfy $g_{ab}\xi^a\xi^b > 0$ are the possible trajectories for free massive test particles, in the absence of external forces. The distribution of matter in space and time determines the geometry of spacetime via Einstein’s equation, $R_{ab} - \frac{1}{2}R g_{ab} = 8\pi T_{ab}$, where $T_{ab}$ is the energy-momentum tensor associated with any matter present, $R_{ab}$ is the Ricci tensor, and $R = R^{a}{}_{a}$. Thus, as in Yang-Mills theory, matter propagates through a curved space, the curvature of which depends on the distribution of matter in spacetime.
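
For the simplest worked instance of these definitions, take Minkowski spacetime (conventions chosen to match the timelike condition above):

```latex
% Worked example: Minkowski spacetime as a relativistic spacetime,
% with signature (+,-,-,-) so that timelike vectors satisfy
% g_{ab} \xi^a \xi^b > 0, as in the condition above.
\[
  (M, g_{ab}) = (\mathbb{R}^4, \eta_{ab}), \qquad
  \eta_{ab} = \mathrm{diag}(1, -1, -1, -1)
  \ \text{in global inertial coordinates}.
\]
% Here \nabla is the flat coordinate derivative operator, its geodesics
% are straight lines, and with T_{ab} = 0 Einstein's equation reduces to
\[
  R_{ab} - \tfrac{1}{2} R\, g_{ab} = 0,
\]
% which \eta_{ab} satisfies trivially, since its Riemann tensor vanishes.
```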

The most widely discussed topic in the philosophy of general relativity over the last thirty years has been the hole argument, which goes as follows. Fix some spacetime $(M, g_{ab})$, and consider some open set $O \subseteq M$ with compact closure. For convenience, assume $T_{ab} = 0$ everywhere. Now pick some diffeomorphism $\psi : M \rightarrow M$ such that $\psi|_{M-O}$ acts as the identity, but $\psi|_{O}$ is not the identity. This is sufficient to guarantee that $\psi$ is a non-trivial automorphism of $M$. In general, $\psi$ will not be an isometry, but one can always define a new spacetime $(M, \psi_{*}(g_{ab}))$ that is guaranteed to be isometric to $(M, g_{ab})$, with the isometry realized by $\psi$. This yields two relativistic spacetimes, both representing possible physical configurations, that agree on the value of the metric at every point outside of $O$, but in general disagree at points within $O$. This means that the metric outside of $O$, including at all points in the past of $O$, cannot determine the metric at a point $p \in O$. General relativity, as standardly presented, faces a pernicious form of indeterminism. To avoid this indeterminism, one must become a relationist and accept that “Leibniz equivalent”, i.e., isometric, spacetimes represent the same physical situations. The person who denies this latter view – and thus faces the indeterminism – is dubbed a manifold substantivalist.
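
Spelled out, with $\psi_{*}$ denoting the pushforward along $\psi$ (a standard construction, sketched here rather than quoted):

```latex
% The hole construction made explicit: the pushforward metric is defined
% precisely so that \psi is an isometry.
\[
  (\psi_{*} g)_{ab}\big|_{\psi(p)}
  = \big((\psi^{-1})^{*} g\big)_{ab}\big|_{\psi(p)},
  \qquad\text{so}\qquad
  \psi : (M, g_{ab}) \to (M, \psi_{*}(g_{ab}))\ \text{is an isometry}.
\]
% Because \psi acts as the identity outside O, the two metrics agree there:
\[
  \psi|_{M - O} = \mathrm{id} \;\Longrightarrow\;
  \psi_{*}(g_{ab}) = g_{ab}\ \text{on}\ M - O,
  \qquad
  \psi_{*}(g_{ab}) \neq g_{ab}\ \text{in general on}\ O.
\]
```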

One way of understanding the dialectical context of the hole argument is as a dispute concerning the correct notion of equivalence between relativistic spacetimes. The manifold substantivalist claims that isometric spacetimes are not equivalent, whereas the relationist claims that they are. In the present context, these views correspond to different choices of arrows for the categories of models of general relativity. The relationist would say that general relativity should be associated with the category GR1, whose objects are relativistic spacetimes and whose arrows are isometries. The manifold substantivalist, meanwhile, would claim that the right category is GR2, whose objects are again relativistic spacetimes, but which has only identity arrows. Clearly there is a functor F : GR2 → GR1 that acts as the identity on both objects and arrows and forgets only structure. Thus the manifold substantivalist posits more structure than the relationist.
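
One standard way to make “forgets only structure” precise, in the taxonomy this literature inherits from Baez (a functor forgets stuff iff it is not faithful, structure iff it is not full, properties iff it is not essentially surjective), is sketched below:

```latex
% Checking F : GR_2 -> GR_1 against the forgetfulness taxonomy:
% - essentially surjective: every object of GR_1 is an object of GR_2,
%   so F forgets no properties;
% - faithful: trivially injective on hom-sets, since GR_2 has only
%   identity arrows, so F forgets no stuff;
% - not full: a non-trivial isometry (M, g_{ab}) -> (M, g_{ab}) in GR_1
%   is the image of no arrow of GR_2.
\[
  F : \mathrm{GR}_2 \to \mathrm{GR}_1
  \quad\text{is faithful and essentially surjective, but not full,}
\]
% hence it forgets structure, and nothing else.
```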

Manifold substantivalism might seem puzzling—after all, we have said that a relativistic spacetime is a Lorentzian manifold (M,gab), and the theory of pseudo-Riemannian manifolds provides a perfectly good standard of equivalence for Lorentzian manifolds qua mathematical objects: namely, isometry. Indeed, while one may stipulate that the objects of GR2 are relativistic spacetimes, the arrows of the category do not reflect that choice. One way of charitably interpreting the manifold substantivalist is to say that in order to provide an adequate representation of all the physical facts, one actually needs more than a Lorentzian manifold. This extra structure might be something like a fixed collection of labels for the points of the manifold, encoding which point in physical spacetime is represented by a given point in the manifold. Isomorphisms would then need to preserve these labels, so spacetimes would have no non-trivial automorphisms. On this view, one might use Lorentzian manifolds, without the extra labels, for various purposes, but when one does so, one does not represent all of the facts one might (sometimes) care about.

In the context of the hole argument, isometries are sometimes described as the “gauge transformations” of relativity theory; they are then taken as evidence that general relativity has excess structure. One can expect to have excess structure in a formalism only if there are models of the theory that have the same representational capacities, but which are not isomorphic as mathematical objects. If we take models of GR to be Lorentzian manifolds, then that criterion is not met: isometries are precisely the isomorphisms of these mathematical objects, and so general relativity does not have excess structure.

This point may be made in another way. Motivated in part by the idea that the standard formalism has excess structure, it has been proposed that one move to the alternative formalism of so-called Einstein algebras for general relativity, on the grounds that Einstein algebras have less structure than relativistic spacetimes. In what follows, a smooth $n$-algebra $A$ is an algebra isomorphic (as an algebra) to the algebra $C^{\infty}(M)$ of smooth real-valued functions on some smooth $n$-manifold $M$. A derivation on $A$ is an $\mathbb{R}$-linear map $\xi : A \rightarrow A$ satisfying the Leibniz rule, $\xi(ab) = a\xi(b) + b\xi(a)$. The space of derivations on $A$ forms an $A$-module, $\Gamma(A)$, elements of which are analogous to smooth vector fields on $M$. Likewise, one may define a dual module, $\Gamma^{*}(A)$, of linear functionals on $\Gamma(A)$. A metric, then, is a module isomorphism $g : \Gamma(A) \rightarrow \Gamma^{*}(A)$ that is symmetric in the sense that for any $\xi, \eta \in \Gamma(A)$, $g(\xi)(\eta) = g(\eta)(\xi)$. With some further work, one can capture a notion of signature of such metrics, exactly analogous to that of metrics on a manifold. An Einstein algebra, then, is a pair $(A, g)$, where $A$ is a smooth $4$-algebra and $g$ is a Lorentz-signature metric.
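
For a concrete instance of these definitions, consider the flat case (the details below are a routine unwinding, not a quotation):

```latex
% Example: the Einstein algebra of Minkowski spacetime.
\[
  A = C^{\infty}(\mathbb{R}^4), \qquad
  \Gamma(A) = \{\, \xi = \xi^{\mu}\partial_{\mu} \;:\; \xi^{\mu} \in A \,\},
\]
% derivations are exactly the smooth vector fields, and a Lorentz-signature
% metric is given by
\[
  g(\xi)(\zeta) = \eta_{\mu\nu}\, \xi^{\mu} \zeta^{\nu}, \qquad
  \eta_{\mu\nu} = \mathrm{diag}(1, -1, -1, -1),
\]
% i.e. g sends the derivation \xi to the linear functional
% \zeta \mapsto \eta_{\mu\nu} \xi^{\mu} \zeta^{\nu} in \Gamma^{*}(A).
% Symmetry, g(\xi)(\zeta) = g(\zeta)(\xi), is immediate, so the pair
% (A, g) is an Einstein algebra.
```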

Einstein algebras arguably provide a “relationist” formalism for general relativity, since one specifies a model by characterizing (algebraic) relations between possible states of matter, represented by scalar fields. It turns out that one may then reconstruct a unique relativistic spacetime, up to isometry, from these relations by representing an Einstein algebra as the algebra of functions on a smooth manifold. The question, though, is whether this formalism really eliminates structure. Let GR1 be as above, and define EA to be the category whose objects are Einstein algebras and whose arrows are algebra homomorphisms that preserve the metric $g$ (in a way made precise by Rosenstock et al.). Define a contravariant functor F : GR1 → EA that takes relativistic spacetimes $(M, g_{ab})$ to Einstein algebras $(C^{\infty}(M), g)$, where $g$ is determined by the action of $g_{ab}$ on smooth vector fields on $M$, and takes isometries $\psi : (M, g_{ab}) \rightarrow (M', g'_{ab})$ to algebra isomorphisms $\hat{\psi} : C^{\infty}(M') \rightarrow C^{\infty}(M)$, defined by $\hat{\psi}(a) = a \circ \psi$.
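
That F is functorial in the contravariant sense can be checked in one line: composition of isometries is sent to composition of algebra maps in the reverse order.

```latex
% Contravariance of F: for isometries \psi : (M, g_{ab}) -> (M', g'_{ab})
% and \varphi : (M', g'_{ab}) -> (M'', g''_{ab}), and any a in C^\infty(M''),
\[
  \widehat{\varphi \circ \psi}(a)
  = a \circ (\varphi \circ \psi)
  = (a \circ \varphi) \circ \psi
  = \hat{\psi}\big(\hat{\varphi}(a)\big),
  \qquad\text{so}\qquad
  \widehat{\varphi \circ \psi} = \hat{\psi} \circ \hat{\varphi}.
\]
% Identities are preserved as well: \widehat{\mathrm{id}_M}(a) = a.
```

Rosenstock et al. (2015) prove the following.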

Proposition: F : GR1 → EA forgets nothing.
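
Unpacked in the same taxonomy as before, “forgets nothing” says that F is full, faithful, and essentially surjective, that is, that it realizes a duality (a contravariant equivalence) of categories:

```latex
% "Forgets nothing" unpacked:
% - full: every metric-preserving algebra homomorphism arises as some
%   \hat{\psi};
% - faithful: \hat{\psi} = \hat{\varphi} implies \psi = \varphi;
% - essentially surjective: every Einstein algebra is isomorphic to
%   (C^{\infty}(M), g) for some relativistic spacetime (M, g_{ab}).
\[
  \mathrm{GR}_1 \simeq \mathrm{EA}^{\mathrm{op}},
\]
% so, by this criterion, the Einstein-algebra formalism has neither more
% nor less structure than the standard spacetime formalism.
```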