Nomological Unification and Phenomenology of Gravitation. Thought of the Day 110.0


String theory, which promises to give an all-encompassing, nomologically unified description of all interactions, did not even lead to any unambiguous solutions to the multitude of explanative desiderata of the standard model of quantum field theory: the determination of its specific gauge invariances, broken symmetries and particle generations, as well as its 20 or more free parameters, the chirality of matter particles, etc. String theory does at least give an explanation for the existence and for the number of particle generations. The latter is determined by the topology of the compactified additional spatial dimensions of string theory; their topology determines the structure of the possible oscillation spectra. The number of particle generations is identical to half the absolute value of the Euler number of the compact Calabi-Yau topology. But, because it is completely unclear which topology should be assumed for the compact space, there are no definitive results. This ambiguity is part of the vacuum selection problem; there are probably more than 10^100 alternative scenarios in the so-called string landscape. Moreover, all concrete models deliberately chosen and analyzed lead to generation numbers that are much too big. There are phenomenological indications that the number of particle generations cannot exceed three. String theory admits generation numbers between three and 480.
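A worked instance of the counting rule just stated (my illustration of the arithmetic, not an additional claim of the text): n_gen = |χ(CY)|/2, so a compactification on a Calabi-Yau topology with Euler number χ = ±6 would yield exactly the phenomenologically favored three generations; this is why early model-building searched specifically for manifolds with |χ| = 6.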

Attempts at a concrete solution of the relevant external problems (and explanative desiderata) either did not take place, or they did not show any results, or they led to escalating ambiguities and finally got drowned completely in the string landscape scenario: the recently developed insight that string theory obviously does not lead to a unique description of nature, but describes an immense number of nomologically, physically and phenomenologically different worlds with different symmetries, parameter values, and values of the cosmological constant.

String theory seems to be far too preoccupied with its internal conceptual and mathematical problems to be able to find concrete solutions to the relevant external physical problems. It is almost completely dominated by internal consistency constraints. It is not the fact that we are living in a ten-dimensional world which forces string theory to a ten-dimensional description. It is that perturbative string theories are only anomaly-free in ten dimensions; and they contain gravitons only in a ten-dimensional formulation. The resulting question, how the four-dimensional spacetime of phenomenology emerges from ten-dimensional perturbative string theories (or their eleven-dimensional non-perturbative extension: the mysterious, still unformulated M-theory), led to the compactification idea and to the braneworld scenarios, and from there to further internal problems.

It is not the fact that empirical indications for supersymmetry were found that forces consistent string theories to include supersymmetry. Without supersymmetry, string theory has no fermions and no chirality, but it does have tachyons which make the vacuum unstable; and supersymmetry has certain conceptual advantages: it very probably leads to the finiteness of the perturbation series, thereby avoiding the problem of non-renormalizability which haunted all former attempts at a quantization of gravity; and there is a close relation between supersymmetry and Poincaré invariance which seems reasonable for quantum gravity. But it is clear that not all conceptual advantages are necessarily part of nature, as the example of the elegant, but unsuccessful Grand Unified Theories demonstrates.

Its ten (or eleven) dimensions and the inclusion of supersymmetry both have more or less the character of conceptually, but not empirically, motivated ad-hoc assumptions. String theory consists of a rather careful adaptation of the mathematical and model-theoretical apparatus of perturbative quantum field theory to the quantized, one-dimensionally extended, oscillating string (and, finally, of a minimal extension of its methods into the non-perturbative regime, for which the declarations of intent exceed by far the conceptual successes). Without any empirical data transcending the context of our established theories, there remains for string theory only the minimal conceptual integration of basic parts of the phenomenology already reproduced by these established theories. And a significant component of this phenomenology, namely the phenomenology of gravitation, was already used up in the selection of string theory as an interesting approach to quantum gravity. String theory is taken seriously only because, containing gravitons as string states, it reproduces in a certain way the phenomenology of gravitation.

Mathematical Reductionism: A Case Via C. S. Peirce’s Hypothetical Realism.


During the 20th century, the following epistemology of mathematics was predominant: a sufficient condition for the possibility of the cognition of objects is that these objects can be reduced to set theory. The conditions for the possibility of the cognition of the objects of set theory (the sets), in turn, can be given in various manners; in any event, the objects reduced to sets do not need an additional epistemological discussion – they “are” sets. Hence, such an epistemology relies ultimately on ontology. Frege conceived the axioms as descriptions of how we actually manipulate extensions of concepts in our thinking (and in this sense as inevitable and intuitive “laws of thought”). Hilbert admitted the use of intuition exclusively in metamathematics where the consistency proof is to be done (by which the appropriateness of the axioms would be established); Bourbaki takes the axioms as mere hypotheses. Hence, Bourbaki’s concept of justification is the weakest of the three: “it works as long as we encounter no contradiction”; nevertheless, it is still epistemology, because from this hypothetical-deductive point of view, one insists that at least a proof of relative consistency (i.e., a proof that the hypotheses are consistent with the frequently tested and approved framework of set theory) should be available.

Doing mathematics, one tries to give proofs for propositions, i.e., to deduce the propositions logically from other propositions (premisses). Now, in the reductionist perspective, a proof of a mathematical proposition yields an insight into the truth of the proposition, if the premisses are already established (if one already has an insight into their truth); this can be done by giving in turn proofs for them (in which new premisses will occur which again ask for an insight into their truth), or by agreeing to put them at the beginning (to consider them as axioms or postulates). The philosopher tries to understand how the decision about what propositions to take as axioms is arrived at, because he or she is dissatisfied with the reductionist claim that it is on these axioms that the insight into the truth of the deduced propositions rests. Actually, this epistemology might contain a shortcoming, since Poincaré (and Wittgenstein) stressed that to have a proof of a proposition is by no means the same as to have an insight into its truth.

Attempts to disclose the ontology of mathematical objects reveal the following tendency in the epistemology of mathematics: mathematics is seen as suffering from a lack of ontological “determinateness”, namely that this science (contrary to many others) does not concern material data, so that the concept of material truth is not available (especially in the case of the infinite). This tendency is embarrassing, since on the other hand mathematical cognition is very often presented as cognition of the “greatest possible certainty”, precisely because it seems not to be bound to material evidence, let alone experimental check.

The technical apparatus developed by the reductionist and set-theoretical approach nowadays serves other purposes, partly for the reason that tacit beliefs about sets were challenged; the explanations of the science which it provides are considered irrelevant by the practitioners of this science. There is doubt that the above-mentioned sufficient condition is also necessary; it is not even accepted throughout as a sufficient one. But what happens if some objects, as in the case of category theory, do not fulfill the condition? It seems that the reductionist approach has, so to say, been undocked from the historical development of the discipline in several respects; an alternative is required.

Anterior to Peirce, epistemology was dominated by the idea of a grasp of objects; since Descartes, intuition was considered throughout as a particular, innate capacity of cognition (even if idealists thought that it concerns the general, and empiricists that it concerns the particular). The task of this particular capacity was the foundation of epistemology; already with Aristotle’s first premisses of the syllogism, the aim was to go back to something first. In this traditional approach, it is by the ontology of the objects that one hopes to answer the fundamental question concerning the conditions for the possibility of the cognition of these objects. One hopes that there are simple “basic objects” to which the more complex objects can be reduced and whose cognition is possible by common sense – be this an innate or otherwise distinguished capacity of cognition common to all human beings. Here, epistemology is “wrapped up” in (or rests on) ontology; to do epistemology one has to do ontology first.

Peirce shares Kant’s opinion according to which the object depends on the subject; however, he does not agree that reason is the crucial means of cognition to be criticised. In his paper “Questions concerning certain faculties claimed for man”, he points out the basic assumption of pragmatist philosophy: every cognition is semiotically mediated. He says that there is no immediate cognition (a cognition which “refers immediately to its object”), but that every cognition “has been determined by a previous cognition” of the same object. Correspondingly, Peirce replaces the critique of reason by a critique of signs. He thinks that Kant’s distinction between the world of things per se (Dinge an sich) and the world of apparition (Erscheinungswelt) is not fruitful; he rather distinguishes the world of the subject and the world of the object, connected by signs; his position consequently is a “hypothetical realism” in which all cognitions are only valid with reservations. This position does not negate (nor assert) that the object per se (with the semiotical mediation stripped off) exists, since such assertions of “pure” existence are seen as necessarily hypothetical (that is, as not withstanding philosophical criticism).

By his basic assumption, Peirce was led to reveal a problem concerning the subject matter of epistemology, since this assumption means in particular that there is no intuitive cognition in the classical sense (which is synonymous with “immediate”). Hence, one could no longer consider cognitions as objects; there is no intuitive cognition of an intuitive cognition. Intuition can be no more than a relation. “All the cognitive faculties we know of are relative, and consequently their products are relations”. According to this new point of view, intuition can no longer serve to found epistemology, in departure from the former reductionist attitude. A central argument of Peirce against reductionism or, as he puts it,

the reply to the argument that there must be a first is as follows: In retracing our way from our conclusions to premisses, or from determined cognitions to those which determine them, we finally reach, in all cases, a point beyond which the consciousness in the determined cognition is more lively than in the cognition which determines it.

Peirce gives some examples derived from physiological observations about perception, like the fact that the third dimension of space is inferred, and the blind spot of the retina. In this situation, the process of reduction loses its legitimacy since it no longer fulfills the function of justifying cognition. At such a place, something happens which I would like to call an “exchange of levels”: the process of reduction is interrupted in that the things exchange the roles performed in the determination of a cognition: what was originally considered as determining is now determined by what was originally considered as asking for determination.

The idea that contents of cognition are necessarily provisional has an effect on the very concept of conditions for the possibility of cognitions. It seems that one can infer from Peirce’s words that what vouches for a cognition is not necessarily the cognition which determines it but the liveliness of our consciousness in the cognition. Here, “to vouch for a cognition” no longer means what it meant before (which was much the same as “to determine a cognition”), but it still means that the cognition is (provisionally) reliable. This conception of the liveliness of our consciousness might roughly be seen as a substitute for the capacity of intuition in Peirce’s epistemology – but only roughly, since it has a different coverage.

Task of the Philosopher. Thought of the Day 75.0


Poincaré in Science and Method discusses how “reasonable” axioms (theories) are chosen. In a section which is intended to cool down the expectations put in the “logistic” project, he points out the problem as follows:

Even admitting that it has been established that all theorems can be deduced by purely analytical processes, by simple logical combinations of a finite number of axioms, and that these axioms are nothing but conventions, the philosopher would still retain the right to seek the origin of these conventions, and to ask why they were judged preferable to the contrary conventions.

[…] A selection must be made out of all the constructions that can be combined with the materials furnished by logic. The true geometrician makes this decision judiciously, because he is guided by a sure instinct, or by some vague consciousness of I know not what profounder and more hidden geometry, which alone gives a value to the constructed edifice.

Hence, Poincaré sees the task of the philosopher as explaining how conventions come to be. At the end of the quotation, Poincaré tries to give such an explanation, namely by referring to an “instinct” (in the sequel, he mentions briefly that one can obviously ask where such an instinct comes from, but he gives no answer to this question). The pragmatist position to be developed will lead to an essentially similar, but more complete and clear point of view.

According to Poincaré’s definition, the task of the philosopher starts where that of the mathematician ends: for a mathematician, a result is right if he or she has a proof, that means, if the result can be logically deduced from the axioms; that one has to adopt some axioms is seen as a necessary evil, and one perhaps puts some energy into the project of minimizing the number of axioms (this might have been how set theory became thought of as a foundation of mathematics). A philosopher, however, wants to understand why exactly these axioms and no others were chosen. In particular, the philosopher is concerned with the question whether the chosen axioms actually grasp the intended model. This question is justified since formal definitions are not automatically sufficient to grasp the intention of a concept; at the same time, the question is methodologically very hard, since ultimately a concept is available in mathematical proof only through a formal explication. At any rate, it becomes clear that the task of the philosopher is related to a criterion problem.

Georg Kreisel thinks that we do indeed have the capacity to decide whether a given model was intended or not:

many formal independence proofs consist in the construction of models which we recognize to be different from the intended notion. It is a fact of experience that one can be honest about such matters! When we are shown a ‘non-standard’ model we can honestly say that it was not intended. [ . . . ] If it so happens that the intended notion is not formally definable this may be a useful thing to know about the notion, but it does not cast doubt on its objectivity.

Poincaré could not yet know (but he was experienced enough a mathematician to “feel”) that axiom systems quite often fail to grasp the intended model. It was seldom the work of professional philosophers and often the byproduct of the actual mathematical work to point out such discrepancies.

Following Kant, one defines the task of epistemology thus: to determine the conditions of the possibility of the cognition of objects. Now, what is meant by “cognition of objects”? It is meant that we have an insight into (the truth of) propositions about the objects (we can then speak about the propositions as facts); and epistemology asks what are the conditions for the possibility of such an insight. Hence, epistemology is not concerned with what objects are (ontology), but with what (and how) we can know about them (ways of access). This notwithstanding, both things are intimately related, especially, in the Peircean stream of pragmatist philosophy. The 19th century (in particular Helmholtz) stressed against Kant the importance of physiological conditions for this access to objects. Nevertheless, epistemology is concerned with logic and not with the brain. Pragmatism puts the accent on the means of cognition – to which also the brain belongs.

Kant in his epistemology stressed that the object depends on the subject, or, more precisely, that the cognition of an object depends on the means of cognition used by the subject. For him, the decisive means of cognition was reason; thus, his epistemology was to a large degree critique of reason. Other philosophers disagreed about this special role of reason but shared the view that the task of philosophy is to criticise the means of cognition. For all of them, philosophy has to point out about what we can speak “legitimately”. Such a critical approach is implicitly contained in Poincaré’s description of the task of the philosopher.

Reichenbach decomposes the task of epistemology into different parts: guiding, justification and limitation of cognition. While justification is usually considered as the most important of the three aspects, the “task of the philosopher” as specified above following Poincaré is not limited to it. Indeed, the question why just certain axioms and no others were chosen is obviously a question concerning the guiding principles of cognition: which criteria are at work? Mathematics presents itself at its various historical stages as the result of a series of decisions on questions of the kind “Which objects should we consider? Which definitions should we make? Which theorems should we try to prove?” etc. – for short: instances of the “criterion problem”. Epistemology has, above all, the task of evoking these criteria – used but not evoked by the researchers themselves. For after all, these criteria cannot be without effect on the conditions for the possibility of cognition of the objects which one has decided to consider. (In turn, the conditions for this possibility in general determine the range of objects from which one has to choose.) However, such an epistemology does not have the task of resolving the criterion problem normatively (that is, of prescribing for the scientist which choices he has to make).

Discontinuous Reality. Thought of the Day 61.0


Convention is an invention that plays a distinctive role in Poincaré’s philosophy of science. In terms of how they contribute to the framework of science, conventions are not empirical. They are presupposed in certain empirical tests, so they are (relatively) isolated from doubt. Yet they are not pure stipulations, or analytic, since conventional choices are guided by, and modified in the light of, experience. Finally they have a different character from genuine mathematical intuitions, which provide a fixed, a priori synthetic foundation for mathematics. Conventions are thus distinct from the synthetic a posteriori (empirical), the synthetic a priori and the analytic a priori.

The importance of Poincaré’s invention lies in the recognition of a new category of proposition and its centrality in scientific judgment. This is more important than the special place Poincaré gives Euclidean geometry. Nevertheless, it’s possible to accommodate some of what he says about the priority of Euclidean geometry with the use of non-Euclidean geometry in science, including the inapplicability of any geometry of constant curvature in physical theories of global space. Poincaré’s insistence on Euclidean geometry is based on criteria of simplicity and convenience. But these criteria surely entail that if giving up Euclidean geometry somehow results in an overall gain in simplicity then that would be condoned by conventionalism.

The a priori conditions on geometry – in particular the group concept, and the hypothesis of rigid body motion it encourages – might seem a lingering obstacle to a more flexible attitude towards applied geometry, or an empirical approach to physical space. However, just as the apriority of the intuitive continuum does not restrict physical theories to the continuous; so the apriority of the group concept does not mean that all possible theories of space must allow free mobility. This, too, can be “corrected”, or overruled, by new theories and new data, just as, Poincaré comes to admit, the new quantum theory might overrule our intuitive assumption that nature is continuous. That is, he acknowledges that reality might actually be discontinuous – despite the apriority of the intuitive continuum.

Poincaré and Geometry of Curvature. Thought of the Day 60.0


It is not clear that Poincaré regarded Riemannian, variably curved “geometry” as a bona fide geometry. On the one hand, his insistence on generality and the iterability of mathematical operations leads him to dismiss geometries of variable curvature as merely “analytic”. Distinctive of mathematics, he argues, is generality and the fact that induction applies to its processes. For geometry to be genuinely mathematical, its constructions must be everywhere iterable, so everywhere possible. If geometry is in some sense about rigid motion, then a manifold of variable curvature, especially where the degree of curvature depends on something contingent like the distribution of matter, would not allow a thoroughly mathematical, idealized treatment. Yet Poincaré also writes favorably about Riemannian geometries, defending them as mathematically coherent. Furthermore, he admits that geometries of constant curvature rest on a hypothesis – that of rigid body motion – which “is not a self evident truth”. In short, he seems ambivalent. Whether his conception of geometry includes or rules out variable curvature is unclear. We can surmise that he recognized Riemannian geometry as mathematical, and interesting, but as very different from, and more abstract than, geometries of constant curvature, which are based on the further limitations discussed above (those motivated by a world satisfying certain empirical preconditions). These limitations enable key idealizations, which in turn allow constructions and synthetic proofs that we recognize as “geometric”.

Fock Space


Fock space is just another separable infinite dimensional Hilbert space (and so isomorphic to all its separable infinite dimensional brothers). But the key is writing it down in a fashion that suggests a particle interpretation. In particular, suppose that H is the one-particle Hilbert space, i.e. the state space for a single particle. Now depending on whether our particle is a Boson or a Fermion, the state space of a pair of these particles is either Es(H ⊗ H) or Ea(H ⊗ H), where Es is the projection onto the vectors invariant under the permutation ΣH,H on H ⊗ H, and Ea is the projection onto vectors that change signs under ΣH,H. For present purposes, we ignore these differences, and simply use H ⊗ H to denote one possibility or the other. Now, proceeding down the line, for n particles, we have the Hilbert space Hn ≡ H ⊗ · · · ⊗ H, etc.

A state in Hn is definitely a state of n particles. To get disjunctive states, we make use of the direct sum operation “⊕” on Hilbert spaces. So we define the Fock space F(H) over H as the infinite direct sum:

F(H) = C ⊕ H ⊕ (H ⊗ H) ⊕ (H ⊗ H ⊗ H) ⊕ · · ·

So, the state vectors in Fock space include a state where there are no particles (the vector lies in the first summand), a state where there is one particle, a state where there are two particles, etc. Furthermore, there are states that are superpositions of different numbers of particles.

One can spend time worrying about what it means to say that particle numbers can be superposed. But that is the “half empty cup” point of view. From the “half full cup” point of view, it makes sense to count particles. Indeed, the positive (unbounded) operator

N = 0 ⊕ 1 ⊕ 2 ⊕ 3 ⊕ 4 ⊕ · · · ,

is the formal element of our model that permits us to talk about the number of particles.
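A minimal numerical sketch of this construction (my illustration, assuming a two-dimensional toy one-particle space and truncating the infinite direct sum at three particles; as above, the Bose/Fermi symmetrization projections are ignored):

```python
import numpy as np

d = 2        # dimension of the toy one-particle space H (assumption)
n_max = 3    # truncate the infinite direct sum C ⊕ H ⊕ (H ⊗ H) ⊕ ... here

# The n-particle summand H ⊗ ... ⊗ H (n factors) has dimension d**n; n = 0 gives C.
dims = [d**n for n in range(n_max + 1)]
total = sum(dims)

# Number operator N = 0 ⊕ 1 ⊕ 2 ⊕ ...: multiplication by n on the n-particle block.
N = np.zeros((total, total))
offset = 0
for n, dn in enumerate(dims):
    N[offset:offset + dn, offset:offset + dn] = n * np.eye(dn)
    offset += dn

# A product state psi ⊗ phi embedded in the two-particle summand.
psi, phi = np.array([1.0, 0.0]), np.array([0.0, 1.0])
state = np.zeros(total)
start = dims[0] + dims[1]                  # skip the 0- and 1-particle blocks
state[start:start + dims[2]] = np.kron(psi, phi)

assert np.allclose(N @ state, 2 * state)   # a definite two-particle state

# A superposition of the vacuum and the two-particle state lives in F(H)
# but is not an N-eigenvector: particle number is genuinely superposed.
vac = np.zeros(total)
vac[0] = 1.0
superpos = (vac + state) / np.sqrt(2)
print(superpos @ N @ superpos)             # expectation value <N> = 1.0
```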

In the category of Hilbert spaces, all separable Hilbert spaces are isomorphic – there is no difference between Fock space and the single particle space. If we are not careful, we could become confused about the bearer of the name “Fock space.”

The confusion goes away when we move to the appropriate category. According to Wigner’s analysis, a particle corresponds to an irreducible unitary representation of the identity component P of the Poincaré group. Then the single particle space and Fock space are distinct objects in the category of representations of P. The underlying Hilbert spaces of the two representations are both separable (and hence isomorphic as Hilbert spaces); but the two representations are most certainly not equivalent (one is irreducible, the other reducible).

Geometric Structure, Causation, and Instrumental Rip-Offs, or, How Does a Physicist Read Off the Physical Ontology From the Mathematical Apparatus?


The benefit of the various structuralist approaches in the philosophy of mathematics is that they allow both the mathematical realist and anti-realist to use mathematical structures without being committed to a Platonism about mathematical objects, such as numbers – one can simply accept that, say, numbers exist as places in a larger structure, like the natural number system, rather than as some sort of independently existing, transcendent entities. Accordingly, a variation on a well-known mathematical structure, such as exchanging the natural numbers “3” and “7”, does not create a new structure, but merely gives the same structure “relabeled” (with “7” now playing the role of “3”, and vice versa). This structuralist tactic is familiar to spacetime theorists, for not only has it been adopted by substantivalists to undermine an ontological commitment to the independent existence of the manifold points of M, but it is tacitly contained in all relational theories, since they would count the initial embeddings of all material objects and their relations in a spacetime as isomorphic.

A critical question remains, however: Since spacetime structure is geometric structure, how does the Structural Realism (SR) approach to spacetime differ in general from mathematical structuralism? Is the theory just mathematical structuralism as it pertains to geometry (or, more accurately, differential geometry), rather than arithmetic or the natural number series? While it may sound counter-intuitive, the SR theorist should answer this question in the affirmative – the reason being, quite simply, that the puzzle of how mathematical spacetime structures apply to reality, or are exemplified in the real world, is identical to the problem of how all mathematical structures are exemplified in the real world. Philosophical theories of mathematics, especially nominalist theories, commonly take as their starting point the fact that certain mathematical structures are exemplified in our common experience, while others are excluded. To take a simple example, a large collection of coins can exemplify the standard algebraic structure that includes commutative multiplication (e.g., 2 x 3 = 3 x 2), but not the more limited structure associated with, say, Hamilton’s quaternion algebra (where multiplication is non-commutative). In short, not all mathematical structures find real-world exemplars (although, for the minimal nominalists, these structures can be given a modal construction). The same holds for spacetime theories: empirical evidence currently favors the mathematical structures utilized in the General Theory of Relativity, such that the physical world exemplifies, say, g, but a host of other geometric structures, such as the flat Newtonian metric, h, are not exemplified.
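The non-commutativity invoked here is easy to exhibit concretely; note that it shows up between Hamilton’s imaginary units rather than between ordinary scalars like 2 and 3. A minimal sketch (my illustration, implementing the standard rules i² = j² = k² = ijk = −1):

```python
# Minimal quaternion multiplication; a quaternion is a tuple (a, b, c, d)
# standing for a + b*i + c*j + d*k.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
print(qmul(i, j))  # (0, 0, 0, 1)  =  k
print(qmul(j, i))  # (0, 0, 0, -1) = -k : the product order matters
```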

The critic will likely respond that there is a substantial difference between the mathematical structures that appear in physical theories and the mathematics relevant to everyday experience. For the former, and not the latter, the mathematical structures will vary along with the postulated physical forces and laws; and this explains why there are a number of competing spacetime theories, and thus different mathematical structures, compatible with the same evidence: in Poincaré fashion, Newtonian rivals to GTR can still employ h as long as special distorting forces are introduced. Yet underdetermination can plague even simple arithmetical experience, a fact well known in the philosophy of mathematics and in measurement theory. For example, in Charles Chihara’s work, an assessment of the empiricist interpretation of mathematics prompts the following conclusion: “the fact that adding 5 gallons of alcohol to 2 gallons of water does not yield 7 gallons of liquid does not refute any law of logic or arithmetic [“5+2=7”] but only a mistaken physical assumption about the conservation of liquids when mixed”. While obviously true, Chihara could have also mentioned that, in order to capture our common-sense intuitions about mathematics, the application of the mathematical structure in such cases requires coordination with a physical measuring convention that preserves the identity of each individual entity, or unit, both before and after the mixing. In the mixing experiment, perhaps atoms should serve as the objects coordinated to the natural number series, since the stability of individual atoms would prevent the sort of blurring together of the individuals (“gallon of liquid”) that led to the arithmetically deviant results. By choosing a different coordination, the mixing experiment can thus be judged to uphold, or exemplify, the statement “5+2=7”. What all of this helps to show is that mathematics, for both complex geometrical spacetime structures and simple non-geometrical structures, cannot be empirically applied without stipulating physical hypotheses and/or conventions about the objects that model the mathematics. Consequently, as regards real world applications, there is no difference in kind between the mathematical structures that are exemplified in spacetime physics and in everyday observation; rather, they only differ in their degree of abstractness and in the sophistication of the physical hypotheses or conventions required for their application. Both in the simple mathematical case and in the spacetime case, moreover, the decision to adopt a particular convention or hypothesis is normally based on a judgment of its overall viability and consistency with our total scientific view (a.k.a., the scientific method): we do not countenance a world where macroscopic objects can, against the known laws of physics, lose their identity by blending into one another (as in the addition example), nor do we sanction otherwise undetectable universal forces simply for the sake of saving a cherished metric.

Another significant shared feature of spacetime and mathematical structure is the apparent absence of causal powers or effects, even though the relevant structures seem to play some sort of “explanatory role” in the physical phenomena. To be more precise, consider the example of an “arithmetically-challenged” consumer who lacks an adequate grasp of addition: if he were to ask for an explanation of the event of adding five coins to another seven, and why it resulted in twelve, one could simply respond by stating, “5+7=12”, which is an “explanation” of sorts, although not in the scientific sense. On the whole, philosophers since Plato have found it difficult to offer a satisfactory account of the relationship between general mathematical structures (arithmetic/”5+7=12”) and the physical manifestations of those structures (the outcome of the coin adding). As succinctly put by Michael Liston:

Why should appeals to mathematical objects [numbers, etc.] whose very nature is non-physical make any contribution to sound inferences whose conclusions apply to physical objects?

One response to the question can be comfortably dismissed, nevertheless: mathematical structures did not cause the outcome of the coin adding, for this would seem to imply that numbers (or “5+7=12”) somehow had a mysterious, platonic influence over the course of material affairs.

In the context of the spacetime ontology debate, there has been a corresponding reluctance on the part of both sophisticated substantivalists and (R2, the rejection of substantivalism) relationists to explain how space and time differentiate the inertial and non-inertial motions of bodies; and, in particular, what role spacetime plays in the origins of non-inertial force effects. Returning once more to our universe with a single rotating body, and assuming that no other forces or causes are present, it would be somewhat peculiar to claim that the causal agent responsible for the observed force effects of the motion is either substantival spacetime or the relative motions of bodies (or, more accurately, the motion of bodies relative to a privileged reference frame, or possible trajectories, etc.). Yet, since it is the motion of the body relative to either substantival space, other bodies/fields, privileged frames, possible trajectories, etc., that explains (or identifies, defines) the presence of the non-inertial force effects of the acceleration of the lone rotating body, both theories are therefore in serious need of an explanation of the relationship between space and these force effects. The strict (R1) relationists face a different, though no less daunting, task; for they must reinterpret the standard formulations of, say, Newtonian theory in such a way that the rotation of our lone body in empty space, or the rotation of the entire universe, is not possible. To accomplish this goal, the (R1) relationist must draw upon different mathematical resources and adopt various physical assumptions that may, or may not, ultimately conflict with empirical evidence: for example, they must stipulate that the angular momentum of the entire universe is 0.

All participants in the spacetime ontology debate are confronted with the nagging puzzle of understanding the relationship between, on the one hand, the empirical behavior of bodies, especially the non-inertial forces, and, on the other hand, the apparently non-empirical, mathematical properties of the spacetime structure that are somehow inextricably involved in any adequate explanation of those non-inertial forces – namely, for the substantivalists and (R2) relationists, the affine structure ∇ that lays down the geodesic paths of inertially moving bodies. The task of explaining this connection between the empirical and abstract mathematical or quantitative aspects of spacetime theories is thus identical to elucidating the mathematical problem of how numbers relate to experience (e.g., how “5+7=12” figures in our experience of adding coins). Likewise, there exists a parallel in the fact that most substantivalists and (R2) relationists seem to shy away from positing a direct causal connection between material bodies and space (or privileged frames, possible trajectories, etc.) in order to account for non-inertial force effects, just as a mathematical realist would recoil from ascribing causal powers to numbers so as to explain our common experience of adding and subtracting.
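To make the role of ∇ concrete, here is a minimal sketch (my illustration, with an assumed coordinate system and motion): in the flat plane in polar coordinates, the only nonzero Christoffel symbols are Γ^r_θθ = −r and Γ^θ_rθ = Γ^θ_θr = 1/r, and plugging uniform circular motion into the covariant acceleration (formula (1) quoted below) yields a nonzero result, which is exactly the non-inertial character of rotation:

```python
import sympy as sp

t, R, w = sp.symbols('t R omega', positive=True)

# Uniform circular motion in the flat plane, polar coordinates (assumptions):
r = R          # constant radius
theta = w * t  # constant angular velocity

# Radial component of the covariant acceleration d²x^i/dt² + Γ^i_jk ẋ^j ẋ^k,
# using the flat-space polar Christoffel symbol Γ^r_θθ = -r:
a_r = sp.diff(r, t, 2) + (-r) * sp.diff(theta, t)**2
print(sp.simplify(a_r))  # -R*omega**2: nonzero, hence non-inertial (absolute) acceleration
```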

An insight into the non-causal, mathematical role of spacetime structures can also be of use to the (R2) relationist in defending against the charge of instrumentalism, as, for instance, in deflecting Earman’s criticisms of Sklar’s “absolute acceleration” concept. Conceived as a monadic property of bodies, Sklar’s absolute acceleration forgoes the common understanding of acceleration as a species of relative motion, whether that motion is relative to substantival space, other bodies, or privileged reference frames. Earman’s objection to this strategy centers upon the utilization of spacetime structures in describing the primitive acceleration property: “it remains magic that the representative [of Sklar’s absolute acceleration] is neo-Newtonian acceleration

d²x^i/dt² + Γ^i_jk (dx^j/dt)(dx^k/dt) —– (1)

[i.e., the covariant derivative, or ∇ in coordinate form]”. Ultimately, Earman’s critique of Sklar’s (R2) relationism would seem to cut against all sophisticated (R2) hypotheses, for he seems to regard the exercise of these richer spacetime structures, like ∇, as tacitly endorsing the absolute/substantivalist side of the dispute:

…the Newtonian apparatus can be used to make the predictions and afterwards discarded as a convenient fiction, but this ploy is hardly distinguishable from instrumentalism, which, taken to its logical conclusion, trivializes the absolute-relationist debate.

The weakness of Earman’s argument should be readily apparent, since, to put it bluntly, does the equivalent use of mathematical statements, such as “5+7=12”, likewise obligate the mathematician to accept a realist conception of numbers (such that they exist independently of all exemplifying systems)? Yet, if the straightforward employment of mathematics does not entail either a realist or a nominalist theory of mathematics (as most mathematicians would likely agree), then why must the equivalent use of the geometric structures of spacetime physics, e.g., ∇, require a substantivalist conception of ∇ as opposed to an (R2) relationist conception of ∇? Put differently, does a substantivalist commitment to ∇, whose overall function is to determine the straight-line trajectories of Neo-Newtonian spacetime, also necessitate a substantivalist commitment to its components, such as the vector d/dt along with its limiting process and mapping into ℜ? In short, how does a physicist read off the physical ontology from the mathematical apparatus? A non-instrumental interpretation of some component of the theory’s quantitative structure is often justified if that component can be given a plausible causal role (as in subatomic physics), but, as noted above, ∇ does not appear to cause anything in spacetime theories. All told, Earman’s argument may prove too much, for if we accept his reasoning at face value, then the introduction of any mathematical or quantitative device that is useful in describing or measuring physical events would saddle the ontology with a bizarre type of entity (e.g., gross national product, average household family, etc.). A nice example of a geometric structure that provides a similarly useful explanatory function, but whose substantive existence we would be inclined to reject as well, is provided by Dieks’ example of a three-dimensional colour solid:

Different colours and their shades can be represented in various ways; one way is as points on a 3-dimensional colour solid. But the proposal to regard this ‘colour space’ as something substantive, needed to ground the concept of colour, would be absurd.

 

Are Categories Similar to Sets? A Folly, if General Relativity and Quantum Mechanics Think So.


The fundamental importance of the path integral suggests that it might be enlightening to simplify things somewhat by stripping away the knot observable K and studying only the bare partition functions of the theory, considered over arbitrary spacetimes. That is, consider the path integral

Z(M) = ∫ DA e^(i ∫_M S(A)) —– (1)

where M is an arbitrary closed 3d manifold, that is, compact and without boundary, and S[A] is the Chern-Simons action. Immediately one is struck by the fact that, since the action is topological, the number Z(M) associated to M should be a topological invariant of M. This is a remarkably efficient way to produce topological invariants.

Poincaré Conjecture: If M is a closed 3-manifold whose fundamental group π1(M) and homology groups Hi(M) are equal to those of S3, then M is homeomorphic to S3.

One therefore appreciates the simplicity of the quantum field theory approach to topological invariants, which runs as follows.

  1. Endow the space with extra geometric structure in the form of a connection (alternatively a field, a section of a line bundle, an embedding map into spacetime)
  2. Compute a number from this manifold-with-connection (the action)
  3. Sum over all connections.

This may be viewed as an extension of the general principle in mathematics that one should classify structures by the various kinds of extra structure that can live on them. Indeed, the Chern-Simons Lagrangian was originally introduced in mathematics in precisely this way. Chern-Weil theory provides access to the cohomology groups (that is, topological invariants) of a manifold M by introducing an arbitrary connection A on M, and then associating to A a closed form f(A) (for instance, via the Chern-Simons Lagrangian), whose cohomology class is, remarkably, independent of the original arbitrary choice of connection A. Quantum field theory takes this approach to the extreme by being far more ambitious; it associates to a connection A the actual numerical value of the action (usually obtained by integration over M) – this number certainly depends on the connection, but field theory atones for this by summing over all connections.
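This three-step recipe can be carried out exactly in a finite toy model. The sketch below is my illustration, not part of the text: it replaces the Chern-Simons integral over connections by the untwisted Dijkgraaf-Witten sum over flat connections for a finite gauge group G, for which flat G-bundles on M correspond to homomorphisms π1(M) → G and Z(M) = |Hom(π1(M), G)|/|G|. For the lens space L(p,1), π1 = Z/p, so a homomorphism is determined by an element g with g^p = e:

```python
from itertools import permutations

def compose(p, q):
    # Composition of permutations written as tuples: (p ∘ q)(x) = p(q(x)).
    return tuple(p[q[x]] for x in range(len(q)))

def power(g, n):
    out = tuple(range(len(g)))      # the identity permutation
    for _ in range(n):
        out = compose(out, g)
    return out

G = list(permutations(range(3)))    # gauge group S3 (an assumed choice), |G| = 6
e = tuple(range(3))

def Z(p):
    # Z(L(p,1)) = |Hom(Z/p, G)| / |G|; a hom sends the generator to some g with g^p = e.
    return sum(1 for g in G if power(g, p) == e) / len(G)

for p in (1, 2, 3, 4, 5):
    print(f"Z(L({p},1)) = {Z(p):.4f}")
# L(1,1) is S³ and gives Z = 1/|G|; the numbers depend only on the topology of M.
```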

Quantum field theory is however, in its path integral manifestation, far more than a mere machine for computing numbers associated with manifolds. There is dynamics involved, for the natural purpose of path integrals is not to calculate bare partition functions such as equation (1), but rather to express the probability amplitude for a given field configuration to evolve into another. Thus one considers a 3d manifold M (spacetime) with boundary components Σ1 and Σ2 (space), and considers M as the evolution of space from its initial configuration Σ1 to its final configuration Σ2.


This is known mathematically as a cobordism from Σ1 to Σ2. To a 2d closed manifold Σ we associate the space of fields A(Σ) living on Σ. A physical state Ψ corresponds to a functional on this space of fields. This is the Schrödinger picture of quantum field theory: if A ∈ A(Σ), then Ψ(A) represents the probability that the state known as Ψ will be found in the field A. Such a state evolves with time due to the dynamics of the theory; Ψ(A) → Ψ(A, t). The space of states has a natural basis, which consists of the delta functionals Â – these are the states satisfying ⟨Â|Â′⟩ = δ(A − A′). Any arbitrary state Ψ may be expressed as a superposition of these basis states. The path integral instructs us how to compute the time evolution of states, by first expanding them in the Â basis, and then specifying that the amplitude for a system in the state Â1 on the space Σ1 to be found in the state Â2 on the space Σ2 is given by:

⟨Â2|U|Â1⟩ = ∫_{A|Σ1 = A1, A|Σ2 = A2} DA e^(i S[A]) —– (2)

This equation is the fundamental formula of quantum field theory: ‘Perform a weighted sum over all possible fields (connections) living on spacetime that restrict to A1 and A2 on Σ1 and Σ2 respectively’. This formula constructs the time evolution operator U associated to the cobordism M.

In this way we see that, at the very heart of quantum mechanics and quantum field theory, is a formula which associates to every space-like manifold Σ a Hilbert space of fields A(Σ), and to every cobordism M from Σ1 to Σ2 a time evolution operator U(M) : A(Σ1) → A(Σ2). To specify a quantum field theory is nothing more than to give rules for constructing the Hilbert spaces A(Σ) and the rules (correlation functions) for calculating the time evolution operators U(M). This is precisely the statement that a quantum field theory is a functor from the cobordism category nCob to the category of Hilbert spaces Hilb.

A category C consists of a collection of objects, a collection of arrows f : a → b from any object a to any object b, a rule for composing arrows f : a → b and g : b → c to obtain an arrow gf : a → c, and for each object a an identity arrow 1a : a → a. These must satisfy the associative law f(gh) = (fg)h and the left and right unit laws 1b f = f and f 1a = f (for f : a → b) whenever these composites are defined. In many cases, the objects of a category are best thought of as sets equipped with extra structure, while the morphisms are functions preserving the structure. However, this is neither true for the category of Hilbert spaces nor for the category of cobordisms.

The fundamental idea of category theory is to consider the ‘external’ structure of the arrows between objects instead of the ‘internal’ structure of the objects themselves – that is, the actual elements inside an object – if indeed, an object is a set at all : it need not be, since category theory waives its right to ask questions about what is inside an object, but reserves its right to ask how one object is related to another.

A functor F : C → D from a category C to another category D is a rule which associates to each object a of C an object F(a) of D, and to each arrow f : a → b in C a corresponding arrow F(f) : F(a) → F(b) in D. This association must preserve composition and the units, that is, F(fg) = F(f)F(g) and F(1a) = 1F(a).
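A minimal executable sketch of these two definitions (my illustration): a monoid is a one-object category, so the natural numbers under addition form a category with a single object whose arrows are the numbers; a rule F preserving composition and the unit is then a functor, here landing in 2×2 matrices as a toy stand-in for Hilb.

```python
import numpy as np

# One-object category: the arrows are the natural numbers, composition is +,
# and the identity arrow is 0. A functor F into matrices must satisfy
# F(m + n) = F(m) @ F(n) and F(0) = identity.
theta = np.pi / 7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def F(n):
    return np.linalg.matrix_power(R, n)   # the arrow n goes to the n-th power of R

m, n = 3, 5
assert np.allclose(F(m + n), F(m) @ F(n))  # composition is preserved
assert np.allclose(F(0), np.eye(2))        # the identity arrow is preserved
print("functor laws hold")
```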

1. Set is the category whose objects are sets, and whose arrows are the functions from one set to another.

2. nCob is the category whose objects are closed (n − 1)-dimensional manifolds Σ, and whose arrows M : Σ1 → Σ2 are cobordisms, that is, n-dimensional manifolds having an input boundary Σ1 and an output boundary Σ2.

3. Hilb is the category whose objects are Hilbert spaces and whose arrows are the bounded linear operators from one Hilbert space to another.

The ‘new philosophy’ amounts to the following observation: The last two categories, nCob and Hilb, resemble each other far more than they do the first category, Set! If we loosely regard general relativity or geometry to be represented by nCob, and quantum mechanics to be represented by Hilb, then perhaps many of the difficulties in a theory of quantum gravity, and indeed in quantum mechanics itself, arise due to our silly insistence on thinking of these categories as similar to Set, when in fact the one should be viewed in terms of the other. That is, the notion of points and sets, while mathematically acceptable, might be highly unnatural to the subject at hand!

Bifurcation

The main insight that Poincaré brought to mechanics was to view the temporal behavior of a system as a succession of configurations in a state space. The most important consequence was his focus on the geometric and topological structure of the allowed states. Due to its geometric character, the approach he introduced has a kind of universality built in. Previously one would say that two systems are obviously different because their behavior is governed by different physical forces and constraints and because they are composed of different materials. Moreover, if their equations of motion, summarizing how the systems react and change state over time, are different, then their behavior is different.

To be concrete, let’s take a driven pendulum and a superconducting Josephson junction in a microwave field. These are physical systems that are different in just these ways. One is made out of a stiff wood rod and a heavy weight, say; the other consists of a loop of superconducting metal and operates near absolute zero temperature. The pendulum’s state is given by the position and velocity of the weight; the Josephson junction’s state is determined by the flow of tunneling quantum mechanical electrons.


In contrast to this notion of apparent difference, Poincaré’s view ignores the particular form of the governing equations, even forgets what the underlying variables mean, and instead just looks at the set of states and how a system moves through them. In this view, two systems, like the pendulum and Josephson junction, are the same if they have the same geometric structures in their state spaces. In fact, the pendulum and Josephson junction both exhibit the period-doubling route to chaos and so are very, very similar systems despite their initial superficial dissimilarity. In particular, the mechanisms that produce the period-doubling behavior and eventual deterministic chaos are the same in both. This type of universality allows one to understand the behavior and dynamics of systems in very many different branches of science within a unified framework. Poincaré’s approach gives a precise way for us to say how two systems are qualitatively the same.

Roughly speaking, a bifurcation is a qualitative change in an attractor’s structure as a control parameter is smoothly varied. For example, a simple equilibrium, or fixed point attractor, might give way to a periodic oscillation as the stress on a system increases. Similarly, a periodic attractor might become unstable and be replaced by a chaotic attractor.


In Bénard convection, to take a real world example, heat from the surface of the earth simply conducts its way to the top of the atmosphere until the rate of heat generation at the surface of the earth gets too high. At this point heat conduction breaks down and bodily motion of the air (wind!) sets in. The atmosphere develops pairs of convection cells, one rotating left and the other rotating right. In a dripping faucet at low pressure, drops come off the faucet with equal timing between them. As the pressure is increased the drops begin to fall with two drops falling close together, then a longer wait, then two drops falling close together again. In this case, a simple periodic process has given way to a periodic process with twice the period, a process described as “period doubling”. If the flow rate of water through the faucet is increased further, often an irregular dripping is found and the behavior can become chaotic.
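The period-doubling route described here is easy to reproduce numerically. A minimal sketch (my illustration; the logistic map x → rx(1−x) is a standard stand-in for the faucet or the driven pendulum, not a system discussed in the text):

```python
# Sample the attractor of the logistic map x -> r*x*(1-x) for several
# control-parameter values and report how many distinct points it visits.
def attractor(r, n_transient=1000, n_sample=64):
    x = 0.5
    for _ in range(n_transient):       # let transients die out
        x = r * x * (1 - x)
    orbit = set()
    for _ in range(n_sample):          # then sample the attractor
        x = r * x * (1 - x)
        orbit.add(round(x, 6))
    return sorted(orbit)

for r in (2.8, 3.2, 3.5, 3.9):
    pts = attractor(r)
    label = f"period {len(pts)}" if len(pts) <= 8 else "chaotic band"
    print(f"r = {r}: {label}")
# r = 2.8: period 1 (fixed point); r = 3.2: period 2; r = 3.5: period 4;
# r = 3.9: chaos, completing the period-doubling bifurcation sequence.
```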

To Err or Not? Neo-Kantianism’s Logical Flaw. Note Quote.

According to Bertrand Russell, the sense in which every object is ‘one’ is a very shadowy sense because it is applicable to everything alike. However, Russell argues, the sense in which a class may be said to have one member is quite precise. “A class u has one member when u is not null, and ‘x and y are us’ implies ‘x is identical with y’.” In this case the one-ness is a property of a class and Russell calls this class a unit-class. Thus, Russell claims further, the number ‘one’ is not to be asserted of terms but of classes having one member in the above-defined sense. The same distinction between the different uses of ‘one’ was also made by Frege and Couturat. Frege says that the sense in which every object is ‘one’ is very imprecise, that is, every single object possesses this property. However, Frege argues that when one speaks of ‘the number one’, one indicates by means of the definite article a definite and unique object of scientific study. In his reply to Poincaré’s critique of the logicist programme, Couturat says that the confusion which exists in Poincaré’s mind arises from the double meaning of the word for ‘one’, that is, it is used both as a name of a number and as an indefinite article:

To sum up, it is not enough to conceive any one object to conceive the number one, nor to think of two objects together to have by that alone the idea of the number two.

According to Couturat, from the fact that the proposition “x and y are the elements of the class u” contains the symbols x and y one should not conclude that the number two is implied in this proposition. As a result, from the viewpoint of Russell, Couturat and Frege, the neo-Kantians are making here an elementary logical mistake. This raises an interesting question: why did the neo-Kantians not notice the mistake they had made? The answer is not that they were unaware of the opinion of the logicists. Both Cohn and Cassirer discuss the above-mentioned passage in Russell’s Principles. However, although Cohn and Cassirer were familiar with the distinction presented by Russell, it did not convince them. In Cohn’s view, Russell’s unit-class does not define ‘one’ but ‘only one’. As Cohn sees it, ‘only one’ means the limitation of a class to one object. Thus Russell’s ‘unit-class’ already presupposes that an object is seen as a unit. As a result, Russell’s definition of ‘one’ is unsuccessful since it already presupposes the number ‘one’. Cassirer, too, refers to Russell’s explanation, according to which it is naturally incontestable that every member of a class is in some sense one, but, Cassirer says, it does not follow from this that the concept of ‘one’ is presupposed. Cassirer also mentions Russell’s explanation according to which the meaning of the assertion that a class u possesses ‘one’ member is determined by the fact that this class is not null and that if x and y are u, then x is identical with y. According to Cassirer, the logical function of number is here not so much deduced as rather described by a technical circumlocution. Cassirer argues that in order to comprehend Russell’s explanation it is necessary that the term x is understood as identical with itself, and at the same time it is related to another term y and the former is judged as agreeing with or differing from the latter. In Cassirer’s view, if this process of positing and differentiation is accepted, then all that has been done will be to presuppose the number in the sense of the theory of ordinal number.

The neo-Kantian critique cannot be explained away as a mere logical error. The real reason why they did not accept the distinction is that to accept it would be to accept at least part of the logicist programme. As Warren Goldfarb has pointed out, Poincaré’s argument will be logically in error only if one simultaneously accepts the analysis of the notions ‘in no case’ and ‘a class with one object’ that was first made available through modern mathematical logic. In other words, the logicists claim that the appearance of circularity is eliminated when one distinguishes uses of numerical expressions that can be replaced by purely quantificational devices from the full-blooded uses of such expressions that the formal definition is meant to underwrite. Hence the notions ‘in no case’ and ‘a class with one object’ do not presuppose any number theory if one simultaneously accepts the analysis which first-order quantificational logic provides for them. Poincaré does not accept this analysis, and, as a result, he can bring the charge of petitio principii.
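As an illustration of the quantificational analysis at issue (my rendering, not a formula from the text), Russell’s condition that the class u has exactly one member becomes

∃x (x ∈ u) ∧ ∀x ∀y ((x ∈ u ∧ y ∈ u) → x = y)

in which no number word occurs: only quantifiers, membership and identity are used. Whether this eliminates the circularity is precisely what the neo-Kantians dispute.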

Like Poincaré, the neo-Kantians were not ready to accept Russell’s analysis of the expression ‘a class with one object’. As they see it, although the notion ‘a class with one object’ does not presuppose the number ‘one’ if one accepts the logicist definition of number, it will presuppose it if one advocates a neo-Kantian theory of number. According to Cassirer, the concept of number is the first and truest expression of rational method in general. Later Cassirer added that number is not merely a product of pure thought but its very prototype and source. It not only originates from the pure regularities of thought but designates the primary and original act to which these regularities ultimately go back. In Natorp’s view, number is the purest and simplest product of thought. Natorp claims that the first precondition for the logical understanding of number is the insight that number has nothing to do with the existing things but that number is only concerned with the pure regularities of thought. Natorp connects number to the fundamental logical function of quantity. In his view, the quantitative function of thought is produced when multiplicity is singled out from the fundamental relation between unity and multiplicity. Moreover, multiplicity is a plurality of distinguishable elements. Plurality, in turn, is necessarily a plurality of unities. Thus unity in the sense of numerical oneness is the unavoidable starting-point, the indispensable foundation of every quantitative positing of pure thought. According to Natorp, the quantitative positing of thought proceeds in three steps. First, pure thought posits something as one. What is posited as one is not important (it can be the world, an atom, and so on). It is only something to which the thought attaches the character of oneness. Second, the positing of the one can be repeated in the sense that while the one remains posited, we can posit always another in comparison with it. This is the way in which we attain plurality. Third and last, we collect the individual positings into a whole, that is, to a new unity in the sense of a unity of several. In this way we attain a definite plurality, that is, “so much” as distinguished from an indefinite set. In other words, one and one and one, and so forth, are here joined to new mental unities (duality, triplicity, and so forth).

According to Cohn, the natural numbers are the most abstract objects possible. Everything thinkable can be an object, and every object has two elements: the thinking-form and the objectivity. The thinking-form belongs to every object, and Cohn calls it “positing”. It can be described by saying that every object is identical with itself. This formal definiteness of an object has nothing to do with the determination of an object with regard to content. Since the thinking-form belongs to every object in the same way, it alone is not enough to form any specific object. The particularity of any individual object, or as Cohn puts it, the objectivity of any individual object, is something new and foreign when compared to the thinking-form of the object. In other words, Cohn argues that the necessary elements of every object are the thinking-form and the objectivity. As a result, natural numbers are objects which have the thinking-form of identity and the minimum of objectivity, that is, the form of identity must be thought to be filled with something in some way or other. Moreover, Cohn says that his theory of natural numbers presupposes the possibility of arbitrary object-formation, that is, the possibility to construct arbitrarily many objects. On the basis of these two logical presuppositions, Cohn says that we are able to form arbitrarily many objects which are all equal with each other. According to Cohn, all of these objects can be described by the same symbol 1, and after this operation the fundamental equation 1 = 1 can be presented. Cohn says that the two separate symbols 1 in the equation signify different unities and the sign of equality means only that in any arithmetical relation any arbitrary unity can be replaced with any other unity. Moreover, Cohn says that we can collect an arbitrary number of objects into an aggregate, that is, into a new object. This is expressed by the repeated use of the word ‘and’. In arithmetic the combination of unities into a new unity has the form: 1 + 1 + 1 and so on (when ‘and’ is replaced by ‘+’). The simplest combination (1 + 1) can be described as 2, the following one (1 + 1 + 1) as 3, and so on. Thus a new number can always be attained by adding a new unity.