The Natural Theoretic of Electromagnetism. Thought of the Day 147.0


In Maxwell’s theory, the field strength F = 1/2 Fμν dxμ ∧ dxν is a real 2-form on spacetime, and thence a natural object. The homogeneous Maxwell equation dF = 0 is an equation involving forms and it has a well-known local solution F = dA’, i.e. there exists a local spacetime 1-form A’ which is a potential for the field strength F. Of course, if spacetime is contractible, as e.g. Minkowski space, the solution is also a global one. As is well known, in the non-commutative Yang-Mills case the field strength F = 1/2 FAμν TA ⊗ dxμ ∧ dxν is no longer a spacetime form. This is a somewhat trivial remark, since the transformation laws of such a field strength are obtained as the transformation laws of the curvature of a principal connection with values in the Lie algebra of some (semisimple) non-Abelian Lie group G (e.g. G = SU(n), n ≥ 2). However, the common belief that electromagnetism is to be understood as the particular case (for G = U(1)) of a non-commutative theory is not really physically evident. Even if we subscribe to this common belief, which is motivated also by the tremendous success of the quantized theory, let us for a while discuss electromagnetism as a standalone theory.

From a mathematical viewpoint this is a (different) approach to electromagnetism, and the choice between the two can be settled on physical grounds only. Of course the 1-form A’ is defined modulo a closed form, i.e. locally A” = A’ + dα is another solution.
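Since d ∘ d = 0, the gauge-shifted potential A” yields the same field strength F. This can be checked numerically; the following is a minimal sketch (the sample potential A, the gauge function α, the evaluation point, and the central-difference scheme are all illustrative choices, not taken from the text):

```python
import math

h = 1e-5  # central-difference step

def partial(f, i, p):
    # central-difference partial derivative of f along coordinate i at point p
    lo, hi = list(p), list(p)
    lo[i] -= h
    hi[i] += h
    return (f(hi) - f(lo)) / (2 * h)

# an illustrative potential A = (A_x, A_y) on R^2
A = [lambda p: math.sin(p[1]),    # A_x
     lambda p: p[0] * p[0]]       # A_y

# gauge transformation: A'' = A' + d(alpha) for an arbitrary smooth alpha
alpha = lambda p: math.exp(p[0]) * math.cos(p[1])
A2 = [lambda p: A[0](p) + partial(alpha, 0, p),
      lambda p: A[1](p) + partial(alpha, 1, p)]

def F(pot, p):
    # field strength F_xy = d_x A_y - d_y A_x
    return partial(pot[1], 0, p) - partial(pot[0], 1, p)

p = [0.3, -0.7]
# F is unchanged by the gauge shift, because d(d(alpha)) = 0
assert abs(F(A, p) - F(A2, p)) < 1e-4
```

The agreement (up to finite-difference error) is just the commutativity of mixed partial derivatives of the smooth gauge function.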

How can one decide whether the potential of electromagnetism should be considered as a 1-form or rather as a principal connection on a U(1)-bundle? First of all we notice that, by a standard hole argument (one can easily define compactly supported closed 1-forms, e.g. by taking the differentials of compactly supported functions, which always exist on a paracompact manifold), the potentials A” and A’ represent the same physical situation. On the other hand, from a mathematical viewpoint we would like the dynamical field, i.e. the potential A’, to be a global section of some suitable configuration bundle. This requirement is a mathematical one, motivated by the wish for a well-defined geometrical perspective based on global Variational Calculus.

The first mathematical way out is to restrict attention to contractible spacetimes, where A’ may always be chosen to be global. Then one can require the gauge transformations A” = A’ + dα to be Lagrangian symmetries. In this way, field equations select a whole equivalence class of gauge-equivalent potentials, a procedure which solves the hole-argument problem. In this picture the potential A’ is really a 1-form, which can be dragged along spacetime diffeomorphisms and which admits the ordinary Lie derivatives of 1-forms. Unfortunately, the restriction to contractible spacetimes is physically unmotivated and probably wrong.

Alternatively, one can restrict the electromagnetic fields F, deciding that only exact 2-forms F are allowed. That actually restricts the observable physical situations, since it strengthens the homogeneous Maxwell equations (i.e. the Bianchi identities) by requiring that F be not only closed but exact. One should in principle be able to reject this option empirically.

On non-contractible spacetimes, one is necessarily forced to resort to a more “democratic” attitude. The spacetime is covered by a number of patches Uα, and on each patch Uα one defines a potential A(α). On the intersection of two patches the two potentials A(α) and A(β) may not agree. In each patch, in fact, the observer chooses his own conventions and finds a different representative of the electromagnetic potential, which is related by a gauge transformation to the representatives chosen in the neighbouring patch(es). Thence we have a family of gauge transformations, one on each intersection Uαβ, which obey cocycle identities. If one recognizes in them the action of U(1), then one can build a principal bundle P = (P, M, π; U(1)) and interpret the ensuing potential as a connection on P. This leads the way to the gauge natural formalism.

Anyway, this does not close the matter. One can investigate if and when the principal bundle P, in addition to the obvious principal structure, can also be endowed with a natural structure. If that were possible, then the bundle of connections Cp (which is associated to P) would also be natural. The problem of deciding whether a given gauge natural bundle can be endowed with a natural structure is quite difficult in general, and no full theory is yet completely developed in mathematical terms. That is to say, there is no complete classification of the topological and differential-geometric conditions which a principal bundle P has to satisfy in order to ensure that, among the principal trivializations which determine its gauge natural structure, one can choose a sub-class of trivializations which induce a purely natural bundle structure. Nor is it clear how many inequivalent natural structures a good principal bundle may support. Still, there are important examples of bundles which support a natural and a gauge natural structure at the same time. Actually, any natural bundle is associated to some frame bundle L(M), which is principal; thence each natural bundle is also gauge natural in a trivial way. Since on any paracompact manifold one can choose a global Riemannian metric g, the corresponding tangent bundle T(M) can be associated to the orthonormal frame bundle O(M, g) besides being obviously associated to L(M). Thence the natural bundle T(M) may also be endowed with a gauge natural bundle structure with structure group O(m). And if M is orientable the structure can be further reduced to a gauge natural bundle with structure group SO(m).

Roughly speaking, the task is achieved by imposing restrictions on the cocycles which generate T(M), i.e. by selecting a privileged class of changes of local laboratories and sets of measures: one requires the cocycle ψ(αβ) to take its values in O(m) rather than in the larger group GL(m). Inequivalent gauge natural structures are in one-to-one correspondence with (non-isometric) Riemannian metrics on M. Actually, whenever there is a Lie group homomorphism ρ : GL(m) → G onto some given Lie group G, we can build a natural G-principal bundle on M. In fact, let (Uα, ψ(α)) be an atlas of the given manifold M, ψ(αβ) its transition functions and jψ(αβ) the induced transition functions of L(M). Then we can define a G-valued cocycle on M by setting ρ(jψ(αβ)) and thence a (unique up to fibered isomorphisms) G-principal bundle P(M) = (P(M), M, π; G). The bundle P(M), as well as any gauge natural bundle associated to it, is natural by construction. Now we can define a whole family of natural U(1)-bundles Pq(M) by using the bundle homomorphisms

ρq: GL(m) → U(1): J ↦ exp(iq ln det|J|) —– (1)

where q is any real number and ln denotes the natural logarithm. In the case q = 0 the image of ρ0 is the trivial group {I}, and all the induced bundles are trivial, i.e. P = M x U(1).
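That each ρq in (1) really is a group homomorphism – which is what makes the G-valued cocycle ρq(jψ(αβ)) well defined – follows from det(J1J2) = det(J1) det(J2) together with the additivity of ln. A quick numerical sanity check on 2×2 matrices (the sample matrices and the value of q are arbitrary choices):

```python
import cmath
import math

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rho(q, J):
    # rho_q(J) = exp(i q ln|det J|): a point on the unit circle U(1)
    return cmath.exp(1j * q * math.log(abs(det2(J))))

J1 = [[2.0, 1.0], [0.5, 3.0]]    # det = 5.5
J2 = [[-1.0, 2.0], [1.5, 0.5]]   # det = -3.5

q = 0.7
# homomorphism property: rho_q(J1 J2) = rho_q(J1) rho_q(J2)
assert abs(rho(q, matmul2(J1, J2)) - rho(q, J1) * rho(q, J2)) < 1e-12
assert abs(abs(rho(q, J1)) - 1.0) < 1e-12   # values lie in U(1)
assert rho(0, J1) == 1                      # q = 0: image is the trivial group
```

Since the transition functions of L(M) satisfy the cocycle identities, so do their images under any such homomorphism, which is all the principal-bundle construction needs.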

The natural lift φ’ of a diffeomorphism φ: M → M is given by

φ'[x, e]α = [φ(x), exp(iq ln det|J|) · e]α —– (2)

where J is the Jacobian of the morphism φ. The bundles Pq(M) are all trivial since they allow a global section: on any (paracompact) manifold M one can define a global Riemannian metric g, and the local sections it induces glue together into a global one.

Since the bundles Pq(M) are all trivial, they are all isomorphic to M x U(1) as principal U(1)-bundles, though in a non-canonical way unless q = 0. Any two of the bundles Pq1(M) and Pq2(M) for two different values of q are isomorphic as principal bundles but the isomorphism obtained is not the lift of a spacetime diffeomorphism because of the two different values of q. Thence they are not isomorphic as natural bundles. We are thence facing a very interesting situation: a gauge natural bundle C associated to the trivial principal bundle P can be endowed with an infinite family of natural structures, one for each q ∈ R; each of these natural structures can be used to regard principal connections on P as natural objects on M and thence one can regard electromagnetism as a natural theory.

Now that the mathematical situation has been a little bit clarified, it is again a matter of physical interpretation. One can in fact restrict to electromagnetic potentials which are a priori connections on a trivial structure bundle P ≅ M x U(1) or to accept that more complicated situations may occur in Nature. But, non-trivial situations are still empirically unsupported, at least at a fundamental level.

Hypostatic Abstraction. Thought of the Day 138.0


Hypostatic abstraction is linguistically defined as the process of making a noun out of an adjective; logically as making a subject out of a predicate. The idea here is that in order to investigate a predicate – which other predicates it is connected to, which conditions it is subjected to, in short to test its possible consequences using Peirce’s famous pragmatic maxim – it is necessary to posit it as a subject for investigation.

Hypostatic abstraction is supposed to play a crucial role in the reasoning process for several reasons. The first is that by making a thing out of a thought, it facilitates the possibility for thought to reflect critically upon the distinctions with which it operates, to control them, reshape them, combine them. Thought becomes emancipated from the prison of the given, in which abstract properties exist only as Husserlian moments, and even if prescission may isolate those moments and induction may propose regularities between them, the road for thought to the possible establishment of abstract objects and the relations between them seems barred. The object created by a hypostatic abstraction is a thing, but it is of course no actually existing thing; rather it is a scholastic ens rationis, a figment of thought. It is a second intention: a thought about a thought. But this does not, in Peirce’s realism, imply that it is necessarily fictitious. In many cases it may indeed be, but in other cases we may hit upon an abstraction having real existence:

Putting aside precisive abstraction altogether, it is necessary to consider a little what is meant by saying that the product of subjectal abstraction is a creation of thought. (…) That the abstract subject is an ens rationis, or creation of thought does not mean that it is a fiction. The popular ridicule of it is one of the manifestations of that stoical (and Epicurean, but more marked in stoicism) doctrine that existence is the only mode of being which came in shortly before Descartes, in consequence of the disgust and resentment which progressive minds felt for the Dunces, or Scotists. If one thinks of it, a possibility is a far more important fact than any actuality can be. (…) An abstraction is a creation of thought; but the real fact which is important in this connection is not that actual thinking has caused the predicate to be converted into a subject, but that this is possible. The abstraction, in any important sense, is not an actual thought but a general type to which thought may conform.

The seemingly scepticist pragmatic maxim never ceases to surprise: if we take all possible effects we can conceive an object to have, then our conception of those effects is identical with our conception of that object, the maxim claims – but if we can conceive of abstract properties of the objects to have effects, then they are part of our conception of it, and hence they must possess reality as well. An abstraction is a possible way for an object to behave – and if certain objects do in fact conform to this behavior, then that abstraction is real; it is a ‘real possibility’ or a general object. If not, it may still retain its character of possibility. Peirce’s definitions of hypostatic abstractions now and then confuse this point. When he claims that

An abstraction is a substance whose being consists in the truth of some proposition concerning a more primary substance,

then the abstraction’s existence depends on the truth of some claim concerning a less abstract substance. But if the less abstract substance in question does not exist, and the claim in question consequently will be meaningless or false, then the abstraction will – following that definition – cease to exist. The problem is only that Peirce does not sufficiently clearly distinguish between the really existing substances which abstractive expressions may refer to, on the one hand, and those expressions themselves, on the other. It is the same confusion which may make one shuttle between hypostatic abstraction as a deduction and as an abduction. The first case corresponds to there actually existing a thing with the quality abstracted, and where we consequently may expect the existence of a rational explanation for the quality, and, correlatively, the existence of an abstract substance corresponding to the supposed ens rationis – the second case corresponds to the case – or the phase – where no such rational explanation and corresponding abstract substance has yet been verified. It is of course always possible to make an abstraction symbol, given any predicate – whether that abstraction corresponds to any real possibility is an issue for further investigation to estimate. And Peirce’s scientific realism makes him demand that the connections to actual reality of any abstraction should always be estimated (The Essential Peirce):

every kind of proposition is either meaningless or has a Real Secondness as its object. This is a fact that every reader of philosophy should carefully bear in mind, translating every abstractly expressed proposition into its precise meaning in reference to an individual experience.

This warning is directed, of course, towards empirical abstractions which require the support of particular instances to be pragmatically relevant but could hardly hold for mathematical abstraction. But in any case hypostatic abstraction is necessary for the investigation, be it in pure or empirical scenarios.

Is There a Philosophy of Bundles and Fields? Drunken Risibility.

The bundle formulation of field theory is not at all motivated by just seeking full mathematical generality; on the contrary, it is an empirical consequence of physical situations that concretely happen in Nature. One among the simplest of these situations may be that of a particle constrained to move on a sphere, denoted by S2; the physical state of such a dynamical system is described by providing both the position of the particle and its momentum, which is a tangent vector to the sphere. In other words, the state of this system is described by a point of the so-called tangent bundle TS2 of the sphere, which is non-trivial, i.e. its global topology differs from the (trivial) product topology of S2 x R2. When one seeks solutions of the relevant equations of motion, some local coordinates have to be chosen on the sphere, e.g. stereographic coordinates covering the whole sphere but a point (say, the north pole). On such a coordinate neighbourhood (which is contractible, being a diffeomorphic copy of R2) there exists a trivialization of the corresponding portion of the tangent bundle of the sphere, so that the relevant equations of motion can be written locally in R2 x R2. At the global level, however, together with the equations one should give boundary conditions which ensure regularity at the north pole. As is well known, different inequivalent choices are possible; these boundary conditions may be considered as what is left in the local theory of the non-triviality of the configuration bundle TS2.

Moreover, much before modern gauge theories or even more complicated new field theories, the theory of General Relativity is the ultimate proof of the need for a bundle framework to describe physical situations. Among other things, in fact, General Relativity assumes that spacetime is not the “simple” Minkowski space introduced for Special Relativity, which has the topology of R4. In general it is a Lorentzian four-dimensional manifold possibly endowed with a complicated global topology. On such a manifold, the choice of a trivial bundle M x F as the configuration bundle for a field theory is mathematically unjustified as well as physically wrong in general. In fact, as long as spacetime is a contractible manifold, as Minkowski space is, all bundles on it are forced to be trivial; however, if spacetime is allowed to be topologically non-trivial, then trivial bundles on it are just a small subclass of all possible bundles among which the configuration bundle can be chosen. Again, given the base M and the fiber F, the non-unique choice of the topology of the configuration bundle corresponds to different global requirements.

A simple purely geometrical example can be considered to sustain this claim. Let us consider M = S1 and F = (-1, 1), an interval of the real line R; then there exist (at least) countably many “inequivalent” bundles other than the trivial one Mö0 = S1 x F, i.e. the cylinder, as shown in the figure.

[Figure: the trivial cylinder Mö0 and the twisted bundles Mön over S1]

Furthermore the word “inequivalent” can be endowed with different meanings. The bundles shown in the figure are all inequivalent as embedded bundles (i.e. there is no diffeomorphism of the ambient space transforming one into the other) but the even ones (as well as the odd ones) are all equivalent among each other as abstract (i.e. not embedded) bundles (since they have the same transition functions).

The bundles Mön (n being any positive integer) can be obtained from the trivial bundle Mö0 by cutting it along a fiber, twisting n times and then gluing it together again. The bundle Mö1 is called the Moebius band (or strip). All bundles Mön are canonically fibered on S1, but just Mö0 is trivial. The differences among such bundles are global properties, which for example imply that the even ones Mö2k allow never-vanishing sections (i.e. field configurations) while the odd ones Mö2k+1 do not.
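The claim about sections can be made concrete: a section of Mön corresponds to a continuous function s on [0, 2π] obeying the twisted boundary condition s(2π) = (−1)^n s(0), so for odd n the intermediate value theorem forces a zero somewhere. A small sketch (the particular sample sections are illustrative choices):

```python
import math

def has_zero(s, n_samples=1000):
    # detect a zero of s on [0, 2*pi] via a sign change (intermediate value theorem)
    vals = [s(2 * math.pi * k / n_samples) for k in range(n_samples + 1)]
    return any(a == 0 or a * b < 0 for a, b in zip(vals, vals[1:]))

# a section of Mo_n: a function s on [0, 2*pi] with s(2*pi) = (-1)**n * s(0)
even_section = lambda t: 0.5                  # constant; valid for even n, never zero
odd_section = lambda t: 0.5 * math.cos(t / 2) # satisfies s(2*pi) = -s(0)

assert abs(even_section(2 * math.pi) - even_section(0)) < 1e-12   # even boundary condition
assert abs(odd_section(2 * math.pi) + odd_section(0)) < 1e-12     # odd boundary condition
assert not has_zero(even_section)   # a never-vanishing section exists for even twist
assert has_zero(odd_section)        # any odd-twist section must vanish somewhere
```

The odd case is general: a continuous s with s(2π) = −s(0) changes sign on [0, 2π], so no choice of section avoids the zero.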

10 or 11 Dimensions? Phenomenological Conundrum. Drunken Risibility.


It is not the fact that we are living in a ten-dimensional world which forces string theory to a ten-dimensional description. It is that perturbative string theories are only anomaly-free in ten dimensions; and they contain gravitons only in a ten-dimensional formulation. The resulting question, how the four-dimensional spacetime of phenomenology emerges from ten-dimensional perturbative string theories (or their eleven-dimensional non-perturbative extension: the mysterious M-theory), led to the compactification idea and to the braneworld scenarios.

It is not the fact that empirical indications for supersymmetry were found that forces consistent string theories to include supersymmetry. Without supersymmetry, string theory has no fermions and no chirality, but there are tachyons which make the vacuum unstable; and supersymmetry has certain conceptual advantages: it very probably leads to the finiteness of the perturbation series, thereby avoiding the problem of non-renormalizability which haunted all former attempts at a quantization of gravity; and there is a close relation between supersymmetry and Poincaré invariance which seems reasonable for quantum gravity. But it is clear that not all conceptual advantages are necessarily part of nature – as the example of the elegant, but unsuccessful Grand Unified Theories demonstrates.

Apart from its ten (or eleven) dimensions and the inclusion of supersymmetry – both have more or less the character of only conceptually, but not empirically motivated ad-hoc assumptions – string theory consists of a rather careful adaptation of the mathematical and model-theoretical apparatus of perturbative quantum field theory to the quantized, one-dimensionally extended, oscillating string (and, finally, of a minimal extension of its methods into the non-perturbative regime for which the declarations of intent exceed by far the conceptual successes). Without any empirical data transcending the context of our established theories, there remains for string theory only the minimal conceptual integration of basic parts of the phenomenology already reproduced by these established theories. And a significant component of this phenomenology, namely the phenomenology of gravitation, was already used up in the selection of string theory as an interesting approach to quantum gravity. Only because string theory – containing gravitons as string states – reproduces in a certain way the phenomenology of gravitation is it taken seriously.

But consistency requirements, the minimal inclusion of basic phenomenological constraints, and the careful extension of the model-theoretical basis of quantum field theory are not sufficient to establish an adequate theory of quantum gravity. Shouldn’t the landscape scenario of string theory be understood as a clear indication, not only of fundamental problems with the reproduction of the gauge invariances of the standard model of quantum field theory (and the corresponding phenomenology), but of much more severe conceptual problems? Almost all attempts at a solution of the immanent and transcendental problems of string theory seem to end in the ambiguity and contingency of the multitude of scenarios of the string landscape. That no physically motivated basic principle is known for string theory and its model-theoretical procedures might be seen as a problem which possibly could be overcome in future developments. But, what about the use of a static background spacetime in string theory which falls short of the fundamental insights of general relativity and which therefore seems to be completely unacceptable for a theory of quantum gravity?

At least since the change of context (and strategy) from hadron physics to quantum gravity, the development of string theory was dominated by immanent problems whose attempted solutions led ever deeper into the theory. The result of this successively increasing self-referentiality is a more and more pronounced decoupling from phenomenological boundary conditions and necessities. The contact with the empirical does not increase, but gets weaker and weaker. The result of this process is a labyrinthine mathematical structure with a completely unclear physical relevance.

Individuation. Thought of the Day 91.0


The first distinction is between two senses of the word “individuation” – one semantic, the other metaphysical. In the semantic sense of the word, to individuate an object is to single it out for reference in language or in thought. By contrast, in the metaphysical sense of the word, the individuation of objects has to do with “what grounds their identity and distinctness.” Sets are often used to illustrate the intended notion of “grounding.” The identity or distinctness of sets is said to be “grounded” in accordance with the principle of extensionality, which says that two sets are identical iff they have precisely the same elements:

SET(x) ∧ SET(y) → [x = y ↔ ∀u(u ∈ x ↔ u ∈ y)]

The metaphysical and semantic senses of individuation are quite different notions, neither of which appears to be reducible to or fully explicable in terms of the other. Since sufficient sense cannot be made of the notion of “grounding of identity” on which the metaphysical notion of individuation is based, focusing on the semantic notion of individuation is an easy way out. This choice of focus means that our investigation is a broadly empirical one, drawing on empirical linguistics and psychology.

What is the relation between the semantic notion of individuation and the notion of a criterion of identity? It is by means of criteria of identity that semantic individuation is effected. Singling out an object for reference involves being able to distinguish this object from other possible referents with which one is directly presented. The final distinction is between two types of criteria of identity. A one-level criterion of identity says that two objects of some sort F are identical iff they stand in some relation RF:

Fx ∧ Fy → [x = y ↔ RF(x,y)]

Criteria of this form operate at just one level in the sense that the condition for two objects to be identical is given by a relation on these objects themselves. An example is the set-theoretic principle of extensionality.
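As an aside, Python’s frozenset equality behaves exactly like this one-level criterion: identity of two frozensets is decided by comparing their elements, independently of how they were presented. A minimal illustration:

```python
# extensionality as a one-level criterion: two sets are identical iff
# they have precisely the same elements, whatever presentation built them
a = frozenset([1, 2, 3])
b = frozenset([3, 2, 1, 2])   # different presentation, same elements
c = frozenset([1, 2])

assert a == b                 # same elements, so identical
assert a != c                 # different elements, so distinct
assert len({a, b}) == 1       # a and b count as one and the same object
```

Note that applying the criterion already presupposes reference to the sets a and b themselves, which is the limitation of one-level criteria discussed above.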

A two-level criterion of identity relates the identity of objects of one sort to some condition on entities of another sort. The former sort of objects are typically given as functions of items of the latter sort, in which case the criterion takes the following form:

f(α) = f(β) ↔ α ≈ β

where the variables α and β range over the latter sort of item and ≈ is an equivalence relation on such items. An example is Frege’s famous criterion of identity for directions:

d(l1) = d(l2) ↔ l1 || l2

where the variables l1 and l2 range over lines or other directed items. An analogous two-level criterion relates the identity of geometrical shapes to the congruence of things or figures having the shapes in question. The decision to focus on the semantic notion of individuation makes it natural to focus on two-level criteria. For two-level criteria of identity are much more useful than one-level criteria when we are studying how objects are singled out for reference. A one-level criterion provides little assistance in the task of singling out objects for reference. In order to apply a one-level criterion, one must already be capable of referring to objects of the sort in question. By contrast, a two-level criterion promises a way of singling out an object of one sort in terms of an item of another and less problematic sort. For instance, when Frege investigated how directions and other abstract objects “are given to us”, although “we cannot have any ideas or intuitions of them”, he proposed that we relate the identity of two directions to the parallelism of the two lines in terms of which these directions are presented. This would be explanatory progress since reference to lines is less puzzling than reference to directions.
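Frege’s two-level criterion can be sketched computationally by letting each line present a direction through a canonical representative, here the line’s angle modulo π; the data structures and the rounding-based canonicalization are illustrative assumptions, not part of Frege’s proposal:

```python
import math

def direction(line):
    # two-level criterion: the direction d(l) is singled out via the line l;
    # parallel lines (equal angle mod pi) receive the same abstract object
    dx, dy = line["vec"]
    return round(math.atan2(dy, dx) % math.pi, 9)  # canonical representative

l1 = {"point": (0, 0), "vec": (1, 1)}
l2 = {"point": (5, -2), "vec": (-2, -2)}  # parallel to l1, opposite orientation
l3 = {"point": (0, 0), "vec": (1, 0)}     # not parallel to l1

assert direction(l1) == direction(l2)  # d(l1) = d(l2)  iff  l1 || l2
assert direction(l1) != direction(l3)
```

The lines (the less problematic items) are given concretely; the directions are reached only through the equivalence relation of parallelism, which is the explanatory gain of the two-level form.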

Weyl and Automorphism of Nature. Drunken Risibility.


In classical geometry and physics, physical automorphisms could be based on the material operations used for defining the elementary equivalence concept of congruence (“equality and similitude”). But Weyl started even more generally, with Leibniz’ explanation of the similarity of two objects: two things are similar if they are indiscernible when each is considered by itself. Here, as at other places, Weyl endorsed this Leibnizian argument from the point of view of “modern physics”, while adding that for Leibniz this spoke in favour of the unsubstantiality and phenomenality of space and time. On the other hand, for “real substances”, the Leibnizian monads, indiscernibility implied identity. In this way Weyl indicated, prior to any more technical consideration, that similarity in the Leibnizian sense was the same as objective equality. He did not enter deeper into the metaphysical discussion but insisted that the issue “is of philosophical significance far beyond its purely geometric aspect”.

Weyl did not claim that this idea solves the epistemological problem of objectivity once and for all, but at least it offers an adequate mathematical instrument for its formulation. He illustrated the idea in a first step by explaining the automorphisms of Euclidean geometry as the structure-preserving bijective mappings of the point set underlying a structure satisfying the axioms of “Hilbert’s classical book on the Foundations of Geometry”. He concluded that for Euclidean geometry these are the similarities, not the congruences as one might expect at first glance. In the mathematical sense, we then “come to interpret objectivity as the invariance under the group of automorphisms”. But Weyl warned against identifying mathematical objectivity with that of natural science, because once we deal with real space “neither the axioms nor the basic relations are given”. As the latter are extremely difficult to discern, Weyl proposed to turn the tables and to take the group Γ of automorphisms, rather than the ‘basic relations’ and the corresponding relata, as the epistemic starting point.

Hence we come much nearer to the actual state of affairs if we start with the group Γ of automorphisms and refrain from making the artificial logical distinction between basic and derived relations. Once the group is known, we know what it means to say of a relation that it is objective, namely invariant with respect to Γ.
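Weyl’s criterion – a relation is objective iff it is invariant under Γ – can be illustrated with the similarity group of the Euclidean plane: angles and distance ratios are invariant (hence objective), while raw distances are not, matching Weyl’s remark that the automorphisms of Euclidean geometry are the similarities rather than the congruences. A small sketch with an arbitrarily chosen similarity:

```python
import math

def similarity(scale, theta, tx, ty):
    # a similarity of the Euclidean plane: rotation by theta, uniform scaling, translation
    c, s = math.cos(theta), math.sin(theta)
    return lambda p: (scale * (c * p[0] - s * p[1]) + tx,
                      scale * (s * p[0] + c * p[1]) + ty)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle(a, b, c):
    # angle at vertex b of the triangle a, b, c
    return math.acos(((a[0] - b[0]) * (c[0] - b[0]) + (a[1] - b[1]) * (c[1] - b[1]))
                     / (dist(a, b) * dist(c, b)))

g = similarity(scale=2.5, theta=0.8, tx=3.0, ty=-1.0)
a, b, c = (0, 0), (1, 0), (0.3, 2.0)
ga, gb, gc = g(a), g(b), g(c)

assert abs(angle(a, b, c) - angle(ga, gb, gc)) < 1e-9               # angles: invariant
assert abs(dist(a, b) - dist(ga, gb)) > 1.0                         # raw distance: not
assert abs(dist(ga, gb) / dist(gc, gb) - dist(a, b) / dist(c, b)) < 1e-9  # ratios: invariant
```

Had Γ instead been the congruence group (scale = 1), raw distances would be invariant too; enlarging the group shrinks the class of objective relations.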

By such a well chosen constitutive stipulation it becomes clear what objective statements are, although this can be achieved only at the price that “…we start, as Dante starts in his Divina Comedia, in mezzo del camin”. A phrase characteristic for Weyl’s later view follows:

It is the common fate of man and his science that we do not begin at the beginning; we find ourselves somewhere on a road the origin and end of which are shrouded in fog.

Weyl’s juxtaposition of the mathematical and the physical concept of objectivity is worthwhile to reflect upon. The mathematical objectivity considered by him is relatively easy to obtain by combining the axiomatic characterization of a mathematical theory with the epistemic postulate of invariance under a group of automorphisms. Both are constituted in a series of acts characterized by Weyl as symbolic construction, which is free in several regards. For example, the group of automorphisms of Euclidean geometry may be expanded by “the mathematician” in rather wide ways (affine, projective, or even “any group of transformations”). In each case a specific realm of mathematical objectivity is constituted. With the example of the automorphism group Γ of (plane) Euclidean geometry in mind Weyl explained how, through the use of Cartesian coordinates, the automorphisms of Euclidean geometry can be represented by linear transformations “in terms of reproducible numerical symbols”.

For natural science the situation is quite different; here the freedom of the constitutive act is severely restricted. Weyl described the constraint for the choice of Γ at the outset in very general terms: The physicist will question Nature to reveal him her true group of automorphisms. Different to what a philosopher might expect, Weyl did not mention the subtle influences induced by theoretical evaluations of empirical insights on the constitutive choice of the group of automorphisms for a physical theory. He did not even restrict the consideration to the range of a physical theory but aimed at Nature as a whole. Still, basing himself on his own views of the radical changes in the fundamental views of theoretical physics, Weyl hoped for an insight into the true group of automorphisms of Nature without any further specifications.

Proca’s Abelian Sector: Approximate Equivalence. Thought of the Day 62.0


The underdetermination between the quantized Maxwell theory and the lower-mass quantized Proca theories is permanent (at least unless a photon mass is detected, in which case Proca wins). It does not immediately follow that our best science leaves the photon mass unspecified apart from empirical bounds, however. Electromagnetism can be unified with an SU(2) Yang-Mills field describing the weak nuclear force into the electroweak theory. The resulting electroweak unification of course is not simply a logical conjunction of the electromagnetic and weak theories; the theories undergoing unification are modified in the process. Maxwell’s theory can participate in this unification; can Proca theories participate while preserving renormalizability and unitarity? Probably they can. Thus evidently the underdetermination between Maxwell and Proca persists even in electroweak theory, though this unresolved rivalry is not widely noticed. There is some non-uniqueness in the photon mass term, partly due to the rotation by the weak mixing angle between the original fields in the SU(2) × U(1) group and the mass eigenstates after spontaneous symmetry breaking. Thus the physical photon is not simply the field corresponding to the original U(1) group, contrary to naive expectations. There are also various empirically negligible but perhaps conceptually important effects that can arise in such theories. Among these are charge dequantization – the charges of charged particles are no longer integral multiples of a smallest charge – and perhaps charge non-conservation. Crucial to the possibility of including a Proca-type mass term (as opposed to merely getting mass by spontaneous symmetry breaking) is the non-semi-simple nature of the gauge group SU(2) × U(1): this group has a subgroup U(1) that is Abelian and that commutes with the whole of the larger group. 
Were the electroweak theory to be embedded in some larger semi-simple group such as SU(5), then no Proca mass term could be included.
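The structural point above can be checked directly at the level of the Lie algebra: what blocks a Proca mass term in a semi-simple group, and permits it in SU(2) × U(1), is the presence of an abelian factor commuting with the whole algebra. A minimal numerical sketch, acting on the weak doublet representation (the matrix names and the plain-list matrix encoding are illustrative, not any standard library API):

```python
# Toy check that the u(1) factor of su(2) x u(1) is central, so the
# electroweak gauge algebra is not semi-simple. 2x2 complex matrices
# are represented as nested lists to keep the sketch dependency-free.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comm(A, B):
    """Commutator [A, B] = AB - BA."""
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def is_zero(M):
    return all(abs(M[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Pauli matrices: a basis of su(2) (up to factors of i/2)
sigma = [
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]

# Hypercharge generator: proportional to the identity on the doublet,
# spanning the abelian u(1) factor
Y = [[1, 0], [0, 1]]

# Y commutes with all of su(2): the u(1) factor is a central, abelian ideal ...
assert all(is_zero(comm(Y, T)) for T in sigma)

# ... while su(2) itself is non-abelian: [sigma_1, sigma_2] = 2i sigma_3
c = comm(sigma[0], sigma[1])
expected = [[2j * sigma[2][i][j] for j in range(2)] for i in range(2)]
assert all(abs(c[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))

print("u(1) factor is central: su(2) (+) u(1) is not semi-simple")
```

A semi-simple algebra such as su(5) has, by definition, no such abelian ideal, which is why the embedding mentioned above would forbid the Proca term.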

Discontinuous Reality. Thought of the Day 61.0


Convention is an invention that plays a distinctive role in Poincaré’s philosophy of science. In terms of how they contribute to the framework of science, conventions are not empirical. They are presupposed in certain empirical tests, so they are (relatively) isolated from doubt. Yet they are not pure stipulations, or analytic, since conventional choices are guided by, and modified in the light of, experience. Finally they have a different character from genuine mathematical intuitions, which provide a fixed, a priori synthetic foundation for mathematics. Conventions are thus distinct from the synthetic a posteriori (empirical), the synthetic a priori and the analytic a priori.

The importance of Poincaré’s invention lies in the recognition of a new category of proposition and its centrality in scientific judgment. This is more important than the special place Poincaré gives Euclidean geometry. Nevertheless, it’s possible to accommodate some of what he says about the priority of Euclidean geometry with the use of non-Euclidean geometry in science, including the inapplicability of any geometry of constant curvature in physical theories of global space. Poincaré’s insistence on Euclidean geometry is based on criteria of simplicity and convenience. But these criteria surely entail that if giving up Euclidean geometry somehow results in an overall gain in simplicity then that would be condoned by conventionalism.

The a priori conditions on geometry – in particular the group concept, and the hypothesis of rigid body motion it encourages – might seem a lingering obstacle to a more flexible attitude towards applied geometry, or an empirical approach to physical space. However, just as the apriority of the intuitive continuum does not restrict physical theories to the continuous; so the apriority of the group concept does not mean that all possible theories of space must allow free mobility. This, too, can be “corrected”, or overruled, by new theories and new data, just as, Poincaré comes to admit, the new quantum theory might overrule our intuitive assumption that nature is continuous. That is, he acknowledges that reality might actually be discontinuous – despite the apriority of the intuitive continuum.

Poincaré and Geometry of Curvature. Thought of the Day 60.0


It is not clear that Poincaré regarded Riemannian, variably curved, “geometry” as a bona fide geometry. On the one hand, his insistence on generality and the iterability of mathematical operations leads him to dismiss geometries of variable curvature as merely “analytic”. Distinctive of mathematics, he argues, is generality and the fact that induction applies to its processes. For geometry to be genuinely mathematical, its constructions must be everywhere iterable, so everywhere possible. If geometry is in some sense about rigid motion, then a manifold of variable curvature, especially where the degree of curvature depends on something contingent like the distribution of matter, would not allow a thoroughly mathematical, idealized treatment. Yet Poincaré also writes favorably about Riemannian geometries, defending them as mathematically coherent. Furthermore, he admits that geometries of constant curvature rest on a hypothesis – that of rigid body motion – that “is not a self-evident truth”. In short, he seems ambivalent. Whether his conception of geometry includes or rules out variable curvature is unclear. We can surmise that he recognized Riemannian geometry as mathematical, and interesting, but as very different from, and more abstract than, geometries of constant curvature, which are based on the further limitations discussed above (those motivated by a world satisfying certain empirical preconditions). These limitations enable key idealizations, which in turn allow constructions and synthetic proofs that we recognize as “geometric”.

Of Magnitudes, Metrization and Materiality of Abstracto-Concrete Objects.


The possibility of introducing magnitudes in a certain domain of concrete material objects is by no means immediate, granted or elementary. First of all, it is necessary to find a property of such objects that permits comparing them, so that a quasi-serial ordering can be introduced in their set, that is, a total linear ordering not excluding that more than one object may occupy the same position in the series. Such an ordering must then undergo a metrization, which depends on finding a fundamental measuring procedure permitting the determination of a standard sample to which the unit of measure can be bound. This also depends on the existence of an operation of physical composition, which behaves additively with respect to the quantity we intend to measure. Only if all these conditions are satisfied will it be possible to introduce a magnitude in a proper sense, that is, a function which assigns to each object of the material domain a real number. This real number represents the measure of the object with respect to the intended magnitude. This condition, by introducing a homomorphism between the domain of the material objects and that of the positive real numbers, transforms the language of analysis (that is, of the concrete theory of real numbers) into a language capable of speaking faithfully and truly about those physical objects to which such a magnitude is said to belong.
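The three conditions above – a comparison ordering, a standard sample fixing the unit, and an additive physical composition – can be sketched in a few lines. This is a minimal illustration under the text's own assumptions, with length as the magnitude; the names `Rod`, `concat` and `measure` are hypothetical, not any standard API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rod:
    # The "hidden" physical magnitude; the theory accesses it only
    # through comparison and composition.
    _length: float

def longer_than(a: Rod, b: Rod) -> bool:
    # Comparison induces the quasi-serial ordering: a total ordering
    # in which distinct rods may occupy the same position.
    return a._length > b._length

def concat(a: Rod, b: Rod) -> Rod:
    # Physical composition: laying rods end to end.
    return Rod(a._length + b._length)

UNIT = Rod(1.0)  # the standard sample to which the unit of measure is bound

def measure(a: Rod, unit: Rod = UNIT) -> float:
    # The magnitude proper: a real number per object, relative to the unit.
    return a._length / unit._length

a, b = Rod(2.5), Rod(4.0)

# The homomorphism condition: physical composition maps to addition
# of measures, measure(concat(a, b)) == measure(a) + measure(b).
assert measure(concat(a, b)) == measure(a) + measure(b)

# And the assigned numbers respect the comparison ordering.
assert longer_than(b, a) and measure(b) > measure(a)

print(measure(concat(a, b)))  # 6.5
```

The `assert` lines are exactly the homomorphism spoken of in the text: they are what entitles the language of real numbers to speak about the rods.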

Does the success of applying mathematics in the study of the physical world mean that this world has a mathematical structure in an ontological sense, or does it simply mean that we find in mathematics nothing but a convenient practical tool for putting order in our representations of the world? Neither of the answers to this question is right, and this is because the question itself is not correctly raised. Indeed it tacitly presupposes that the endeavour of our scientific investigations consists in facing the reality of “things” as it is, so to speak, in itself. But we know that any science is uniquely concerned with a limited “cut” operated in reality by adopting a particular point of view, that is concretely manifested by adopting a restricted number of predicates in the discourse on reality. Several skilful operational manipulations are needed in order to bring about a homomorphism with the structure of the positive real numbers. It is therefore clear that the objects that are studied by an empirical theory are by no means the rough things of everyday experience, but bundles of “attributes” (that is of properties, relations and functions), introduced through suitable operational procedures having often the explicit and declared goal of determining a concrete structure as isomorphic, or at least homomorphic, to the structure of real numbers or to some other mathematical structure. But now, if the objects of an empirical theory are entities of this kind, we are fully entitled to maintain that they are actually endowed with a mathematical structure: this is simply that structure which we have introduced through our operational procedures. However, this structure is objective and real and, with respect to it, the mathematized discourse is far from having a purely conventional and pragmatic function, with the goal of keeping our ideas in order: it is a faithful description of this structure. 
Of course, we could never pretend that such a discourse determines the structure of reality in a full and exhaustive way, and this for two distinct reasons. In the first place, reality (both in the sense of the totality of existing things, and of the “whole” of any single thing) is much richer than the particular “slice” that it is possible to cut out by means of our operational manipulations. In the second place, we must be aware that a scientific object, defined as a structured set of attributes, is an abstract object, a conceptual construction that is perfectly defined just because it is totally determined by a finite list of predicates. Concrete objects are by no means so: they are endowed with a great many attributes of an indefinite variety, so that they can at best exemplify, with an acceptable approximation, certain abstract objects that totally encode a given set of attributes through their corresponding predicates. The reason why such an exemplification can only be partial is that the different attributes simultaneously present in a concrete object are, in a way, mutually limiting, so that the object never fully exemplifies any one of them. This explains the correct sense of such common and obvious remarks as: “a rigid body, a perfect gas, an adiabatic transformation, a perfect elastic recoil, etc., do not exist in reality (or in Nature)”. Sometimes this remark is intended to convey the thesis that these are nothing but intellectual fictions devoid of any correspondence with reality, instrumentally used by scientists in order to organize their ideas. This interpretation is totally wrong, and is simply due to a confusion between encoding and exemplifying: no concrete thing encodes any finite and explicit number of characteristics that, on the contrary, can be appropriately encoded in a concept.
Things can exemplify several concepts, while concepts (or abstract objects) do not exemplify the attributes they encode. Going back to the distinction between sense on the one hand, and reference or denotation on the other hand, we could also say that abstract objects belong to the level of sense, while their exemplifications belong to the level of reference, and constitute what is denoted by them. It is obvious that in the case of empirical sciences we try to construct conceptual structures (abstract objects) having empirical denotations (exemplified by concrete objects). If one has well understood this elementary but important distinction, one is in a position to see correctly how mathematics can concern physical objects. These objects are abstract objects, structured sets of predicates, and there is absolutely nothing surprising in the fact that they could receive a mathematical structure (for example, a structure isomorphic to that of the positive real numbers, or to that of a given group, or of an abstract mathematical space, etc.). If it happens that these abstract objects are exemplified by concrete objects within a certain degree of approximation, we are entitled to say that the corresponding mathematical structure also holds true (with the same degree of approximation) for this domain of concrete objects. Now, in the case of physics, the abstract objects are constructed by isolating certain ontological attributes of things by means of concrete operations, so that they actually refer to things, and are exemplified by the concrete objects singled out by means of such operations up to a given degree of approximation or accuracy. In conclusion, one can maintain that mathematics constitutes at the same time the most exact language for speaking of the objects of the domain under consideration, and faithfully mirrors the concrete structure (in an ontological sense) of this domain of objects.
Of course, it is very reasonable to recognize that other aspects of these things (or other attributes of them) might not be treatable by means of the particular mathematical language adopted, and this may imply either that these attributes could perhaps be handled through a different available mathematical language, or even that no mathematical language found as yet could be used for handling them.