10 or 11 Dimensions? Phenomenological Conundrum. Drunken Risibility.


It is not the fact that we are living in a ten-dimensional world which forces string theory to a ten-dimensional description. It is that perturbative string theories are only anomaly-free in ten dimensions, and they contain gravitons only in a ten-dimensional formulation. The resulting question – how the four-dimensional spacetime of phenomenology emerges from ten-dimensional perturbative string theories (or from their eleven-dimensional non-perturbative extension: the mysterious M-theory) – led to the compactification idea and to the braneworld scenarios.
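As a point of reference, the critical dimensions invoked above follow from a standard world-sheet computation: the conformal (Weyl) anomaly cancels only when the total central charge of matter plus ghosts vanishes. A minimal sketch of that textbook condition (added for orientation, not part of the original post):

```latex
% Anomaly freedom: the total central charge of the world-sheet CFT must vanish.
% Bosonic string: each coordinate X^mu contributes c = 1, the bc-ghosts c = -26:
\[ c_{\mathrm{tot}} = D - 26 = 0 \;\Longrightarrow\; D = 26. \]
% Superstring: each pair (X^mu, psi^mu) contributes c = 3/2, while the bc- and
% beta-gamma-ghosts together contribute c = -15:
\[ c_{\mathrm{tot}} = \tfrac{3}{2}D - 15 = 0 \;\Longrightarrow\; D = 10. \]
```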

Nor is it the fact that empirical indications for supersymmetry were found that forces consistent string theories to include supersymmetry. Without supersymmetry, string theory has no fermions and no chirality, but it does have tachyons, which render the vacuum unstable; and supersymmetry has certain conceptual advantages: it very probably leads to the finiteness of the perturbation series, thereby avoiding the problem of non-renormalizability which haunted all former attempts at a quantization of gravity; and there is a close relation between supersymmetry and Poincaré invariance which seems reasonable for quantum gravity. But it is clear that not all conceptual advantages are necessarily realized in nature – as the example of the elegant, but unsuccessful, Grand Unified Theories demonstrates.

Apart from its ten (or eleven) dimensions and the inclusion of supersymmetry – both of which have more or less the character of conceptually, but not empirically, motivated ad-hoc assumptions – string theory consists of a rather careful adaptation of the mathematical and model-theoretical apparatus of perturbative quantum field theory to the quantized, one-dimensionally extended, oscillating string (and, finally, of a minimal extension of its methods into the non-perturbative regime, for which the declarations of intent far exceed the conceptual successes). Without any empirical data transcending the context of our established theories, there remains for string theory only the minimal conceptual integration of basic parts of the phenomenology already reproduced by these established theories. And a significant component of this phenomenology, namely the phenomenology of gravitation, was already used up in the selection of string theory as an interesting approach to quantum gravity. Only because string theory – containing gravitons as string states – reproduces in a certain way the phenomenology of gravitation is it taken seriously.

But consistency requirements, the minimal inclusion of basic phenomenological constraints, and the careful extension of the model-theoretical basis of quantum field theory are not sufficient to establish an adequate theory of quantum gravity. Shouldn’t the landscape scenario of string theory be understood as a clear indication, not only of fundamental problems with the reproduction of the gauge invariances of the standard model of quantum field theory (and the corresponding phenomenology), but of much more severe conceptual problems? Almost all attempts at a solution of the immanent and transcendental problems of string theory seem to end in the ambiguity and contingency of the multitude of scenarios of the string landscape. That no physically motivated basic principle is known for string theory and its model-theoretical procedures might be seen as a problem which possibly could be overcome in future developments. But, what about the use of a static background spacetime in string theory which falls short of the fundamental insights of general relativity and which therefore seems to be completely unacceptable for a theory of quantum gravity?

At least since the change of context (and strategy) from hadron physics to quantum gravity, the development of string theory has been dominated by immanent problems whose attempted solutions led ever deeper into the theory itself. The result of this successively increasing self-referentiality is a more and more pronounced decoupling from phenomenological boundary conditions and necessities. The contact with the empirical does not increase, but gets weaker and weaker. The result of this process is a labyrinthine mathematical structure with a completely unclear physical relevance.


Geometry and Localization: An Unholy Alliance? Thought of the Day 95.0


There are many misleading metaphors obtained from naively identifying geometry with localization. One which is very close to that of String Theory is the idea that one can embed a lower-dimensional Quantum Field Theory (QFT) into a higher-dimensional one. This is not possible; what one can do is restrict a QFT on a spacetime manifold to a submanifold. However, if the submanifold contains the time axis (a “brane”), the restricted theory has too many degrees of freedom to merit the name “physical”: it contains as many as the unrestricted theory. The naive idea that by using a subspace one only gets a fraction of the phase space degrees of freedom is a delusion; this can only happen if the subspace does not contain a timelike line, as for a null-surface (holographic projection onto a horizon).

The geometric picture of a string in terms of a multi-component conformal field theory is that of an embedding of an n-component chiral theory into its n-dimensional component space (referred to as a target space), which is certainly a string. But this is not what modular localization reveals: rather, those oscillatory degrees of freedom of the multi-component chiral current go into an infinite-dimensional Hilbert space over one localization point and do not arrange themselves according to the geometric source-target idea. A theory of this kind is of course consistent, but “String Theory” is certainly a very misleading terminology for this state of affairs. Any attempt to imitate Feynman rules by replacing world lines by world sheets (of strings) may produce prescriptions for cooking up some mathematically interesting functions, but those results cannot be brought into the only form which counts in a quantum theory, namely a perturbative approach in terms of operators and states.

String Theory is by no means the only area in particle theory where geometry and modular localization are at loggerheads. Closely related is the interpretation of the Riemann surfaces which result from the analytic continuation of chiral theories on the lightray/circle as the “living space” in the sense of localization. The mathematical theory of Riemann surfaces does not specify how a surface should be realized; whether it refers to surfaces in an ambient space, to a distinguished subgroup of a Fuchsian group, or to any other of the many possible realizations is of no concern to a mathematician. But in the context of chiral models it is important not to confuse the living space of a QFT with its analytic continuation.

Whereas geometry as a mathematical discipline does not care about how it is concretely realized, the geometrical aspects of modular localization in spacetime have a very specific geometric content, namely that which can be encoded in subspaces (Reeh-Schlieder spaces) generated by operator subalgebras acting on the vacuum reference state. In other words, the physically relevant spacetime geometry and the symmetry group of the vacuum are contained in the abstract positioning of certain subalgebras in a common Hilbert space, and not in that which comes with classical theories.
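For readers unfamiliar with the term, the “Reeh-Schlieder spaces” invoked here come from a standard theorem of algebraic QFT; a minimal statement of it (added for orientation, not from the original text):

```latex
% Reeh--Schlieder theorem: for any open spacetime region O with local
% operator algebra A(O), the vacuum vector Omega is cyclic, i.e.
\[ \overline{\mathcal{A}(O)\,\Omega} \;=\; \mathcal{H}. \]
% The physically relevant geometry is then read off from the relative
% positioning of such subalgebras (and their vacuum-generated subspaces)
% within the common Hilbert space, as described above.
```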

Kant and Non-Euclidean Geometries. Thought of the Day 94.0


The argument that non-Euclidean geometries contradict Kant’s doctrine on the nature of space apparently goes back to Hermann Helmholtz and was taken up by several philosophers of science, such as Hans Reichenbach (The Philosophy of Space and Time), who devoted much work to this subject. In an essay written in 1870, Helmholtz argued that the axioms of geometry are not a priori synthetic judgments (in the sense given by Kant), since they can be subjected to experiments. Given that Euclidean geometry is not the only possible geometry, as was believed in Kant’s time, it should be possible to determine by means of measurements whether, for instance, the sum of the three angles of a triangle is 180 degrees or whether two straight parallel lines always keep the same distance between them. If this were not the case, then it would have been demonstrated experimentally that space is not Euclidean. Thus the possibility of verifying the axioms of geometry would prove that they are empirical and not given a priori.
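To make the proposed measurement concrete, here is the standard angular-excess formula for the simplest non-Euclidean case (a sketch added for illustration; Helmholtz’s essay argues the general point without it):

```latex
% On a sphere of radius R, a geodesic triangle of area A has angle sum
\[ \alpha + \beta + \gamma \;=\; \pi + \frac{A}{R^2}, \]
% which tends to the Euclidean value pi as R -> infinity. Measuring the
% excess of ever-larger triangles would thus, in principle, decide the
% question empirically.
```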

Helmholtz developed his own version of a non-Euclidean geometry on the basis of what he believed to be the fundamental condition for all geometries: “the possibility of figures moving without change of form or size”; without this possibility, it would be impossible to define what a measurement is. According to Helmholtz:

the axioms of geometry are not concerned with space-relations only but also at the same time with the mechanical deportment of solidest bodies in motion.

Nevertheless, he was aware that a strict Kantian might argue that the rigidity of bodies is an a priori property, but

then we should have to maintain that the axioms of geometry are not synthetic propositions… they would merely define what qualities and deportment a body must have to be recognized as rigid.

At this point, it is worth noticing that Helmholtz’s formulation of geometry is a rudimentary version of what was later developed as the theory of Lie groups. As for the transport of rigid bodies, it is well known that rigid motion cannot be defined in the framework of the theory of relativity: since there is no absolute simultaneity of events, it is impossible to move all parts of a material body in a coordinated and simultaneous way. What is defined as the length of a body depends on the reference frame from which it is observed. Thus, it is meaningless to invoke the rigidity of bodies as the basis of a geometry that pretends to describe the real world; it is only in the mathematical realm that the rigid displacement of a figure can be defined in terms of what mathematicians call a congruence.

Arguments similar to those of Helmholtz were given by Reichenbach in his attempt to refute Kant’s doctrine on the nature of space and time. Essentially, the argument boils down to the following: Kant assumed that the axioms of geometry are given a priori and he only had classical geometry in mind; Einstein demonstrated that space is not Euclidean and that this could be verified empirically; ergo Kant was wrong. However, Kant did not state that space must be Euclidean; instead, he argued that it is a pure form of intuition. As such, space has no physical reality of its own, and therefore it is meaningless to ascribe physical properties to it. Actually, Kant never mentioned Euclid directly in his work, but he did refer many times to the physics of Newton, which is based on classical geometry. Kant had in mind the axioms of this geometry, which is a most powerful tool of Newtonian mechanics. In fact, he did not even exclude the possibility of other geometries, as can be seen in his early speculations on the dimensionality of space.

The important point missed by Reichenbach is that Riemannian geometry is necessarily based on Euclidean geometry. More precisely, a Riemannian space must be considered as locally Euclidean in order to be able to define basic concepts such as distance and parallel transport; this is achieved by defining a flat tangent space at every point, and then extending all properties of this flat space to the globally curved space (Luther Pfahler Eisenhart, Riemannian Geometry). To begin with, the structure of a Riemannian space is given by its metric tensor g_μν, from which the (differential) length is defined as ds² = g_μν dx^μ dx^ν; but this is nothing but a generalization of the usual Pythagorean theorem in Euclidean space. As for the fundamental concept of parallel transport, it is taken directly from its analogue in Euclidean space: it refers to the transport of abstract (not material, as Helmholtz believed) figures in such a space. Thus Riemann’s geometry cannot be free of synthetic a priori propositions, because it is entirely based upon concepts such as length and congruence taken from Euclid. We may conclude that Euclid’s geometry is the condition of possibility for a more general geometry, such as Riemann’s, simply because it is the natural geometry adapted to our understanding; Kant would say that it is our form of grasping space intuitively. The possibility of constructing abstract spaces does not refute Kant’s thesis; on the contrary, it reinforces it.
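The sense in which the line element generalizes Pythagoras can be displayed in one line (an illustrative sketch, not in the original):

```latex
% In Cartesian coordinates on flat space the metric is g_{mu nu} = delta_{mu nu}, so
\[ ds^2 = g_{\mu\nu}\,dx^\mu dx^\nu
   \;\longrightarrow\;
   ds^2 = (dx^1)^2 + (dx^2)^2 + \cdots + (dx^n)^2 , \]
% i.e. exactly the Pythagorean theorem; a general g_{mu nu}(x) recovers this
% form only locally, in the flat tangent space attached to each point.
```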

Constructivism. Note Quote.


Constructivism, as portrayed by its adherents, “is the idea that we construct our own world rather than it being determined by an outside reality”. Indeed, a common ground among constructivists of different persuasions lies in a commitment to the idea that knowledge is actively built up by the cognizing subject. But, whereas individualistic constructivism (which is most clearly enunciated by radical constructivism) focuses on the biological/psychological mechanisms that lead to knowledge construction, sociological constructivism focuses on the social factors that influence learning.

Let us briefly consider certain fundamental assumptions of individualistic constructivism. The first issue a constructivist theory of cognition ought to elucidate concerns, of course, the raw materials from which knowledge is constructed. On this issue, von Glasersfeld, an eminent representative of radical constructivism, gives a categorical answer: “from the constructivist point of view, the subject cannot transcend the limits of individual experience” (Michael R. Matthews, Constructivism in Science Education: A Philosophical Examination). This statement presents the keystone of constructivist epistemology, which conclusively asserts that “the only tools available to a ‘knower’ are the senses … [through which] the individual builds a picture of the world”. What is more, the mental pictures so formed do not depict a world ‘external’ to the subject, but constitute the distinct personal reality of each individual. And this of course entails, in its turn, that the responsibility for the gained knowledge lies with the constructor; it cannot be shifted to a pre-existing world. As Ranulph Glanville confesses, “reality is what I sense, as I sense it, when I’m being honest about it”.

In this way, individualistic constructivism estranges the cognizing subject from the external world. Cognition is not considered as aiming at the discovery and investigation of an ‘independent’ world; it is viewed as a ‘tool’ that exclusively serves the adaptation of the subject to the world as it is experienced. From this perspective, ‘knowledge’ acquires an entirely new meaning. In the expression of von Glasersfeld,

the word ‘knowledge’ refers to conceptual structures that epistemic agents, given the range of present experience, within their tradition of thought and language, consider viable….[Furthermore] concepts have to be individually built up by reflective abstraction; and reflective abstraction is not a matter of looking closer but at operating mentally in a way that happens to be compatible with the perceptual material at hand.

To put it briefly, ‘knowledge’ signifies nothing more than an adequate organization of the experiential world, which makes the cognizing subject capable of effectively manipulating its perceptual experience.

It is evident that such insights, precluding any external point of reference, have impacts on knowledge evaluation. Indeed, the ascertainment that “for constructivists there are no structures other than those which the knower forms by its own activity” (Michael R. Matthews, Constructivism in Science Education: A Philosophical Examination) unavoidably yields the conclusion drawn by Gerard De Zeeuw that “there is no mind-independent yardstick against which to measure the quality of any solution”. Hence, knowledge claims should not be evaluated by reference to a supposed ‘external’ world, but only by reference to their internal consistency and personal utility. This is precisely the reason that leads von Glasersfeld to suggest the substitution of the notion of “truth” by the notion of “viability” or “functional fit”: knowledge claims are appraised as “true” if they “functionally fit” into the subject’s experiential world; and to find a “fit” simply means not to notice any discrepancies. This functional adaptation of ‘knowledge’ to experience is what finally secures the intended “viability”.

In accordance with the constructivist view, the notion of ‘object’, far from indicating any kind of ‘existence’, explicitly refers to a strictly personal construction of the cognizing subject. Specifically, “any item of the furniture of someone’s experiential world can be called an ‘object’” (von Glasersfeld). From this point of view, the supposition that “the objects one has isolated in his experience are identical with those others have formed … is an illusion”. This of course deprives language of any rigorous criterion of objectivity; its physical-object statements, being dependent upon elements that are derived from personal experience, cannot be considered to reveal attributes of the objects as they factually are. Incorporating concepts whose meaning is highly associated with the individual experience of the cognizing subject, these statements form in the end a person-specific description of the world. Conclusively, for constructivists the term ‘objectivity’ “shows no more than a relative compatibility of concepts” in situations where individuals have had occasion to compare their “individual uses of the particular words”.

From the viewpoint of radical constructivism, science, being a human enterprise, is subject, by its very nature, to human limitations. It is then naturally inferred on constructivist grounds that “science cannot transcend [just as individuals cannot] the domain of experience” (von Glasersfeld). This statement, indicating that there is no essential differentiation between personal and scientific knowledge, permits, for instance, John Staver to assert that “for constructivists, observations, objects, events, data, laws and theory do not exist independent of observers. The lawful and certain nature of natural phenomena is a property of us, those who describe, not of nature, what is described”. Accordingly, by virtue of the preceding premise, one may argue that “scientific theories are derived from human experience and formulated in terms of human concepts” (von Glasersfeld).

Turning now to the framework of social constructivism: if one accepts that the term ‘knowledge’ means no more than “what is collectively endorsed” (David Bloor, Knowledge and Social Imagery), one will probably come to the conclusion that “the natural world has a small or non-existent role in the construction of scientific knowledge” (Collins). Or, in a weaker form, one can postulate that “scientific knowledge is symbolic in nature and socially negotiated. The objects of science are not the phenomena of nature but constructs advanced by the scientific community to interpret nature” (Rosalind Driver et al.). It is worth remarking that both views of constructivism eliminate, or at least downplay, the role of the natural world in the construction of scientific knowledge.

It is evident that the foregoing considerations lead most versions of constructivism to ultimately conclude that the very word ‘existence’ has no meaning in itself. It does acquire meaning only by referring to individuals or human communities. The acknowledgement of this fact renders subsequently the notion of ‘external’ physical reality useless and therefore redundant. As Riegler puts it, within the constructivist framework, “an external reality is neither rejected nor confirmed, it must be irrelevant”.

Pluralist Mathematics, Minimalist Philosophy: Hans Reichenbach. Drunken Risibility.


Hans Reichenbach relativized the notion of the constitutive a priori. The key observation concerns the fundamental difference between definitions in pure geometry and definitions in physical geometry. In pure geometry there are two kinds of definition: first, there are the familiar explicit definitions; second, there are implicit definitions, that is, the kind of definition whereby such fundamental terms as ‘point’, ‘line’, and ‘surface’ derive their meaning from the fundamental axioms governing them. But in physical geometry a new kind of definition emerges – that of a physical (or coordinative) definition:

The physical definition takes the meaning of the concept for granted and coordinates to it a physical thing; it is a coordinative definition. Physical definitions, therefore, consist in the coordination of a mathematical definition to a “piece of reality”; one might call them real definitions. (Reichenbach, 8)

Now there are two important points about physical definitions. First, some such correlation between a piece of mathematics and “a piece of physical reality” is necessary if one is to articulate the laws of physics (e.g. consider “force-free moving bodies travel in straight lines”). Second, given a piece of pure mathematics there is a great deal of freedom in choosing the coordinative definitions linking it to “a piece of physical reality”, since… coordinative definitions are arbitrary, and “truth” and “falsehood” are not applicable to them. So we have here a conception of the a priori which (by the first point) is constitutive (of the empirical significance of the laws of physics) and (by the second point) is relative. Moreover, on Reichenbach’s view, in choosing between two empirically equivalent theories that involve different coordinative definitions, there is no issue of “truth” – there is only the issue of simplicity. In his discussion of Einstein’s particular definition of simultaneity, after noting its simplicity, Reichenbach writes: “This simplicity has nothing to do with the truth of the theory. The truth of the axioms decides the empirical truth, and every theory compatible with them which does not add new empirical assumptions is equally true.” (p 11)
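Reichenbach’s stock example of a coordinative definition is the definition of distant simultaneity; a compact statement of it (the standard formulation, added for concreteness):

```latex
% A light signal leaves clock A at t_1, is reflected at B, and returns to A at t_3.
% The time coordinate assigned to the reflection event at B is a convention:
\[ t_2 \;=\; t_1 + \varepsilon\,(t_3 - t_1), \qquad 0 < \varepsilon < 1 . \]
% Einstein's definition is the simplest choice, epsilon = 1/2; every admissible
% epsilon yields an empirically equivalent description, which is why "truth"
% does not apply to the choice -- only simplicity does.
```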

Now, Reichenbach went beyond this and he held a more radical thesis – in addition to advocating pluralism with respect to physical geometry (something made possible by the free element in coordinative definitions), he advocated pluralism with respect to pure mathematics (such as arithmetic and set theory). According to Reichenbach, this view is made possible by the axiomatic conception of Hilbert, wherein axioms are treated as “implicit definitions” of the fundamental terms:

The problem of the axioms of mathematics was solved by the discovery that they are definitions, that is, arbitrary stipulations which are neither true nor false, and that only the logical properties of a system – its consistency, independence, uniqueness, and completeness – can be subjects of critical investigation. (p 3)

It needs to be stressed here that Reichenbach is extending the Hilbertian thesis concerning implicit definitions since although Hilbert held this thesis with regard to formal geometry he did not hold it with regard to arithmetic.

On this view there is a plurality of consistent formal systems and the notions of “truth” and “falsehood” do not apply to these systems; the only issue in choosing one system over another is one of convenience for the purpose at hand, and this is brought out by investigating their metamathematical properties, something that falls within the provenance of “critical investigation”, where there is a question of truth and falsehood. This radical form of pluralism came to be challenged by Gödel’s discovery of the incompleteness theorems. To begin with, through the arithmetization of syntax, the metamathematical notions that Reichenbach takes to fall within the provenance of “critical investigation” were themselves seen to be a part of arithmetic. Thus, one cannot, on pain of inconsistency, say that there is a question of truth and falsehood with regard to the former but not the latter. More importantly, the incompleteness theorems buttressed the view that truth outstrips consistency. This is most clearly seen using Rosser’s strengthening of the first incompleteness theorem, as follows: Let T be an axiom system of arithmetic that (a) falls within the provenance of “critical investigation” and (b) is sufficiently strong to prove the incompleteness theorem. A natural choice for such an axiom system is Primitive Recursive Arithmetic (PRA), but much weaker systems suffice, for example IΔ₀ + exp. Either of these systems can be taken as T. Assuming that T is consistent (something which falls within the provenance of “critical investigation”), by Rosser’s strengthening of the first incompleteness theorem, there is a Π⁰₁-sentence φ such that (provably within T + Con(T)) both T + φ and T + ¬φ are consistent. However, not both systems are equally legitimate. For it is easily seen that if a Π⁰₁-sentence φ is independent of such a theory, then it must be true. The point is that T is Σ⁰₁-complete (provably so in T). So, although T + ¬φ is consistent, it proves a false arithmetical statement.
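The closing argument can be compressed into a short schema (a restatement of the reasoning above, with the Σ⁰₁-completeness step made explicit):

```latex
% Sigma^0_1-completeness: every true Sigma^0_1-sentence is provable in T.
% Let phi be a Pi^0_1-sentence independent of T. Then neg(phi) is Sigma^0_1
% and unprovable in T, hence neg(phi) is false, hence phi is true:
\[ \varphi \in \Pi^0_1,\;\; T \nvdash \neg\varphi
   \;\;\Longrightarrow\;\;
   \neg\varphi \ \text{false}
   \;\;\Longrightarrow\;\;
   \varphi \ \text{true}. \]
% So T + neg(phi), though consistent, proves a false arithmetical statement.
```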

Superstrings as Grand Unifier. Thought of the Day 86.0


The first step in deriving General Relativity and particle physics from a common fundamental source may lie within the quantization of the classical string action. At a given momentum, quantized strings exist only at discrete energy levels, each level containing a finite number of string states, or particle types. There are huge energy gaps between the levels, which means that the directly observable particles belong to a small subset of string vibrations. In principle, a string has harmonic frequency modes ad infinitum. However, the masses of the corresponding particles get larger, and the particles decay to lighter ones all the quicker.
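A minimal sketch of what “discrete energy levels” means here, using the open bosonic string as the simplest textbook case (added for illustration; the superstring spectrum differs in detail):

```latex
% Open bosonic string: the mass of a state at oscillator level N is
\[ \alpha' M^2 \;=\; N - 1, \qquad N = 0, 1, 2, \dots \]
% N = 0 is the tachyon discussed below, N = 1 the massless level; successive
% levels are separated by gaps of order the string scale 1/alpha', which is
% why only the lowest vibrations correspond to observable particles.
```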

Most importantly, the ground energy state of the string contains a massless, spin-two particle. There are no higher spin particles, which is fortunate since their presence would ruin the consistency of the theory. The presence of a massless spin-two particle is undesirable if string theory has the limited goal of explaining hadronic interactions. This had been the initial intention. However, attempts at a quantum field theoretic description of gravity had shown that the force-carrier of gravity, known as the graviton, had to be a massless spin-two particle. Thus, in string theory’s comeback as a potential “theory of everything,” a curse turns into a blessing.

Once again, as with the case of supersymmetry and supergravity, we have the astonishing result that quantum considerations require the existence of gravity! From this vantage point, right from the start the quantum divergences of gravity are swept away by the extended string. Rather than being mutually exclusive, as it seems at first sight, quantum physics and gravitation have a symbiotic relationship. This reinforces the idea that quantum gravity may be a mandatory step towards the unification of all forces.

Unfortunately, the ground state energy level also includes particles of negative mass-squared, known as tachyons. Such particles have the speed of light as their limiting minimum speed, thus violating causality. Tachyonic particles generally signal an instability, or possibly even an inconsistency, in a theory. Since tachyons have negative mass-squared, an interaction involving finite input energy could result in particles of arbitrarily high energies together with arbitrarily many tachyons. There is no limit to the number of such processes, thus preventing a perturbative understanding of the theory.

An additional problem is that the string states only include bosonic particles. However, it is known that nature certainly contains fermions, such as electrons and quarks. Since supersymmetry is the invariance of a theory under the interchange of bosons and fermions, it may come as no surprise, in hindsight, that this is the key to resolving the second issue. As it turns out, the bosonic sector of the theory corresponds to the spacetime coordinates of a string, from the point of view of the conformal field theory living on the string worldvolume. This means that the additional fields are fermionic, so that the particle spectrum can potentially include all observable particles. In addition, the lowest energy level of a supersymmetric string is naturally massless, which eliminates the unwanted tachyons from the theory.

The inclusion of supersymmetry has some additional bonuses. Firstly, supersymmetry enforces the cancellation of zero-point energies between the bosonic and fermionic sectors. Since gravity couples to all energy, if these zero-point energies were not canceled, as in the case of non-supersymmetric particle physics, then they would have an enormous contribution to the cosmological constant. This would disagree with the observed cosmological constant being very close to zero, on the positive side, relative to the energy scales of particle physics.

Also, the weak, strong and electromagnetic couplings of the Standard Model differ by several orders of magnitude at low energies. However, at high energies, the couplings take on almost the same value, almost but not quite. It turns out that a supersymmetric extension of the Standard Model appears to render the values of the couplings identical at approximately 10¹⁶ GeV. This may be the manifestation of the fundamental unity of forces. It would appear that the “bottom-up” approach to unification is winning. That is, gravitation arises from the quantization of strings. To put it another way, supergravity is the low-energy limit of string theory, and has General Relativity as its own low-energy limit.
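The near-unification claim is easy to illustrate numerically with the standard one-loop renormalization-group equations. The following sketch is my own illustration, not from the post; it uses textbook one-loop beta coefficients and GUT-normalized hypercharge, starts all running at M_Z, and ignores the SUSY threshold, so the output is indicative only:

```python
# One-loop running of the three Standard Model gauge couplings, comparing the
# non-supersymmetric beta coefficients with those of the MSSM.
import numpy as np

M_Z = 91.1876                                  # Z mass in GeV, the reference scale
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])     # 1/alpha_i at M_Z: U(1)_Y (GUT norm), SU(2), SU(3)
b_SM   = np.array([41/10, -19/6, -7.0])        # one-loop beta coefficients, Standard Model
b_MSSM = np.array([33/5, 1.0, -3.0])           # one-loop beta coefficients, MSSM

def alpha_inv(mu, b):
    """One-loop running: 1/alpha_i(mu) = 1/alpha_i(M_Z) - b_i/(2 pi) * ln(mu/M_Z)."""
    return alpha_inv_MZ - b / (2.0 * np.pi) * np.log(mu / M_Z)

for mu in (1e3, 1e10, 1e16):                   # probe scales in GeV
    print(f"mu = {mu:.0e} GeV   SM: {alpha_inv(mu, b_SM).round(1)}   "
          f"MSSM: {alpha_inv(mu, b_MSSM).round(1)}")

# Near 10^16 GeV the three MSSM inverse couplings come close to a common value
# (~24-25), while the Standard Model values visibly miss one another.
```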

Philosophy of Dimensions: M-Theory. Thought of the Day 85.0


Superstrings provided a perturbatively finite theory of gravity which, after compactification down to 3+1 dimensions, seemed potentially capable of explaining the strong, weak and electromagnetic forces of the Standard Model, including the required chiral representations of quarks and leptons. However, there appeared to be not one but five seemingly different but mathematically consistent superstring theories: the E8 × E8 heterotic string, the SO(32) heterotic string, the SO(32) Type I string, and Types IIA and IIB strings. Each of these theories corresponded to a different way in which fermionic degrees of freedom could be added to the string worldsheet.

Supersymmetry constrains the upper limit on the number of spacetime dimensions to be eleven. Why, then, do superstring theories stop at ten? In fact, before the “first string revolution” of the mid-1980s, many physicists sought superunification in eleven-dimensional supergravity. Solutions to this most primitive supergravity theory include the elementary supermembrane and its dual partner, the solitonic superfivebrane. These are supersymmetric objects extended over two and five spatial dimensions, respectively. This brings to mind another question: why do superstring theories generalize zero-dimensional point particles only to one-dimensional strings, rather than to p-dimensional objects?

During the “second superstring revolution” of the mid-nineties it was found that, in addition to the (1+1)-dimensional string solutions, string theory contains soliton-like Dirichlet branes. These Dp-branes have (p + 1)-dimensional worldvolumes, which are hyperplanes in (9 + 1)-dimensional spacetime on which strings are allowed to end. If a closed string collides with a D-brane, it can turn into an open string whose ends move along the D-brane. The end points of such an open string satisfy conventional free boundary conditions along the worldvolume of the D-brane, and fixed (Dirichlet) boundary conditions in the 9 − p dimensions transverse to the D-brane.
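The boundary conditions described in the last sentence take a simple standard form (added for concreteness):

```latex
% Open-string endpoint on a Dp-brane (sigma = 0, pi labels the two endpoints):
\[ \partial_\sigma X^\mu \big|_{\sigma=0,\pi} = 0, \quad \mu = 0,\dots,p
   \quad \text{(Neumann: free motion along the worldvolume)}, \]
\[ X^i \big|_{\sigma=0,\pi} = x^i_0, \quad i = p+1,\dots,9
   \quad \text{(Dirichlet: fixed in the } 9-p \text{ transverse dimensions)}. \]
```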

D-branes make it possible to probe string theories non-perturbatively, i.e., when the interactions are no longer assumed to be weak. This more complete picture makes it evident that the different string theories are actually related via a network of “dualities.” T-dualities relate two different string theories by interchanging winding modes and Kaluza-Klein states, via R → α′/R. For example, Type IIA string theory compactified on a circle of radius R is equivalent to Type IIB string theory compactified on a circle of radius α′/R. There is a similar relation between the E8 × E8 and SO(32) heterotic string theories. While T-dualities remain manifest at weak coupling, S-dualities are less well-established strong/weak-coupling relationships. For example, the SO(32) heterotic string is believed to be S-dual to the SO(32) Type I string, while the Type IIB string is self-S-dual. There is a duality of dualities, in which the T-dual of one theory is the S-dual of another. Compactification on various manifolds often leads to dualities. The heterotic string compactified on a six-dimensional torus T⁶ is believed to be self-S-dual. Also, the heterotic string on T⁴ is dual to the Type II string on the four-dimensional K3 surface. The heterotic string on T⁶ is dual to the Type II string on a Calabi-Yau manifold. The Type IIA string on a Calabi-Yau manifold is dual to the Type IIB string on the mirror Calabi-Yau manifold.
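The mechanism behind R → α′/R is visible in the closed-string mass formula on a circle (bosonic case shown for simplicity; an illustrative sketch, not from the original):

```latex
% Closed string on a circle of radius R: momentum number n, winding number w,
% left/right oscillator levels N and N-tilde (with level matching N - N~ = nw).
\[ M^2 \;=\; \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha'^2}
           + \frac{2}{\alpha'}\,\bigl(N + \tilde N - 2\bigr). \]
% The spectrum is invariant under exchanging momentum and winding modes,
% n <-> w, together with R -> alpha'/R: this is T-duality.
```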

This led to the discovery that all five string theories are actually different sectors of an eleven-dimensional non-perturbative theory, known as M-theory. When M-theory is compactified on a circle S¹ of radius R₁₁, it leads to the Type IIA string, with string coupling constant g_s = R₁₁^(3/2) (in eleven-dimensional Planck units). Thus, the illusion that this string theory is ten-dimensional is a remnant of weak-coupling perturbative methods. Similarly, if M-theory is compactified on a line segment S¹/Z₂, then the E8 × E8 heterotic string is recovered.
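The relation g_s = R₁₁^(3/2) follows from the standard M-theory/Type IIA dictionary; a short sketch of it (standard relations, added for orientation):

```latex
% IIA string coupling g_s and string length l_s versus the eleventh radius R_11
% and the eleven-dimensional Planck length l_11:
\[ R_{11} = g_s\,\ell_s, \qquad \ell_{11}^{\,3} = g_s\,\ell_s^{\,3}
   \quad\Longrightarrow\quad
   g_s = \left(\frac{R_{11}}{\ell_{11}}\right)^{3/2}. \]
% Weak coupling (g_s << 1) thus means a small eleventh dimension, which is why
% perturbative Type IIA string theory appears ten-dimensional.
```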

Just as a given string theory has a corresponding supergravity in its low-energy limit, eleven-dimensional supergravity is the low-energy limit of M-theory. Since we do not yet know what the full M-theory actually is, many different names have been attributed to the “M,” including Magical, Mystery, Matrix, and Membrane! Whenever we refer to “M-theory,” we mean the theory which subsumes all five string theories and whose low-energy limit is eleven-dimensional supergravity. We now have an adequate framework with which to understand a wealth of non-perturbative phenomena. For example, electric-magnetic duality in D = 4 is a consequence of string-string duality in D = 6, which in turn is the result of membrane-fivebrane duality in D = 11. Furthermore, the exact electric-magnetic duality has been extended to an effective duality of non-conformal N = 2 Seiberg-Witten theory, which can be derived from M-theory. In fact, it seems that all supersymmetric quantum field theories with any gauge group could have a geometrical interpretation through M-theory, as worldvolume fields propagating on a common intersection of stacks of p-branes wrapped around various cycles of compactified manifolds.

In addition, while perturbative string theory has vacuum degeneracy problems due to the billions of Calabi-Yau vacua, the non-perturbative effects of M-theory lead to smooth transitions from one Calabi-Yau manifold to another. The question to ask is then not why we live in one topology, but rather why we live in a particular corner of the unique topology; M-theory might offer a dynamical explanation of this. While supersymmetry ensures that the high-energy values of the Standard Model coupling constants meet at a common value, which is consistent with the idea of grand unification, the gravitational coupling constant just misses this meeting point. In fact, M-theory may resolve long-standing cosmological and quantum-gravitational problems. For example, M-theory accounts for a microscopic description of black holes by supplying the necessary non-perturbative components, namely p-branes. This solves the problem of counting black hole entropy by internal degrees of freedom.