Fibrations of Elliptic Curves in F-Theory.

F-theory compactifications are by definition compactifications of the type IIB string with non-zero, and in general non-constant, string coupling – they are thus intrinsically non-perturbative. F-theory may also be seen as a construction that geometrizes (and thereby makes manifest) certain features pertaining to the S-duality of the type IIB string.

Let us first recapitulate the most important massless bosonic fields of the type IIB string. From the NS-NS sector, we have the graviton gμν, the antisymmetric 2-form field B(2) as well as the dilaton φ; the latter, when exponentiated, serves as the coupling constant of the theory. Moreover, from the R-R sector we have the p-form tensor fields C(p) with p = 0,2,4. It is also convenient to include the magnetic duals of these fields, B(6), C(6) and C(8) (C(4) has self-dual field strength). It is useful to combine the dilaton with the axion C(0) into one complex field:

τIIB ≡ C(0) + ie−φ —– (1)

The S-duality then acts via projective SL(2, Z) transformations in the canonical manner:

τIIB → (aτIIB + b)/(cτIIB + d) with a, b, c, d ∈ Z and ad – bc = 1

Furthermore, it acts via simple matrix multiplication on the other fields if these are grouped into the doublets (B(2), C(2)) and (B(6), C(6)), while C(4) stays invariant.
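As a minimal illustration of the transformation rule above (our own sketch, not part of the original text), one can act on a sample value of τIIB with the generators T: τ → τ + 1 and S: τ → −1/τ:

```python
# Minimal sketch: the projective SL(2, Z) action on the complexified coupling
# tau_IIB = C(0) + i e^(-phi); the values and matrices below are arbitrary examples.

def sl2z_act(tau, a, b, c, d):
    """Apply tau -> (a*tau + b)/(c*tau + d), requiring ad - bc = 1."""
    assert a * d - b * c == 1, "matrix must lie in SL(2, Z)"
    return (a * tau + b) / (c * tau + d)

tau = 0.3 + 1.7j                      # a sample value of C(0) + i e^(-phi)
print(sl2z_act(tau, 1, 1, 0, 1))      # T: tau -> tau + 1 (shift of the axion C(0))
print(sl2z_act(tau, 0, -1, 1, 0))     # S: tau -> -1/tau (weak <-> strong coupling)
```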

The simplest F-theory compactifications are the highest dimensional ones, and simplest of all is the compactification of the type IIB string on the 2-sphere, P1. However, as the first Chern class does not vanish, C1(P1) = –2, this by itself cannot be a good, supersymmetry-preserving background. The remedy is to add extra 7-branes to the theory, which sit at arbitrary points zi on the P1, and otherwise fill the 7+1 non-compact space-time dimensions. If this is done in the right way, C1(P1) is cancelled, thereby providing a consistent background.

Encircling the location of a 7-brane in the z-plane leads to a jump of the perceived type IIB string coupling, τIIB → τIIB + 1.

To explain how this works, consider first a single D7-brane located at an arbitrary given point z0 on the P1. A D7-brane carries by definition one unit of D7-brane charge, since it is a unit source of C(8). This means that it is magnetically charged with respect to the dual field C(0), which enters the complexified type IIB coupling in (1). As a consequence, encircling the brane location z0 will induce a non-trivial monodromy, that is, a jump of the coupling. But this then implies that in the neighborhood of the D7-brane, we must have a non-constant string coupling of the form τIIB(z) = 1/(2πi) ln[z – z0]; we thus indeed have a truly non-perturbative situation.
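To make the monodromy explicit, here is a small numerical check (a sketch we add for illustration, with an arbitrary brane position z0): integrating dτ = dz/(2πi(z − z0)) once counterclockwise around z0 gives +1, i.e. τIIB → τIIB + 1.

```python
# Numerical check of the monodromy: tau(z) = (1/(2*pi*i)) ln(z - z0) shifts by +1
# when z is carried once counterclockwise around the D7-brane location z0.
import cmath, math

z0 = 0.4 + 0.2j                                   # arbitrary brane location
dtau_dz = lambda z: 1.0 / (2j * math.pi * (z - z0))

N, r = 2000, 0.1                                  # points on the loop, loop radius
total = 0j
for k in range(N):
    za = z0 + r * cmath.exp(2j * math.pi * k / N)
    zb = z0 + r * cmath.exp(2j * math.pi * (k + 1) / N)
    total += 0.5 * (dtau_dz(za) + dtau_dz(zb)) * (zb - za)   # trapezoidal rule

print(total)                                      # ~ 1.0 + 0.0j, so tau_IIB -> tau_IIB + 1
```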

In view of the SL(2, Z) action on the string coupling (1), it is natural to interpret it as the modular parameter of a two-torus, T2, and this is what then gives a geometrical meaning to the S-duality group. Since this modular parameter τIIB = τIIB(z) is not constant over the P1 compactification manifold, the shape of the T2 will accordingly vary along the P1. The relevant geometrical object is therefore not the direct product manifold T2 × P1, but rather a fibration of T2 over P1.

Fibration of an elliptic curve over P1, which in total makes a K3 surface.

The logarithmic behavior of τIIB(z) in the vicinity of a 7-brane means that the T2 fiber is singular at the brane location. It is known from mathematics that each such singular fiber contributes 1/12 to the first Chern class. Therefore we need to put 24 of them in order to have a consistent type IIB background with C1 = 0. The mathematical data “T2 fibered over P1 with 24 singular fibers” is now exactly what characterizes the K3 surface; indeed it is the only complex two-dimensional manifold with vanishing first Chern class (apart from T4).

The K3 manifold that arises in this context is so far just a formal construct, introduced to encode the behavior of the string coupling in the presence of 7-branes in an elegant and useful way. One may speculate about a possible more concrete physical significance, such as being the compactification manifold of a yet unknown 12-dimensional “F-theory”. The existence of such a theory is still unclear, but all we need the K3 for is to use its intriguing geometric properties for computing physical quantities (ultimately, the quartic gauge threshold couplings).

In order to do explicit computations, we first of all need a concrete representation of the K3 surface. Since the families of K3’s in question are elliptically fibered, the natural starting point is the two-torus T2. It can be represented in the well-known “Weierstraß” form:

WT2 = y2 + x3 + xf + g = 0 —– (2)

which in turn is invariantly characterized by the J-function:

J = 4(24f)3/(4f3 + 27g2) —– (3)

An elliptically fibered K3 surface can be made out of (2) by letting f → f8(z) and g → g12(z) become polynomials in the P1 coordinate z, of the indicated orders. The locations zi of the 7-branes, which correspond to the locations of the singular fibers where J(τIIB(zi)) → ∞, are then precisely where the discriminant

∆(z) ≡ 4f83(z) + 27g122(z) =: ∏i=124 (z – zi)

vanishes.
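As a quick consistency check (our own sketch, with arbitrary stand-in coefficients for f8 and g12), the discriminant of such a fibration is a degree-24 polynomial in z and therefore has 24 zeros, the 24 brane locations / singular fibers counted above:

```python
# Sketch: for stand-in polynomials f8 (degree 8) and g12 (degree 12), the
# discriminant Delta(z) = 4 f8^3 + 27 g12^2 has degree 24, hence 24 zeros z_i.
import sympy as sp

z = sp.symbols('z')
f8  = z**8 + 1                      # arbitrary stand-in for f8(z)
g12 = z**12 + z + 2                 # arbitrary stand-in for g12(z)

Delta = sp.expand(4 * f8**3 + 27 * g12**2)
poly  = sp.Poly(Delta, z)
print(poly.degree())                # 24
print(len(poly.nroots()))           # 24 roots z_i, counted with multiplicity
```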

Knowledge Limited for Dummies….Didactics.

Bertrand Russell, together with Alfred North Whitehead, aimed in the Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (therefore fleeing to the Institute for Advanced Study in Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, a statement like: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its negation is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that embed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, the combined precision of a measurement of an electron’s location and its momentum cannot exceed a fixed limit. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength the greater the amount of energy that hits the electron, and the greater the energy hitting the electron the greater the impact on its velocity.
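In quantitative form (standard textbook statements, only implicit in the passage above): localizing the electron to Δx ∼ λ with light of wavelength λ imparts a momentum kick of order h/λ, so

```latex
\[
  \Delta x \,\Delta p \;\gtrsim\; \frac{\hbar}{2},
  \qquad
  \Delta x \sim \lambda
  \;\Longrightarrow\;
  \Delta p \;\sim\; \frac{h}{\lambda}\,.
\]
```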

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, promulgated a slight error in the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
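The experiment is easy to reproduce in miniature. The sketch below is ours, using the three-variable system from Lorenz’s 1963 paper rather than his original multi-equation weather model, and arbitrary initial values: restarting the integration from values rounded to three decimals produces a gap that grows until the two runs are unrelated.

```python
# Sketch of Lorenz's rounding experiment on the three-variable Lorenz-63 system.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One 4th-order Runge-Kutta step of the Lorenz equations."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(shift(state, k1, dt / 2))
    k3 = f(shift(state, k2, dt / 2))
    k4 = f(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

exact   = (1.234567, 2.345678, 20.345678)          # full-precision initial values
rounded = tuple(round(v, 3) for v in exact)        # the "printout" restart values

for step in range(1, 3001):
    exact, rounded = lorenz_step(exact), lorenz_step(rounded)
    if step % 500 == 0:
        gap = max(abs(a - b) for a, b in zip(exact, rounded))
        print(step, round(gap, 6))   # grows from ~1e-3 to the size of the attractor
```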

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
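A toy version of such an artificial market is easy to write down (the sketch below is ours, with made-up parameters, not the Institute’s actual model): trend followers, value traders and random liquidity traders submit net demands, and the price moves with the order imbalance.

```python
# Toy agent-based market: momentum, value and noise traders; price impact on imbalance.
import random

random.seed(7)
price, value, history = 100.0, 100.0, [100.0]

for t in range(1000):
    trend = history[-1] - history[-5] if len(history) >= 5 else 0.0
    demand_trend = 0.5 * ((trend > 0) - (trend < 0))      # trend followers chase momentum
    demand_value = 0.05 * (value - price)                 # value traders buy cheap, sell dear
    demand_noise = random.gauss(0.0, 0.5)                 # liquidity-driven random trades
    imbalance = demand_trend + demand_value + demand_noise
    price = max(1.0, price + 0.5 * imbalance)             # linear price impact
    value += random.gauss(0.0, 0.1)                       # slowly drifting fundamental
    history.append(price)

print(min(history), max(history))   # the path shows bubble- and crash-like swings
```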

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever the set of equations are that might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of even deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Underlying the Non-Perturbative Quantum Geometry of the Quartic Gauge Couplings in 8D.

A lot can be learned by simply focussing on the leading singularities in the moduli space of the effective theory. However, for the sake of performing really non-trivial quantitative tests of the heterotic/F-theory duality, we should try harder in order to reproduce the exact functional form of the couplings ∆eff(T) from K3 geometry. The hope is, of course, to learn something new about how to do exact non-perturbative computations in D-brane physics.

More specifically, the issue is to eventually determine the extra contributions to the geometric Green’s functions. Since we have a priori no good clue from first principles how to do this, the results of the previous section, together with experience with four-dimensional compactifications with N = 2 supersymmetry, suggest that mirror symmetry should somehow be a useful tool.

The starting point is the observation that threshold couplings of similar structure appear also in four-dimensional, N = 2 supersymmetric compactifications of type II strings on Calabi-Yau threefolds. More precisely, these coupling functions multiply operators of the form TrFG2 (in contrast to the quartic operators in d = 8), and can be written in the form

∆(4d)eff ∼ ln[λα1(1-λ)α2(λ′)3] + γ(λ) —– (1)

which is similar to the Green’s function

∆eff(λ) = ∆N-1prime form(λ) + δ(λ)

It is to be noted that a Green’s function is in general ambiguous up to the addition of a finite piece, and it is this ambiguous piece to which we can formally attribute those extra non-singular, non-perturbative corrections.

The term δ(λ) contributes to the dilaton flat coordinate. The dilaton S is a period associated with the CY threefold moduli space and, like all period integrals, it satisfies a system of linear differential equations. This differential equation may then be translated back into geometry, which would then hopefully give us a clue about what the relevant quantum geometry is that underlies those quartic gauge couplings in eight dimensions.

The starting point is the families of singular K3 surfaces, with which are associated the period integrals that evaluate to hypergeometric functions. Generally, period integrals satisfy Picard-Fuchs linear differential equations.

The four-dimensional theories are obtained by compactifying the type II strings on CY threefolds of special type, namely fibrations of the K3 surfaces over P1. The size of the P1 then yields an additional modulus, whose associated flat coordinate is precisely the dilaton S (in the dual, heterotic language; from the type II point of view, it is simply another geometric modulus). The K3-fibered threefolds are then associated with enlarged PF systems of the form:

LN(z, y) = θz(θz – 2θy) – z(θz + 1/(2N))(θz + 1/2 – 1/(2N))

L2(y) = θy2 – 2y(2θy + 1)θy —– (2)
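As a sanity check on the fiber part of this system (our own sketch; the identification of the holomorphic K3 period with the hypergeometric series below just follows from reading off the operator LN as written above with the θy terms dropped, and N = 2 is a sample choice):

```python
# Check that w0(z) = 2F1(1/(2N), 1/2 - 1/(2N); 1; z) is annihilated by
# theta^2 - z (theta + 1/(2N)) (theta + 1/2 - 1/(2N)), with theta = z d/dz.
import mpmath as mp

mp.mp.dps = 30
N = 2                                              # sample value of N
a, b = mp.mpf(1) / (2 * N), mp.mpf(1) / 2 - mp.mpf(1) / (2 * N)
w0 = lambda u: mp.hyp2f1(a, b, 1, u)               # candidate holomorphic period

z = mp.mpf('0.13')                                 # arbitrary point inside |z| < 1
w, wp, wpp = w0(z), mp.diff(w0, z), mp.diff(w0, z, 2)
theta1 = z * wp                                    # theta w
theta2 = z * wp + z**2 * wpp                       # theta^2 w
residual = theta2 - z * (theta2 + (a + b) * theta1 + a * b * w)
print(residual)                                    # ~ 0: w0 solves the K3 Picard-Fuchs equation
```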

For perturbative, one-loop contributions on the heterotic side (which capture the full story in d = 8, in contrast to d = 4), we need to consider only the weak coupling limit, which corresponds to the limit of large base space: y ∼ e-S → 0. Though we might now be tempted to drop all the θy ≡ y∂y terms in the PF system, we should note that the θy term in LN(z, y) can give a non-vanishing contribution, namely when it hits the logarithmic piece of the dilaton period, S = -ln[y] + γ. As a result one finds that the piece γ that we want to compute satisfies, in the limit y → 0, the following inhomogeneous differential equation

LN . (γϖ0)(z) = ϖ0(z) —– (3)

We now apply the inverse of this strategy to our eight-dimensional problem. Since we know from the perturbative heterotic calculation what the exact answer for δ must be, we can work backwards and see what inhomogeneous differential equation the extra contribution δ(λ) obeys. It satisfies

LN⊗2 . (δϖ0)(z) = ϖ02(z) —– (4)

whose homogeneous operator

LN⊗2(z) = θz3 – z(θz + 1 – 1/N)(θz + 1/2)(θz + 1/N) —– (5)

is the symmetric square of the K3 Picard-Fuchs operator. This means that its solution space is given by the symmetric square of the solution space of LN(z), i.e.,

LN⊗2 . (ϖ02, ϖ0ϖ1, ϖ12) = 0 —– (6)
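Numerically this factorization can be seen via Clausen’s identity (again a check we add, with the same identification of ϖ0 as in the sketch above and a sample value of N): the square of the hypergeometric K3 period coincides with the 3F2 series annihilated by the third-order operator (5).

```python
# Clausen's identity: 2F1(a, b; 1; z)^2 = 3F2(2a, 2b, a+b; 1, 1; z) for a + b = 1/2,
# with a = 1/(2N), b = 1/2 - 1/(2N); this 3F2 is a solution of the operator (5).
import mpmath as mp

mp.mp.dps = 30
N = 3                                                     # sample value of N
a, b = mp.mpf(1) / (2 * N), mp.mpf(1) / 2 - mp.mpf(1) / (2 * N)

z = mp.mpf('0.07')
lhs = mp.hyp2f1(a, b, 1, z) ** 2                          # w0(z)^2
rhs = mp.hyp3f2(2 * a, 2 * b, a + b, 1, 2 * (a + b), z)   # lower parameters (1, 1)
print(lhs - rhs)                                          # ~ 0
```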

Even though the inhomogeneous PF equation (4) concisely captures the extra corrections to the eight-dimensional threshold terms, the considerations leading to this equation have been rather formal, and it would clearly be desirable to get a better understanding of what it means mathematically and physically.

Note that in the four-dimensional situation, the PF operator LN(z), which figures as the homogeneous piece in (3), is by construction associated with the K3 fiber of the threefold. By analogy, the homogeneous piece of equation (4) should then tell us something about the geometry that is relevant in the eight-dimensional situation. Observing that the solution space (6) is given by products of the K3 periods, it is clear what the natural geometrical object is: it must be the symmetric square Sym2(K3) = (K3 × K3)/Z2. Being a hyperkähler manifold, it has periods (not subject to world-sheet instanton corrections) that indeed enjoy the factorization property exhibited by (6).

Formal similarity of the four- and eight-dimensional string compactifications: the quantum geometry that underlies the quadratic or quartic gauge couplings appears to be given by three- or five-folds, which are fibrations of K3 or of its symmetric square, respectively. The perturbative computations on the heterotic side are supposedly reproduced by the mirror maps on these manifolds in the limit where the base P1’s are large.

The occurrence of such symmetric products is familiar in D-brane physics. The geometrical structure that is relevant to us is however not just the symmetric square of K3, but rather a fibration of it, in the limit of large base space – this is precisely the content of the inhomogeneous PF equation (4). It is however not at all obvious to us why this particular structure of a hyperkähler-fibered five-fold should underlie the non-perturbative quantum geometry of the quartic gauge couplings in eight dimensions.

Weakness of Gravity and Transverse Spreading of Gravitational Flux. Drunken Risibility.

Dirichlet branes – or their dual heterotic fivebranes and Horava-Witten walls – can trap non-abelian gauge interactions in their worldvolumes. This has placed on a firmer basis an old idea, according to which we might be living on a brane embedded in a higher-dimensional world. The idea arises naturally in compactifications of type I theory, which typically involve collections of orientifold planes and D-branes. The ‘brane-world’ scenario admits a fully perturbative string description.

In type I string theory the graviton (a closed-string state) lives in the ten-dimensional bulk, while open-string vector bosons are in general localized on lower-dimensional D-branes. Furthermore, while closed strings interact to leading order via the sphere diagram, open strings interact via the disk diagram, which is of higher order in the genus expansion. The four-dimensional Planck mass and Yang-Mills couplings therefore take the form

αU ∼ gI/(r˜MI)6-n

M2Planck ∼ rn r˜6-n MI8/gI2

where r is the typical radius of the n compact dimensions transverse to the brane, r˜ the typical radius of the remaining (6-n) compact longitudinal dimensions, MI the type I string scale and gI the string coupling constant. By appropriate T-dualities we can again ensure that both r and r˜ are greater than or equal to the fundamental string scale. T-dualities change n and may take us either to the Ia or to the Ib theory (also called I or I’, respectively).

It follows from these formulae that (a) there is no universal relation between MPlanck, αU and MI anymore, and (b) tree-level gauge couplings corresponding to different sets of D-branes have radius-dependent ratios and need not unify at all. Thus type I string theory is much more flexible (and less predictive) than its heterotic counterpart. The fundamental string scale, MI, in particular is a free parameter, even if one insists that αU be kept fixed and of order one, and that the string theory be weakly coupled. This added flexibility can be used to ‘remove’ the order-of-magnitude discrepancy between the apparent unification and string scales of the heterotic theory, to lower MI to an intermediate scale, or even all the way down to its experimentally allowed limit of order a TeV. Keeping for instance gI, αU and r˜MI fixed and of order one leads to the condition

rn ∼ M2Planck/MI2+n

A TeV string scale would then require from n = 2 millimetric to n = 6 fermi-size dimensions transverse to our brane world. The relative weakness of gravity is in this picture attributed to the large transverse spreading of the gravitational flux.
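A quick back-of-the-envelope check of the last statement (our own numbers, with MI set to 1 TeV and the reduced/non-reduced Planck mass distinction ignored):

```python
# r^n ~ M_Planck^2 / M_I^(2+n): for M_I ~ 1 TeV this gives millimetre-size
# transverse dimensions for n = 2 and fermi-size ones for n = 6.
M_planck = 1.2e19          # GeV
M_I      = 1.0e3           # GeV, i.e. ~ 1 TeV
hbar_c   = 1.97e-16        # GeV * m, converts 1/GeV to metres

for n in (2, 6):
    r = (M_planck**2 / M_I**(2 + n)) ** (1.0 / n)   # in units of 1/GeV
    print(n, r * hbar_c, "m")
# n = 2 -> ~ 2e-3 m (millimetric), n = 6 -> ~ 5e-14 m (tens of fermi)
```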

Canonical Actions on Bundles – Philosophizing Identity Over Gauge Transformations.

In physical applications, fiber bundles often come with a preferred group of transformations (usually the symmetry group of the system). The modern attitude of physicists is to regard this group as a fundamental structure which should be implemented from the very beginning, enriching bundles with a further structure and defining a new category.

A similar feature appears on manifolds as well: for example, on ℜ2 one can restrict to Cartesian coordinates when we regard it just as a vector space endowed with a differentiable structure, but one can also allow translations if the “bigger” affine structure is considered. Moreover, coordinates can be chosen in much bigger sets: for instance one can fix the symplectic form ω = dx ∧ dy on ℜ2 so that ℜ2 is covered by an atlas of canonical coordinates (which include all Cartesian ones). But ℜ2 also happens to be identifiable with the cotangent bundle T*ℜ, so that we can restrict the previous symplectic atlas to allow only natural fibered coordinates. Finally, ℜ2 can be considered as a bare manifold, so that general curvilinear coordinates should be allowed accordingly; only if the full (i.e., unrestricted) manifold structure is considered can one use a full maximal atlas. Other choices define instead maximal atlases in suitably restricted sub-classes of allowed charts.

As any manifold structure is associated with a maximal atlas, geometric bundles are associated to “maximal trivializations”. However, it may happen that one can restrict (or enlarge) the allowed local trivializations, so that the same geometrical bundle can be trivialized just using the appropriate smaller class of local trivializations. In geometrical terms this corresponds, of course, to imposing a further structure on the bare bundle. Of course, this newly structured bundle is defined by the same basic ingredients, i.e. the same base manifold M, the same total space B, the same projection π and the same standard fiber F, but it is characterized by a new maximal trivialization where, however, maximal refers now to a smaller set of local trivializations.

Examples are: vector bundles are characterized by linear local trivializations, affine bundles are characterized by affine local trivializations, principal bundles are characterized by left translations on the fiber group. Further examples come from Physics: gauge transformations are used as transition functions for the configuration bundles of any gauge theory. For these reasons we give the following definition of a fiber bundle with structure group.

A fiber bundle with structure group G is given by a sextuple B = (E, M, π; F; λ, G) such that:

  • (E, M, π; F) is a fiber bundle. The structure group G is a Lie group (possibly a discrete one) and λ : G —–> Diff(F) defines a left action of G on the standard fiber F .
  • There is a family of preferred trivializations {(Uα, t(α)}α∈I of B such that the following holds: let the transition functions be gˆ(αβ) : Uαβ —–> Diff(F) and let eG be the neutral element of G. ∃ a family of maps g(αβ) : Uαβ —–> G such

    that, for each x ∈ Uαβγ = Uα ∩ Uβ ∩ Uγ

    g(αα)(x) = eG

    g(αβ)(x) = [g(βα)(x)]-1

    g(αβ)(x) . g(βγ)(x) . g(γα)(x) = eG

    and

    gˆ(αβ)(x) = λ(g(αβ)(x)) ∈ Diff(F)

The maps g(αβ) : Uαβ —–> G, which depend on the trivialization, are said to form a cocycle with values in G. They are called the transition functions with values in G (or also shortly the transition functions). The preferred trivializations will be said to be compatible with the structure. Whenever dealing with fiber bundles with structure group the choice of a compatible trivialization will be implicitly assumed.
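A concrete toy example of such a cocycle (our own sketch, with G = U(1) realized as complex phases and arbitrary local functions on three overlapping charts) makes the three conditions easy to verify pointwise:

```python
# Transition functions g_(alpha beta)(x) = exp(i (f_alpha(x) - f_beta(x))) with values
# in G = U(1) automatically satisfy the cocycle conditions on triple overlaps.
import cmath, math

f = {                                   # arbitrary local functions on the charts
    'a': lambda x: math.sin(x),
    'b': lambda x: 0.5 * x**2,
    'c': lambda x: math.cos(2 * x),
}

def g(alpha, beta, x):
    return cmath.exp(1j * (f[alpha](x) - f[beta](x)))

x = 0.37                                # a sample point in the triple overlap
print(abs(g('a', 'a', x) - 1))                                    # g_aa = e
print(abs(g('a', 'b', x) - 1 / g('b', 'a', x)))                   # g_ab = g_ba^(-1)
print(abs(g('a', 'b', x) * g('b', 'c', x) * g('c', 'a', x) - 1))  # cocycle product = e
```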

Fiber bundles with structure group provide the suitable framework to deal with bundles with a preferred group of transformations. To see this, let us begin by introducing the notion of the structure bundle of a fiber bundle with structure group B = (B, M, π; F; λ, G).

Let B = (B, M, π; F; λ, G) be a bundle with a structure group; let us fix a trivialization {(Uα, t(α))}α∈I and denote by g(αβ) : Uαβ —–> G its transition functions. By using the canonical left action L : G —–> Diff(G) of G onto itself, let us define gˆ(αβ) : Uαβ —–> Diff(G) given by gˆ(αβ)(x) = L(g(αβ)(x)); they obviously satisfy the cocycle properties. One can now construct a (unique modulo isomorphisms) principal bundle PB = P(B) having G as structure group and g(αβ) as transition functions acting on G by left translation Lg : G —> G.

The principal bundle P(B) = (P, M, p; G) constructed above is called the structure bundle of B = (B, M, π; F; λ, G).

Notice that there is no similar canonical way of associating a structure bundle to a geometric bundle B = (B, M, π; F), since in that case the structure group G is at least partially undetermined.

Each automorphism of P(B) naturally acts over B.

Let, in fact, {σ(α)}α∈I be a trivialization of PB together with its transition functions g(αβ) : Uαβ —–> G defined by σ(β) = σ(α) . g(αβ). Then any principal morphism Φ = (Φ, φ) over PB is locally represented by local maps ψ(α) : Uα —> G such that

Φ : [x, h](α) ↦ [φ(α)(x), ψ(α)(x) . h](α)

Since Φ is a global automorphism of PB, the above local expressions must agree on the overlaps, so that the following properties hold true in Uαβ:

φ(α)(x) = φ(β)(x) ≡ x’

ψ(α)(x) = g(αβ)(x’) . ψ(β)(x) . g(βα)(x)

By using the family of maps {(φ(α), ψ(α))} one can then define a global automorphism of B. In fact, using the trivialization {(Uα, t(α))}α∈I, one can define local automorphisms of B given by

Φ(α)B : (x, y) ↦ (φ(α)(x), [λ(ψ(α)(x))](y))

These local maps glue together to give a global automorphism ΦB of the bundle B, due to the fact that g(αβ) are also transition functions of B with respect to its trivialization {(Uα, t(α)}α∈I.

In this way B is endowed with a preferred group of transformations, namely the group Aut(PB) of automorphisms of the structure bundle PB, represented on B by means of the canonical action. These transformations are called (generalized) gauge transformations. Vertical gauge transformations, i.e. gauge transformations projecting over the identity, are also called pure gauge transformations.

Principal Bundles Preserve Structures…

A bundle P = (P, M, π; G) is a principal bundle if the standard fiber is a Lie group G and ∃ (at least) one trivialization the transition functions of which act on G by left translations Lg : G → G : h ↦ g . h (where . denotes here the group multiplication).

The principal bundles are slightly different from affine bundles and vector bundles. In fact, while in affine bundles the fibers π-1(x) have a canonical structure of affine spaces and in vector bundles the fibers π-1(x) have a canonical structure of vector spaces, in principal bundles the fibers have no canonical Lie group structure. This is due to the fact that, while in affine bundles transition functions act by means of affine transformations and in vector bundles transition functions act by means of linear transformations, in principal bundles transition functions act by means of left translations which are not group automorphisms. Thus the fibers of a principal bundle do not carry a canonical group structure, but rather many non-canonical (trivialization-depending) group structures. In the fibers of a vector bundle there exists a preferred element (the “zero”) the definition of which does not depend on the local trivialization. On the contrary, in the fibers of a principal bundle there is no preferred point which is fixed by transition functions to be selected as an identity. Thus, while in affine bundles affine morphisms are those which preserve the affine structure of the fibers and in vector bundles linear morphisms are the ones which preserve the linear structure of the fibers, in a principal bundle P = (P, M, π; G) principal morphisms preserve instead a structure, the right action of G on P.

Let P = (P, M, π; G) be a principal bundle and {(Uα, t(α)}α∈I a trivialization. We can locally consider the maps

R(α)g : π-1(Uα) → π-1(Uα) : [x, h](α) ↦ [x, h . g](α) —– (1)

∃ a (global) right action Rg of G on P which is free, vertical and transitive on fibers; the local expression in the given trivialization of this action is given by R(α)g .

Using the local trivializations, we have p = [x, h](α) = [x, g(βα)(x) . h](β), so that the following diagram commutes:

[diagram: the local right actions R(α)g and R(β)g are intertwined by the transition functions on the overlaps Uαβ]

which clearly shows that the local expressions agree on the overlaps Uαβ and thus define a global right action. This is obviously a vertical action; it is free because of the following:

Rgp = p => [x, h . g](α) = [x, h](α) => h · g = h => g = e —– (2)

Finally, if p = [x, h1](α) and q = [x, h2](α) are two points in the same fiber of P, one can choose g = h2-1 . h1 (where . denotes the group multiplication) so that p = Rgq. This shows that the right action is also transitive on the fibers.
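On a trivial principal bundle M × G these properties can be checked directly; the sketch below (ours, with G = SO(2) realized by rotation matrices and arbitrary sample points) illustrates verticality, freeness and transitivity on a fiber:

```python
# Right action R_g([x, h]) = [x, h . g] on the trivial bundle M x G, with G = SO(2).
import numpy as np

def rot(t):                                       # an element of G = SO(2)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

x, h, g = 0.25, rot(0.7), rot(1.1)                # a point p = [x, h] and some g in G
p_moved = (x, h @ g)                              # R_g p: base point unchanged (vertical)

print(np.allclose(h @ g, h))                      # False: the action is free unless g = e

h1, h2 = rot(0.3), rot(2.0)                       # p = [x, h1], q = [x, h2] in one fiber
g_fix = h2.T @ h1                                 # g = h2^(-1) . h1 (inverse rotation = transpose)
print(np.allclose(h2 @ g_fix, h1))                # True: R_g q = p, transitive on the fiber
```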

On the contrary, a global left action cannot be defined by using the local maps

L(α)g : π-1(Uα) → π-1(Uα) : [x, h](α) ↦ [x, g . h](α) —– (3)

since these local maps do not satisfy a compatibility condition analogous to the condition of the commuting diagram.

Let P = (P, M, π; G) and P’ = (P’, M’, π’; G’) be two principal bundles and θ : G → G’ be a homomorphism of Lie groups. A bundle morphism Φ = (Φ, φ) : P → P’ is a principal morphism with respect to θ if the following diagram is commutative:

[diagram: Φ intertwines the right actions of G on P and of G’ on P’, i.e. Φ(p . g) = Φ(p) . θ(g)]

When G = G’ and θ = idG we just say that Φ is a principal morphism.

A trivial principal bundle (M × G, M, π; G) naturally admits the global unit section I ∈ Γ(M × G), defined with respect to a global trivialization, I : x ↦ (x, e), e being the unit element of G. Also, principal bundles allow global sections iff they are trivial. In fact, on principal bundles there is a canonical correspondence between local sections and local trivializations, due to the presence of the global right action.

Nomological Unification and Phenomenology of Gravitation. Thought of the Day 110.0

String theory, which promises to give an all-encompassing, nomologically unified description of all interactions, did not even lead to any unambiguous solutions to the multitude of explanative desiderata of the standard model of quantum field theory: the determination of its specific gauge invariances, broken symmetries and particle generations, as well as its 20 or more free parameters, the chirality of matter particles, etc. String theory does at least give an explanation for the existence and for the number of particle generations. The latter is determined by the topology of the compactified additional spatial dimensions of string theory; their topology determines the structure of the possible oscillation spectra. The number of particle generations is identical to half the absolute value of the Euler number of the compact Calabi-Yau topology. But, because it is completely unclear which topology should be assumed for the compact space, there are no definitive results. This ambiguity is part of the vacuum selection problem; there are probably more than 10^100 alternative scenarios in the so-called string landscape. Moreover, all concrete models deliberately chosen and analyzed so far lead to generation numbers that are much too big. There are phenomenological indications that the number of particle generations cannot exceed three. String theory admits generation numbers between three and 480.
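Spelled out (a worked instance we add; the quintic value of the Euler number is the standard textbook one):

```latex
\[
  n_{\text{gen}} \;=\; \tfrac{1}{2}\,\bigl|\chi(\text{CY}_3)\bigr| ,
  \qquad
  \chi(\text{quintic in }\mathbb{P}^4) = -200 \;\Rightarrow\; n_{\text{gen}} = 100 ,
  \qquad
  n_{\text{gen}} = 3 \;\Rightarrow\; |\chi| = 6 .
\]
```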

Attempts at a concrete solution of the relevant external problems (and explanative desiderata) either did not take place, or they did not show any results, or they led to escalating ambiguities and finally got drowned completely in the string landscape scenario: the recently developed insight that string theory obviously does not lead to a unique description of nature, but describes an immense number of nomologically, physically and phenomenologically different worlds with different symmetries, parameter values, and values of the cosmological constant.

String theory seems to be by far too much preoccupied with its internal conceptual and mathematical problems to be able to find concrete solutions to the relevant external physical problems. It is almost completely dominated by internal consistency constraints. It is not the fact that we are living in a ten-dimensional world which forces string theory to a ten-dimensional description. It is that perturbative string theories are only anomaly-free in ten dimensions; and they contain gravitons only in a ten-dimensional formulation. The resulting question, how the four-dimensional spacetime of phenomenology emerges from ten-dimensional perturbative string theories (or their eleven-dimensional non-perturbative extension: the mysterious, not yet existing M-theory), led to the compactification idea and to the braneworld scenarios, and from there to further internal problems.

It is not the fact that empirical indications for supersymmetry were found that forces consistent string theories to include supersymmetry. Without supersymmetry, string theory has no fermions and no chirality, but there are tachyons which make the vacuum unstable; and supersymmetry has certain conceptual advantages: it very probably leads to the finiteness of the perturbation series, thereby avoiding the problem of non-renormalizability which haunted all former attempts at a quantization of gravity; and there is a close relation between supersymmetry and Poincaré invariance which seems reasonable for quantum gravity. But it is clear that not all conceptual advantages are necessarily part of nature, as the example of the elegant, but unsuccessful, Grand Unified Theories demonstrates.

Both its ten (or eleven) dimensions and the inclusion of supersymmetry have more or less the character of conceptually, but not empirically, motivated ad-hoc assumptions. String theory consists of a rather careful adaptation of the mathematical and model-theoretical apparatus of perturbative quantum field theory to the quantized, one-dimensionally extended, oscillating string (and, finally, of a minimal extension of its methods into the non-perturbative regime, for which the declarations of intent exceed by far the conceptual successes). Without any empirical data transcending the context of our established theories, there remains for string theory only the minimal conceptual integration of basic parts of the phenomenology already reproduced by these established theories. And a significant component of this phenomenology, namely the phenomenology of gravitation, was already used up in the selection of string theory as an interesting approach to quantum gravity. Only because string theory, containing gravitons as string states, reproduces in a certain way the phenomenology of gravitation is it taken seriously.