The Affinity of Mirror Symmetry to Algebraic Geometry: Going Beyond Formalism


Even though the formalism of homological mirror symmetry is well established, what of other explanations of mirror symmetry that lie closer to classical differential and algebraic geometry? One way to tackle this question is the proposal of Strominger, Yau and Zaslow, SYZ mirror symmetry for short.

The central physical ingredient in this proposal is T-duality. To explain this, let us consider a superconformal sigma model with target space (M, g), and denote it (regarded either as a geometric functor or as a set of correlation functions) by

CFT(M, g)

In physics, a duality is an equivalence

CFT(M, g) ≅ CFT(M′, g′)

which holds despite the fact that the underlying geometries (M,g) and (M′, g′) are not classically diffeomorphic.

T-duality is a duality which relates two CFT’s with toroidal target space, M ≅ M′ ≅ Td, but different metrics. In rough terms, the duality relates a “small” target space, with noncontractible cycles of length L < ls, with a “large” target space in which all such cycles have length L > ls.

This sort of relation is generic to dualities and follows from the following logic. If all length scales (lengths of cycles, curvature lengths, etc.) are greater than ls, string theory reduces to conventional geometry. Now, in conventional geometry, we know what it means for (M, g) and (M′, g′) to be non-isomorphic. Any modification of this notion must be associated with a breakdown of conventional geometry, which requires some length scale to be “sub-stringy,” with L < ls.

To state T-duality precisely, let us first consider M = M′ = S1. We parameterise this with a coordinate X ∈ R, making the identification X ∼ X + 2π. Consider a Euclidean metric gR given by ds² = R²dX². The real parameter R is usually called the “radius,” from the obvious embedding in R². This manifold is Ricci-flat, and thus the sigma model with this target space is a conformal field theory, the “c = 1 boson.” Let us furthermore set the string scale ls = 1. With this, we obtain the complete physical equivalence

CFT(S1, gR) ≅ CFT(S1, g1/R)

Thus these two target spaces are indistinguishable from the point of view of string theory.
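To make the R ↔ 1/R equivalence concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument; the function names are invented, and overall normalisations and oscillator modes are ignored): with ls = 1, the zero-mode energies on the circle are (n/R)² + (wR)² for momentum n and winding w, and sending R → 1/R while exchanging n ↔ w reproduces exactly the same set of energies.

```python
import math

def zero_mode_levels(R, cutoff=3):
    """Zero-mode energies (n/R)^2 + (w*R)^2 for |n|, |w| <= cutoff (with l_s = 1)."""
    return sorted(
        (n / R) ** 2 + (w * R) ** 2
        for n in range(-cutoff, cutoff + 1)
        for w in range(-cutoff, cutoff + 1)
    )

R = 0.37  # a "sub-stringy" radius, R < l_s
dual = 1.0 / R
levels, dual_levels = zero_mode_levels(R), zero_mode_levels(dual)
assert all(math.isclose(a, b) for a, b in zip(levels, dual_levels))
print(f"R = {R} and R' = {dual:.3f} give identical zero-mode spectra")
```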

Just to give a physical picture for what this means, suppose for the sake of discussion that superstring theory describes our universe, and thus that in some sense there must be six extra spatial dimensions. Suppose further that we had evidence that the extra dimensions factorized topologically and metrically as K5 × S1; then it would make sense to ask: What is the radius R of this S1 in our universe? In principle this could be measured by producing sufficiently energetic particles (so-called “Kaluza-Klein modes”), or perhaps measuring deviations from Newton’s inverse square law of gravity at distances L ∼ R. In string theory, T-duality implies that R ≥ ls, because any theory with R < ls is equivalent to another theory with R > ls. Thus we have a nontrivial relation between two (in principle) observable quantities, R and ls, which one might imagine testing experimentally.

Let us now consider the theory CFT(Td, g), where Td is the d-dimensional torus, with coordinates Xi parameterising Rd/2πZd, and a constant metric tensor gij. Then there is a complete physical equivalence

CFT(Td, g) ≅ CFT(Td, g−1)

In fact this is just one element of a discrete group of T-duality symmetries, generated by T-dualities along one-cycles, and large diffeomorphisms (those not continuously connected to the identity). The complete group is isomorphic to SO(d, d; Z).

While very different from conventional geometry, T-duality has a simple intuitive explanation. This starts with the observation that the possible embeddings of a string into the target space M can be classified by the fundamental group π1(M). Strings representing non-trivial homotopy classes are usually referred to as “winding states.” Furthermore, since strings interact by interconnecting at points, the group structure on π1 provided by concatenation of based loops is meaningful and is respected by interactions in the string theory. Now π1(Td) ≅ Zd as an abelian group, referred to as the group of “winding numbers.”

Of course, there is another Zd we could bring into the discussion, the Pontryagin dual of the U(1)d of which Td is an affinization. An element of this group is referred to physically as a “momentum,” as it is the eigenvalue of a translation operator on Td. Again, this group structure is respected by the interactions. These two group structures, momentum and winding, can be summarized in the statement that the full closed string algebra contains the group algebra C[Zd] ⊕ C[Zd].

In essence, the point of T-duality is that if we quantize the string on a sufficiently small target space, the roles of momentum and winding will be interchanged. But the main point can be seen by bringing in some elementary spectral geometry. Besides the algebra structure, another invariant of a conformal field theory is the spectrum of its Hamiltonian H (technically, the Virasoro operator L0 + L̄0). This Hamiltonian can be thought of as an analog of the standard Laplacian ∆g on functions on the target space, and its spectrum on Td with metric g is

Spec ∆ = { ∑_{i,j=1}^{d} g^{ij} p_i p_j : p ∈ Z^d }

On the other hand, the energy of a winding string is (intuitively) a function of its length. On our torus, a geodesic with winding number w ∈ Zd has length squared

L² = ∑_{i,j=1}^{d} g_{ij} w^i w^j

Now, the only string theory input we need to bring in is that the total Hamiltonian contains both terms,

H = ∆g + L² + · · ·

where the extra terms … express the energy of excited (or “oscillator”) modes of the string. Then, the inversion g → g−1, combined with the interchange p ↔ w, leaves the spectrum of H invariant. This is T-duality.
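The same kind of check can be run on the torus (again a hedged sketch of my own, with the oscillator terms “· · ·” dropped and all names invented for illustration): the zero-mode energy g^{ij} p_i p_j + g_{ij} w^i w^j is unchanged under g → g⁻¹ combined with p ↔ w, for any constant positive-definite metric.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
g = A @ A.T + 2.0 * np.eye(2)   # a random constant, positive-definite metric on T^2
g_inv = np.linalg.inv(g)

def zero_mode_energy(metric, p, w):
    """g^{ij} p_i p_j + g_{ij} w^i w^j, where g^{ij} is the inverse of `metric`."""
    return p @ np.linalg.inv(metric) @ p + w @ metric @ w

cutoff = 2
lattice = [np.array([a, b]) for a in range(-cutoff, cutoff + 1)
                            for b in range(-cutoff, cutoff + 1)]
for p in lattice:
    for w in lattice:
        assert np.isclose(zero_mode_energy(g, p, w),
                          zero_mode_energy(g_inv, w, p))
print("zero-mode spectrum is invariant under g -> g^{-1} together with p <-> w")
```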

There is a simple generalization of the above to the case with a non-zero B-field on the torus satisfying dB = 0. In this case, since B is a constant antisymmetric tensor, we can label CFT’s by the matrix g + B. Now, the basic T-duality relation becomes

CFT(Td, g + B) ≅ CFT(Td, (g + B)−1)

Another generalization, which is considerably more subtle, is to do T-duality in families, or fiberwise T-duality. The same arguments can be made, and would become precise in the limit that the metric on the fibers varies on length scales far greater than ls, and has curvature lengths far greater than ls. This is sometimes called the “adiabatic limit” in physics. While this is a very restrictive assumption, there are more heuristic physical arguments that T-duality should hold more generally, with corrections to the relations proportional to curvatures ls²R and derivatives ls∂ of the fiber metric, both in perturbation theory and from world-sheet instantons.

Grand Unification Theory (GUT)/Anti-GUT: Emergent Symmetry, Topology in Momentum Space. Thought of the Day 129.0

Quantum phase transition between two ground states with the same symmetry but of different universality class – gapless at q < qc and fully gapped at q > qc – shown either as an isolated point (a) or as the termination point of a first-order transition line (b)

There are two schemes for the classification of states in condensed matter physics and relativistic quantum fields: classification by symmetry (GUT scheme) and by momentum space topology (anti-GUT scheme).

For the first classification method, a given state of the system is characterized by a symmetry group H which is a subgroup of the symmetry group G of the relevant physical laws. The thermodynamic phase transition between equilibrium states is usually marked by a change of the symmetry group H. This classification reflects the phenomenon of spontaneously broken symmetry. In relativistic quantum fields the chain of successive phase transitions, in which the large symmetry group existing at high energy is reduced at low energy, lies at the basis of Grand Unification (GUT) models. In condensed matter spontaneous symmetry breaking is a typical phenomenon, and the thermodynamic states are likewise classified in terms of the subgroup H of the relevant group G. The groups G and H also determine the topological defects, which are classified by the nontrivial elements of the homotopy groups πn(G/H).

The second classification method reflects the opposite tendency – anti Grand Unification (anti-GUT) – in which, instead of symmetry breaking, symmetry gradually emerges at low energy. This method deals with the ground states of the system at zero temperature (T = 0), i.e., it is a classification of quantum vacua. The universality classes of quantum vacua are determined by momentum-space topology, which is also responsible for the type of effective theory, and for the emergent physical laws and symmetries, at low energy. Contrary to the GUT scheme, where the symmetry of the vacuum state is primary and gives rise to the topology, in the anti-GUT scheme the topology in momentum space is primary while the vacuum symmetry is an emergent phenomenon in the low-energy corner.

At the moment, we live in an ultra-cold Universe. All the characteristic temperatures in our Universe are extremely small compared to the Planck energy scale EP. That is why all the massive fermions, whose natural mass must be of order EP, are frozen out due to the extremely small factor exp(−EP/T). There would be no matter in our Universe were it not for massless fermions, whose masslessness is protected with extremely high accuracy. It is the topology in momentum space which provides such protection.

For systems living in 3D space, there are four basic universality classes of fermionic vacua provided by topology in momentum space:

(i)  Vacua with fully-gapped fermionic excitations, such as semiconductors and conventional superconductors.

(ii)  Vacua with fermionic excitations characterized by Fermi points – points in 3D momentum space at which the energy of fermionic quasiparticles vanishes. Examples are provided by the quantum vacuum of the Standard Model above the electroweak transition, where all elementary particles are Weyl fermions with Fermi points in the spectrum. This universality class manifests the phenomenon of emergent relativistic quantum fields at low energy: close to the Fermi points the fermionic quasiparticles behave as massless Weyl fermions, while the collective modes of the vacuum interact with these fermions as gauge and gravitational fields (a numerical sketch of the topological charge protecting such a point is given after this list).

(iii)  Vacua with fermionic excitations characterized by lines in 3D momentum space or points in 2D momentum space. We call them Fermi lines, though in general it is better to characterize the zeroes by their co-dimension, i.e. the dimension of p-space minus the dimension of the manifold of zeros. Lines in 3D momentum space and points in 2D momentum space both have co-dimension 2, since 3 − 1 = 2 − 0 = 2. The Fermi lines are topologically stable only if some special symmetry is obeyed.

(iv) Vacua with fermionic excitations characterized by Fermi surfaces. This universality class also manifests the phenomenon of emergent physics, though non-relativistic: at low temperature all metals behave in a similar way, and this behavior is described by the Landau theory of the Fermi liquid – the effective theory based on the existence of a Fermi surface. The Fermi surface has co-dimension 1: in a 3D system it is a surface (co-dimension = 3 − 2 = 1), in a 2D system it is a line (co-dimension = 2 − 1 = 1), and in a 1D system it is a point (co-dimension = 1 − 0 = 1; in a one-dimensional system the Landau Fermi-liquid theory does not work, but the Fermi surface survives).
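To make the protection mentioned under class (ii) concrete, here is a hedged numerical sketch of my own (it uses the simplest two-band form H(p) = h(p)·σ rather than the general Green's-function formulation, and all names are invented for illustration): the topological charge of a Fermi point is the degree of the map p̂ ↦ h(p)/|h(p)| over a small sphere enclosing the node, N = (1/4π) ∮ ĥ · (∂θĥ × ∂φĥ) dθ dφ, which evaluates to N = +1 for an elementary Weyl point h(p) = p.

```python
import numpy as np

def h_weyl(p):
    """h(p) for an elementary (right-handed) Weyl point; swap in any two-band model."""
    return p

def fermi_point_charge(h, radius=1e-2, n_theta=150, n_phi=150, eps=1e-5):
    """Degree of the map p-hat -> h(p)/|h(p)| over a small sphere around p = 0."""
    def hhat(t, f):
        p = radius * np.array([np.sin(t) * np.cos(f),
                               np.sin(t) * np.sin(f),
                               np.cos(t)])
        v = h(p)
        return v / np.linalg.norm(v)

    thetas = np.linspace(1e-4, np.pi - 1e-4, n_theta)
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    dth, dph = thetas[1] - thetas[0], phis[1] - phis[0]
    total = 0.0
    for t in thetas:
        for f in phis:
            # numerical derivatives of the unit vector h-hat along theta and phi
            dh_dt = (hhat(t + eps, f) - hhat(t - eps, f)) / (2 * eps)
            dh_df = (hhat(t, f + eps) - hhat(t, f - eps)) / (2 * eps)
            total += np.dot(hhat(t, f), np.cross(dh_dt, dh_df)) * dth * dph
    return total / (4.0 * np.pi)

print(f"N ≈ {fermi_point_charge(h_weyl):+.3f}")  # ≈ +1.000: the Weyl point carries unit charge
```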

There is also the possibility of a Fermi band class (v), where the energy vanishes in a finite region of the 3D momentum space and the zeroes thus have co-dimension 0; such a topologically stable flat band may exist in the spectrum of fermion zero modes, i.e. for fermions localized in the core of topological objects.

The phase transitions which follow from this classification scheme are quantum phase transitions, occurring at T = 0. It may happen that by changing some parameter q of the system we transfer the vacuum state from one universality class to another, or to a vacuum of the same universality class but with a different topological quantum number, without changing its symmetry group H. The point qc where this zero-temperature transition occurs marks the quantum phase transition. For T ≠ 0, the second-order phase transition is absent, as the two states belong to the same symmetry class H, but a first-order phase transition is not excluded. Hence there is either an isolated singular point (qc, 0) in the (q, T) plane, or the end point of a first-order transition line. The quantum phase transitions which occur in classes (iv) and (i), or between these classes, are well known. In class (iv) the corresponding quantum phase transition is the Lifshitz transition, at which the Fermi surface changes its topology or emerges from the fully gapped state of class (i). The transition between fully gapped states characterized by different topological charges occurs in 2D systems exhibiting the quantum Hall and spin-Hall effects: this is the plateau-plateau transition between states with different values of the Hall (or spin-Hall) conductance. The less well known transitions involve nodes of co-dimension 3 and nodes of co-dimension 2.

Gnostic Semiotics. Thought of the Day 63.0


The question here is: what is being composed? For the deferment and difference that are always already of the Sign suggest that perhaps the composition is one that lies not within but without, a creation that lies on the outside but which then determines – perhaps through the reader more than anything else, for after all the meaning of a particular sign, be it a word or anything else, requires a form of impregnation by the receiver – a particular meaning.

Is there any choice but to assume a meaning in a sign? Only through the simulation, or ‘belief’ if you prefer (there is really no difference between the two concepts), of an inherent meaning in the sign can any transference continue. For even if we acknowledge that all communication is merely the circulation of empty signifiers, the impregnation of the signified (no matter how unconnected it may be to the other person’s signified) still ensures that the sign carries with it a meaning. Only through this simulation of a meaning is circulation possible – even if one posits that the sign circulates itself, this would not be possible if it were completely empty.

Since it is from without (even if meaning is from the reader, (s)he is external to the signification), this suggests that the meaning is a result, a consequence of forces – its signification is a result of the significance of various forces (convention, context, etc) which then means that inherently, the sign remains empty; a pure signifier leading to yet another signifier.

The interesting element though lies in the fact that the empty signifier then sucks the Other (in the form of the signified, which takes the form of the Absolute Other here) into it, in order to define an existence, but essentially remains an empty signifier, awaiting impregnation with meaning from the reader. A void: always full and empty or perhaps (n)either full (n)or empty. For true potentiality must always already contain the possibility of non-potentiality. Otherwise there would be absolutely no difference between potentiality and actualization – they would merely be different ends of the same spectrum.

Without Explosions, WE Would NOT Exist!


The matter and radiation in the universe get hotter and hotter as we go back in time towards the initial quantum state, because they were compressed into a smaller volume. In this Hot Big Bang epoch in the early universe, we can use standard physical laws to examine the processes going on in the expanding mixture of matter and radiation. A key feature is that about 300,000 years after the start of the Hot Big Bang epoch, nuclei and electrons combined to form atoms. At earlier times when the temperature was higher, atoms could not exist, as the radiation then had so much energy it disrupted any atoms that tried to form into their constituent parts (nuclei and electrons). Thus at earlier times matter was ionized, consisting of negatively charged electrons moving independently of positively charged atomic nuclei. Under these conditions, the free electrons interact strongly with radiation by Thomson scattering. Consequently matter and radiation were tightly coupled in equilibrium at those times, and the Universe was opaque to radiation. When the temperature dropped through the ionization temperature of about 4000 K, atoms formed from the nuclei and electrons, and this scattering ceased: the Universe became very transparent. The time when this transition took place is known as the time of decoupling – it was the time when matter and radiation ceased to be tightly coupled to each other, at a redshift zdec ≃ 1100 (Scott Dodelson, Modern Cosmology, Academic Press). The baryon density, radiation density and radiation temperature scale with the scale factor S(t) as

μbar ∝ S⁻³, μrad ∝ S⁻⁴, Trad ∝ S⁻¹ —– (1)

The scale factor S(t) obeys the Raychaudhuri equation

3S̈/S = −½κ(μ + 3p/c²) + Λ —– (2)

where κ is the gravitational constant and Λ the cosmological constant.

Since the matter density scales as S⁻³ and the radiation density as S⁻⁴ (Equation (1)), the universe was radiation dominated (μrad ≫ μmat) at early times and matter dominated (μrad ≪ μmat) at late times; matter-radiation density equality occurred significantly before decoupling (the temperature at which this equality occurred was Teq ≃ 10⁴ K; at that time the scale factor was Seq ≃ 10⁻⁴ S0, where S0 is the present-day value). The dynamics of both the background model and of perturbations about that model differ significantly before and after Seq.
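A quick consistency check on these numbers (my own back-of-envelope arithmetic, using only the rounded values quoted in this text): since Trad ∝ S⁻¹, the scale factor at matter-radiation equality follows directly from the ratio of the present radiation temperature to Teq.

```python
# Hedged back-of-envelope check, using only the rounded values quoted above.
T_now = 2.75   # present radiation temperature, K
T_eq = 1e4     # temperature at matter-radiation equality, K
print(f"S_eq / S_0 = T_now / T_eq ≈ {T_now / T_eq:.1e}")  # ≈ 3e-4, i.e. S_eq ~ 1e-4 S_0
```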

Radiation was emitted by matter at the time of decoupling, thereafter travelling freely to us through the intervening space. When it was emitted, it had the form of blackbody radiation, because this is a consequence of matter and radiation being in thermodynamic equilibrium at earlier times. Thus the matter at z = zdec forms the Last Scattering Surface (LSS) in the early universe, emitting Cosmic Blackbody Background Radiation (‘CBR’) at 4000 K, which since then has travelled freely with its temperature T scaling inversely with the scale function of the universe. As the radiation travelled towards us, the universe expanded by a factor of about 1100; consequently by the time it reaches us, it has cooled to 2.75 K (that is, about 3 degrees above absolute zero, with a spectrum peaking in the microwave region), and so is extremely hard to observe. It was however detected in 1965, and its spectrum has since been intensively investigated, its blackbody nature being confirmed to high accuracy (R. B. Partridge, 3K: The Cosmic Microwave Background Radiation). Its existence is now taken as solid proof both that the Universe has indeed expanded from a hot early phase, and that standard physics applied unchanged at that era in the early universe.

The thermal capacity of the radiation is hugely greater than that of the matter. At very early times before decoupling, the temperatures of the matter and radiation were the same (because they were in equilibrium with each other), scaling as 1/S(t) (Equation (1) above). The early universe exceeded any temperature that can ever be attained on Earth or even in the centre of the Sun; as it dropped towards its present value of 3 K, successive physical reactions took place that determined the nature of the matter we see around us today. At very early times and high temperatures, only elementary particles could survive, and even neutrinos had a very small mean free path; as the universe cooled down, neutrinos decoupled from the matter and streamed freely through space. At these times the expansion of the universe was radiation dominated, and we can approximate the universe then by models with {k = 0, w = 1/3, Λ = 0}, the resulting simple solution of

3Ṡ²/S² = A/S³ + B/S⁴ + Λ/3 − 3k/S² —– (3)

uniquely relating time to temperature:

S(t) = S0 t^{1/2},  t = 1.92 sec [T/10¹⁰ K]⁻² —– (4)

(There are no free constants in the latter equation).
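To see what Equation (4) gives in practice, here is a small sketch of my own (the function name is invented for illustration): evaluating the time-temperature relation at the temperatures relevant to the events discussed next shows why everything is over within the first few minutes.

```python
def time_at_temperature(T_kelvin):
    """Time (seconds) after the start of the expansion in the radiation era,
    from Equation (4): t = 1.92 s * (T / 1e10 K)^-2."""
    return 1.92 * (T_kelvin / 1e10) ** -2

for T in (1e10, 1e9):
    print(f"T = {T:.0e} K  ->  t ≈ {time_at_temperature(T):.0f} s")
# T = 1e10 K (neutrino decoupling)      -> t ≈ 2 s
# T = 1e9  K (onset of nucleosynthesis) -> t ≈ 192 s, i.e. roughly three minutes
```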

At very early times, even neutrinos were tightly coupled and in equilibrium with the radiation; they decoupled at about 10¹⁰ K, resulting in a relic neutrino background density in the universe today of about Ων0 ≃ 10⁻⁵ if they are massless (but it could be higher depending on their masses). Key events in the early universe are associated with out-of-equilibrium phenomena. An important event was the era of nucleosynthesis, the time when the light elements were formed. Above about 10⁹ K, nuclei could not exist because the radiation was so energetic that as fast as they formed, they were disrupted into their constituent parts (protons and neutrons). However below this temperature, if particles collided with each other with sufficient energy for nuclear reactions to take place, the resultant nuclei remained intact (the radiation being less energetic than their binding energy and hence unable to disrupt them). Thus the nuclei of the light elements – deuterium, tritium, helium, and lithium – were created by neutron capture. This process ceased when the temperature dropped below about 10⁸ K (the nuclear reaction threshold). In this way, the proportions of these light elements at the end of nucleosynthesis were determined; they have remained virtually unchanged since. The rate of reaction was extremely high; all this took place within the first three minutes of the expansion of the Universe. One of the major triumphs of Big Bang theory is that theory and observation are in excellent agreement provided the density of baryons is low: Ωbar0 ≃ 0.044. Then the predicted abundances of these elements (25% Helium by weight, 75% Hydrogen, the others being less than 1%) agree very closely with the observed abundances. Thus the standard model explains the origin of the light elements in terms of known nuclear reactions taking place in the early Universe. However heavier elements cannot form in the time available (about 3 minutes).
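The “25% Helium by weight” figure quoted above can be recovered with a standard back-of-envelope estimate (my addition, not part of this text): if the neutron-to-proton ratio when the reactions freeze out is roughly n/p ≈ 1/7 and essentially all neutrons end up bound in helium-4, the helium mass fraction is Y = 2(n/p)/(1 + n/p).

```python
n_over_p = 1.0 / 7.0   # approximate neutron-to-proton ratio at freeze-out (assumed)
Y = 2.0 * n_over_p / (1.0 + n_over_p)
print(f"predicted helium mass fraction Y ≈ {Y:.2f}")  # ≈ 0.25, i.e. about 25% by weight
```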

In a similar way, physical processes in the very early Universe (before nucleosynthesis) can be invoked to explain the ratio of matter to anti-matter in the present-day Universe: a small excess of matter over anti-matter must be created then in the process of baryosynthesis, without which we could not exist today (if there were no such excess, matter and antimatter would have all annihilated to give just radiation). However other quantities (such as electric charge) are believed to have been conserved even in the extreme conditions of the early Universe, so their present values result from given initial conditions at the origin of the Universe, rather than from physical processes taking place as it evolved. In the case of electric charge, the total conserved quantity appears to be zero: after quarks form protons and neutrons at the time of baryosynthesis, there are equal numbers of positively charged protons and negatively charged electrons, so that at the time of decoupling there were just enough electrons to combine with the nuclei and form uncharged atoms (it seems there is no net electrical charge on astronomical bodies such as our galaxy; were this not true, electromagnetic forces would dominate cosmology, rather than gravity).

After decoupling, matter formed large scale structures through gravitational instability which eventually led to the formation of the first generation of stars and is probably associated with the re-ionization of matter. However at that time planets could not form for a very important reason: there were no heavy elements present in the Universe. The first stars aggregated matter together by gravitational attraction, the matter heating up as it became more and more concentrated, until its temperature exceeded the thermonuclear ignition point and nuclear reactions started burning hydrogen to form helium. Eventually more complex nuclear reactions started in concentric spheres around the centre, leading to a build-up of heavy elements (carbon, nitrogen, oxygen for example), up to iron. These elements can form in stars because there is a long time available (millions of years) for the reactions to take place. Massive stars burn relatively rapidly, and eventually run out of nuclear fuel. The star becomes unstable, and its core rapidly collapses because of gravitational attraction. The consequent rise in temperature blows it apart in a giant explosion, during which time new reactions take place that generate elements heavier than iron; this explosion is seen by us as a Supernova (“New Star”) suddenly blazing in the sky, where previously there was just an ordinary star. Such explosions blow into space the heavy elements that had been accumulating in the star’s interior, forming vast filaments of dust around the remnant of the star. It is this material that can later be accumulated, during the formation of second generation stars, to form planetary systems around those stars. Thus the elements of which we are made (the carbon, nitrogen, oxygen and iron nuclei for example) were created in the extreme heat of stellar interiors, and made available for our use by supernova explosions. Without these explosions, we could not exist.

The Silicon Ideology


Traditional anti-fascist tactics have largely been formulated in response to 20th-century fascism. I am not confident that they will be sufficient to defeat neo-reactionaries; that is not to say they will not be useful, merely that they will be insufficient. Neo-reactionaries must be fought on their own ground (the internet), and with their own tactics: doxxing especially, which has been shown to be effective at threatening the alt-right. Information must be spread about neo-reactionaries, such that they lose opportunities to accumulate capital and social capital….

…Transhumanism, for many, seems to be the part of neo-reactionary ideology that “sticks out” from the rest. Indeed, some wonder how neo-reactionaries and transhumanists would ever mix, and why I am discussing LessWrong in the context of neo-reactionary beliefs. As for the last question, this is because LessWrong served as a convenient “incubation centre”, so to speak, in which neo-reactionary ideas could develop and spread for many years, and because the goal of LessWrong – a friendly super-intelligent AI ruling humanity for its own good – was fundamentally compatible with existing neo-reactionary ideology, which had already begun developing a futurist orientation in its infancy due, in part, to its historical and cultural influences. The rest of the question, however, is not just historical but theoretical: what is transhumanism, and why does it mix well with reactionary ideology?…..

….. In the words of Moldbug:

A startup is basically structured as a monarchy. We don’t call it that, of course. That would seem weirdly outdated, and anything that’s not democracy makes people uncomfortable. We are biased toward the democratic-republican side of the spectrum. That’s what we’re used to from civics classes. But, the truth is that startups and founders lean toward the dictatorial side because that structure works better for startups.

He doesn’t, of course, claim that this would be a good way to rule a country, but that is the clear message sent by his political projects. Balaji Srinivasan made a similar rhetorical move, using clear neo-reactionary ideas without mentioning their sources, in a speech to a “startup school” affiliated with Y Combinator:

We want to show what a society run by Silicon Valley would look like. That’s where “exit” comes in . . . . It basically means: build an opt-in society, ultimately outside the US, run by technology. And this is actually where the Valley is going. This is where we’re going over the next ten years . . . [Google co-founder] Larry Page, for example, wants to set aside a part of the world for unregulated experimentation. That’s carefully phrased. He’s not saying, “take away the laws in the U.S.” If you like your country, you can keep it. Same with Marc Andreessen: “The world is going to see an explosion of countries in the years ahead—doubled, tripled, quadrupled countries.”

Well, that’s the-silicon-ideology through and through.