Schematic Grothendieck Representation

A spectral Grothendieck representation Rep is said to be schematic if, for every triple γ ≤ τ ≤ δ in Top(A) and every A in R^(Ring), we have a commutative diagram in R^:

[commutative diagram]

 

If Rep is schematic, then P : Top(A) → R^ is a presheaf with values in R^ over the lattice Top(A)o, for every A in R.

The approach is to restrict attention to Tors(Rep(A)), which is a lattice in the usual sense, and hence should be viewed as the commutative shadow of a suitable noncommutative theory.

To obtain the complete lattice Q(A), a duality is expressed by an order-reversing bijection (−)−1 : Q(A) → Q((Rep(A))o). Note that (Rep(A))o is not a Grothendieck category; it is additive and has a projective generator. Moreover, it is known to be a varietal category (also called triplable), in the sense that it has a projective regular generator P, it is cocomplete and has kernel pairs with respect to the functor Hom(P, −), and every equivalence relation in the category is a kernel pair. A comparison functor constructed via Hom(P, −), viewed as a functor to the category of sets, works well for the category of set-valued sheaves over a Grothendieck topology.

Now (−)−1 is defined as an order-reversing bijection between idempotent radicals on Rep(A) and on (Rep(A))o, so we write (Top(A))−1 for the image of Top(A) in Q((Rep(A))o). This is encoded in the exact sequence in Rep(A):

0 → ρ(M) → M → ρ−1(M) → 0

(reversed in (Rep(A))o). By restricting attention to hereditary torsion theories (kernel functors) when defining Tors(−), we introduce an asymmetry that breaks the duality, because Top(A)−1 is not in Tors((Rep(A))op). If, notationally, TT(G) denotes the complete lattice of torsion theories (not necessarily hereditary) of a category G, then (TT(G))−1 ≅ TT(Gop). Hence we may view Tors(G)−1 as a complete sublattice of TT(Gop).

Superfluid He-3. Thought of the Day 130.0


At higher temperatures 3He is a gas, while below a temperature of 3 K – due to van der Waals forces – 3He is a normal liquid with all the symmetries a condensed matter system can have: translation, gauge symmetry U(1), and two SO(3) symmetries for the spin (SOS(3)) and orbital (SOL(3)) rotations. At temperatures below 100 mK, 3He behaves as a strongly interacting Fermi liquid, whose physical properties are well described by Landau’s theory. The quasiparticles of 3He (i.e. 3He atoms “dressed” by their mutual interactions) have spin 1/2 and, similarly to electrons, they can form Cooper pairs. However, unlike electrons in a metal, 3He is a liquid without a lattice, so the electron-phonon interaction responsible for superconductivity cannot be invoked here. Since the 3He quasiparticles carry spin, the magnetic interaction between spins grows as the temperature falls until, at a certain temperature, Cooper pairs – coupled pairs of 3He quasiparticles – are created and the normal 3He liquid becomes a superfluid. The Cooper pairs produce a superfluid component, while the remaining unpaired 3He quasiparticles form a normal component (N-phase).

A physical picture of superfluid 3He is more complicated than that of superconducting electrons. First, the 3He quasiparticles are bare atoms, and to create a Cooper pair they have to rotate around their common center of mass, generating an orbital angular momentum of the pair (L = 1). Secondly, the spin of the Cooper pair is equal to one (S = 1), so superfluid 3He has magnetic properties. Thirdly, the orbital and spin angular momenta of the pair are coupled via a dipole-dipole interaction.

It is evident that the phase transition of 3He into the superfluid state is accompanied by a spontaneous breaking of the orbital, spin and gauge symmetries SOL(3) × SOS(3) × U(1), but not of translational symmetry, since superfluid 3He is still a liquid. Finally, an energy gap ∆ appears in the energy spectrum, separating the Cooper pairs (ground state) from the unpaired quasiparticles – the Fermi excitations.

In superfluid 3He the density of Fermi excitations decreases upon further cooling. For temperatures below around 0.25 Tc (where Tc is the superfluid transition temperature), the density of Fermi excitations is so low that the excitations can be regarded as a non-interacting gas, because almost all of the quasiparticles are paired and occupy the ground state. Therefore, at these very low temperatures, the superfluid phases of helium-3 represent well-defined models of quantum vacua, which allows us to study the influence of various external forces on the ground state and on the excitations above it.

The ground state of superfluid 3He is formed by Cooper pairs having both spin (S = 1) and orbital momentum (L = 1). As a consequence of this spin-triplet, orbital p-wave pairing, the order parameter (or wave function) is far more complicated than that of conventional superconductors or of superfluid 4He. The order parameter of superfluid 3He couples two spaces – the orbital (or k) space and the spin space – and can be expressed as:

Ψ(k) = Ψ↑↑(kˆ)|↑↑⟩ + Ψ↓↓(kˆ)|↓↓⟩ + √2Ψ↑↓(kˆ)(|↑↓⟩ + |↓↑⟩) —– (1)

where kˆ is a unit vector in k space defining a position on the Fermi surface, and Ψ↑↑(kˆ), Ψ↓↓(kˆ) and Ψ↑↓(kˆ) are the amplitudes of the spin substates determined by their projections |↑↑⟩, |↓↓⟩ and (|↑↓⟩ + |↓↑⟩) on the quantization axis z.

The order parameter is more often written in a vector representation, as a vector d(k) in spin space. For any orientation of k on the Fermi surface, d(k) points in the direction for which the Cooper pairs have zero spin projection. Moreover, the amplitude of the superfluid condensate at the same point is given by |d(k)|² = ½ tr(ΨΨ†). In components, the vector order parameter d(k) can be written as:

dν(k) = ∑μ Aνμkμ —– (2)

where ν (1, 2, 3) labels orthogonal directions in spin space and μ (x, y, z) those in orbital space. The matrix components Aνμ are complex, and in principle each choice of them represents a possible superfluid phase of 3He. Experimentally, however, only three phases are stable.
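
As a minimal numerical sketch of (2), one can evaluate d(k) and the gap amplitude |d(k)| at a few points of the Fermi sphere; the specific matrices below are the standard textbook BW (B-phase-like) and ABM (A-phase-like) forms, quoted here as assumptions with merely illustrative normalizations:

```python
# Minimal numerical sketch of eq. (2): d_nu(k) = sum_mu A_numu k_mu, evaluated on the
# Fermi sphere for two illustrative order-parameter matrices. The specific matrices and
# normalizations are the standard textbook BW (B-phase-like) and ABM (A-phase-like)
# forms, taken here as assumptions rather than derived from the text.
import numpy as np

delta = 1.0  # gap amplitude (arbitrary units)

# B-phase-like (BW) matrix: isotropic, A = (delta/sqrt(3)) * identity
A_B = (delta / np.sqrt(3)) * np.eye(3)

# A-phase-like (ABM) matrix: d-vector along z, orbital part (x + i y), nodes along z
A_A = np.zeros((3, 3), dtype=complex)
A_A[2, 0] = delta / np.sqrt(2)        # A_{z,x}
A_A[2, 1] = 1j * delta / np.sqrt(2)   # A_{z,y}

def gap_on_fermi_surface(A, k_hat):
    """|d(k)| for a unit vector k_hat, with d_nu(k) = sum_mu A_numu k_mu."""
    d = A @ k_hat
    return np.sqrt(np.abs(np.vdot(d, d)))

# Sample a few directions on the Fermi sphere, including the poles (along l = z).
directions = {
    "k = +z (pole)": np.array([0.0, 0.0, 1.0]),
    "k = +x (equator)": np.array([1.0, 0.0, 0.0]),
    "k = (1,1,1)/sqrt(3)": np.ones(3) / np.sqrt(3),
}
for name, k in directions.items():
    print(f"{name:22s}  B-phase gap = {gap_on_fermi_surface(A_B, k):.3f}   "
          f"A-phase gap = {gap_on_fermi_surface(A_A, k):.3f}")
# The B-phase gap is the same in every direction (isotropic), while the A-phase gap
# vanishes at k = ±z: the two Fermi points (nodes) discussed below.
```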

Looking at the phase diagram of 3He we can see the presence of two main superfluid phases: the A – phase and the B – phase. While the B – phase consists of all three spin components, the A – phase lacks the component (|↑↓⟩ + |↓↑⟩). There is also a narrow region of the A1 superfluid phase, which exists only at higher pressures and temperatures and in a nonzero magnetic field. The A1 – phase has only one spin component, |↑↑⟩. The phase transition from the N – phase to the A or B – phase is a second-order transition, while the phase transition between the superfluid A and B phases is of first order.

The B – phase occupies the low-field region and is stable down to the lowest temperatures. In zero field, the B – phase is a pure manifestation of p-wave superfluidity. Because it contains equal numbers of all possible spin and orbital angular momentum projections, the energy gap separating the ground state from the excitations is isotropic in k space.

The A – phase is preferred at higher pressures and temperatures in zero field. In the limit T → 0 K, the A – phase can exist at higher magnetic fields (above 340 mT) at zero pressure, and the critical field needed to create the A – phase rises as the pressure increases. In this phase, all Cooper pairs have orbital momenta oriented in a common direction defined by the vector lˆ, which is the direction in which the energy gap is reduced to zero. This results in a remarkable difference between the two superfluid phases: the B – phase has an isotropic gap, while the A – phase energy spectrum has two Fermi points, i.e. points with zero energy gap. The difference in gap structure leads to different thermodynamic properties of the quasiparticle excitations in the limit T → 0 K. The density of excitations in the B – phase falls exponentially with temperature as exp(−∆/kBT), where kB is the Boltzmann constant. At the lowest temperatures their density is so low that the excitations can be regarded as a non-interacting gas with a mean free path of the order of kilometers. On the other hand, in the A – phase the Fermi points (or nodes) are far more populated with quasiparticle excitations. The orientation of the nodes along the lˆ direction makes the A – phase excitations almost perfectly one-dimensional. The presence of the nodes in the energy spectrum leads to a T3 temperature dependence of the density of excitations and of the entropy. As a result, as T → 0 K, the specific heat of the A – phase is far greater than that of the B – phase. In this limit, the A – phase represents a model system for the vacuum of the Standard Model, and the B – phase a model system for a Dirac vacuum.

In experiments with the superfluid 3He phases, the application of different external forces can excite collective modes of the order parameter, representing so-called Bose excitations, while the Fermi excitations are responsible for the energy dissipation. The coexistence and mutual interactions of these excitations in the limit T → 0 K (i.e. in the limit of low energies) can be described by quantum field theory, where the Bose and Fermi excitations represent Bose and Fermi quantum fields. Thus, 3He has a much broader impact by offering the possibility of experimentally investigating quantum field and cosmological theories via their analogies with the superfluid phases of 3He.

How are Topological Equivalences of Structures Homeomorphic?


Given a first-order vocabulary 𝜏, 𝐿𝜔𝜔(𝜏) is the set of first-order sentences of type 𝜏. The elementary topology on the class 𝑆𝑡𝜏 of first-order structures type 𝜏 is obtained by taking the family of elementary classes

𝑀𝑜𝑑(𝜑) = {𝑀:𝑀 |= 𝜑}, 𝜑 ∈ 𝐿𝜔𝜔(𝜏)

as an open basis. Due to the presence of classical negation, this family is also a closed basis and thus the closed classes of 𝑆𝑡𝜏 are the first-order axiomatizable classes 𝑀𝑜𝑑(𝑇), 𝑇 ⊆ 𝐿𝜔𝜔(𝜏). Possible foundational problems due to the fact that the topology is a class of classes may be settled observing that it is indexed by a set, namely the set of theories of type 𝜏.

The main facts of model theory are reflected by the topological properties of these spaces. Thus, the downward Löwenheim-Skolem theorem for sentences amounts to topological density of the subclass of countable structures. Łoś's theorem on ultraproducts grants that U-limits exist for any ultrafilter 𝑈, a condition well known to be equivalent to topological compactness, and to model-theoretic compactness in this case.

These spaces are not Hausdorff or T1, but having a clopen basis they are regular; that is, closed classes and exterior points may be separated by disjoint open classes. All properties of regular compact spaces are then available: normality, complete regularity, uniformizability, the Baire property, etc.

Many model theoretic properties are related to the continuity of natural operations between classes of structures, where operations are seen to be continuous and play an important role in abstract model theory.

A topological space is regular if closed sets and exterior points may be separated by open sets. It is normal if disjoint closed sets may be separated by disjoint open sets. Thus, normality does not imply regularity here. However, a regular compact space is normal; actually, a regular Lindelöf space is already normal.

Consider the following equivalence relation in a space 𝑋: 𝑥 ≡ 𝑦 ⇔ 𝑐𝑙{𝑥} = 𝑐𝑙{𝑦}

where 𝑐𝑙 denotes topological adherence. Clearly, 𝑥 ≡ 𝑦 iff 𝑥 and 𝑦 belong to the same closed (open) subsets (of a given basis). Let 𝑋/≡ be the quotient space and 𝜂 : 𝑋 → 𝑋/≡ the natural projection. Then 𝑋/≡ is T0 by construction but not necessarily Hausdorff. The following claims thus follow:

a) 𝜂 : 𝑋 → 𝑋/≡ induces an isomorphism between the respective lattices of Borel subsets of 𝑋 and 𝑋/≡. In particular, it is open and closed, preserves disjointness, and preserves and reflects compactness and normality.

b) The assignment 𝑋 → 𝑋/≡ is functorial, because ≡ is preserved by continuous functions and thus any continuous map 𝑓 : 𝑋 → 𝑌 induces a continuous assignment 𝑓/≡ : 𝑋/≡ → 𝑌/≡ which commutes with composition.

c) 𝑋 → 𝑋/≡ preserves products; that is, (𝛱𝑖𝑋𝑖)/≡ is canonically homeomorphic to 𝛱𝑖(𝑋𝑖/≡) with the product topology (monomorphisms are not preserved).

d) If 𝑋 is regular, the equivalence class of 𝑥 is 𝑐𝑙{𝑥} (this may fail in the non-regular case).

e) If 𝑋 is regular, 𝑋/≡ is Hausdorff : if 𝑥 ≢ 𝑦 then 𝑥 ∉ 𝑐𝑙{𝑦} by (d); thus there are disjoint open sets 𝑈, 𝑉 in 𝑋 such that 𝑥 ∈ 𝑈, 𝑐𝑙{𝑦} ⊆ 𝑉, and their images under 𝜂 provide an open separation of 𝜂𝑥 and 𝜂𝑦 in 𝑋/≡ by (a).

f) If 𝐾1 and 𝐾2 are disjoint compact subsets of a regular topological space 𝑋 that cannot be separated by open sets, then there exist 𝑥𝑖 ∈ 𝐾𝑖, 𝑖 = 1, 2, such that 𝑥1 ≡ 𝑥2. Indeed, 𝜂𝐾1 and 𝜂𝐾2 are compact in 𝑋/≡ by continuity, and thus closed because 𝑋/≡ is Hausdorff by (e). They cannot be disjoint; otherwise, they would be separated by open sets whose inverse images would separate 𝐾1 and 𝐾2. Pick 𝜂𝑥 = 𝜂𝑦 ∈ 𝜂𝐾1 ∩ 𝜂𝐾2 with 𝑥 ∈ 𝐾1, 𝑦 ∈ 𝐾2.

Clearly then, for the elementary topology on 𝑆𝑡𝜏, the relation ≡ coincides with elementary equivalence of structures and 𝑆𝑡𝜏/≡ is homeomorphic to the Stone space of complete theories.
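
A tiny finite example (the four-point space below is made-up toy data, not a space of structures) illustrates the quotient 𝑋 → 𝑋/≡ concretely:

```python
# A finite toy illustration of the quotient X -> X/== used above: two points are identified
# exactly when they lie in the same open sets, and the quotient is the T0 (Kolmogorov)
# quotient of the space. The particular 4-point space below is a made-up example.
X = {"a", "b", "c", "d"}
# A topology on X given as a list of open sets (it must contain {} and X and be closed
# under unions and intersections; here we simply use one that is).
opens = [frozenset(), frozenset({"a", "b"}), frozenset({"c", "d"}), frozenset(X)]

def same_opens(x, y):
    """x == y  iff  x and y belong to exactly the same open sets (equivalently, cl{x} = cl{y})."""
    return all((x in U) == (y in U) for U in opens)

# Build the equivalence classes of ==.
classes = []
for x in sorted(X):
    for c in classes:
        if same_opens(x, next(iter(c))):
            c.add(x)
            break
    else:
        classes.append({x})

print("equivalence classes:", classes)                      # [{'a', 'b'}, {'c', 'd'}]

# The quotient topology on X/==: the image of each open set is open.
eta = {x: frozenset(c) for c in classes for x in c}          # natural projection
quotient_opens = {frozenset(eta[x] for x in U) for U in opens}
print("opens of X/== :", quotient_opens)
# In this toy space the quotient is a two-point discrete space, hence T0 (indeed Hausdorff),
# mirroring how St_tau/== becomes the Stone space of complete theories.
```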

Appropriation of (Ir)reversibility of Noise Fluctuations to (Un)Facilitate Complexity

 


Logical depth is a suitable measure of subjective complexity for physical as well as mathematical objects; this becomes apparent upon considering the effect of irreversibility, noise, and spatial symmetries of the equations of motion and initial conditions on the asymptotic depth-generating abilities of model systems.

“Self-organization” suggests a spontaneous increase of complexity occurring in a system with simple, generic (e.g. spatially homogeneous) initial conditions. The increase of complexity attending a computation, by contrast, is less remarkable because it occurs in response to special initial conditions. An important question, which would have interested Turing, is whether self-organization is an asymptotically qualitative phenomenon like phase transitions. In other words, are there physically reasonable models in which complexity, appropriately defined, not only increases, but increases without bound in the limit of infinite space and time? A positive answer to this question would not explain the natural history of our particular finite world, but would suggest that its quantitative complexity can legitimately be viewed as an approximation to a well-defined qualitative property of infinite systems. On the other hand, a negative answer would suggest that our world should be compared to chemical reaction-diffusion systems (e.g. Belousov-Zhabotinsky), which self-organize on a macroscopic, but still finite scale, or to hydrodynamic systems which self-organize on a scale determined by their boundary conditions.

The suitability of logical depth as a measure of physical complexity depends on the assumed ability (“physical Church’s thesis”) of Turing machines to simulate physical processes, and to do so with reasonable efficiency. Digital machines cannot of course integrate a continuous system’s equations of motion exactly, and even the notion of computability is not very robust in continuous systems, but for realistic physical systems, subject throughout their time development to finite perturbations (e.g. electromagnetic and gravitational) from an uncontrolled environment, it is plausible that a finite-precision digital calculation can approximate the motion to within the errors induced by these perturbations. Empirically, many systems have been found amenable to “master equation” treatments in which the dynamics is approximated as a sequence of stochastic transitions among coarse-grained microstates.

We concentrate arbitrarily on cellular automata, in the broad sense of discrete lattice models with finitely many states per site, which evolve according to a spatially homogeneous local transition rule that may be deterministic or stochastic, reversible or irreversible, and synchronous (discrete time) or asynchronous (continuous time, master equation). Such models cover the range from evidently computer-like (e.g. deterministic cellular automata) to evidently material-like (e.g. Ising models) with many gradations in between.
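
As a minimal sketch of this model class (the choice of elementary rule 110 and of the noise model below is purely illustrative):

```python
# A minimal sketch of the model class described above: a one-dimensional lattice with
# finitely many states per site and a spatially homogeneous local rule, run either
# deterministically (synchronous update) or with a small stochastic noise rate.
import random

def step(config, rule_number=110, noise=0.0):
    """One synchronous update of a binary CA with periodic boundary; with probability
    `noise` a site's new state is replaced by a fair coin flip (stochastic variant)."""
    rule = [(rule_number >> i) & 1 for i in range(8)]
    n = len(config)
    new = []
    for i in range(n):
        left, center, right = config[(i - 1) % n], config[i], config[(i + 1) % n]
        value = rule[(left << 2) | (center << 1) | right]
        if noise > 0 and random.random() < noise:
            value = random.randint(0, 1)
        new.append(value)
    return new

random.seed(0)
config = [0] * 40
config[20] = 1                        # a special (non-homogeneous) initial condition
for _ in range(20):
    print("".join(".#"[s] for s in config))
    config = step(config, noise=0.0)  # set noise > 0 for the noisy, irreversible variant
```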

More of the favorable properties need to be invoked to obtain “self-organization,” i.e. nontrivial computation from a spatially homogeneous initial condition. A rather artificial system (a cellular automaton which is stochastic but noiseless, in the sense that it has the power to make purely deterministic as well as random decisions) undergoes this sort of self-organization. It does so by allowing the nucleation and growth of domains, within each of which a depth-producing computation begins. When two domains collide, one conquers the other, and uses the conquered territory to continue its own depth-producing computation (a computation constrained to finite space, of course, cannot continue for more than exponential time without repeating itself). To achieve the same sort of self-organization in a truly noisy system appears more difficult, partly because of the conflict between the need to encourage fluctuations that break the system’s translational symmetry and the need to suppress fluctuations that introduce errors in the computation.

Irreversibility seems to facilitate complex behavior by giving noisy systems the generic ability to correct errors. Only a limited sort of error-correction is possible in microscopically reversible systems such as the canonical kinetic Ising model. Minority fluctuations in a low-temperature ferromagnetic Ising phase in zero field may be viewed as errors, and they are corrected spontaneously because of their potential energy cost. This error correcting ability would be lost in nonzero field, which breaks the symmetry between the two ferromagnetic phases, and even in zero field it gives the Ising system the ability to remember only one bit of information. This limitation of reversible systems is recognized in the Gibbs phase rule, which implies that under generic conditions of the external fields, a thermodynamic system will have a unique stable phase, all others being metastable. Even in reversible systems, it is not clear why the Gibbs phase rule enforces as much simplicity as it does, since one can design discrete Ising-type systems whose stable phase (ground state) at zero temperature simulates an aperiodic tiling of the plane, and can even get the aperiodic ground state to incorporate (at low density) the space-time history of a Turing machine computation. Even more remarkably, one can get the structure of the ground state to diagonalize away from all recursive sequences.

von Neumann & Dis/belief in Hilbert Spaces

I would like to make a confession which may seem immoral: I do not believe absolutely in Hilbert space any more.

— John von Neumann, letter to Garrett Birkhoff, 1935.


The mathematics: Let us consider the raison d’être of the Hilbert space formalism. Why would one need all this ‘Hilbert space stuff’, i.e. the continuum structure, the field structure of complex numbers, a vector space over it, inner-product structure, etc.? According to von Neumann, he simply used it because it happened to be ‘available’. The use of linear algebra and complex numbers in so many different scientific areas, as well as results in model theory, clearly shows that quite a bit of modeling can be done using Hilbert spaces. On the other hand, we can also model any movie by means of the data stream that runs through your cables when watching it. But does this mean that these data streams make up the stuff that makes a movie? Clearly not; we should rather turn our attention to the stuff that is being taught at drama schools and directing schools. Similarly, von Neumann turned his attention to the actual physical concepts behind quantum theory, more specifically, the notion of a physical property and the structure imposed on these by the peculiar nature of quantum observation. His quantum logic gave the resulting ‘algebra of physical properties’ a privileged role. All of this leads us to … the physics of it. Birkhoff and von Neumann crafted quantum logic in order to emphasize the notion of quantum superposition. In terms of states of a physical system and properties of that system, superposition means that the strongest property which is true for two distinct states is also true for states other than the two given ones. In order-theoretic terms this means, representing states by the atoms of a lattice of properties, that the join p ∨ q of two atoms p and q is also above other atoms. From this it easily follows that the distributive law breaks down: given an atom r ≠ p, q with r < p ∨ q, we have r ∧ (p ∨ q) = r while (r ∧ p) ∨ (r ∧ q) = 0 ∨ 0 = 0. Birkhoff and von Neumann, as well as many others, believed that understanding the deep structure of superposition is the key to obtaining a better understanding of quantum theory as a whole.
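
The order-theoretic computation above can be checked concretely in the lattice of subspaces of a two-dimensional space; the sketch below (using real coefficients, which suffice for the point) is only an illustration of the failure of distributivity:

```python
# A concrete check of the non-distributivity computed above, in the lattice of subspaces of
# a two-dimensional space (real coefficients suffice for the point being made). Atoms are
# one-dimensional subspaces; join = span of the union, meet = intersection.
import numpy as np

def join(A, B):
    """Span of the union of two subspaces, each given as a matrix whose columns span it."""
    M = np.hstack([A, B])
    if M.shape[1] == 0:
        return M                                   # join of zero subspaces is zero
    U, s, _ = np.linalg.svd(M)
    rank = int(np.sum(~np.isclose(s, 0)))
    return U[:, :rank]

def meet(A, B):
    """Intersection of two subspaces: solve A a = B b, i.e. find the kernel of [A  -B]."""
    if A.shape[1] == 0 or B.shape[1] == 0:
        return np.zeros((A.shape[0], 0))
    M = np.hstack([A, -B])
    _, s, Vt = np.linalg.svd(M)
    s_full = np.append(s, np.zeros(Vt.shape[0] - len(s)))
    null_space = Vt[np.isclose(s_full, 0)].T       # kernel vectors (a; b)
    if null_space.shape[1] == 0:
        return np.zeros((A.shape[0], 0))           # the zero subspace
    return A @ null_space[:A.shape[1], :]

dim = lambda S: int(np.linalg.matrix_rank(S)) if S.shape[1] else 0

p = np.array([[1.0], [0.0]])   # atom p
q = np.array([[0.0], [1.0]])   # atom q
r = np.array([[1.0], [1.0]])   # a third atom r < p ∨ q

print("dim r ∧ (p ∨ q)       =", dim(meet(r, join(p, q))))              # 1, i.e. r itself
print("dim (r ∧ p) ∨ (r ∧ q) =", dim(join(meet(r, p), meet(r, q))))     # 0, the zero subspace
```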

For Schrödinger, by contrast, the key is the behavior of compound quantum systems, described by the tensor product. While the quantum information endeavor is to a great extent the result of exploiting this important insight, the language of the field is still very much that of strings of complex numbers, which is akin to the strings of 0’s and 1’s in the early days of computer programming. If the manner in which we describe compound quantum systems captures so much of the essence of quantum theory, then it should be at the forefront of the presentation of the theory, and not preceded by continuum structure, the field of complex numbers, a vector space over the latter, etc., only to pop up later as some secondary construct. How much of quantum phenomena can be derived from ‘compoundness + epsilon’? It turns out that epsilon can be taken to be ‘very little’, surely not involving anything like the continuum, fields, or vector spaces, but merely a ‘2D space’ of temporal composition and compoundness, together with some very natural, purely operational assertions, including one which in a constructive manner asserts entanglement; among many other things, trace structure then follows.

Marching From Galois Connections to Adjunctions. Part 4.

To make the transition from Galois connections to adjoint functors we make a slight change of notation. The change is only cosmetic but it is very important for our intuition.

Definition of Poset Adjunction. Let (P, ≤P) and (Q, ≤Q) be posets. A pair of functions L ∶ P ⇄ Q ∶ R is called an adjunction if ∀ p ∈ P and q ∈ Q we have

p ≤P R(q) ⇐⇒ L(p) ≤Q q

In this case we write L ⊣ R and call this an adjoint pair of functions. The function L is the left adjoint and R is the right adjoint.
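
As a minimal concrete sketch (an example chosen here for illustration, not taken from the series): on the poset (ℤ, ≤), multiplication by 2 is left adjoint to floor division by 2, and the defining equivalence can be checked by brute force on a finite range:

```python
# A minimal concrete sketch (illustrative example): on the poset (Z, <=), the maps
# L(p) = 2*p and R(q) = q // 2 (floor division) form an adjunction L -| R, since
# 2*p <= q  iff  p <= q // 2. We verify the defining equivalence on a finite range.
L = lambda p: 2 * p
R = lambda q: q // 2

assert all((p <= R(q)) == (L(p) <= q)
           for p in range(-20, 21)
           for q in range(-20, 21))
print("p <= R(q)  <=>  L(p) <= q holds on the sampled range")
```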

The only difference between Galois connections and poset adjunctions is that we have reversed the partial order on Q. To be precise, we define the opposite poset Qop with the same underlying set Q, such that for all q1 , q2 ∈ Q we have

q1 ≤Qop q2 ⇐⇒ q2 ≤Q q1

Then an adjunction P ⇄ Q is just the same thing as a Galois connection P ⇄ Qop.

However, this difference is important because it breaks the symmetry. It also prepares us for the notation of an adjunction between categories, where it is more common to use an “asymmetric pair of covariant functors” as opposed to a “symmetric pair of contravariant functors”.

Uniqueness of Adjoints for Posets: Let P and Q be posets and let L ∶ P ⇄ Q ∶ R be an adjunction. Then each of the adjoint functions L ⊣ R uniquely determines the other.

Proof: To prove that R determines L, suppose that L′ ∶ P ⇄ Q ∶ R is another adjunction. Then, by the definition of adjunction, we have for all p ∈ P and q ∈ Q that

L(p) ≤Q q ⇐⇒ p ≤P R(q) ⇐⇒ L′(p) ≤Q q

In particular, setting q = L(p) gives

L(p) ≤Q L(p) ⇒ L′(p) ≤Q L(p)

and setting q = L′(p) gives

L′(p) ≤Q L′(p) ⇒ L(p) ≤Q L′(p)

Then by the antisymmetry of Q we have L(p) = L′(p). Since this holds for all p ∈ P we conclude that L = L′, as desired.

RAPL Theorem for Posets. Let L ∶ P ⇄ Q ∶ R be an adjunction of posets. Then for all subsets S ⊆ P and T ⊆ Q we have

L (∨P S) = ∨Q L(S) and R (∧Q T) = ∧P R(T).

In words, this could be said as “left adjoints preserve join” and “right adjoints preserve meet”.

Proof: We just have to observe that passing from Q to its opposite Qop switches the definitions of join and meet: ∨Qop = ∧Q and ∧Qop = ∨Q. Since an adjunction P ⇄ Q is the same thing as a Galois connection P ⇄ Qop, the statement follows from the corresponding fact for Galois connections.

It seems worthwhile to emphasize the new terminology with a picture. Suppose that the posets P and Q have top and bottom elements: 1P , 0P ∈ P and 1Q, 0Q ∈ Q. Then a poset adjunction L ∶ P ⇄ Q ∶ R looks like this:

[diagram: a poset adjunction L ∶ P ⇄ Q ∶ R between posets with top and bottom elements]

In this case RL ∶ P → P is a closure operator as before, but now LR ∶ Q → Q is called an interior operator. From the case of Galois connections we also know that LRL = L and RLR = R. Since bottom elements are colimits and top elements are limits, the identities L(0P) = 0Q and R(1Q) = 1P are special cases of the RAPL Theorem.

Just as with Galois connections, adjunctions between the Boolean lattices 2U and 2V are in bijection with relations ∼ ⊆ U × V, but this time we will view the relation as a function f∼ ∶ U → 2V that sends each u ∈ U to the set f∼(u) ∶= {v ∈ V ∶ u ∼ v}. We can also think of f∼ as a “multi-valued function” from U to V.

Adjunctions of Boolean Lattices: Let U, V be sets and consider an arbitrary function f ∶ U → 2V. Then for subsets S ∈ 2U and T ∈ 2V we define

L(S) ∶= ∪s∈S f(s) ∈ 2V,

R(T) ∶= {u∈U ∶ f(u) ⊆ T} ∈ 2U

The pair of functions Lf ∶ 2U ⇄ 2V ∶ Rf is an adjunction of Boolean lattices. To see this, note that for all S ∈ 2U and T ∈ 2V we have

S ⊆ Rf (T) ⇐⇒ ∀ s∈S, s ∈ R(T)

⇐⇒ ∀ s∈S, f(s) ⊆ T

⇐⇒ ∪s∈S f(s) ⊆ T

⇐⇒ L(S) ⊆ T
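
A brute-force check of this adjunction on made-up toy data:

```python
# A small brute-force check of the Boolean-lattice adjunction Lf -| Rf defined above.
# The sets U, V and the function f : U -> 2^V below are made-up toy data.
from itertools import chain, combinations

U = {1, 2, 3}
V = {"a", "b", "c"}
f = {1: {"a"}, 2: {"a", "b"}, 3: set()}                          # f : U -> 2^V

Lf = lambda S: set().union(*(f[s] for s in S)) if S else set()   # L(S) = union of f(s)
Rf = lambda T: {u for u in U if f[u] <= T}                       # R(T) = {u : f(u) ⊆ T}

def powerset(X):
    return [set(c) for c in chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

# The defining condition:  S ⊆ Rf(T)  <=>  Lf(S) ⊆ T, for every S ⊆ U and T ⊆ V.
assert all((S <= Rf(T)) == (Lf(S) <= T) for S in powerset(U) for T in powerset(V))
print("Lf -| Rf verified on all", len(powerset(U)) * len(powerset(V)), "pairs (S, T)")
```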

Functions: Let f ∶ U → V be any function. We can extend it to a function f∼ ∶ U → 2V by defining f∼(u) ∶= {f(u)} ∀ u ∈ U. In this case we denote the corresponding left and right adjoint functions by f∗ ∶= Lf ∶ 2U → 2V and f−1 ∶= Rf ∶ 2V → 2U, so that ∀ S ∈ 2U and T ∈ 2V we have

f∗(S) = {f(s) ∶ s ∈ S}, f−1(T) = {u ∈ U ∶ f(u) ∈ T}

The resulting adjunction f∗ ∶ 2U ⇄ 2V ∶ f−1 is called the image and preimage of the function. It follows from RAPL that image preserves unions and preimage preserves intersections.

But now something surprising happens. We can restrict the preimage f−1 ∶ 2V → 2U to a function f−1 ∶ V → 2U by defining f−1(v) ∶= f−1({v}) for each v ∈ V. Then since f−1 = Lf−1 we obtain another adjunction

f−1 ∶ 2V ⇄ 2U ∶ Rf−1,
where this time f−1 is the left adjoint. The new right adjoint is defined for each S ∈ 2U by

R f−1(S) = {v∈V ∶ f−1(v) ⊆ S}

There seems to be no standard notation for this function, but people call it f! ∶= Rf−1 (the “!” is pronounced “shriek”). In summary, each function f ∶ U → V determines a triple of adjoints f∗ ⊣ f−1 ⊣ f!, where f∗ preserves unions, f! preserves intersections, and f−1 preserves both unions and intersections. Logicians will tell you that the functions f∗ and f! are closely related to the existential (∃) and universal (∀) quantifiers, in the sense that for all S ∈ 2U we have

f∗(S) = {v ∈ V ∶ ∃ u ∈ f−1(v), u ∈ S}, f!(S) = {v ∈ V ∶ ∀ u ∈ f−1(v), u ∈ S}
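
The triple f∗ ⊣ f−1 ⊣ f! and its ∃/∀ reading can be verified directly on a small made-up function:

```python
# A toy computation of the triple f* -| f^{-1} -| f! for a concrete (made-up) function
# f : U -> V, checking both adjunction conditions and the ∃/∀ characterizations.
from itertools import chain, combinations

U = {1, 2, 3, 4}
V = {"x", "y", "z"}
f = {1: "x", 2: "x", 3: "y", 4: "y"}            # note: "z" is not in the image

image    = lambda S: {f[u] for u in S}                         # f*
preimage = lambda T: {u for u in U if f[u] in T}               # f^{-1}
shriek   = lambda S: {v for v in V if preimage({v}) <= S}      # f!

def powerset(X):
    return [set(c) for c in chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

for S in powerset(U):
    for T in powerset(V):
        assert (image(S) <= T) == (S <= preimage(T))           # f*  -|  f^{-1}
        assert (preimage(T) <= S) == (T <= shriek(S))          # f^{-1}  -|  f!

# ∃/∀ reading: v ∈ f*(S) iff some preimage of v lies in S; v ∈ f!(S) iff all of them do.
S = {1, 3}
print("f*(S) =", image(S))     # {'x', 'y'}
print("f!(S) =", shriek(S))    # {'z'} -- vacuously, since "z" has no preimages
```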

Group Homomorphisms: Given a group G we let (L (G), ⊆) denote its poset of subgroups. Since the intersection of subgroups is again a subgroup, we have ∧ = ∩. Then since L (G) has arbitrary meets it also has arbitrary joins. In particular, the join of two subgroups A, B ∈ L (G) is given by

A ∨ B = ⋂ {C ∈ L(G) ∶ A ⊆ C and B ⊆ C},

which is the smallest subgroup containing the union A ∪ B. Thus L (G) is a lattice, but since A ∨ B ≠ A ∪ B (in general) it is not a sublattice of 2G.

Now let φ ∶ G → H be an arbitrary group homomorphism. One can check that the image and preimage φ ∶ 2G ⇄ 2H ∶ φ−1 send subgroups to subgroups, hence they restrict to an adjunction between subgroup lattices:

φ ∶L(G) ⇄ L(H)∶ φ−1.

The function φ! ∶ 2G → 2H does not send subgroups to subgroups, and in general the function φ−1 ∶ L(H) → L(G) does not have a right adjoint. For all subgroups A ∈ L (G) and B ∈ L (H) one can check that

φ−1φ(A)=A ∨ ker φ and φφ−1(B) = B ∧ im φ

Thus the φ−1φ-fixed subgroups of G are precisely those that contain the kernel and the φφ−1-fixed subgroups of H are precisely those contained in the image. Finally, the Fundamental Theorem gives us an order-preserving bijection as in the following picture:

[diagram: the order-preserving bijection given by the Fundamental Theorem]
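
A toy verification of these fixed-point identities (using the hypothetical homomorphism φ ∶ ℤ12 → ℤ4, φ(x) = x mod 4, chosen only for illustration):

```python
# A toy check of phi^{-1}phi(A) = A ∨ ker(phi) and phi phi^{-1}(B) = B ∧ im(phi) for the
# (made-up) example phi : Z12 -> Z4, phi(x) = x mod 4, with ker(phi) = {0,4,8}, im(phi) = Z4.
from math import gcd
from functools import reduce

def subgroup_generated(n, gens):
    """Subgroup of Z_n generated by gens: the multiples of gcd(n, gcd of gens)."""
    d = reduce(gcd, gens, n)
    return frozenset(range(0, n, d))

def subgroups(n):
    """All subgroups of Z_n (one cyclic subgroup for each divisor of n)."""
    return {subgroup_generated(n, {k}) for k in range(n)}

def join(n, A, B):
    """A ∨ B: the smallest subgroup of Z_n containing A ∪ B."""
    return subgroup_generated(n, A | B)

G_n, H_n = 12, 4
phi = lambda x: x % H_n
ker = frozenset(x for x in range(G_n) if phi(x) == 0)
im  = frozenset(phi(x) for x in range(G_n))

for A in subgroups(G_n):
    preimage_of_image = frozenset(x for x in range(G_n) if phi(x) in {phi(a) for a in A})
    assert preimage_of_image == join(G_n, A, ker)          # phi^{-1}phi(A) = A ∨ ker(phi)

for B in subgroups(H_n):
    image_of_preimage = frozenset(phi(x) for x in range(G_n) if phi(x) in B)
    assert image_of_preimage == (B & im)                   # phi phi^{-1}(B) = B ∧ im(phi)

print("Fixed-point identities verified for all subgroups of Z12 and Z4.")
```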

…..

Philosophy of Quantum Entanglement and Topology


Many-body entanglement is essential for the existence of topological order in condensed matter systems, and understanding many-body entanglement provides a promising approach to understanding, in general, what topological orders exist. It also leads to tensor network descriptions of many-body wave functions, with the potential of classifying the phases of quantum matter. Here generic many-body entanglement is reduced to the bipartite case: the entanglement between two parts A and B of a system. Consider the equation,

S(A) ≡ −tr(ρA log2(ρA)) —– (1)

where ρA ≡ trB |ΨAB⟩⟨ΨAB| is the density matrix for part A, and where we assumed that the whole system is in a pure state |ΨAB⟩.
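
As a minimal numerical sketch of (1) (the partially entangled two-qubit state below is an arbitrary illustrative choice):

```python
# A minimal numerical sketch of eq. (1): build a two-qubit pure state |Psi_AB>, trace out
# part B, and compute S(A) = -tr(rho_A log2 rho_A). The particular state below (a partially
# entangled pair) is an arbitrary illustrative choice.
import numpy as np

# |Psi_AB> = sqrt(0.8)|00> + sqrt(0.2)|11>, written as a 2x2 tensor psi[a, b]
psi = np.zeros((2, 2))
psi[0, 0] = np.sqrt(0.8)
psi[1, 1] = np.sqrt(0.2)

# rho_A = tr_B |Psi><Psi|  ->  (rho_A)_{a a'} = sum_b psi[a, b] * conj(psi[a', b])
rho_A = psi @ psi.conj().T

eigvals = np.linalg.eigvalsh(rho_A)
S_A = -sum(p * np.log2(p) for p in eigvals if p > 1e-12)
print("rho_A =\n", rho_A)
print("S(A) =", S_A)     # ~0.722 bits; it would be 1 bit for a maximally entangled state
```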

Specializing |ΨAB⟩ to the ground state of a local Hamiltonian in D spatial dimensions, the central observation is that the entanglement between a region A of size L^D and the (much larger) rest B of the lattice is then often proportional to the size |σ(A)| of the boundary σ(A) of region A,

S(A) ≈ |σ(A)| ≈ L^(D−1) —– (2)

where a constant correction (−1 in the case of the toric code) is due to topological order. Equation (2) expresses the boundary law observed in the ground state of gapped local Hamiltonians in arbitrary dimension D, as well as in some gapless systems in D > 1 dimensions. In contrast, in gapless systems in D = 1 dimensions, as well as in certain gapless systems in D > 1 dimensions (namely systems with a Fermi surface of dimension D − 1), the ground-state entanglement displays a logarithmic correction to the boundary law,

S(A) ≈ |σ(A)| log2(|σ(A)|) ≈ L^(D−1) log2(L) —– (3)

At an intuitive level, the boundary law (2) is understood as resulting from entanglement that involves degrees of freedom located near the boundary between regions A and B. Also intuitively, the logarithmic correction in (3) is argued to have its origin in contributions to the entanglement from degrees of freedom that are further away from the boundary between A and B. Given the entanglement between A and B, one introduces an entanglement contour sA that assigns a real number sA(i) ≥ 0 to each lattice site i contained in region A, such that the sum of sA(i) over all the sites i ∈ A is equal to the entanglement entropy S(A),

S(A) = Σi∈A sA(i) —– (4) 

and that aims to quantify how much the degrees of freedom at site i participate in, or contribute to, the entanglement between A and B. And as Chen and Vidal put it, the entanglement contour sA(i) is not equivalent to the von Neumann entropy S(i) ≡ −tr(ρ(i) log2 ρ(i)) of the reduced density matrix ρ(i) at site i. Notice that, indeed, the von Neumann entropy of individual sites in region A is not additive in the presence of correlations between the sites, and therefore generically

S(A) ≠ Σi∈A S(i)

whereas the entanglement contour sA(i) is required to fulfil (4). Relatedly, when site i is only entangled with neighboring sites contained within region A, and it is thus uncorrelated with region B, the entanglement contour sA(i) will be required to vanish, whereas the one-site von Neumann entropy S(i) still takes a non-zero value due to the presence of local entanglement within region A.

As an aside, in the traditional approach to quantum mechanics, a physical system is described in a Hilbert space: observables correspond to self-adjoint operators and statistical operators are associated with the states. In fact, a statistical operator describes a mixture of pure states. Pure states are the really physical states, and they are given by rank-one statistical operators, or equivalently by rays of the Hilbert space. Von Neumann associated an entropy quantity to a statistical operator, and his argument was a gedanken experiment on the grounds of phenomenological thermodynamics. Let us consider a gas of N (≫ 1) molecules in a rectangular box K. Suppose that the gas behaves like a quantum system and is described by a statistical operator D, which is a mixture λ|φ1⟩⟨φ1| + (1 − λ)|φ2⟩⟨φ2|, where |φi⟩ (i = 1, 2) are state vectors. We may take λN molecules in the pure state φ1 and (1 − λ)N molecules in the pure state φ2. On the basis of phenomenological thermodynamics, we assume that if φ1 and φ2 are orthogonal, then there is a wall that is completely permeable for the φ1-molecules and isolating for the φ2-molecules. We add an equally large empty rectangular box K′ to the left of the box K and we replace the common wall with two new walls. Wall (a), the one to the left, is impenetrable, whereas the one to the right, wall (b), lets through the φ1-molecules but keeps back the φ2-molecules. We add a third wall (c) opposite to (b) which is semipermeable, transparent for the φ2-molecules and impenetrable for the φ1-ones. Then we push slowly (a) and (c) to the left, maintaining their distance. During this process the φ1-molecules are pressed through (b) into K′ and the φ2-molecules diffuse through wall (c) and remain in K. No work is done against the gas pressure, no heat is developed. Replacing the walls (b) and (c) with a rigid absolutely impenetrable wall and removing (a), we restore the boxes K and K′ and succeed in the separation of the φ1-molecules from the φ2-ones without any work being done, without any temperature change and without evolution of heat. The entropy of the original D-gas (with density N/V) must be the sum of the entropies of the φ1- and φ2-gases (with densities λN/V and (1 − λ)N/V, respectively). If we compress the gases in K and K′ to the volumes λV and (1 − λ)V, respectively, keeping the temperature T constant by means of a heat reservoir, the entropy change amounts to κλN log λ and κ(1 − λ)N log(1 − λ), respectively. Indeed, we have to add heat in the amount of λiNκT log λi (< 0) when the φi-gas is compressed, and dividing by the temperature T we get the change of entropy. Finally, mixing the φ1- and φ2-gases of identical density we obtain a D-gas of N molecules in a volume V at the original temperature. If S0(ψ, N) denotes the entropy of a ψ-gas of N molecules (in a volume V and at the given temperature), we conclude that

S0(φ1,λN)+S0(φ2,(1−λ)N) = S0(D, N) + κλN log λ + κ(1 − λ)N log(1 − λ) —– (5)

must hold, where κ is Boltzmann’s constant. Assuming that S0(ψ,N) is proportional to N and dividing by N we have

λS(φ1) + (1 − λ)S(φ2) = S(D) + κλ log λ + κ(1 − λ) log(1 − λ) —– (6)

where S is a certain thermodynamical entropy quantity (relative to the fixed temperature and molecule density). We have arrived at the mixing property of entropy, but we should not forget about the initial assumption: φ1 and φ2 are supposed to be orthogonal. Instead of a two-component mixture, von Neumann worked with an infinite mixture, which does not make a big difference, and he concluded that

S (Σiλi|φi⟩⟨φi|) = ΣiλiS(|φi⟩⟨φi|) − κ Σiλi log λi —– (7)

Von Neumann’s argument does not require that the statistical operator D be a mixture of pure states. What is really needed is the property D = λD1 + (1 − λ)D2 in such a way that the possibly mixed states D1 and D2 are disjoint. D1 and D2 are disjoint in the thermodynamical sense when there is a wall which is completely permeable for the molecules of a D1-gas and isolating for the molecules of a D2-gas. In other words, if the mixed states D1 and D2 are disjoint, then this should be demonstrated by a certain filter. Mathematically, the disjointness of D1 and D2 is expressed by the orthogonality of the eigenvectors corresponding to nonzero eigenvalues of the two density matrices. The essential point is in the remark that (6) must hold also in a more general situation, when the states possibly do not correspond to density matrices but orthogonality of the states still makes sense:

λS(D1) + (1 − λ)S(D2) = S(D) + κλ log λ + κ(1 − λ) log(1 − λ) —– (8)

(7) reduces the determination of the (thermodynamical) entropy of a mixed state to that of pure states. The so-called Schatten decomposition Σi λi|φi⟩⟨φi| of a statistical operator is not unique even if ⟨φi , φj ⟩ = 0 is assumed for i ≠ j . When λi is an eigenvalue with multiplicity, then the corresponding eigenvectors can be chosen in many ways. If we expect the entropy S(D) to be independent of the Schatten decomposition, then we are led to the conclusion that S(|φ⟩⟨φ|) must be independent of the state vector |φ⟩. This argument assumes that there are no superselection sectors, that is, any vector of the Hilbert space can be a state vector. On the other hand, von Neumann wanted to avoid degeneracy of the spectrum of a statistical operator. Von Neumann’s proof of the property that S(|φ⟩⟨φ|) is independent of the state vector |φ⟩ was different. He did not want to refer to a unitary time development sending one state vector to another, because that argument requires great freedom in choosing the energy operator H. Namely, for any |φ1⟩ and |φ2⟩ we would need an energy operator H such that

eitH|φ1⟩ = |φ2⟩

This process would be reversible. Anyways, that was quite a digression.
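
Before returning to the contour, here is a quick numeric sanity check of (7) for a finite mixture of orthogonal pure states (the weights and dimension below are arbitrary); since pure states have zero entropy, the right-hand side reduces to −κ Σi λi log λi:

```python
# A quick numeric sanity check of (7) for a mixture of orthogonal pure states.
# The weights and the 3-dimensional orthonormal basis below are arbitrary choices; since
# S(|phi_i><phi_i|) = 0 for pure states, (7) reduces to S(D) = -kappa * sum_i lambda_i log lambda_i.
import numpy as np

kappa = 1.0                        # Boltzmann's constant, set to 1 for the check
lam = np.array([0.5, 0.3, 0.2])    # mixture weights lambda_i
basis = np.eye(3)                  # orthonormal |phi_i> (any orthonormal set works)

D = sum(l * np.outer(v, v) for l, v in zip(lam, basis))      # D = sum_i lambda_i |phi_i><phi_i|

eig = np.linalg.eigvalsh(D)
S_D = -kappa * sum(p * np.log(p) for p in eig if p > 1e-12)  # von Neumann entropy of D
rhs = -kappa * np.sum(lam * np.log(lam))                     # mixing term of (7)

print(S_D, rhs)                    # both ~1.0297, in agreement with (7)
assert np.isclose(S_D, rhs)
```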

Entanglement between A and B is naturally described by the coefficients {pα} appearing in the Schmidt decomposition of the state |ΨAB⟩,

|ΨAB⟩ = Σα √pα |ΨAα⟩ ⊗ |ΨBα⟩ —– (9)

These coefficients {pα} correspond to the eigenvalues of the reduced density matrix ρA, whose spectral decomposition reads

ρA = Σα pα |ΨAα⟩⟨ΨAα| —– (10)

defining a probability distribution, pα ≥ 0, Σα pα = 1, in terms of which the von Neumann entropy S(A) is

S(A) = −Σα pα log2(pα) —– (11)
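
As a short sketch tying (9)–(11) together (continuing the toy two-qubit state used after (1)): the Schmidt coefficients pα are the squared singular values of the coefficient matrix of |ΨAB⟩, and they reproduce S(A):

```python
# The Schmidt coefficients p_alpha of (9) are the squared singular values of the coefficient
# matrix psi[a, b] of |Psi_AB>; they coincide with the eigenvalues of rho_A in (10) and give
# S(A) through (11). The sketch continues the toy two-qubit state used after eq. (1).
import numpy as np

psi = np.zeros((2, 2))
psi[0, 0] = np.sqrt(0.8)
psi[1, 1] = np.sqrt(0.2)

singular_values = np.linalg.svd(psi, compute_uv=False)
p = singular_values ** 2                                   # Schmidt coefficients p_alpha
S_A = -sum(x * np.log2(x) for x in p if x > 1e-12)         # eq. (11)

print("Schmidt coefficients:", p)    # [0.8, 0.2], a valid probability distribution
print("S(A) =", S_A)                 # matches the entropy of rho_A computed earlier (~0.722)
```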

On the other hand, the Hilbert space VA of region A factorizes as the tensor product

VA = ⊗ i∈A V(i) —– (12)

where V(i) describes the local Hilbert space of site i. The reduced density matrix ρA in (10) and the factorization (12) define two inequivalent structures within the vector space VA of region A. The entanglement contour sA is a function from the set of sites i ∈ A to the real numbers,

sA : A → ℜ —– (13)

that attempts to relate these two structures by distributing the von Neumann entropy S(A) of (11) among the sites i ∈ A. According to Chen and Vidal, there are five conditions/requirements that an entanglement contour needs to satisfy.

a. Positivity: sA(i) ≥ 0

b. Normalization: Σi∈AsA(i) = S(A) 

These constraints amount to defining a probability distribution pi ≡ sA(i)/S(A) over the sites i ∈ A, with pi ≥ 0 and Σi pi = 1, such that sA(i) = pi S(A). They do not, however, require sA to inform us about the spatial structure of entanglement in A; they only relate sA to the density matrix ρA through its total von Neumann entropy S(A).

c. Symmetry: if T is a symmetry of ρA, that is TρAT† = ρA, and T exchanges site i with site j, then sA(i) = sA(j).

This condition ensures that the entanglement contour is the same on two sites i and j of region A that, as far as entanglement is concerned, play an equivalent role in region A. It uses the (possible) presence of a spatial symmetry, such as invariance under space reflection, or under discrete translations/rotations, to define an equivalence relation in the set of sites of region A, and requires that the entanglement contour be constant within each resulting equivalence class. Notice, however, that this condition does not tell us whether the entanglement contour should be large or small on a given site (or equivalence class of site). In particular, the three conditions above are satisfied by a canonical choice sA(i) = S (A)/|A|, that is a flat entanglement contour over the |A| sites contained in region A, which once more does not tell us anything about the spatial structure of the von Neumann entropy in ρA.

The remaining conditions refer to subregions within region A, instead of referring to single sites. It is therefore convenient to (trivially) extend the definition of entanglement contour to a set X of sites in region A, X ⊆ A, with vector space

VX = ⊗i∈X V(i) —– (14)

as the sum of the contour over the sites in X,

sA(X) ≡  Σi∈XsA(i) —– (15)

It follows from this extension that for any two disjoint subsets X1, X2 ⊆ A, with X1 ∩ X2 = ∅, the contour is additive,

sA(X1 ∪ X2) = sA(X1) + sA(X2—– (16)

In particular, condition b (normalization) can now be recast as sA(A) = S(A). Similarly, if X1, X2 ⊆ A are such that all the sites of X1 are also contained in X2, X1 ⊆ X2, then the contour must be larger on X2 than on X1 (monotonicity of sA(X)),

sA(X1) ≤ sA(X2) if X1 ⊆ X2 —– (17)

d. Invariance under local unitary transformations: if the state |Ψ′AB⟩ is obtained from the state |ΨAB⟩ by means of a unitary transformation UX that acts on a subset X ⊆ A of sites of region A, that is |Ψ′AB⟩ ≡ UX|ΨAB⟩, then the entanglement contour sA(X) must be the same for the state |ΨAB⟩ and for the state |Ψ′AB⟩.

That is, the contribution of region X to the entanglement between A and B is not affected by a redefinition of the sites or a change of basis within region X. Notice that it follows that UX can also not change sA(X’), where X’ ≡ A − X is the complement of X in A.

To motivate our last condition, let us consider a state |ΨAB⟩ that factorizes as the product

|ΨAB⟩ = |ΨXXB⟩ ⊗ |ΨX’X’B⟩ —– (18)

where X ⊆ A and XB ⊆ B are subsets of sites in regions A and B, respectively, and X’ ⊆ A and X’B ⊆ B are their complements within A and B, so that

VA = VX ⊗ VX’, —– (19)

VB = VXB ⊗ VX’B —– (20)

in this case the reduced density matrix ρA factorizes as ρA = ρX ⊗ ρX’ and the entanglement entropy is additive,

S(A) = S(X) + S(X’) —– (21)

Since the entanglement entropy S(X) of subregion X is well defined, we let the entanglement contour over X be equal to it,

sA(X) = S(X) —– (22)

The last condition refers to a more general situation where, instead of obeying (18), the state |ΨAB⟩ factorizes as the product

|ΨAB⟩ = |ΨΩAΩB⟩ ⊗ |ΨΩ’AΩ’B⟩ —– (23)

with respect to some decomposition of VA and VB as tensor products of factor spaces,

VA = VΩA ⊗ VΩ’A, —– (24)

VB = VΩB ⊗ VΩ’B —– (25)

Let S(ΩA) denote the entanglement entropy supported on the first factor space VΩA of  VA, that is

S(ΩA) = −tr(ρΩA log2ΩA)) —– (26)

ρΩA ≡ trΩB |Ψ ΩA ΩB⟩⟨Ψ ΩA ΩB| —– (27)

and let X ⊆ A be a subset of sites whose vector space VX is completely contained in VΩA , meaning that VΩA can be further decomposed as

VΩA ≈ VX ⊗ VX’ —– (28)

e. Upper bound: if a subregion X ⊆ A is contained in a factor space ΩA (24 and 28) then the entanglement contour of subregion X cannot be larger than the entanglement entropy S(ΩA) (26)

sA(X) ≤ S(ΩA) —– (29)

This condition says that whenever we can ascribe a concrete value S(ΩA) of the entanglement entropy to a factor space ΩA within region A (that is, whenever the state |ΨAB⟩ factorizes as in (23)), then the entanglement contour has to be consistent with this fact, meaning that the contour sA(X) of any subregion X contained in the factor space ΩA is upper bounded by S(ΩA).

Let us consider a particular case of condition e. When a region X ⊆ A is not at all correlated with B, that is ρXB = ρX ⊗ ρB, then it can be seen that X is contained in some factor space ΩA such that the state |ΨΩAΩB⟩ itself further factorizes as |ΨΩA⟩ ⊗ |ΨΩB⟩, so that (23) becomes

|ΨAB⟩ = |ΨΩA⟩ ⊗ |ΨΩB⟩ ⊗ |ΨΩ’AΩ’B⟩ —– (30)

and S(ΩA) = 0. Condition e then requires that sA(X) = 0, that is

ρXB = ρX ⊗ ρB ⟹ sA(X) = 0 —– (31)

reflecting the fact that a region X ⊆ A that is not correlated with B does not contribute at all to the entanglement between A and B. Finally, the upper bound in e can be alternatively announced as a lower bound. Let Y ⊆ A be a subset of sites whose vector space VY completely contains VΩA in (24), meaning that VY can be further decomposed as

VY ≈ VΩA ⊗ VΩ’A —– (32)

e’. Lower bound: The entanglement contour of subregion Y is at least equal to the entanglement entropy S(ΩA) in (26),

sA(Y) ≥ S(ΩA) —– (33)

Conditions a–e (e’) are not expected to completely determine the entanglement contour. In other words, there probably are inequivalent functions sA : A → ℜ that conform to all the conditions above. So, where do we get philosophical from here? It is through the entanglement contour of selected states, for instance during the time evolution ensuing a global or a local quantum quench, that one characterizes entanglement between regions rather than within regions, revealing a detailed real-space structure of the entanglement of a region A and of its dynamics, well beyond what is accessible from the entanglement entropy alone. But that isn’t all. Questions of how to quantify entanglement and non-locality, and the need to clarify the relationship between them, are important not only conceptually but also practically, insofar as entanglement and non-locality seem to be different resources for the performance of quantum information processing tasks. Whether in a given quantum information protocol (cryptography, teleportation, an algorithm . . .) it is better to look for the largest amount of entanglement or the largest amount of non-locality becomes decisive. The ever-evolving field of quantum information theory is devoted to using the principles and laws of quantum mechanics to aid in the acquisition, transmission, and processing of information. In particular, it seeks to harness the peculiarly quantum phenomena of entanglement, superposition, and non-locality to perform all sorts of novel tasks, such as enabling computations that operate exponentially faster or more efficiently than their classical counterparts (via quantum computers) and providing unconditionally secure cryptographic systems for the transfer of secret messages over public channels (via quantum key distribution). By contrast, classical information theory is concerned with the storage and transfer of information in classical systems. It uses the “bit” as the fundamental unit of information, where the system capable of representing a bit can take on one of two values (typically 0 or 1). Classical information theory is based largely on the concept of information formalized by Claude Shannon in the late 1940s. Quantum information theory, which was later developed in analogy with classical information theory, is concerned with the storage and processing of information in quantum systems, such as the photon, electron, quantum dot, or atom. Instead of using the bit, however, it defines the fundamental unit of quantum information as the “qubit.” What makes the qubit different from a classical bit is that the smallest system capable of storing a qubit, the two-level quantum system, not only can take on the two distinct values |0⟩ and |1⟩, but can also be in a state of superposition of these two states: |ψ⟩ = α0|0⟩ + α1|1⟩.

Quantum information theory has opened up a whole new range of philosophical and foundational questions in quantum cryptography or quantum key distribution, which involves using the principles of quantum mechanics to ensure secure communication. Some quantum cryptographic protocols make use of entanglement to establish correlations between systems that would be lost upon eavesdropping. Moreover, a quantum principle known as the no-cloning theorem prohibits making identical copies of an unknown quantum state. In the context of a C∗-algebraic formulation,  quantum theory can be characterized in terms of three information-theoretic constraints: (1) no superluminal signaling via measurement, (2) no cloning (for pure states) or no broadcasting (mixed states), and (3) no unconditionally secure bit commitment.

Entanglement does not refute the principle of locality. A sketch of the sort of experiment commonly said to refute locality runs as follows. Suppose that you have two electrons with entangled spin. For each electron you can measure the spin along the X, Y or Z direction. If you measure X on both electrons, then you get opposite values, likewise for measuring Y or Z on both electrons. If you measure X on one electron and Y or Z on the other, then you have a 50% probability of a match. And if you measure Y on one and Z on the other, the probability of a match is 50%. The crucial issue is that whether you find a correlation when you do the comparison depends on whether you measure the same quantity on each electron. Bell’s theorem just explains that the extent of this correlation is greater than a local theory would allow if the measured quantities were represented by stochastic variables (i.e. – numbers picked out of a hat). This fact is often misrepresented as implying that quantum mechanics is non-local. But in quantum mechanics, systems are not characterised by stochastic variables, but, rather, by Hermitian operators. There is an entirely local explanation of how the correlations arise in terms of properties of systems represented by such operators. But, another answer to such violations of the principle of locality could also be “Yes, unless you get really obsessive about it.” It has been formally proven that one can have determinacy in a model of quantum dynamics, or one can have locality, but cannot have both. If one gives up the determinacy of the theory in various ways, one can imagine all kinds of ‘planned flukes’ like the notion that the experiments that demonstrate entanglement leak information and pre-determine the environment to make the coordinated behavior seem real. Since this kind of information shaping through distributed uncertainty remains a possibility, folks can cling to locality until someone actually manages something like what those authors are attempting, or we find it impossible. If one gives up locality instead, entanglement does not present a problem, the theory of relativity does. Because the notion of a frame of reference is local. Experiments on quantum tunneling that violate the constraints of the speed of light have been explained with the idea that probabilistic partial information can ‘lead’ real information faster than light by pushing at the vacuum underneath via the ‘Casimir Effect’. If both of these make sense, then the information carried by the entanglement when it is broken would be limited as the particles get farther apart — entanglements would have to spontaneously break down over time or distance of separation so that the probabilities line up. This bodes ill for our ability to find entangled particles from the Big Bang, which seems to be the only prospect in progress to debunk the excessively locality-focussed.
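
As a short sketch of the correlations quoted above (a standard singlet-state computation with Pauli measurements along X, Y, Z; the code is illustrative and not tied to any particular experiment):

```python
# A short quantum-mechanical check of the correlations quoted above for two spin-1/2
# particles in the singlet state: measuring the same axis on both sides gives opposite
# outcomes with certainty, while orthogonal axes give a 50% match rate.
import numpy as np
from itertools import product

# Pauli matrices and the singlet state (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def projectors(op):
    """Projectors onto the +1 and -1 eigenspaces of a single-qubit observable."""
    vals, vecs = np.linalg.eigh(op)
    return {int(round(v)): np.outer(vecs[:, k], vecs[:, k].conj()) for k, v in enumerate(vals)}

def match_probability(op_a, op_b, state):
    """Probability that the two measurement outcomes agree (both +1 or both -1)."""
    Pa, Pb = projectors(op_a), projectors(op_b)
    return sum(np.real(state.conj() @ np.kron(Pa[s], Pb[s]) @ state) for s in (+1, -1))

axes = {"X": X, "Y": Y, "Z": Z}
for (na, A), (nb, B) in product(axes.items(), repeat=2):
    print(f"P(match) for {na} on one side, {nb} on the other: {match_probability(A, B, singlet):.2f}")
# Same axis: 0.00 (always opposite).  Different axes: 0.50.
```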

But, much of the work remains undone and this is to be continued…..