Black Hole Entropy in terms of Mass. Note Quote.


If M-theory is compactified on a d-torus it becomes a D = 11 – d dimensional theory with Newton constant

GD = G11/L^d = l11^9/L^d —– (1)

A Schwarzschild black hole of mass M has a radius

Rs ~ M^(1/(D-3)) GD^(1/(D-3)) —– (2)

According to Bekenstein and Hawking the entropy of such a black hole is

S = Area/(4GD) —– (3)

where Area refers to the D – 2 dimensional hypervolume of the horizon:

Area ~ Rs^(D-2) —– (4)

Thus

S ~ (1/GD)(M GD)^((D-2)/(D-3)) ~ M^((D-2)/(D-3)) GD^(1/(D-3)) —– (5)
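
Since (2)–(4) are pure scaling relations, (5) follows by exponent bookkeeping alone. A minimal sympy check (an illustrative script of mine; all O(1) constants dropped, D kept symbolic):

```python
import sympy as sp

# Symbols for mass, Newton constant and spacetime dimension (all positive).
M, G, D = sp.symbols('M G_D D', positive=True)

Rs = (M * G) ** (1 / (D - 3))        # (2): Schwarzschild radius
Area = Rs ** (D - 2)                 # (4): horizon hypervolume
S = Area / (4 * G)                   # (3): Bekenstein-Hawking entropy

# (5) claims S ~ M^((D-2)/(D-3)) GD^(1/(D-3)); the ratio should be constant:
S_claimed = M ** ((D - 2) / (D - 3)) * G ** (1 / (D - 3))
print(sp.simplify(sp.expand_power_base(S / S_claimed, force=True)))  # -> 1/4
```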

From the traditional relativists’ point of view, black holes are extremely mysterious objects. They are described by unique classical solutions of Einstein’s equations. All perturbations quickly die away, leaving a featureless “bald” black hole with “no hair”. On the other hand, Bekenstein and Hawking have given persuasive arguments that black holes possess thermodynamic entropy and temperature which point to the existence of a hidden microstructure. In particular, entropy generally represents the counting of hidden microstates which are invisible in a coarse-grained description. An ultimate exact treatment of objects in matrix theory requires a passage to the infinite N limit. Unfortunately this limit is extremely difficult. For the study of Schwarzschild black holes, the optimal value of N (the value which is large enough to obtain an adequate description without involving many redundant variables) is of order the entropy, S, of the black hole.

Considering the minimum such value for N, we have

Nmin(S) = M Rs = M(M GD)^(1/(D-3)) = S —– (6)

We see that the value of Nmin in every dimension is proportional to the entropy of the black hole. The thermodynamic properties of super Yang-Mills theory can be estimated by standard arguments only if S ≥ N. Thus we are caught between conflicting requirements. For N >> S we don’t have tools to compute. For N << S the black hole will not fit into the compact geometry. Therefore we are forced to study the black hole using N = Nmin = S.

Matrix theory compactified on a d-torus is described by a d + 1 dimensional super Yang-Mills theory with 16 real supercharges. For d = 3 we are dealing with a very well known and special quantum field theory. In the standard 3+1 dimensional terminology it is U(N) Yang-Mills theory with 4 supersymmetries and with all fields in the adjoint representation. This theory is very special in that, in addition to having electric/magnetic duality, it enjoys another property which makes it especially easy to analyze, namely it is exactly scale invariant.

Let us begin by considering it in the thermodynamic limit. The theory is characterized by a “moduli” space defined by the expectation values of the scalar fields φ. Since the φ also represent the positions of the original D0-branes in the non-compact directions, we choose them at the origin. This represents the fact that we are considering a single compact object – the black hole – and not several disconnected pieces.

The equation of state of the system is defined by giving the entropy S as a function of temperature. Since entropy is extensive, it is proportional to the volume Σ^3 of the dual torus. Furthermore, scale invariance ensures that S has the form

S = constant · T^3 Σ^3 —– (7)

The constant in this equation counts the number of degrees of freedom. For vanishing coupling constant, the theory is described by free quanta in the adjoint of U(N). This means that the number of degrees of freedom is ~ N^2.

From the standard thermodynamic relation,

dE = TdS —– (8)

and the energy of the system is

E ~ N^2 T^4 Σ^3 —– (9)

In order to relate entropy and mass of the black hole, let us eliminate temperature from (7) and (9).

S = N^2 Σ^3 (E/(N^2 Σ^3))^(3/4) —– (10)

Now the energy of the quantum field theory is identified with the light-cone energy of the system of D0-branes forming the black hole. That is

E ≈ M^2 R/N —– (11)

Plugging (11) into (10)

S = N^2 Σ^3 (M^2 R/(N^3 Σ^3))^(3/4) —– (12)

This makes sense only when N << S; when N >> S, computing the equation of state is slightly trickier. At N ~ S, this is precisely the correct form for the black hole entropy in terms of the mass.
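
The elimination of T and the matching at N = S can also be checked symbolically; a sketch under the same conventions (symbols mine, O(1) constants dropped):

```python
import sympy as sp

T, Sigma, R, M, N, E, S = sp.symbols('T Sigma R M N E S', positive=True)

# (9): E ~ N^2 T^4 Sigma^3 -> solve for the temperature
T_of_E = sp.solve(sp.Eq(E, N**2 * T**4 * Sigma**3), T)[0]

# (7): S ~ N^2 T^3 Sigma^3, with T eliminated -> equation of state (10)
S_of_E = N**2 * Sigma**3 * T_of_E**3
print(S_of_E)                        # sqrt(N)*Sigma**(3/4)*E**(3/4), i.e. (10)

# (11): E ~ M^2 R/N -> (12)
S_of_M = sp.expand_power_base(S_of_E.subs(E, M**2 * R / N))

# Matching point N = S: the mass exponent of the solution is 6/5, which is
# (D-2)/(D-3) at D = 8 (i.e. d = 3), in agreement with (5).
print(sp.solve(sp.Eq(S, S_of_M.subs(N, S)), S)[0])  # M**(6/5)*R**(3/5)*Sigma**(3/5)
```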

Superalgebras


Let k be an algebraically closed field. Given a superalgebra A we will denote by A0 the even part, by A1 the odd part and by IAodd the ideal generated by the odd part.

A superalgebra is said to be commutative (or supercommutative) if

xy = (−1)^(p(x)p(y)) yx, ∀ homogeneous x, y

where p denotes the parity of a homogeneous element (p(x) = 0 if x ∈ A0, p(x) = 1 if x ∈ A1).
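
The sign rule is easy to exercise on a toy example: in the Grassmann algebra k[θ0, …, θ3] (θiθj = −θjθi, θi^2 = 0) a monomial is homogeneous with parity given by its degree mod 2, and supercommutativity can be checked exhaustively. A minimal Python sketch (the encoding of monomials is my own):

```python
from itertools import combinations

def mult(m1, m2):
    """Product of Grassmann monomials, each a sorted tuple of distinct
    generator indices; returns (sign, monomial), or None if the product is 0."""
    if set(m1) & set(m2):
        return None                      # theta_i * theta_i = 0
    merged = list(m1) + list(m2)
    # Each swap of two distinct generators contributes a factor -1, so the
    # sign is (-1)^(number of inversions in the concatenated sequence).
    inv = sum(1 for i, a in enumerate(merged) for b in merged[i+1:] if a > b)
    return (-1) ** inv, tuple(sorted(merged))

def parity(m):
    return len(m) % 2                    # p(x) of a homogeneous monomial

# Check xy = (-1)^(p(x)p(y)) yx on all homogeneous monomials in 4 generators.
monomials = [m for r in range(5) for m in combinations(range(4), r)]
for x in monomials:
    for y in monomials:
        xy, yx = mult(x, y), mult(y, x)
        if xy is None:
            assert yx is None            # both sides vanish together
            continue
        assert xy[1] == yx[1]
        assert xy[0] == (-1) ** (parity(x) * parity(y)) * yx[0]
print("supercommutativity verified")
```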

Let’s denote by A the category of affine superalgebras, that is, commutative superalgebras such that, modulo the ideal generated by their odd part, they are affine algebras (an affine algebra is a finitely generated reduced commutative algebra).

We define an affine algebraic supervariety over k as a representable functor V from the category A of affine superalgebras to the category S of sets. Let’s call k[V] the commutative k-superalgebra representing the functor V,

V (A) = Homk−superalg(k[V], A), A ∈ A

We will call V (A) the A-points of the variety V. A morphism of affine supervarieties is identified with a morphism between the representing objects, that is a morphism of affine superalgebras.

We also define the functor Vred associated to V from the category Ac of affine k-algebras to the category of sets:

Vred(Ac)= Homk−alg(k[V]/Ik[V]odd, Ac), Ac ∈ Ac

Vred is an affine algebraic variety and it is called the reduced variety associated to V. If the algebra k[V] representing the functor V has the additional structure of a commutative Hopf superalgebra, we say that V is an affine algebraic supergroup.

Let G be an affine algebraic supergroup. As in the classical setting, the condition k[G] being a commutative Hopf superalgebra makes the functor group valued, that is the product of two morphisms is still a morphism. In fact let A be a commutative superalgebra and let x, y ∈ Homk−superalg(k[G], A) be two points of G(A). The product of x and y is defined as:

x · y =def mA ∘ (x ⊗ y) ∘ ∆

where mA is the multiplication in A and ∆ the comultiplication in k[G]. One can verify that x · y ∈ Homk−superalg(k[G], A), that is:

(x · y)(ab) = (x · y)(a)(x · y)(b)

The non-commutativity of the Hopf algebra in the quantum setting does not allow one to multiply morphisms (= points). In fact, in the quantum (super)group setting the product of two morphisms is not in general a morphism.
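
To see the convolution product x · y =def mA ∘ (x ⊗ y) ∘ ∆ in action in the simplest classical (commutative, non-super) case, take the additive group scheme with k[G] = k[t] and ∆(t) = t ⊗ 1 + 1 ⊗ t: a point x ∈ G(A) is determined by x(t) ∈ A, and the product of points becomes addition in A. A minimal sympy sketch (encoding homomorphisms by the image of t is my simplification):

```python
import sympy as sp

t = sp.symbols('t')

# A point x in G(A) = Hom(k[t], A) is fixed by the image x(t) in A and
# extends to all polynomials by substitution (it is an algebra map).
def point(value):
    return lambda poly: poly.subs(t, value)

# With Delta(t) = t(x)1 + 1(x)t, the convolution m_A o (x(x)y) o Delta
# sends t |-> x(t) + y(t); we realize x.y directly on the generator:
def convolve(x, y):
    return point(x(t) + y(t))

a, b = sp.symbols('a b')
x, y = point(a), point(b)
xy = convolve(x, y)

# x.y is again an algebra map: (x.y)(pq) = (x.y)(p)(x.y)(q)
p, q = t**2 + 1, t - 3
assert sp.expand(xy(p * q)) == sp.expand(xy(p) * xy(q))
print(xy(t))  # a + b: the group law of the additive group, recovered from Delta
```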

Let V be an affine algebraic supervariety. Let k0 ⊂ k be a subfield of k. We say that V is a k0-variety if there exists a k0-superalgebra k0[V] such that k[V] ≅ k0[V] ⊗k0 k and

V(A) = Homk0 − superalg(k0[V], A) = Homk−superalg(k[V], A), A ∈ A.

We obtain a functor that we still denote by V from the category Ak0 of affine k0-superalgebras to the category of sets:

V(Ak0) = Homk0−superalg(k0[V], Ak0), Ak0 ∈ Ak0

thus opening up the consideration of questions of rationality on supervarieties.

Hyperstructures


In many areas of mathematics there is a need to have methods taking local information and properties to global ones. This is mostly done by gluing techniques using open sets in a topology and associated presheaves. The presheaves form sheaves when local pieces fit together to global ones. This has been generalized to categorical settings based on Grothendieck topologies and sites.

The general problem of going from local to global situations is important also outside of mathematics. Consider collections of objects where we may have information or properties of objects or subcollections, and we want to extract global information.

This is where hyperstructures are very useful. If we are given a collection of objects that we want to investigate, we put a suitable hyperstructure on it. Then we may assign “local” properties at each level and by the generalized Grothendieck topology for hyperstructures we can now glue both within levels and across the levels in order to get global properties. Such an assignment of global properties or states we call a globalizer. 

To illustrate our intuition let us think of a society organized into a hyperstructure. Through levelwise democratic elections leaders are elected and the democratic process will eventually give a “global” leader. In this sense democracy may be thought of as a sociological (or political) globalizer. This applies to decision making as well.

In “frustrated” spin systems in physics one may possibly think of the “frustration” being resolved by creating new levels and a suitable globalizer assigning a global state to the system corresponding to various exotic physical conditions like, for example, a kind of hyperstructured spin glass or magnet. Acting on both classical and quantum fields in physics may be facilitated by putting a hyperstructure on them.

There are also situations where we are given an object or a collection of objects with assignments of properties or states. To achieve a certain goal we need to change, let us say, the state. This may be very difficult and require a lot of resources. The idea is then to put a hyperstructure on the object or collection. By this we create levels of locality that we can glue together by a generalized Grothendieck topology.

It may often be much easier and require less resources to change the state at the lowest level and then use a globalizer to achieve the desired global change. Often it may be important to find a minimal hyperstructure needed to change a global state with minimal resources.

Again, to support our intuition let us think of the democratic society example. To change the global leader directly may be hard, but starting a “political” process at the lower individual levels may not require heavy resources and may propagate through the democratic hyperstructure leading to a change of leader.

Hence, hyperstructures facilitate local-to-global processes, but also global-to-local processes. Often these are called bottom-up and top-down processes. In the global-to-local or top-down process we put a hyperstructure on an object or system in such a way that it is represented by a top level bond in the hyperstructure. This means that to an object or system X we assign a hyperstructure

H = {B0, B1, …, Bn} in such a way that X = b^n for some b^n ∈ Bn binding a family {b^(n−1)_i1} of Bn−1 bonds, each b^(n−1)_i1 binding a family {b^(n−2)_i2} of Bn−2 bonds, etc. down to B0 bonds in H. Similarly for a local-to-global process. To a system, set or collection of objects X, we assign a hyperstructure H such that X = B0. A hyperstructure on a set (space) will create “global” objects, properties and states like what we see in organized societies, organizations, organisms, etc. The hyperstructure is the “glue” or the “law” of the objects. In a way, the globalizer creates a kind of higher order “condensate”. Hyperstructures represent a conceptual tool for translating organizational ideas like for example democracy, political parties, etc. into a mathematical framework where new types of arguments may be carried through.
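
A deliberately naive Python sketch of the democracy example: levels of bonds, with a levelwise majority vote as the globalizer (names, groupings and the majority rule are my illustrative choices, not part of the formal definition):

```python
from collections import Counter

# Level-0 objects: individuals with a preferred candidate.
votes = {'p1': 'A', 'p2': 'A', 'p3': 'B',
         'p4': 'B', 'p5': 'B', 'p6': 'A',
         'p7': 'B', 'p8': 'B', 'p9': 'A'}

# A hyperstructure H = {B0, B1, B2}: B1 bonds bind families of level-0
# objects, and a single top bond B2 binds the B1 bonds (the society X).
B1 = [['p1', 'p2', 'p3'], ['p4', 'p5', 'p6'], ['p7', 'p8', 'p9']]

def majority(prefs):
    """Local 'election' within one bond: the most common preference wins."""
    return Counter(prefs).most_common(1)[0][0]

# The globalizer glues local states level by level, bottom-up.
level1 = [majority([votes[p] for p in bond]) for bond in B1]  # ['A', 'B', 'B']
global_leader = majority(level1)                              # top-level bond
print(level1, '->', global_leader)                            # global state: 'B'
```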

Philosophy of Quantum Entanglement and Topology


Many-body entanglement is essential for the existence of topological order in condensed matter systems, and understanding many-body entanglement provides a promising approach to understanding which topological orders can exist. It also leads to tensor network descriptions of many-body wave functions, opening a path towards the classification of phases of quantum matter. Generic many-body entanglement is here reduced to the bipartite case, splitting the system into two parts A and B. Consider the equation,

S(A) ≡ −tr(ρA log2(ρA)) —– (1)

where ρA ≡ trB |ΨAB⟩⟨ΨAB| is the density matrix for part A, and where we assumed that the whole system is in a pure state |ΨAB⟩.

Specializing |ΨAB⟩ to the ground state of a local Hamiltonian in D spatial dimensions, the central observation is that the entanglement between a region A of size L^D and the (much larger) rest B of the lattice is then often proportional to the size |σ(A)| of the boundary σ(A) of region A,

S(A) ≈ |σ(A)| ≈ L^(D−1) —– (2)

where the correction −1 is due to the topological order of the toric code. This is the boundary law observed in the ground state of gapped local Hamiltonians in arbitrary dimension D, as well as in some gapless systems in D > 1 dimensions. Instead, in gapless systems in D = 1 dimensions, as well as in certain gapless systems in D > 1 dimensions (namely systems with a Fermi surface of dimension D − 1), ground state entanglement displays a logarithmic correction to the boundary law,

S(A) ≈ |σ(A)| log2(|σ(A)|) ≈ L^(D−1) log2(L) —– (3)

At an intuitive level, the boundary law of (2) is understood as resulting from entanglement that involves degrees of freedom located near the boundary between regions A and B. Also intuitively, the logarithmic correction of (3) is argued to have its origin in contributions to entanglement from degrees of freedom that are further away from the boundary between A and B. Given the entanglement between A and B, one introduces an entanglement contour sA that assigns a real number sA(i) ≥ 0 to each lattice site i contained in region A, such that the sum of sA(i) over all the sites i ∈ A is equal to the entanglement entropy S(A),

S(A) = Σi∈A sA(i) —– (4) 

and that aims to quantify how much the degrees of freedom at site i participate in/contribute to the entanglement between A and B. As Chen and Vidal put it, the entanglement contour sA(i) is not equivalent to the von Neumann entropy S(i) ≡ −tr(ρ(i) log2 ρ(i)) of the reduced density matrix ρ(i) at site i. Notice that, indeed, the von Neumann entropy of individual sites in region A is not additive in the presence of correlations between the sites, and therefore generically

S(A) ≠ Σi∈A S(i)

whereas the entanglement contour sA(i) is required to fulfil (4). Relatedly, when site i is only entangled with neighboring sites contained within region A, and it is thus uncorrelated with region B, the entanglement contour sA(i) will be required to vanish, whereas the one-site von Neumann entropy S(i) still takes a non-zero value due to the presence of local entanglement within region A.
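
This last point is easy to check numerically. Let A consist of two sites forming a Bell pair with each other while B is uncorrelated with A: then S(A) = 0, so by (4) the contour must vanish on both sites, yet each one-site von Neumann entropy equals 1. A small numpy sketch (state and dimensions are my illustration):

```python
import numpy as np

def von_neumann(rho):
    """S(rho) = -tr(rho log2 rho), computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)) + 0.0)

# Region A = sites 1,2 in a Bell pair; region B = one site in a product state.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
psi_AB = np.kron(bell, np.array([1.0, 0.0]))    # |Psi_AB> = |bell>_A (x) |0>_B

# rho_A by tracing out B: reshape the state to (dim_A, dim_B).
psi = psi_AB.reshape(4, 2)
rho_A = psi @ psi.conj().T
print("S(A) =", von_neumann(rho_A))             # 0.0: A is uncorrelated with B

# One-site reduced density matrices inside A:
rho4 = rho_A.reshape(2, 2, 2, 2)                # indices (i1, i2, i1', i2')
rho_1 = np.einsum('ijkj->ik', rho4)             # trace out site 2
rho_2 = np.einsum('jijk->ik', rho4)             # trace out site 1
print("S(1) + S(2) =", von_neumann(rho_1) + von_neumann(rho_2))  # 2.0
```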

As an aside, in the traditional approach to quantum mechanics, a physical system is described in a Hilbert space: observables correspond to self-adjoint operators and statistical operators are associated with the states. In fact, a statistical operator describes a mixture of pure states. Pure states are the really physical states and they are given by rank one statistical operators, or equivalently by rays of the Hilbert space. Von Neumann associated an entropy quantity to a statistical operator, and his argument was a gedanken experiment on the grounds of phenomenological thermodynamics.

Let us consider a gas of N (≫ 1) molecules in a rectangular box K. Suppose that the gas behaves like a quantum system and is described by a statistical operator D which is a mixture λ|φ1⟩⟨φ1| + (1 − λ)|φ2⟩⟨φ2|, where |φi⟩ is a state vector (i = 1, 2). We may take λN molecules in the pure state φ1 and (1 − λ)N molecules in the pure state φ2. On the basis of phenomenological thermodynamics, we assume that if φ1 and φ2 are orthogonal, then there is a wall that is completely permeable for the φ1-molecules and isolating for the φ2-molecules. We add an equally large empty rectangular box K′ to the left of the box K and we replace the common wall with two new walls. Wall (a), the one to the left, is impenetrable, whereas the one to the right, wall (b), lets through the φ1-molecules but keeps back the φ2-molecules. We add a third wall (c) opposite to (b) which is semipermeable, transparent for the φ2-molecules and impenetrable for the φ1-ones. Then we push slowly (a) and (c) to the left, maintaining their distance. During this process the φ1-molecules are pressed through (b) into K′ and the φ2-molecules diffuse through wall (c) and remain in K. No work is done against the gas pressure, no heat is developed. Replacing the walls (b) and (c) with a rigid absolutely impenetrable wall and removing (a), we restore the boxes K and K′ and succeed in the separation of the φ1-molecules from the φ2-ones without any work being done, without any temperature change and without evolution of heat.

The entropy of the original D-gas (with density N/V) must be the sum of the entropies of the φ1- and φ2-gases (with densities λN/V and (1 − λ)N/V, respectively). If we compress the gases in K and K′ to the volumes λV and (1 − λ)V, respectively, keeping the temperature T constant by means of a heat reservoir, the entropy change amounts to κλN log λ and κ(1 − λ)N log(1 − λ), respectively. Indeed, we have to add heat in the amount of λiNκT log λi (< 0) when the φi-gas is compressed, and dividing by the temperature T we get the change of entropy. Finally, mixing the φ1- and φ2-gases of identical density we obtain a D-gas of N molecules in a volume V at the original temperature. If S0(ψ, N) denotes the entropy of a ψ-gas of N molecules (in a volume V and at the given temperature), we conclude that

S0(φ1,λN)+S0(φ2,(1−λ)N) = S0(D, N) + κλN log λ + κ(1 − λ)N log(1 − λ) —– (5)

must hold, where κ is Boltzmann’s constant. Assuming that S0(ψ,N) is proportional to N and dividing by N we have

λS(φ1) + (1 − λ)S(φ2) = S(D) + κλ log λ + κ(1 − λ) log(1 − λ) —– (6)

where S is certain thermodynamical entropy quantity ( relative to the fixed temperature and molecule density ). We arrived at the mixing property of entropy, but we should not forget about the initial assumption: φ1 and φ2 are supposed to be orthogonal. Instead of a two-component mixture, von Neumann operated by an infinite mixture, which does not make a big difference, and he concluded that

S (Σiλi|φi⟩⟨φi|) = ΣiλiS(|φi⟩⟨φi|) − κ Σiλi log λi —– (7)
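
A quick numeric instance of (7), with κ = 1 (natural logarithms) and the |φi⟩ taken as orthonormal basis vectors, so that each S(|φi⟩⟨φi|) = 0 and only the mixing term survives; the script is just a sanity check of the formula:

```python
import numpy as np

def S(rho, eps=1e-12):
    """Von Neumann entropy -tr(rho log rho), with kappa = 1."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > eps]
    return float(-np.sum(w * np.log(w)))

lam = np.array([0.5, 0.3, 0.2])
D = np.diag(lam)                     # mixture of orthogonal pure states

# (7): S(D) = sum_i lam_i S(|phi_i><phi_i|) - sum_i lam_i log lam_i, and the
# first term vanishes for pure states.
mixing_term = -np.sum(lam * np.log(lam))
assert np.isclose(S(D), mixing_term)
print(S(D), mixing_term)
```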

Von Neumann’s argument does not require that the statistical operator D is a mixture of pure states. What we really needed is the property D = λD1 + (1 − λ)D2 in such a way that the possible mixed states D1 and D2 are disjoint. D1 and D2 are disjoint in the thermodynamical sense when there is a wall which is completely permeable for the molecules of a D1-gas and isolating for the molecules of a D2-gas. In other words, if the mixed states D1 and D2 are disjoint, then this should be demonstrated by a certain filter. Mathematically, the disjointness of D1 and D2 is expressed in the orthogonality of the eigenvectors corresponding to nonzero eigenvalues of the two density matrices. The essential point is in the remark that (6) must hold also in a more general situation when possibly the states do not correspond to density matrices, but orthogonality of the states makes sense:

λS(D1) + (1 − λ)S(D2) = S(D) + κλ log λ + κ(1 − λ) log(1 − λ) —– (8)

(7) reduces the determination of the (thermodynamical) entropy of a mixed state to that of pure states. The so-called Schatten decomposition Σi λi|φi⟩⟨φi| of a statistical operator is not unique even if ⟨φi , φj ⟩ = 0 is assumed for i ≠ j . When λi is an eigenvalue with multiplicity, then the corresponding eigenvectors can be chosen in many ways. If we expect the entropy S(D) to be independent of the Schatten decomposition, then we are led to the conclusion that S(|φ⟩⟨φ|) must be independent of the state vector |φ⟩. This argument assumes that there are no superselection sectors, that is, any vector of the Hilbert space can be a state vector. On the other hand, von Neumann wanted to avoid degeneracy of the spectrum of a statistical operator. Von Neumann’s proof of the property that S(|φ⟩⟨φ|) is independent of the state vector |φ⟩ was different. He did not want to refer to a unitary time development sending one state vector to another, because that argument requires great freedom in choosing the energy operator H. Namely, for any |φ1⟩ and |φ2⟩ we would need an energy operator H such that

e^(itH)|φ1⟩ = |φ2⟩

This process would be reversible. Anyways, that was quite a digression.

Entanglement between A and B is naturally described by the coefficients {pα} appearing in the Schmidt decomposition of the state |ΨAB⟩,

|ΨAB⟩ = Σα √pα |ΨAα⟩ ⊗ |ΨBα⟩ —– (9)

These coefficients {pα} correspond to the eigenvalues of the reduced density matrix ρA, whose spectral decomposition reads

ρA = Σα pα |ΨAα⟩⟨ΨAα| —– (10)

defining a probability distribution, pα ≥ 0, Σα pα = 1, in terms of which the von Neumann entropy S(A) is

S(A) = − Σαpα log2(pα—– (11)
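
Numerically, (9)–(11) amount to a singular value decomposition: reshaping |ΨAB⟩ into a dA × dB matrix, the squared singular values are the Schmidt coefficients pα, which coincide with the eigenvalues of ρA. A minimal numpy sketch (random state, arbitrary dimensions of my choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 4, 6

# A random normalized pure state |Psi_AB>, stored as a dA x dB matrix.
psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
psi /= np.linalg.norm(psi)

# Schmidt decomposition (9) = SVD: p_alpha are the squared singular values.
p = np.linalg.svd(psi, compute_uv=False) ** 2

# The same p_alpha appear in the spectral decomposition (10) of rho_A:
rho_A = psi @ psi.conj().T
w = np.linalg.eigvalsh(rho_A)
assert np.allclose(np.sort(p), np.sort(w))

# Von Neumann entropy (11):
S_A = -np.sum(p * np.log2(p))
print("p_alpha =", np.round(np.sort(p)[::-1], 4), " S(A) =", round(S_A, 4))
```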

On the other hand, the Hilbert space VA of region A factorizes as the tensor product

VA = ⊗ i∈A V(i) —– (12)

where V(i) describes the local Hilbert space of site i. The reduced density matrix ρA in (10) and the factorization of (12) define two inequivalent structures within the vector space VA of region A. The entanglement contour sA is a function from the set of sites i ∈ A to the real numbers,

sA : A → ℝ —– (13)

that attempts to relate these two structures by distributing the von Neumann entropy S(A) of (11) among the sites i ∈ A. According to Chen and Vidal, there are five conditions/requirements that an entanglement contour needs to satisfy.

a. Positivity: sA(i) ≥ 0

b. Normalization: Σi∈AsA(i) = S(A) 

These constraints amount to defining a probability distribution pi ≡ sA(i)/S(A) over the sites i ∈ A, with pi ≥ 0 and Σi pi = 1, such that sA(i) = pi S(A); they do not, however, require sA to inform us about the spatial structure of entanglement in A, but only relate it to the density matrix ρA through its total von Neumann entropy S(A).

c. Symmetry: if T is a symmetry of ρA, that is T ρA T† = ρA, and T exchanges site i with site j, then sA(i) = sA(j).

This condition ensures that the entanglement contour is the same on two sites i and j of region A that, as far as entanglement is concerned, play an equivalent role in region A. It uses the (possible) presence of a spatial symmetry, such as invariance under space reflection, or under discrete translations/rotations, to define an equivalence relation in the set of sites of region A, and requires that the entanglement contour be constant within each resulting equivalence class. Notice, however, that this condition does not tell us whether the entanglement contour should be large or small on a given site (or equivalence class of sites). In particular, the three conditions above are satisfied by the canonical choice sA(i) = S(A)/|A|, that is a flat entanglement contour over the |A| sites contained in region A, which once more does not tell us anything about the spatial structure of the von Neumann entropy in ρA.

The remaining conditions refer to subregions within region A, instead of referring to single sites. It is therefore convenient to (trivially) extend the definition of entanglement contour to a set X of sites in region A, X ⊆ A, with vector space

VX = ⊗i∈X V(i) —– (14)

as the sum of the contour over the sites in X,

sA(X) ≡  Σi∈XsA(i) —– (15)

It follows from this extension that for any two disjoint subsets X1, X2 ⊆ A, with X1 ∩ X2 = ∅, the contour is additive,

sA(X1 ∪ X2) = sA(X1) + sA(X2—– (16)

In particular, condition b can now be recast as sA(A) = S(A). Similarly, if X1, X2 ⊆ A are such that all the sites of X1 are also contained in X2, X1 ⊆ X2, then the contour must be larger on X2 than on X1 (monotonicity of sA(X)),

sA(X1) ≤ sA(X2) if X1 ⊆ X2 —– (17)

d. Invariance under local unitary transformations: if the state |Ψ′AB⟩ is obtained from the state |ΨAB⟩ by means of a unitary transformation UX that acts on a subset X ⊆ A of sites of region A, that is |Ψ′AB⟩ ≡ UX|ΨAB⟩, then the entanglement contour sA(X) must be the same for state |ΨAB⟩ and for state |Ψ′AB⟩.

That is, the contribution of region X to the entanglement between A and B is not affected by a redefinition of the sites or a change of basis within region X. Notice that it follows that UX can also not change sA(X’), where X’ ≡ A − X is the complement of X in A.

To motivate our last condition, let us consider a state |ΨAB⟩ that factorizes as the product

|ΨAB⟩ = |ΨXXB⟩ ⊗ |ΨX’X’B⟩ —– (18)

where X ⊆ A and XB ⊆ B are subsets of sites in regions A and B, respectively, and X’ ⊆ A and X’B ⊆ B are their complements within A and B, so that

VA = VX ⊗ VX’, —– (19)

VB = VXB ⊗ VX’B —– (20)

in this case the reduced density matrix ρA factorizes as ρA = ρX ⊗ ρX’ and the entanglement entropy is additive,

S(A) = S(X) + S(X’) —– (21)

Since the entanglement entropy S(X) of subregion X is well-defined, let the entanglement contour over X be equal to it,

sA(X) = S(X) —– (22)
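
The additivity (21) behind this requirement is itself quick to verify for a factorized ρA = ρX ⊗ ρX’; a short numpy check (random density matrices, my construction):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_rho(d):
    """A random density matrix: rho = G G^dag / tr(G G^dag)."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def S(rho, eps=1e-12):
    w = np.linalg.eigvalsh(rho)
    w = w[w > eps]
    return float(-np.sum(w * np.log2(w)))

rho_X, rho_Xp = rand_rho(2), rand_rho(3)
rho_A = np.kron(rho_X, rho_Xp)        # rho_A = rho_X (x) rho_X'

# (21): S(A) = S(X) + S(X')
assert np.isclose(S(rho_A), S(rho_X) + S(rho_Xp))
print(S(rho_A), "=", S(rho_X), "+", S(rho_Xp))
```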

The last condition refers to a more general situation where, instead of obeying (18), the state |ΨAB⟩ factorizes as the product

|ΨAB⟩ = |ΨΩAΩB⟩ ⊗ |ΨΩ’AΩ’B⟩ —– (23)

with respect to some decomposition of VA and VB as tensor products of factor spaces,

VA = VΩA ⊗ VΩ’A, —– (24)

VB = VΩB ⊗ VΩ’B —– (25)

Let S(ΩA) denote the entanglement entropy supported on the first factor space VΩA of  VA, that is

S(ΩA) = −tr(ρΩA log2(ρΩA)) —– (26)

ρΩA ≡ trΩB |ΨΩAΩB⟩⟨ΨΩAΩB| —– (27)

and let X ⊆ A be a subset of sites whose vector space VX is completely contained in VΩA , meaning that VΩA can be further decomposed as

VΩA ≅ VX ⊗ VX’ —– (28)

e. Upper bound: if a subregion X ⊆ A is contained in a factor space ΩA ((24) and (28)), then the entanglement contour of subregion X cannot be larger than the entanglement entropy S(ΩA) of (26),

sA(X) ≤ S(ΩA) —– (29)

This condition says that whenever we can ascribe a concrete value S(ΩA) of the entanglement entropy to a factor space ΩA within region A (that is, whenever the state |ΨAB⟩ factorizes as in (23)), then the entanglement contour has to be consistent with this fact, meaning that the contour sA(X) in any subregion X contained in the factor space ΩA is upper bounded by S(ΩA).

Let us consider a particular case of condition e. When a region X ⊆ A is not at all correlated with B, that is ρXB = ρX ⊗ ρB, then it can be seen that X is contained in some factor space ΩA such that the state |ΨΩAΩB⟩ itself further factorizes as |ΨΩA⟩ ⊗ |ΨΩB⟩, so that (23) becomes

|ΨAB⟩ = |ΨΩA⟩ ⊗ |ΨΩB⟩ ⊗ |ΨΩ’AΩ’B⟩ —– (30)

and S(ΩA) = 0. Condition e then requires that sA(X) = 0, that is

ρXB = ρX ⊗ ρB ⇒ sA(X) = 0 —– (31)

reflecting the fact that a region X ⊆ A that is not correlated with B does not contribute at all to the entanglement between A and B. Finally, the upper bound in e can alternatively be stated as a lower bound. Let Y ⊆ A be a subset of sites whose vector space VY completely contains VΩA in (24), meaning that VY can be further decomposed as

VY ≅ VΩA ⊗ VΩ’A —– (32)

e’. Lower bound: The entanglement contour of subregion Y is at least equal to the entanglement entropy S(ΩA) in (26),

sA(Y) ≥ S(ΩA) —– (33)

Conditions a-e (e’) are not expected to completely determine the entanglement contour. In other words, there probably are inequivalent functions sA : A → ℝ that conform to all the conditions above. So, where do we get philosophical from here? It is through the entanglement contour of selected states that the time evolution ensuing a global or a local quantum quench characterizes entanglement between regions rather than within regions, revealing a detailed real-space structure of the entanglement of a region A and its dynamics, well beyond what is accessible from the entanglement entropy alone. But that isn’t all. Questions of how to quantify entanglement and non-locality, and the need to clarify the relationship between them, are important not only conceptually, but also practically, insofar as entanglement and non-locality seem to be different resources for the performance of quantum information processing tasks. Whether in a given quantum information protocol (cryptography, teleportation, algorithms . . .) it is better to look for the largest amount of entanglement or the largest amount of non-locality becomes decisive. The ever-evolving field of quantum information theory is devoted to using the principles and laws of quantum mechanics to aid in the acquisition, transmission, and processing of information. In particular, it seeks to harness the peculiarly quantum phenomena of entanglement, superposition, and non-locality to perform all sorts of novel tasks, such as enabling computations that operate exponentially faster or more efficiently than their classical counterparts (via quantum computers) and providing unconditionally secure cryptographic systems for the transfer of secret messages over public channels (via quantum key distribution). By contrast, classical information theory is concerned with the storage and transfer of information in classical systems. It uses the “bit” as the fundamental unit of information, where the system capable of representing a bit can take on one of two values (typically 0 or 1). Classical information theory is based largely on the concept of information formalized by Claude Shannon in the late 1940s. Quantum information theory, which was later developed in analogy with classical information theory, is concerned with the storage and processing of information in quantum systems, such as the photon, electron, quantum dot, or atom. Instead of using the bit, however, it defines the fundamental unit of quantum information as the “qubit.” What makes the qubit different from a classical bit is that the smallest system capable of storing a qubit, the two-level quantum system, not only can take on the two distinct values |0⟩ and |1⟩, but can also be in a state of superposition of these two states: |ψ⟩ = α0|0⟩ + α1|1⟩.
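
For concreteness, the closing formula: a normalized qubit |ψ⟩ = α0|0⟩ + α1|1⟩ gives measurement probabilities |αi|^2 under the Born rule (the amplitudes below are arbitrary choices of mine):

```python
import numpy as np

# |psi> = a0|0> + a1|1>, normalized so that |a0|^2 + |a1|^2 = 1.
a = np.array([1.0, 1.0j]) / np.sqrt(2)
assert np.isclose(np.linalg.norm(a), 1.0)

probs = np.abs(a) ** 2   # Born rule: P(0), P(1)
print(probs)             # [0.5 0.5]: an equal superposition, unlike a classical bit
```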

Quantum information theory has opened up a whole new range of philosophical and foundational questions in quantum cryptography or quantum key distribution, which involves using the principles of quantum mechanics to ensure secure communication. Some quantum cryptographic protocols make use of entanglement to establish correlations between systems that would be lost upon eavesdropping. Moreover, a quantum principle known as the no-cloning theorem prohibits making identical copies of an unknown quantum state. In the context of a C∗-algebraic formulation,  quantum theory can be characterized in terms of three information-theoretic constraints: (1) no superluminal signaling via measurement, (2) no cloning (for pure states) or no broadcasting (mixed states), and (3) no unconditionally secure bit commitment.

Entanglement does not refute the principle of locality. A sketch of the sort of experiment commonly said to refute locality runs as follows. Suppose that you have two electrons with entangled spin. For each electron you can measure the spin along the X, Y or Z direction. If you measure X on both electrons, then you get opposite values; likewise for measuring Y or Z on both electrons. If you measure X on one electron and Y or Z on the other, then you have a 50% probability of a match; and if you measure Y on one and Z on the other, the probability of a match is again 50%. The crucial issue is that whether you find a correlation when you do the comparison depends on whether you measure the same quantity on each electron. Bell’s theorem explains that the extent of this correlation is greater than a local theory would allow if the measured quantities were represented by stochastic variables (i.e. numbers picked out of a hat). This fact is often misrepresented as implying that quantum mechanics is non-local. But in quantum mechanics, systems are not characterised by stochastic variables but, rather, by Hermitian operators, and there is an entirely local explanation of how the correlations arise in terms of properties of systems represented by such operators. Another answer to such violations of the principle of locality could also be “Yes, unless you get really obsessive about it.” It has been formally proven that one can have determinacy in a model of quantum dynamics, or one can have locality, but not both. If one gives up the determinacy of the theory in various ways, one can imagine all kinds of ‘planned flukes’, like the notion that the experiments that demonstrate entanglement leak information and pre-determine the environment to make the coordinated behavior seem real. Since this kind of information shaping through distributed uncertainty remains a possibility, one can cling to locality until someone actually manages something like what those authors are attempting, or we find it impossible. If one gives up locality instead, entanglement does not present a problem; the theory of relativity does, because the notion of a frame of reference is local. Experiments on quantum tunneling that appear to violate the constraints of the speed of light have been explained with the idea that probabilistic partial information can ‘lead’ real information faster than light by pushing at the vacuum underneath via the Casimir effect. If both of these make sense, then the information carried by the entanglement when it is broken would be limited as the particles get farther apart: entanglements would have to spontaneously break down over time or distance of separation so that the probabilities line up. This bodes ill for our ability to find entangled particles from the Big Bang, which seems to be the only prospect in progress to debunk the excessively locality-focussed view.
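
The statistics sketched above follow directly from the two-electron singlet state and are easy to reproduce: same-axis measurements are perfectly anticorrelated, different-axis measurements match half the time. A small numpy check (standard projector construction; no interpretive claims intended):

```python
import numpy as np

# Pauli matrices and the singlet (|01> - |10>)/sqrt(2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def match_probability(A, B):
    """P(same sign) for spin measured along A (electron 1) and B (electron 2)."""
    p = 0.0
    for s in (+1, -1):
        Pa = (np.eye(2) + s * A) / 2      # projector onto outcome s for A
        Pb = (np.eye(2) + s * B) / 2      # same outcome s for B
        p += (singlet.conj() @ np.kron(Pa, Pb) @ singlet).real
    return round(p, 12)

print(match_probability(X, X))  # 0.0: same axis -> always opposite values
print(match_probability(Y, Y))  # 0.0
print(match_probability(X, Z))  # 0.5: different axes -> 50% match
print(match_probability(Y, Z))  # 0.5
```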

But, much of the work remains undone and this is to be continued…..