Embedding Branes in Minkowski Space-Time Dimensions To Decipher Them As Particles Or Otherwise


The physics treatment of Dirichlet branes in terms of boundary conditions is very analogous to that of the “bulk” quantum field theory, and the next step is again to study the renormalization group. This leads to equations of motion for the fields which arise from the open string, namely the data (M, E, ∇). In the supergravity limit, these equations are solved by taking the submanifold M to be volume minimizing in the metric on X, and the connection ∇ to satisfy the Yang-Mills equations.

Like the Einstein equations, the equations governing a submanifold of minimal volume are highly nonlinear, and their general theory is difficult. This is one motivation to look for special classes of solutions; the physical arguments favoring supersymmetry are another. Just as supersymmetric compactification manifolds correspond to a special class of Ricci-flat manifolds, those admitting a covariantly constant spinor, supersymmetry for a Dirichlet brane will correspond to embedding it into a special class of minimal volume submanifolds. Since the physical analysis is based on a covariantly constant spinor, this special class should be defined using the spinor, or else the covariantly constant forms which are bilinear in the spinor.

The standard physical arguments leading to this class are based on the kappa symmetry of the Green-Schwarz world-volume action, in which one finds that the subset of supersymmetry parameters ε which preserve supersymmetry, both of the metric and of the brane, must satisfy

φ ≡ Re(ε^t Γ ε)|M = Vol|M —– (1)

In words, the real part of one of the covariantly constant forms on M must equal the volume form when restricted to the brane.

Clearly dφ = 0, since it is covariantly constant. Thus,

Z(M) ≡ ∫M φ —– (2)

depends only on the homology class of M. Thus, it is what physicists would call a “topological charge”, or a “central charge”.

If in addition the p-form φ is dominated by the volume form Vol upon restriction to any p-dimensional subspace V ⊂ Tx X, i.e.,

φ|V ≤ Vol|V —– (3)

then φ will be a calibration in the sense of implying the global statement

∫M φ ≤ ∫M Vol —– (4)

for any submanifold M. Thus, the central charge |Z(M)| is an absolute lower bound for Vol(M).

A calibrated submanifold M is now one satisfying (1), thereby attaining the lower bound and thus of minimal volume. Physically these are usually called “BPS branes,” after a prototypical argument of this type due to Bogomol’nyi, Prasad and Sommerfield for magnetic monopole solutions in nonabelian gauge theory.

For a Calabi-Yau X, all of the forms ω^p/p! are calibrations, and the corresponding calibrated submanifolds are the p-dimensional holomorphic submanifolds. Furthermore, the n-form Re e^{iθ}Ω, for any choice of real parameter θ, is a calibration, and the corresponding calibrated submanifolds are called special Lagrangian.
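
As a concrete check of the calibration inequality (3) in the simplest Kähler case, here is a minimal numerical sketch (Python with numpy; the setup on R^4 ≅ C^2 and all names are illustrative, not from the text). It samples random oriented 2-planes and verifies Wirtinger’s inequality: ω evaluated on an orthonormal basis of the plane never exceeds 1 (the value of the volume form), with equality exactly on complex lines.

```python
import numpy as np

# Standard complex structure J0 on R^4 ~ C^2 and the Kaehler form omega(u, v) = <J0 u, v>.
J0 = np.array([[0., -1., 0., 0.],
               [1.,  0., 0., 0.],
               [0.,  0., 0., -1.],
               [0.,  0., 1.,  0.]])

def omega(u, v):
    return (J0 @ u) @ v

def random_orthonormal_pair(rng):
    e1 = rng.normal(size=4)
    e1 /= np.linalg.norm(e1)
    e2 = rng.normal(size=4)
    e2 -= (e2 @ e1) * e1
    e2 /= np.linalg.norm(e2)
    return e1, e2

rng = np.random.default_rng(0)
vals = [omega(*random_orthonormal_pair(rng)) for _ in range(20000)]
print("max of omega over random oriented 2-planes:", max(vals))      # stays <= 1 (Wirtinger)

u = rng.normal(size=4)
u /= np.linalg.norm(u)
print("omega on the complex line span{u, J0 u}:", omega(u, J0 @ u))  # equals 1: phi|_V = Vol|_V
```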

This generalizes to the case of a general connection ∇ on E over M, and leads to the following two types of BPS branes for a Calabi-Yau X. Let n = dimR M, and let F be the (End(E)-valued) curvature two-form of ∇.

The first kind of BPS D-brane, based on the ω^p calibrations, is (for historical reasons) called a “B-type brane”. Here the BPS constraint is equivalent to the following requirements:

  1. M is a p-dimensional complex submanifold of X.
  2. The 2-form F is of type (1, 1), i.e., (E, ∇) is a holomorphic vector bundle on M.
  3. In the supergravity limit, F satisfies the Hermitian Yang-Mills equation: ω^{p−1}|M ∧ F = c · ω^p|M for some real constant c.
  4. More generally, away from the supergravity limit, F satisfies Im e^{iφ}(ω|M + i ls^2 F)^p = 0 for some real constant φ, where ls is the string length.

The second kind of BPS D-brane, based on the Re e^{iθ}Ω calibration, is called an “A-type” brane. The simplest examples of A-branes are the so-called special Lagrangian submanifolds (SLAGs), satisfying

(1) M is a Lagrangian submanifold of X with respect to ω.

(2) F = 0, i.e., the vector bundle E is flat.

(3) Im e^{iα}Ω|M = 0 for some real constant α.

More generally, one also has the “coisotropic branes”. In the case when E is a line bundle, such A-branes satisfy the following four requirements:

(1)  M is a coisotropic submanifold of X with respect to ω, i.e., for any x ∈ M the skew-orthogonal complement of TxM ⊂ TxX is contained in TxM. Equivalently, one requires ker ωM to be an integrable distribution on M.

(2)  The 2-form F annihilates ker ωM.

(3)  Let FM be the vector bundle TM/ker ωM. It follows from the first two conditions that ωM and F descend to a pair of skew-symmetric forms on FM, denoted by σ and f. Clearly, σ is nondegenerate. One requires the endomorphism σ^{−1}f : FM → FM to be a complex structure on FM.

(4)  Let r be the complex dimension of FM; r is even, and r + n = dimR M. Let Ω be the holomorphic trivialization of KX. One requires that Im e^{iα}Ω|M ∧ F^{r/2} = 0 for some real constant α.

Coisotropic A-branes carrying vector bundles of higher rank are still not fully understood. Physically, one must also specify the embedding of the Dirichlet brane in the remaining (Minkowski) dimensions of space-time. The simplest possibility is to take this to be a time-like geodesic, so that the brane appears as a particle in the visible four dimensions. This is possible only for a subset of the branes, which depends on which string theory one is considering. Somewhat confusingly, in the type IIA theory, the B-branes are BPS particles, while in IIB theory, the A-branes are BPS particles.

Conjuncted: Philosophizing Twistors via Fibration. Note Quote.


The fibration is not holomorphic (even when this makes sense) but the fibres are complex submanifolds of the twistor space. Let us now see how we might build such fibrations over more general Riemannian manifolds.

So let N be a 2n-dimensional Riemannian manifold. We may at least construct such a fibration of an almost complex manifold over N as follows: let π:J(N) → N be the bundle of almost Hermitian structures of N. Thus the fibre at x ∈ N is

Jx(N) = {j ∈ End(TxN): j^2 = −1, j skew-symmetric}.
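
To make the fibre concrete, here is a small numerical sketch (Python with numpy; the construction via the polar factor of a random skew matrix is merely one convenient way of producing examples, not something taken from the text): the orthogonal polar factor of any invertible skew-symmetric matrix on R^{2n} is again skew-symmetric, is orthogonal, and squares to −1, hence defines a point of the fibre.

```python
import numpy as np

def random_hermitian_structure(dim, rng):
    """Return j on R^dim (dim even): skew-symmetric, orthogonal, with j^2 = -1."""
    A = rng.normal(size=(dim, dim))
    A = A - A.T                               # generic skew-symmetric matrix (invertible a.s.)
    S = -A @ A                                # symmetric positive definite, commutes with A
    w, V = np.linalg.eigh(S)
    S_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    return A @ S_inv_sqrt                     # orthogonal polar factor of A

rng = np.random.default_rng(1)
j = random_hermitian_structure(6, rng)        # a point of the fibre J(R^6) = O(6)/U(3)
print(np.allclose(j @ j, -np.eye(6)))         # j^2 = -1
print(np.allclose(j.T, -j))                   # skew-symmetric
print(np.allclose(j.T @ j, np.eye(6)))        # orthogonal, i.e. compatible with the metric
```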

This bundle is associated to the orthonormal frame bundle of N with typical fibre J(R^{2n}) = O(2n)/U(n), which is a Hermitian symmetric space (in fact it is two disjoint copies of the compact irreducible Hermitian symmetric space SO(2n)/U(n)). In particular, the typical fibre has an O(2n)-invariant complex structure and thus the vertical distribution V = ker dπ inherits an almost complex structure J^V. The Levi-Civita connection on the orthonormal frame bundle induces a horizontal distribution H on J(N) so that we have a splitting

TJ(N) = V ⊕ H

with dπ giving an isomorphism between H and π^{−1}TN. This enables us to define a tautological almost complex structure J^H on H by

J^H_j = j

and adding this to J^V gives us an almost complex structure J = J^V ⊕ J^H on J(N). By construction, the fibres of π are almost complex submanifolds with respect to J.

If we make a conformal change of metric on N, the bundle J(N) remains unchanged although the horizontal distribution H will vary. However, despite this, it can be shown that the almost complex structure J is independent of the choice of metric within a conformal class. Thus our construction may be viewed as one in Conformal Geometry.

Having got our almost complex structure, it is natural to ask whether or not it is integrable so that J(N) is an honest complex manifold. For this, of course, it is necessary and sufficient that the Nijenhuis tensor NJ of J vanish. The obstruction to this vanishing lies in the curvature tensor of N [22] :

Theorem: Let j ∈ J(N) with √−1-eigenspace T+ ⊂ Tπ(j)NC. Let R denote the Riemann curvature tensor of N. Then NJ vanishes at j if and only if

R(T+, T+)T+ ⊂ T+

Thus J is integrable if the above equation holds for all maximally isotropic subspaces T+ of TNC. This is a condition on the curvature tensor that can be analysed in terms of the representation theory of O(2n) on the space of curvature tensors and one concludes:

Corollary: J is integrable if and only if the Weyl tensor of R vanishes identically (i.e. N is locally conformally flat).

Thus J(N) is a complex manifold only in extremely restricted circumstances. The moral to be drawn from this is that J(N) is “too big” in general for J to be integrable. It is therefore appropriate to seek subbundles of J(N) picked out by the geometry of N in the hope that some of these are complex manifolds. One way to do this is to restrict attention to those elements of J(N) that are compatible with the holonomy of N.


Purely Random Correlations of the Matrix, or Studying Noise in Neural Networks


Expressed in the most general form, in essentially all the cases of practical interest, the n × n matrices W used to describe the complex system are by construction designed as

W = XY^T —– (1)

where X and Y denote rectangular n × m matrices. Such, for instance, are the correlation matrices whose standard form corresponds to Y = X. In this case one thinks of n observations or cases, each represented by an m-dimensional row vector xi (yi), (i = 1, …, n), and typically m is larger than n. In the limit of purely random correlations the matrix W is then said to be a Wishart matrix. The resulting density ρW(λ) of eigenvalues is here known analytically, with the limits (λmin ≤ λ ≤ λmax) prescribed by

λmax,min = 1 + 1/Q ± 2√(1/Q) and Q = m/n ≥ 1.

The variance of the elements of xi is here assumed unity.
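
A minimal numerical sketch (Python with numpy; the sizes n, m and the conventional 1/m normalization of the correlation matrix are illustrative assumptions) of this Wishart limit: for uncorrelated unit-variance data, the eigenvalues of W = XX^T/m fall, up to finite-size effects, inside the interval [λmin, λmax] quoted above.

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 400, 1600                      # n observations, each an m-dimensional vector; Q = m/n = 4
Q = m / n

X = rng.normal(size=(n, m))           # i.i.d. unit-variance entries: "purely random correlations"
W = X @ X.T / m                       # empirical correlation matrix (the Y = X case of eq. (1))
eigs = np.linalg.eigvalsh(W)

lam_max = 1 + 1/Q + 2*np.sqrt(1/Q)    # Marchenko-Pastur edges quoted in the text
lam_min = 1 + 1/Q - 2*np.sqrt(1/Q)
print(f"empirical range: [{eigs.min():.3f}, {eigs.max():.3f}]")
print(f"predicted range: [{lam_min:.3f}, {lam_max:.3f}]")
```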

The more general case, of X and Y different, results in asymmetric correlation matrices with complex eigenvalues λ. In this more general case a limiting distribution corresponding to purely random correlations seems not yet to be known analytically as a function of m/n. There are indications, however, that in the case of no correlations, quite generically, one may expect a largely uniform distribution of λ bounded by an ellipse in the complex plane.

Further examples of matrices of similar structure, of great interest from the point of view of complexity, include the Hamiltonian matrices of strongly interacting quantum many-body systems such as atomic nuclei. This holds true on the level of bound states, where the problem is described by Hermitian matrices, as well as for excitations embedded in the continuum. This latter case can be formulated in terms of an open quantum system, which is represented by a complex non-Hermitian Hamiltonian matrix. Several neural network models also belong to this category of matrix structure. In this domain the reference is provided by the Gaussian (orthogonal, unitary, symplectic) ensembles of random matrices with the semi-circle law for the eigenvalue distribution. For irreversible processes there exists a complex version of these ensembles, with a special case, the so-called scattering ensemble, which accounts for S-matrix unitarity.

As it has already been expressed above, several variants of ensembles of the random matrices provide an appropriate and natural reference for quantifying various characteristics of complexity. The bulk of such characteristics is expected to be consistent with Random Matrix Theory (RMT), and in fact there exists strong evidence that it is. Once this is established, even more interesting are however deviations, especially those signaling emergence of synchronous or coherent patterns, i.e., the effects connected with the reduction of dimensionality. In the matrix terminology such patterns can thus be associated with a significantly reduced rank k (thus k ≪ n) of a leading component of W. A satisfactory structure of the matrix that would allow some coexistence of chaos or noise and of collectivity thus reads:

W = Wr + Wc —– (2)

Of course, in the absence of Wr, the second term (Wc) of W generates k nonzero eigenvalues, and all the remaining ones (n − k) constitute the zero modes. When Wr enters as a noise (random-like matrix) correction, a trace of the above effect is expected to remain, i.e., k large eigenvalues and the bulk composed of n − k small eigenvalues whose distribution and fluctuations are consistent with an appropriate version of a random matrix ensemble. One likely mechanism that may lead to such a segregation of eigenspectra is that m in eq. (1) is significantly smaller than n, or that the number of large components makes it effectively small on the level of large entries w of W. Such an effective reduction of m (M = m_eff) is then expressed by the following distribution P(w) of the large off-diagonal matrix elements in the case they are still generated by random-like processes

P(w) = |w|^{(M−1)/2} K_{(M−1)/2}(|w|) / (2^{(M−1)/2} Γ(M/2) √π) —– (3)

where K stands for the modified Bessel function. Asymptotically, for large w, this leads to P(w) ∼ e^{−|w|} |w|^{M/2−1}, and thus reflects an enhanced probability of appearance of a few large off-diagonal matrix elements as compared to a Gaussian distribution. Consistent with the central limit theorem, the distribution quickly converges to a Gaussian with increasing M.
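
To see the eigenvalue segregation implied by the decomposition (2) at work, here is a minimal numerical sketch (Python with numpy; the rank k, the strength of the collective term and the Gaussian model for the noise part are illustrative choices, not taken from the text): k large eigenvalues separate cleanly from a bulk of n − k small ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 3

# W_c: a "collective", rank-k component built from k outer products of random vectors
vecs = rng.normal(size=(k, n))
Wc = sum(5.0 * np.outer(v, v) / n for v in vecs)

# W_r: a random (noise-like) symmetric correction with a semi-circle bulk
G = rng.normal(size=(n, n)) / np.sqrt(n)
Wr = (G + G.T) / 2

W = Wr + Wc                                    # eq. (2)
eigs = np.sort(np.linalg.eigvalsh(W))[::-1]
print("largest k+2 eigenvalues:", np.round(eigs[:k + 2], 2))
# the first k eigenvalues stand well above the semi-circle bulk formed by the remaining n - k
```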

Based on several examples of natural complex dynamical systems, like the strongly interacting Fermi systems, the human brain and the financial markets, one could systematize evidence that such effects are indeed common to all the phenomena that intuitively can be qualified as complex.

Philosophy of Quantum Entanglement and Topology


Many-body entanglement is essential for the existence of topological order in condensed matter systems, and understanding many-body entanglement provides a promising approach to understanding which topological orders can exist. It also leads to tensor network descriptions of many-body wave functions, opening a path towards the classification of phases of quantum matter. To quantify entanglement, the generic many-body setting is reduced to a bipartition of the system into two parts A and B. Consider the entanglement entropy

S(A) ≡ −tr(ρA log2(ρA)) —– (1)

where ρA ≡ trB |ΨAB⟩⟨ΨAB| is the density matrix for part A, and where we assumed that the whole system is in a pure state |ΨAB⟩.

Specializing |ΨAB⟩ to a ground state of a local Hamiltonian in D spatial dimensions, the central observation is that the entanglement between a region A of size L^D and the (much larger) rest B of the lattice is then often proportional to the size |σ(A)| of the boundary σ(A) of region A,

S(A) ≈ |σ(A)| ≈ L^{D−1}  —– (2)

where a correction of −1 would appear in the presence of topological order (as in the toric code). This boundary law is observed in the ground state of gapped local Hamiltonians in arbitrary dimension D, as well as in some gapless systems in D > 1 dimensions. Instead, in gapless systems in D = 1 dimensions, as well as in certain gapless systems in D > 1 dimensions (namely systems with a Fermi surface of dimension D − 1), the ground state entanglement displays a logarithmic correction to the boundary law,

S(A) ≈ |σ(A)| log2(|σ(A)|) ≈ L^{D−1} log2(L) —– (3)
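
A sketch (Python with numpy) of the logarithmic behaviour (3) in the simplest gapless D = 1 case, a half-filled free-fermion chain, where the block entropy can be computed from the two-point correlation matrix; the tight-binding model and the filling are chosen purely for illustration and are not taken from the text.

```python
import numpy as np

def block_entropy(L):
    """Entanglement entropy (in bits) of a block of L sites of an infinite
    half-filled free-fermion chain, via the correlation-matrix method."""
    idx = np.arange(L)
    d = idx[:, None] - idx[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        C = np.where(d == 0, 0.5, np.sin(np.pi * d / 2) / (np.pi * d))  # <c_m^dag c_n>
    nu = np.linalg.eigvalsh(C)
    nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]
    return float(np.sum(-nu * np.log2(nu) - (1 - nu) * np.log2(1 - nu)))

for L in (8, 16, 32, 64, 128):
    print(L, round(block_entropy(L), 3))
# S(L) grows roughly as (1/3) * log2(L) plus a constant: the D = 1 logarithmic "correction"
# (the boundary sigma(A) is just two points here, so there is no L^{D-1} growth to speak of).
```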

At an intuitive level, the boundary law of (2) is understood as resulting from entanglement that involves degrees of freedom located near the boundary between regions A and B. Also intuitively, the logarithmic correction of (3) is argued to have its origin in contributions to entanglement from degrees of freedom that are further away from the boundary between A and B. Given the entanglement between A and B, one introduces an entanglement contour sA that assigns a real number sA(i) ≥ 0 to each lattice site i contained in region A, such that the sum of sA(i) over all the sites i ∈ A is equal to the entanglement entropy S(A),

S(A) = Σi∈A sA(i) —– (4) 

and that aims to quantify how much the degrees of freedom in site i participate in/contribute to the entanglement between A and B. As Chen and Vidal put it, the entanglement contour sA(i) is not equivalent to the von Neumann entropy S(i) ≡ −tr ρ(i) log2 ρ(i) of the reduced density matrix ρ(i) at site i. Notice that, indeed, the von Neumann entropy of individual sites in region A is not additive in the presence of correlations between the sites, and therefore generically

S(A) ≠ Σi∈A S(i)

whereas the entanglement contour sA(i) is required to fulfil (4). Relatedly, when site i is only entangled with neighboring sites contained within region A, and it is thus uncorrelated with region B, the entanglement contour sA(i) will be required to vanish, whereas the one-site von Neumann entropy S(i) still takes a non-zero value due to the presence of local entanglement within region A.
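
To make the notion concrete, here is a hedged sketch (Python with numpy) of one natural mode-based contour for the free-fermion block used above: each eigenmode of the block correlation matrix contributes its binary entropy, distributed over sites according to the mode weight |u_k(i)|^2. By construction this candidate satisfies positivity and the normalization (4), but it is only one plausible choice and is not necessarily the contour singled out by Chen and Vidal.

```python
import numpy as np

def correlation_matrix(L):
    idx = np.arange(L)
    d = idx[:, None] - idx[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d == 0, 0.5, np.sin(np.pi * d / 2) / (np.pi * d))

def binary_entropy(nu):
    nu = np.clip(nu, 1e-15, 1 - 1e-15)
    return -nu * np.log2(nu) - (1 - nu) * np.log2(1 - nu)

L = 32
C = correlation_matrix(L)
nu, U = np.linalg.eigh(C)                            # modes k: eigenvalues nu_k, eigenvectors U[:, k]
s_contour = (np.abs(U) ** 2) @ binary_entropy(nu)    # s_A(i) = sum_k |u_k(i)|^2 H(nu_k)

S_A = binary_entropy(nu).sum()
print(np.isclose(s_contour.sum(), S_A))              # normalization: sum_i s_A(i) = S(A)
print(np.round(s_contour[:8], 3))                    # sites near the ends of the block typically dominate
```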

As an aside, in the traditional approach to quantum mechanics, a physical system is described in a Hilbert space: observables correspond to self-adjoint operators and statistical operators are associated with the states. In fact, a statistical operator describes a mixture of pure states. Pure states are the really physical states and they are given by rank one statistical operators, or equivalently by rays of the Hilbert space. Von Neumann associated an entropy quantity to a statistical operator, and his argument was a gedanken experiment on the grounds of phenomenological thermodynamics.

Let us consider a gas of N (≫ 1) molecules in a rectangular box K. Suppose that the gas behaves like a quantum system and is described by a statistical operator D, which is a mixture λ|φ1⟩⟨φ1| + (1 − λ)|φ2⟩⟨φ2|, where the |φi⟩ (i = 1, 2) are state vectors. We may take λN molecules in the pure state φ1 and (1 − λ)N molecules in the pure state φ2. On the basis of phenomenological thermodynamics, we assume that if φ1 and φ2 are orthogonal, then there is a wall that is completely permeable for the φ1-molecules and isolating for the φ2-molecules. We add an equally large empty rectangular box K′ to the left of the box K and we replace the common wall with two new walls. Wall (a), the one to the left, is impenetrable, whereas the one to the right, wall (b), lets through the φ1-molecules but keeps back the φ2-molecules. We add a third wall (c) opposite to (b) which is semipermeable, transparent for the φ2-molecules and impenetrable for the φ1-ones. Then we push slowly (a) and (c) to the left, maintaining their distance. During this process the φ1-molecules are pressed through (b) into K′ and the φ2-molecules diffuse through wall (c) and remain in K. No work is done against the gas pressure, no heat is developed. Replacing the walls (b) and (c) with a rigid absolutely impenetrable wall and removing (a), we restore the boxes K and K′ and succeed in the separation of the φ1-molecules from the φ2-ones without any work being done, without any temperature change and without evolution of heat.

The entropy of the original D-gas (with density N/V) must be the sum of the entropies of the φ1- and φ2-gases (with densities λN/V and (1 − λ)N/V, respectively). If we compress the gases in K and K′ to the volumes λV and (1 − λ)V, respectively, keeping the temperature T constant by means of a heat reservoir, the entropy change amounts to κλN log λ and κ(1 − λ)N log(1 − λ), respectively. Indeed, we have to add heat in the amount of λiNκT log λi (< 0) when the φi-gas is compressed, and dividing by the temperature T we get the change of entropy. Finally, mixing the φ1- and φ2-gases of identical density we obtain a D-gas of N molecules in a volume V at the original temperature. If S0(ψ, N) denotes the entropy of a ψ-gas of N molecules (in a volume V and at the given temperature), we conclude that

S0(φ1,λN)+S0(φ2,(1−λ)N) = S0(D, N) + κλN log λ + κ(1 − λ)N log(1 − λ) —– (5)

must hold, where κ is Boltzmann’s constant. Assuming that S0(ψ,N) is proportional to N and dividing by N we have

λS(φ1) + (1 − λ)S(φ2) = S(D) + κλ log λ + κ(1 − λ) log(1 − λ) —– (6)

where S is a certain thermodynamical entropy quantity (relative to the fixed temperature and molecule density). We arrived at the mixing property of entropy, but we should not forget about the initial assumption: φ1 and φ2 are supposed to be orthogonal. Instead of a two-component mixture, von Neumann operated with an infinite mixture, which does not make a big difference, and he concluded that

S (Σiλi|φi⟩⟨φi|) = ΣiλiS(|φi⟩⟨φi|) − κ Σiλi log λi —– (7)
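
A quick numerical check (Python with numpy) of the mixing formula (7) in the simplest case, a two-component mixture of orthogonal pure states, for which S(|φi⟩⟨φi|) = 0 and the entropy reduces to −κ Σi λi log λi; κ is set to 1 and the natural logarithm is used, purely for illustration.

```python
import numpy as np

def von_neumann_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-15]
    return float(-(w * np.log(w)).sum())       # natural log, kappa = 1

lam = 0.3
phi1 = np.array([1.0, 0.0])                    # orthogonal state vectors
phi2 = np.array([0.0, 1.0])
D = lam * np.outer(phi1, phi1) + (1 - lam) * np.outer(phi2, phi2)

lhs = von_neumann_entropy(D)
rhs = -(lam * np.log(lam) + (1 - lam) * np.log(1 - lam))   # eq. (7) with the pure-state terms = 0
print(np.isclose(lhs, rhs))
```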

Von Neumann’s argument does not require that the statistical operator D is a mixture of pure states. What is really needed is the property D = λD1 + (1 − λ)D2 in such a way that the possible mixed states D1 and D2 are disjoint. D1 and D2 are disjoint in the thermodynamical sense when there is a wall which is completely permeable for the molecules of a D1-gas and isolating for the molecules of a D2-gas. In other words, if the mixed states D1 and D2 are disjoint, then this should be demonstrated by a certain filter. Mathematically, the disjointness of D1 and D2 is expressed in the orthogonality of the eigenvectors corresponding to nonzero eigenvalues of the two density matrices. The essential point is the remark that (6) must hold also in a more general situation, when possibly the states do not correspond to density matrices but orthogonality of the states makes sense:

λS(D1) + (1 − λ)S(D2) = S(D) + κλ log λ + κ(1 − λ) log(1 − λ) —– (8)

(7) reduces the determination of the (thermodynamical) entropy of a mixed state to that of pure states. The so-called Schatten decomposition Σi λi|φi⟩⟨φi| of a statistical operator is not unique even if ⟨φi , φj ⟩ = 0 is assumed for i ≠ j . When λi is an eigenvalue with multiplicity, then the corresponding eigenvectors can be chosen in many ways. If we expect the entropy S(D) to be independent of the Schatten decomposition, then we are led to the conclusion that S(|φ⟩⟨φ|) must be independent of the state vector |φ⟩. This argument assumes that there are no superselection sectors, that is, any vector of the Hilbert space can be a state vector. On the other hand, von Neumann wanted to avoid degeneracy of the spectrum of a statistical operator. Von Neumann’s proof of the property that S(|φ⟩⟨φ|) is independent of the state vector |φ⟩ was different. He did not want to refer to a unitary time development sending one state vector to another, because that argument requires great freedom in choosing the energy operator H. Namely, for any |φ1⟩ and |φ2⟩ we would need an energy operator H such that

e^{itH}|φ1⟩ = |φ2⟩

This process would be reversible. Anyways, that was quite a digression.

Entanglement between A and B is naturally described by the coefficients {pα} appearing in the Schmidt decomposition of the state |ΨAB⟩,

|ΨAB⟩ = Σα √pα |ΨAα⟩ ⊗ |ΨBα⟩ —– (9)

These coefficients {pα} correspond to the eigenvalues of the reduced density matrix ρA, whose spectral decomposition reads

ρA = ΣαpAα⟩⟨ΨAα—– (10)

defining a probability distribution, pα ≥ 0, Σα pα = 1, in terms of which the von Neumann entropy S(A) is

S(A) = −Σα pα log2(pα) —– (11)
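
A short sketch (Python with numpy; the dimensions dA, dB and the random state are illustrative) tying (9)–(11) together: the Schmidt coefficients √pα are the singular values of the coefficient matrix of |ΨAB⟩, the pα are the eigenvalues of ρA, and both give the same S(A).

```python
import numpy as np

rng = np.random.default_rng(7)
dA, dB = 4, 6
psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
psi /= np.linalg.norm(psi)                        # |Psi_AB> as a dA x dB coefficient matrix

# Schmidt decomposition (9): the singular values are sqrt(p_alpha)
p_schmidt = np.linalg.svd(psi, compute_uv=False) ** 2

# Reduced density matrix (10) and its spectrum
rho_A = psi @ psi.conj().T
p_rho = np.sort(np.linalg.eigvalsh(rho_A))[::-1]

S = lambda p: float(-(p[p > 1e-15] * np.log2(p[p > 1e-15])).sum())   # eq. (11)
print(np.allclose(np.sort(p_schmidt), np.sort(p_rho)))               # same probability distribution
print(S(p_schmidt), S(p_rho))                                        # same entanglement entropy
```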

On the other hand, the Hilbert space VA of region A factorizes as the tensor product

VA = ⊗ i∈A V(i) —– (12)

where V(i) describes the local Hilbert space of site i. The reduced density matrix ρA in (10) and the factorization (12) define two inequivalent structures within the vector space VA of region A. The entanglement contour sA is a function from the set of sites i ∈ A to the real numbers,

sA : A → ℜ —– (13)

that attempts to relate these two structures, by distributing the von Neumann entropy S(A) of (11) among the sites i ∈ A. According to Chen and Vidal, an entanglement contour should satisfy the following five conditions/requirements.

a. Positivity: sA(i) ≥ 0

b. Normalization: Σi∈AsA(i) = S(A) 

These constraints amount to defining a probability distribution pi ≡ sA(i)/S(A) over the sites i ∈ A, with pi ≥ 0 and Σi pi = 1, such that sA(i) = pi S(A); they do not, however, require sA to inform us about the spatial structure of entanglement in A, but only relate it to the density matrix ρA through its total von Neumann entropy S(A).

c. Symmetry: if T is a symmetry of ρA, that is TρAT† = ρA, and T exchanges site i with site j, then sA(i) = sA(j).

This condition ensures that the entanglement contour is the same on two sites i and j of region A that, as far as entanglement is concerned, play an equivalent role in region A. It uses the (possible) presence of a spatial symmetry, such as invariance under space reflection, or under discrete translations/rotations, to define an equivalence relation in the set of sites of region A, and requires that the entanglement contour be constant within each resulting equivalence class. Notice, however, that this condition does not tell us whether the entanglement contour should be large or small on a given site (or equivalence class of sites). In particular, the three conditions above are satisfied by the canonical choice sA(i) = S(A)/|A|, that is, a flat entanglement contour over the |A| sites contained in region A, which once more does not tell us anything about the spatial structure of the von Neumann entropy in ρA.

The remaining conditions refer to subregions within region A, instead of referring to single sites. It is therefore convenient to (trivially) extend the definition of entanglement contour to a set X of sites in region A, X ⊆ A, with vector space

VX = ⊗i∈X V(i) —– (14)

as the sum of the contour over the sites in X,

sA(X) ≡  Σi∈XsA(i) —– (15)

It follows from this extension that for any two disjoint subsets X1, X2 ⊆ A, with X1 ∩ X2 = ∅, the contour is additive,

sA(X1 ∪ X2) = sA(X1) + sA(X2—– (16)

In particular, condition b can now be recast as sA(A) = S(A). Similarly, if X1, X2 ⊆ A are such that all the sites of X1 are also contained in X2, X1 ⊆ X2, then the contour must be larger on X2 than on X1 (monotonicity of sA(X)),

sA(X1) ≤ sA(X2) if X1 ⊆ X2 —– (17)

d. Invariance under local unitary transformations: if the state |Ψ′AB⟩ is obtained from the state |ΨAB⟩ by means of a unitary transformation UX that acts on a subset X ⊆ A of sites of region A, that is |Ψ′AB⟩ ≡ UX|ΨAB⟩, then the entanglement contour sA(X) must be the same for the state |ΨAB⟩ and for the state |Ψ′AB⟩.

That is, the contribution of region X to the entanglement between A and B is not affected by a redefinition of the sites or a change of basis within region X. Notice that it follows that UX also cannot change sA(X′), where X′ ≡ A − X is the complement of X in A.

To motivate our last condition, let us consider a state |ΨAB⟩ that factorizes as the product

|ΨAB⟩ = |ΨXXB⟩ ⊗ |ΨX′X′B⟩ —– (18)

where X ⊆ A and XB ⊆ B are subsets of sites in regions A and B, respectively, and X’ ⊆ A and X’B ⊆ B are their complements within A and B, so that

VA = VX ⊗ VX’, —– (19)

VB = VXB ⊗ VX’B —– (20)

in this case the reduced density matrix ρA factorizes as ρA = ρX ⊗ ρX’ and the entanglement entropy is additive,

S(A) = S(X) + S(X’) —– (21)

Since the entanglement entropy S(X) of subregion X is well-defined, let the entanglement contour over X be equal to it,

sA(X) = S(X) —– (22)

The last condition refers to a more general situation where, instead of obeying (18), the state |ΨAB⟩ factorizes as the product

|ΨAB⟩ = |ΨΩAΩB⟩ ⊗ |ΨΩ′AΩ′B⟩ —– (23)

with respect to some decomposition of VA and VB as tensor products of factor spaces,

VA = VΩA ⊗ VΩ’A, —– (24)

VB = VΩB ⊗ VΩ’B —– (25)

Let S(ΩA) denote the entanglement entropy supported on the first factor space VΩA of  VA, that is

S(ΩA) = −tr(ρΩA log2(ρΩA)) —– (26)

ρΩA ≡ trΩB |ΨΩAΩB⟩⟨ΨΩAΩB| —– (27)

and let X ⊆ A be a subset of sites whose vector space VX is completely contained in VΩA , meaning that VΩA can be further decomposed as

VΩA ≈ VX ⊗ VX′ —– (28)

e. Upper bound: if a subregion X ⊆ A is contained in a factor space ΩA (24 and 28), then the entanglement contour of subregion X cannot be larger than the entanglement entropy S(ΩA) (26),

sA(X) ≤ S(ΩA) —– (29)

This condition says that whenever we can ascribe a concrete value S(ΩA) of the entanglement entropy to a factor space ΩA within region A (that is, whenever the state |ΨAB⟩ factorizes as in (23)), then the entanglement contour has to be consistent with this fact, meaning that the contour sA(X) of any subregion X contained in the factor space ΩA is upper bounded by S(ΩA).

Let us consider a particular case of condition e. When a region X ⊆ A is not at all correlated with B, that is ρXB = ρX ⊗ ρB, then it can be seen that X is contained in some factor space ΩA such that the state |ΨΩAΩB⟩ itself further factorizes as |ΨΩA⟩ ⊗ |ΨΩB⟩, so that (23) becomes

|ΨAB⟩ = |ΨΩA⟩ ⊗ |ΨΩB⟩ ⊗ |ΨΩ′AΩ′B⟩ —– (30)

and S(ΩA) = 0. Condition e then requires that sA(X) = 0, that is

ρXB = ρX ⊗ ρB ⟹ sA(X) = 0 —– (31)

reflecting the fact that a region X ⊆ A that is not correlated with B does not contribute at all to the entanglement between A and B. Finally, the upper bound in e can alternatively be stated as a lower bound. Let Y ⊆ A be a subset of sites whose vector space VY completely contains VΩA in (24), meaning that VY can be further decomposed as

VY ≈ VΩA ⊗ VΩ′A —– (32)

e’. Lower bound: The entanglement contour of subregion Y is at least equal to the entanglement entropy S(ΩA) in (26),

sA(Y) ≥ S(ΩA) —– (33)

Conditions a-e (e’) are not expected to completely determine the entanglement contour. In other words, there probably are inequivalent functions sA : A → ℜ that conform to all the conditions above. So, where do we get philosophical from here? It is by following the entanglement contour of selected states, for instance through the time evolution ensuing a global or a local quantum quench, that one can characterize entanglement between regions rather than within regions, revealing a detailed real-space structure of the entanglement of a region A and its dynamics, well beyond what is accessible from the entanglement entropy alone.

But, that isn’t all. Questions of how to quantify entanglement and non-locality, and the need to clarify the relationship between them, are important not only conceptually, but also practically, insofar as entanglement and non-locality seem to be different resources for the performance of quantum information processing tasks. Whether in a given quantum information protocol (cryptography, teleportation, an algorithm . . .) it is better to look for the largest amount of entanglement or the largest amount of non-locality becomes decisive. The ever-evolving field of quantum information theory is devoted to using the principles and laws of quantum mechanics to aid in the acquisition, transmission, and processing of information. In particular, it seeks to harness the peculiarly quantum phenomena of entanglement, superposition, and non-locality to perform all sorts of novel tasks, such as enabling computations that operate exponentially faster or more efficiently than their classical counterparts (via quantum computers) and providing unconditionally secure cryptographic systems for the transfer of secret messages over public channels (via quantum key distribution). By contrast, classical information theory is concerned with the storage and transfer of information in classical systems. It uses the “bit” as the fundamental unit of information, where the system capable of representing a bit can take on one of two values (typically 0 or 1). Classical information theory is based largely on the concept of information formalized by Claude Shannon in the late 1940s. Quantum information theory, which was later developed in analogy with classical information theory, is concerned with the storage and processing of information in quantum systems, such as the photon, electron, quantum dot, or atom. Instead of using the bit, however, it defines the fundamental unit of quantum information as the “qubit.” What makes the qubit different from a classical bit is that the smallest system capable of storing a qubit, the two-level quantum system, not only can take on the two distinct values |0⟩ and |1⟩, but can also be in a state of superposition of these two states: |ψ⟩ = α0|0⟩ + α1|1⟩.

Quantum information theory has opened up a whole new range of philosophical and foundational questions in quantum cryptography or quantum key distribution, which involves using the principles of quantum mechanics to ensure secure communication. Some quantum cryptographic protocols make use of entanglement to establish correlations between systems that would be lost upon eavesdropping. Moreover, a quantum principle known as the no-cloning theorem prohibits making identical copies of an unknown quantum state. In the context of a C∗-algebraic formulation,  quantum theory can be characterized in terms of three information-theoretic constraints: (1) no superluminal signaling via measurement, (2) no cloning (for pure states) or no broadcasting (mixed states), and (3) no unconditionally secure bit commitment.

Entanglement does not refute the principle of locality. A sketch of the sort of experiment commonly said to refute locality runs as follows. Suppose that you have two electrons with entangled spin. For each electron you can measure the spin along the X, Y or Z direction. If you measure X on both electrons, then you get opposite values, likewise for measuring Y or Z on both electrons. If you measure X on one electron and Y or Z on the other, then you have a 50% probability of a match. And if you measure Y on one and Z on the other, the probability of a match is 50%. The crucial issue is that whether you find a correlation when you do the comparison depends on whether you measure the same quantity on each electron. Bell’s theorem just explains that the extent of this correlation is greater than a local theory would allow if the measured quantities were represented by stochastic variables (i.e. – numbers picked out of a hat). This fact is often misrepresented as implying that quantum mechanics is non-local. But in quantum mechanics, systems are not characterised by stochastic variables, but, rather, by Hermitian operators. There is an entirely local explanation of how the correlations arise in terms of properties of systems represented by such operators. But, another answer to such violations of the principle of locality could also be “Yes, unless you get really obsessive about it.” It has been formally proven that one can have determinacy in a model of quantum dynamics, or one can have locality, but cannot have both. If one gives up the determinacy of the theory in various ways, one can imagine all kinds of ‘planned flukes’ like the notion that the experiments that demonstrate entanglement leak information and pre-determine the environment to make the coordinated behavior seem real. Since this kind of information shaping through distributed uncertainty remains a possibility, folks can cling to locality until someone actually manages something like what those authors are attempting, or we find it impossible. If one gives up locality instead, entanglement does not present a problem, the theory of relativity does. Because the notion of a frame of reference is local. Experiments on quantum tunneling that violate the constraints of the speed of light have been explained with the idea that probabilistic partial information can ‘lead’ real information faster than light by pushing at the vacuum underneath via the ‘Casimir Effect’. If both of these make sense, then the information carried by the entanglement when it is broken would be limited as the particles get farther apart — entanglements would have to spontaneously break down over time or distance of separation so that the probabilities line up. This bodes ill for our ability to find entangled particles from the Big Bang, which seems to be the only prospect in progress to debunk the excessively locality-focussed.
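
The match probabilities described at the start of this paragraph can be reproduced directly from the quantum formalism. Here is a short sketch (Python with numpy; the spin singlet is used as the standard example of two electrons with entangled spin): for a singlet, measuring along the same axis gives a match probability of 0 (perfectly anti-correlated outcomes), while measuring along two different axes among X, Y, Z gives 1/2.

```python
import numpy as np

# Pauli matrices and the two-electron singlet state (|01> - |10>)/sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = {"X": sx, "Y": sy, "Z": sz}
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def match_probability(axis1, axis2):
    """P(the two measured spin values agree) for measurements along axis1 and axis2."""
    corr = np.real(singlet.conj() @ np.kron(pauli[axis1], pauli[axis2]) @ singlet)
    return 0.5 * (1 + corr)            # P(agree) = (1 + <sigma_a x sigma_b>)/2 for +/-1 outcomes

for a, b in [("X", "X"), ("Y", "Y"), ("Z", "Z"), ("X", "Y"), ("X", "Z"), ("Y", "Z")]:
    print(a, b, round(match_probability(a, b), 3))
# same axis -> 0.0 (opposite values), different axes -> 0.5, as described in the text
```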

But, much of the work remains undone and this is to be continued…..