Quantum Informational Biochemistry. Thought of the Day 71.0


A natural extension of the information-theoretic Darwinian approach to biological systems is obtained by taking into account that biological systems are, at their most fundamental level, constituted by physical systems. It is therefore through the interaction of elementary physical systems that the biological level is reached, after the size of the system increases by several orders of magnitude and only for certain associations of molecules – biochemistry.

In particular, this viewpoint lies at the foundation of the “quantum brain” project established by Hameroff and Penrose (Shadows of the Mind). They tried to lift quantum physical processes associated with microsystems composing the brain to the level of consciousness, with microtubules considered as the basic quantum information processors. This project, as well as the general project of reducing biology to quantum physics, has its strong and weak sides. One of the main problems is that decoherence should quickly wash out quantum features such as superposition and entanglement. (Hameroff and Penrose would disagree with this statement; they try to develop models of a hot and macroscopic brain that nevertheless preserves the quantum features of its elementary micro-components.)

However, even if we assume that microscopic quantum physical behavior disappears with increasing size and number of atoms due to decoherence, it seems that the basic quantum features of information processing can survive in macroscopic biological systems (operating on temporal and spatial scales which are essentially different from the scales of the quantum micro-world). The associated information processor for a mesoscopic or macroscopic biological system would be a network of increasing complexity formed by the elementary probabilistic classical Turing machines of its constituents. Such a composite network of processors can exhibit special behavioral signatures which are similar to quantum ones. We call such biological systems quantum-like. In a series of works, Asano and others (Quantum Adaptivity in Biology: From Genetics to Cognition) developed an advanced formalism for modeling the behavior of quantum-like systems, based on the theory of open quantum systems and the more general theory of adaptive quantum systems. This formalism is known as quantum bioinformatics.

The present quantum-like model of biological behavior is of the operational type (as is the standard quantum mechanical model endowed with the Copenhagen interpretation). It cannot explain the physical and biological processes behind the quantum-like information processing. Clarifying the origin of quantum-like biological behavior is related, in particular, to understanding the nature of entanglement and its role in the process of interaction and cooperation in physical and biological systems. Qualitatively, the information-theoretic Darwinian approach supplies an interesting possibility of explaining the generation of quantum-like information processors in biological systems; hence it can serve as the bio-physical background for quantum bioinformatics. There is an intriguing point here: if the information-theoretic Darwinian approach is right, then it would be possible to produce quantum information from optimal flows of past, present and anticipated classical information in any classical information processor endowed with a sufficiently complex program. Thus the unified evolutionary theory would supply a physical basis for Quantum Information Biology.

Evolutionary Game Theory. Note Quote


In classical evolutionary biology the fitness landscape for possible strategies is considered static. Optimization theory is therefore the usual tool for analyzing the evolution of strategies, which consequently tend to climb the peaks of the static landscape. In more realistic scenarios, however, the evolution of populations modifies the environment, so that the fitness landscape becomes dynamic. In other words, the maxima of the fitness landscape depend on the number of specimens that adopt each strategy (a frequency-dependent landscape). In this case, when the evolution depends on the agents’ actions, game theory is the adequate mathematical tool to describe the process. But this is precisely the scheme in which the evolving physical laws (i.e. algorithms or strategies) are generated from agent-agent interactions (a bottom-up process) submitted to natural selection.

The concept of evolutionarily stable strategy (ESS) is central to evolutionary game theory. An ESS is defined as a strategy that cannot be displaced by any alternative strategy when it is followed by the great majority – almost all – of the systems in a population. In general, an ESS is not necessarily optimal; however, it might be assumed that in the last stages of evolution – before the quantum equilibrium is achieved – the fitness landscape of possible strategies could be considered static, or at least slowly varying. In this simplified case an ESS would be the strategy with the highest payoff, thereby satisfying an optimization criterion. Different ESSs could exist in other regions of the fitness landscape.

In the information-theoretic Darwinian approach it seems plausible to adopt, as the optimization criterion, the optimization of the information flows of the system. A set of three regulating principles could be:

Structure: The complexity of the system is optimized (maximized). The definition adopted for complexity is Bennett’s logical depth, which for a binary string is the time needed to execute the minimal program that generates that string. There is no generally accepted definition of complexity, nor is there a consensus on the relation between the increase of complexity – for a given definition – and Darwinian evolution. However, there seems to be some agreement on the fact that, in the long term, Darwinian evolution should drive an increase in complexity in the biological realm for an adequate natural definition of this concept. The complexity of a system at time t in this theory would then be the Bennett logical depth of the program stored at time t in its Turing machine. The increase of complexity is a characteristic of Lamarckian evolution, and it is also admitted that the trend of evolution in the Darwinian theory is in the direction in which complexity grows, although whether this tendency depends on the timescale – or some other factors – is still not very clear.

Dynamics: The information outflow of the system is optimized (minimized). The information here is the Fisher information measure for the probability density function of the position of the system. According to S. A. Frank, natural selection acts by maximizing the Fisher information within a Darwinian system. As a consequence, assuming that the flow of information between a system and its surroundings can be modeled as a zero-sum game, Darwinian systems would follow the corresponding dynamics.
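As a purely illustrative aside (not part of the original programme), the Fisher information invoked here can be estimated numerically for a simple parametric family. The sketch below assumes a Gaussian location family, where the exact Fisher information is 1/σ²; the parameter names and sample sizes are hypothetical.

```python
# Minimal sketch: Monte-Carlo estimate of the Fisher information
# I(theta) = E[(d/dtheta log p(x|theta))^2] for a Gaussian location family.
# The exact answer is 1/sigma^2, which the estimate should reproduce.
import numpy as np

def fisher_information(log_p, theta, samples, eps=1e-5):
    """Average squared score, with the score estimated by central differences."""
    score = (log_p(samples, theta + eps) - log_p(samples, theta - eps)) / (2 * eps)
    return np.mean(score ** 2)

sigma = 2.0
log_p = lambda x, th: -0.5 * ((x - th) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=sigma, size=200_000)   # samples drawn at theta = 0
print(fisher_information(log_p, 0.0, x))             # ~ 1/sigma^2 = 0.25
```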

Interaction: The interaction between two subsystems optimizes (maximizes) the complexity of the total system. The complexity is again equated to Bennett’s logical depth. The role of Interaction is central in the generation of composite systems, and therefore in the structure of the information processor of composite systems, which results from the logical interconnections among the processors of the constituents. There is an enticing option of defining the complexity of a system in contextual terms, as the capacity of a system for anticipating the behavior at t + ∆t of the surrounding systems included in the sphere of radius r centered at the position X(t) occupied by the system. This definition would directly lead to the maximization of the predictive power of the systems that maximized their complexity. However, this magnitude would definitely be very difficult even to estimate, in principle much more so than the usual definitions of complexity.

Quantum behavior of microscopic systems should now emerge from the ESS. In other terms, the postulates of quantum mechanics should be deduced from the application of the three regulating principles to physical systems endowed with an information processor.

Let us apply Structure. It is reasonable to consider that maximizing the complexity of a system would in turn maximize its predictive power, and this optimal statistical-inference capacity would plausibly induce the complex Hilbert space structure for the system’s space of states. Let us now consider Dynamics. This is basically the application of the principle of minimum Fisher information, or maximum Cramér-Rao bound, to the probability distribution function for the position of the system. The concept of entanglement seems to be decisive for studying the generation of composite systems, in this theory through applying Interaction. The theory admits a simple model that characterizes the entanglement between two subsystems as the mutual exchange of randomizers (R1, R2), programs (P1, P2) – with their respective anticipation modules (A1, A2) – and wave functions (Ψ1, Ψ2). In this way, both subsystems can anticipate not only the behavior of their corresponding surrounding systems, but also that of the environment of their partner entangled subsystem. In addition, entanglement can be considered a natural phenomenon in this theory, a consequence of the tendency to increase complexity, and therefore, in a certain sense, experimental support for the theory.

In addition, the information-theoretic Darwinian approach is a minimalist realist theory – every system follows a continuous trajectory in time, as in Bohmian mechanics – and a local theory in physical space. In this theory apparent nonlocality, as in violations of Bell’s inequality, would be an artifact of the anticipation module in information space, although randomness would necessarily be intrinsic to nature through the random number generator methodologically associated with every fundamental system at t = 0, an essential ingredient to start and fuel – through variation – Darwinian evolution. As time increases, random events determined by the random number generators would progressively be replaced by causal events determined by the evolving programs that gradually take control of the elementary systems. Randomness would be displaced by causality as physical Darwinian evolution gave rise to the quantum equilibrium regime, but not completely, since randomness would play a crucial role in the optimization of strategies – and thus of information flows – as game theory states.

Black Hole Complementarity: The Case of the Infalling Observer

The four postulates of black hole complementarity are:

Postulate 1: The process of formation and evaporation of a black hole, as viewed by a distant observer, can be described entirely within the context of standard quantum theory. In particular, there exists a unitary S-matrix which describes the evolution from infalling matter to outgoing Hawking-like radiation.

Postulate 2: Outside the stretched horizon of a massive black hole, physics can be described to good approximation by a set of semi-classical field equations.

Postulate 3: To a distant observer, a black hole appears to be a quantum system with discrete energy levels. The dimension of the subspace of states describing a black hole of mass M is the exponential of the Bekenstein entropy S(M).

We take as implicit in postulate 2 that the semi-classical field equations are those of a low energy effective field theory with local Lorentz invariance. These postulates do not refer to the experience of an infalling observer; that experience is usually stated as a ‘certainty,’ which for uniformity we label as a further postulate:

Postulate 4: A freely falling observer experiences nothing out of the ordinary when crossing the horizon.

To be more specific, we will assume that postulate 4 means both that any low-energy dynamics this observer can probe near his worldline is well-described by familiar Lorentz-invariant effective field theory and also that the probability for an infalling observer to encounter a quantum with energy E ≫ 1/rs (measured in the infalling frame) is suppressed by an exponentially decreasing adiabatic factor as predicted by quantum field theory in curved spacetime. We will argue that postulates 1, 2, and 4 are not consistent with one another for a sufficiently old black hole.

Consider a black hole that forms from collapse of some pure state and subsequently decays. Dividing the Hawking radiation into an early part and a late part, postulate 1 implies that the state of the Hawking radiation is pure,

|Ψ⟩ = Σi |i⟩E ⊗ |i⟩L —– (1)

Here we have taken an arbitrary complete basis |i⟩L for the late radiation. We use postulates 1, 2, and 3 to make the division after the Page time when the black hole has emitted half of its initial Bekenstein-Hawking entropy; we will refer to this as an ‘old’ black hole. The number of states in the early subspace will then be much larger than that in the late subspace and, as a result, for typical states |Ψ⟩ the reduced density matrix describing the late-time radiation is close to the identity. We can therefore construct operators acting on the early radiation, whose action on |Ψ⟩ is equal to that of a projection operator onto any given subspace of the late radiation.
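The statement that ρ of the late radiation is close to the identity can be checked numerically under the standard assumption that a ‘typical’ state is modeled by a Haar-random pure state; the dimensions below are illustrative only.

```python
# Sketch (illustrative): for a random pure state |Psi> on E (x) L with
# dim(E) >> dim(L), the reduced density matrix of the late radiation L
# is close to maximally mixed, as claimed above for an 'old' black hole.
import numpy as np

rng = np.random.default_rng(1)
dim_E, dim_L = 512, 4                       # 'early' much larger than 'late'

psi = rng.normal(size=(dim_E, dim_L)) + 1j * rng.normal(size=(dim_E, dim_L))
psi /= np.linalg.norm(psi)                  # random pure state, as a coefficient matrix

rho_L = psi.T @ psi.conj()                  # trace over the early subspace E
dist = np.linalg.norm(rho_L - np.eye(dim_L) / dim_L)
print(dist)                                 # small, and shrinking as dim_E grows
```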

To simplify the discussion, we treat gray-body factors by taking the transmission coefficients T to have unit magnitude for a few low partial waves and to vanish for higher partial waves. Since the total radiated energy is finite, this allows us to think of the Hawking radiation as defining a finite-dimensional Hilbert space.

Now, consider an outgoing Hawking mode in the later part of the radiation. We take this mode to be a localized packet with width of order rs, corresponding to a superposition of frequencies O(1/rs). Note that postulate 2 allows us to assign a unique observer-independent lowering operator b to this mode. We can project onto eigenspaces of the number operator b†b. In other words, an observer making measurements on the early radiation can know the number of photons that will be present in a given mode of the late radiation.

Following postulate 2, we can now relate this Hawking mode to one at earlier times, as long as we stay outside the stretched horizon. The earlier mode is blue-shifted, and so may have frequency ω* much larger than O(1/rs), though still sub-Planckian.

Next consider an infalling observer and the associated set of infalling modes with lowering operators a. Hawking radiation arises precisely because

b = ∫0∞ dω (B(ω) aω + C(ω) a†ω) —– (2)

so that the full state cannot be both an a-vacuum (aω|Ψ⟩ = 0) and a b†b eigenstate. Here again we have used our simplified gray-body factors.
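The tension can be made concrete in a single-mode toy model (an assumption of this sketch, not the full curved-spacetime calculation): taking b to be a Bogoliubov mixture of a and a†, the a-vacuum acquires a nonzero mean and a nonzero variance of b†b, so it cannot be a b†b eigenstate.

```python
# Single-mode toy sketch: with b = cosh(r) a + sinh(r) a^dagger in a truncated
# Fock space, the a-vacuum has <N_b> = sinh^2(r) and nonzero variance of N_b,
# hence it is not an N_b eigenstate -- the incompatibility stated in the text.
import numpy as np

N, r = 60, 0.5                                   # Fock-space truncation, mixing parameter
a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # annihilation operator, a|n> = sqrt(n)|n-1>
ad = a.T                                         # creation operator (real matrix)

b = np.cosh(r) * a + np.sinh(r) * ad             # Bogoliubov-mixed mode
Nb = b.T @ b

vac = np.zeros(N); vac[0] = 1.0                  # a-vacuum, a|0> = 0
mean = vac @ Nb @ vac
var = vac @ Nb @ Nb @ vac - mean ** 2
print(mean, np.sinh(r) ** 2)                     # agree: <N_b> = sinh^2(r)
print(var)                                       # > 0, so the a-vacuum is not an eigenstate
```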

The application of postulates 1 and 2 has thus led to the conclusion that the infalling observer will encounter high-energy modes. Note that the infalling observer need not have actually made the measurement on the early radiation: to guarantee the presence of the high energy quanta it is enough that it is possible, just as shining light on a two-slit experiment destroys the fringes even if we do not observe the scattered light. Here we make the implicit assumption that the measurements of the infalling observer can be described in terms of an effective quantum field theory. Instead we could simply suppose that if he chooses to measure b†b he finds the expected eigenvalue, while if he measures the noncommuting operator a†a instead he finds the expected vanishing value. But this would be an extreme modification of the quantum mechanics of the observer, and does not seem plausible.

The figure below gives a pictorial summary of our argument, using ingoing Eddington-Finkelstein coordinates. The support of the mode b is shaded. At large distance it is a well-defined Hawking photon, in a predicted eigenstate of b†b by postulate 1. The observer encounters it when its wavelength is much shorter: the field must be in the ground state, a†ωaω = 0, by postulate 4, and so cannot be in an eigenstate of b†b. But by postulate 2, the evolution of the mode outside the horizon is essentially free, so this is a contradiction.


Figure: Eddington-Finkelstein coordinates, showing the infalling observer encountering the outgoing Hawking mode (shaded) at a time when its size is 1/ω* ≪ rs. If the observer’s measurements are given by an eigenstate of a†a, postulate 1 is violated; if they are given by an eigenstate of b†b, postulate 4 is violated; if the result depends on when the observer falls in, postulate 2 is violated.

To restate our paradox in brief, the purity of the Hawking radiation implies that the late radiation is fully entangled with the early radiation, and the absence of drama for the infalling observer implies that it is fully entangled with the modes behind the horizon. This is tantamount to cloning. For example, it violates strong subadditivity of the entropy,

SAB + SBC ≥ SB + SABC —– (3)

Let A be the early Hawking modes, B the outgoing Hawking mode, and C its interior partner mode. For an old black hole, the entropy is decreasing and so SAB < SA. The absence of infalling drama means that SBC = 0 and so SABC = SA. Strong subadditivity then gives SA > SAB ≥ SB + SABC = SB + SA, which fails substantially since the density matrix for system B by itself is thermal.

Actually, assuming the Page argument, the inequality is violated even more strongly: for an old black hole the entropy decrease is maximal, SAB = SA − SB, so that strong subadditivity would require SA ≥ 2SB + SA.
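Since the whole argument leans on strong subadditivity (3) being a theorem of quantum mechanics, a quick numerical sanity check is easy to run. The sketch below samples random three-qubit pure states for the partition A, B, C and verifies (3); the state family and labels are illustrative only.

```python
# Sketch: numerically verify strong subadditivity S_AB + S_BC >= S_B + S_ABC
# (Eq. 3) on random three-qubit pure states.  Any state violating it would
# violate quantum mechanics itself, which is why the paradox above bites.
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def reduced(psi, keep):
    """Reduced density matrix of the qubits listed in `keep` (3-qubit pure state)."""
    t = psi.reshape(2, 2, 2)
    out = [i for i in range(3) if i not in keep]
    if not out:                                    # nothing traced out
        return np.outer(psi, psi.conj())
    rho = np.tensordot(t, t.conj(), axes=(out, out))
    return rho.reshape(2 ** len(keep), 2 ** len(keep))

rng = np.random.default_rng(2)
for _ in range(1000):
    psi = rng.normal(size=8) + 1j * rng.normal(size=8)
    psi /= np.linalg.norm(psi)
    S = lambda keep: entropy(reduced(psi, keep))
    lhs = S([0, 1]) + S([1, 2])        # S_AB + S_BC
    rhs = S([1]) + S([0, 1, 2])        # S_B  + S_ABC (= 0 for a pure state)
    assert lhs >= rhs - 1e-9
print("strong subadditivity held on all samples")
```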

Note that the measurement of Nb takes place entirely outside the horizon, while the measurement of Na (real excitations above the infalling vacuum) must involve a region that extends over both sides of the horizon. These are noncommuting measurements, but by measuring Nb the observer can infer something about what would have happened if Na had been measured instead. For an analogy, consider a set of identically prepared spins. If each is measured along the x-axis and found to be +1/2, we can infer that a measurement along the z-axis would have had equal probability to return +1/2 and −1/2. The multiple spins are needed to reduce statistical variance; similarly, in our case the observer would need to measure several modes Nb to have confidence that he was actually entangled with the early radiation.

One might ask if there could be a possible loophole in the argument: a physical observer will have a nonzero mass, and so the mass and entropy of the black hole will increase after he falls in. However, we may choose to consider a particular Hawking wavepacket which is already separated from the stretched horizon by a finite amount when it is encountered by the infalling observer. Thus by postulate 2 the further evolution of this mode is semiclassical and not affected by the subsequent merging of the observer with the black hole. In making this argument we are also assuming that the dynamics of the stretched horizon is causal.

Thus far the asymptotically flat discussion applies to a black hole that is older than the Page time; we needed this in order to frame a sharp paradox using the entanglement with the Hawking radiation. However, we are discussing what should be intrinsic properties of the black hole, not dependent on its entanglement with some external system. After the black hole scrambling time, almost every small subsystem of the black hole is in an almost maximally mixed state. So if the degrees of freedom sampled by the infalling observer can be considered typical, then they are ‘old’ in an intrinsic sense. Our conclusions should then hold. If the black hole is a fast scrambler the scrambling time is rs ln(rs/lP), after which we have to expect either drama for the infalling observer or novel physics outside the black hole.

We note that the three postulates that are in conflict – purity of the Hawking radiation, absence of infalling drama, and semiclassical behavior outside the horizon — are widely held even by those who do not explicitly label them as ‘black hole complementarity.’ For example, one might imagine that if some tunneling process were to cause a shell of branes to appear at the horizon, an infalling observer would just go ‘splat,’ and of course Postulate 4 would not hold.

Financial Entanglement and Complexity Theory. An Adumbration on Financial Crisis.


The complex-system approach in finance can be described through the concept of entanglement. The concept of entanglement bears the same features as the definition of a complex system given by a group of physicists working in the field of finance (Stanley et al.). As they defined it, in a complex system everything depends upon everything else. Like that definition, the notion of entanglement is a statement acknowledging the interdependence of all the counterparties in financial markets, including financial and non-financial corporations, the government and the central bank. How is entanglement to be identified empirically? Stanley et al. formulated the process of scientific study in finance as a search for patterns. Such a search, going on under the auspices of “econophysics”, could take the form of a thorough analysis of a complex and unstructured assemblage of actual data, culminating in the discovery and experimental validation of an appropriate pattern. On the other side of the spectrum, some patterns underlying the actual processes might be discovered by synthesizing a vast amount of historical and anecdotal information through appropriate reasoning and logical deliberation. The Austrian School of Economic Thought, which in its extreme form rejects the application of any formalized system or modeling of any kind, could be viewed as an example. A logical question follows from this comparison: does there exist any intermediate way of searching for regular patterns in finance and economics?

Importantly, patterns could be discovered by developing rather simple models of the interrelationships of money and debt. Debt cycles have been studied extensively by many schools of economic thought (George A. Akerlof and Robert J. Shiller, Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism). The modern financial system worked by spreading risk, promoting economic efficiency and providing cheap capital. It had been formed over the years as bull markets in shares and bonds originated in the early 1990s. These markets were propelled by an abundance of money, falling interest rates and new information technology. Financial markets, by combining debt and derivatives, could originate and distribute huge quantities of risky structured products and sell them to different investors. Meanwhile, financial-sector debt, only a tenth of the size of non-financial-sector debt in 1980, became half as big by the beginning of the credit crunch in 2007. As liquidity grew, banks could buy more assets, borrow more against them, and watch their value rise. By 2007 financial services were making 40% of America’s corporate profits while employing only 5% of its private-sector workers. Thanks to cheap money, banks could take on more debt and, by designing complex structured products, make their investments more profitable – and riskier. Securitization, by facilitating the emergence of the “shadow banking” system, simultaneously fomented bubbles in different segments of the global financial market.

Yet over the past decade this system, or a big part of it, began to lose touch with its ultimate purpose: to reallocate scarce resources in accordance with social priorities. Instead of writing, managing and trading claims on future cashflows for the rest of the economy, finance became increasingly a game for fees and speculation. Due to disastrously lax regulation, investment banks did not lay aside enough capital in case something went wrong, and, as the crisis began in the middle of 2007, credit markets started to freeze up. Funding strains had already appeared in late 2007, when Northern Rock, a British mortgage lender, experienced a bank run that started in the money markets; and after the spectacular Lehman Brothers disaster in September 2008, the laminar flows of financial activity came to an end altogether. Banks began to suffer losses on their holdings of toxic securities and became reluctant to lend to one another, which led to funding shortages across the system. All of a sudden, liquidity was in short supply, debt was unwound, and investors were forced to sell and write down assets. For several years, up to now, market counterparties have no longer trusted each other. As Walter Bagehot, an authority on bank runs, once wrote:

Every banker knows that if he has to prove that he is worthy of credit, however good may be his arguments, in fact his credit is gone.

In an entangled financial system, his axiom should be stretched to the whole market, and that means, precisely, financial meltdown, or the crisis. The most fascinating feature of the post-crisis era in financial markets has been the continuation of ubiquitous liquidity expansion. To fight the market squeeze, all the major central banks greatly expanded their balance sheets, which rose, roughly, from about 10 percent to 25-30 percent of GDP in the respective economies. For several years after the credit crunch of 2007-09, central banks bought trillions of dollars of toxic and government debt, thus increasing money issuance without any precedent in modern history. Paradoxically, this enormous credit expansion, though accelerating for several years, has been accompanied by a stagnating and depressed real economy. Yet, until now, central bankers have mainly been worried about downside risks and the threat of price deflation. Otherwise, the hectic financial activity that goes on alongside unbounded credit expansion could be transformed by herding into an autocatalytic process which, if subject to the accumulation of new debt, might drive the entire system to a total collapse. From a financial point of view, this systemic collapse appears to be a natural result of unbounded credit expansion ‘supported’ by zero real resources. Since the wealth of investors, as a whole, becomes nothing but ‘fool’s gold’, the financial process becomes a singular one, and the entire system collapses. In particular, three phases of investors’ behavior – hedge finance, speculation, and the Ponzi game – can easily be identified as a sequence of sub-cycles that unwind ultimately in the total collapse.

Quantum Music

Human neurophysiology suggests that artistic beauty cannot easily be disentangled from sexual attraction. It is, for instance, very difficult to appreciate Sandro Botticelli’s Primavera, the arguably “most beautiful painting ever painted,” when a beautiful woman or man is standing in front of that picture. Indeed so strong may be the distraction, and so deep the emotional impact, that it might not be unreasonable to speculate whether aesthetics, in particular beauty and harmony in art, could be best understood in terms of surrogates for natural beauty. This might be achieved through the process of artistic creation, idealization and “condensation.”


In this line of thought, in Hegelian terms, artistic beauty is the sublimation, idealization, completion, condensation and augmentation of natural beauty. Very differently from Hegel, who asserts that artistic beauty is “born of the spirit and born again, and the higher the spirit and its productions are above nature and its phenomena, the higher, too, is artistic beauty above the beauty of nature,” what is believed here is that human neurophysiology can hardly be disregarded in the human creation and perception of art and, in particular, of beauty in art. Stated differently, we are inclined to believe that humans are so invariably determined by (or at least intertwined with) their natural basis that any neglect of it results in a humbling experience of irritation or even outright ugliness, no matter what social pressure groups or secret services may want to promote.

Thus, when it comes to the intensity of the experience, the human perception of artistic beauty, as sublime and refined as it may be, can hardly transcend natural beauty in its full exposure. In that way, art represents both the capacity as well as the humbling ineptitude of its creators and audiences.

Leaving these idealistic realms, let us come back to the quantization of musical systems. The universe of music consists of an infinity – indeed a continuum – of tones and of ways to compose, correlate and arrange them. It is not evident how to quantize sounds, and in particular music, in general. One way to proceed would be microphysical: start with the frequencies of sound waves in air and quantize the spectral modes of these (longitudinal) vibrations, very much like phonons in solid state physics.

For the sake of relating to music, however, we take a different approach, not dissimilar to the Deutsch-Turing approach to universal (quantum) computability, or to Moore’s automaton analogues of complementarity: a musical instrument is quantized by restricting it to one octave, realized by the eight white keyboard keys typically written c, d, e, f, g, a, b, c′ (in the C major scale).

In analogy to quantum information, a quantization of tones is considered, with a nomenclature analogous to the classical musical representation, to be followed up by introducing typical quantum mechanical features such as the coherent superposition of classically distinct tones, as well as entanglement and complementarity in music – quantum music.
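A minimal sketch of the toy quantization just described, under the assumption that the eight white keys span an 8-dimensional Hilbert space and that ‘listening’ acts as a projective measurement in the tone basis; the particular superposition below is purely illustrative, not taken from the quoted programme.

```python
# Sketch of the one-octave toy model: tones c..c' as basis states of an
# 8-dimensional Hilbert space; a superposed 'tone' collapses to a single
# classical tone with Born-rule probabilities.  Illustrative only.
import numpy as np

tones = ["c", "d", "e", "f", "g", "a", "b", "c'"]
basis = {t: np.eye(8)[i] for i, t in enumerate(tones)}

# a coherent superposition of c and g (a "quantum fifth"), with unequal weights
psi = np.sqrt(0.7) * basis["c"] + np.sqrt(0.3) * basis["g"]

rng = np.random.default_rng(3)
probs = np.abs(psi) ** 2
heard = rng.choice(tones, size=10, p=probs)   # repeated 'listenings' (measurements)
print(dict(zip(tones, probs.round(2))))
print(list(heard))
```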

von Neumann & Dis/belief in Hilbert Spaces

I would like to make a confession which may seem immoral: I do not believe absolutely in Hilbert space any more.

— John von Neumann, letter to Garrett Birkhoff, 1935.


The mathematics: Let us consider the raison d’être of the Hilbert space formalism. Why would one need all this ‘Hilbert space stuff’, i.e. the continuum structure, the field structure of complex numbers, a vector space over it, the inner-product structure, etc.? According to von Neumann, he simply used it because it happened to be ‘available’. The use of linear algebra and complex numbers in so many different scientific areas, as well as results in model theory, clearly show that quite a bit of modeling can be done using Hilbert spaces. On the other hand, we can also model any movie by means of the data stream that runs through your cables when watching it. But does this mean that these data streams make up the stuff a movie is made of? Clearly not; we should rather turn our attention to the stuff that is being taught at drama schools and directing schools. Similarly, von Neumann turned his attention to the actual physical concepts behind quantum theory, more specifically, the notion of a physical property and the structure imposed on these properties by the peculiar nature of quantum observation. His quantum logic gave the resulting ‘algebra of physical properties’ a privileged role. All of this leads us to the physics of it. Birkhoff and von Neumann crafted quantum logic in order to emphasize the notion of quantum superposition. In terms of states of a physical system and properties of that system, superposition means that the strongest property which is true for two distinct states is also true for states other than the two given ones. In order-theoretic terms this means, representing states by the atoms of a lattice of properties, that the join p ∨ q of two atoms p and q is also above other atoms. From this it easily follows that the distributive law breaks down: given an atom r ≠ p, q with r < p ∨ q, we have r ∧ (p ∨ q) = r while (r ∧ p) ∨ (r ∧ q) = 0 ∨ 0 = 0. Birkhoff and von Neumann, as well as many others, believed that understanding the deep structure of superposition is the key to obtaining a better understanding of quantum theory as a whole.
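The failure of distributivity quoted above can be checked directly with 2×2 projectors. The sketch below takes p, q to project onto |0⟩, |1⟩ and r onto |+⟩, computing meets and joins as projectors onto intersections and spans of ranges; this is a standard construction used here for illustration, not taken from Birkhoff and von Neumann’s text.

```python
# Sketch: distributivity failure in the lattice of projectors on C^2.
# p, q project onto |0>, |1>; r projects onto |+> = (|0>+|1>)/sqrt(2).
import numpy as np

def proj(vecs, dim=2):
    """Orthogonal projector onto the span of the given vectors."""
    if not vecs:
        return np.zeros((dim, dim))
    Q, _ = np.linalg.qr(np.column_stack(vecs))
    return Q @ Q.conj().T

def join(a, b):
    # a ∨ b: projector onto range(a) + range(b) = range(a + b)
    w, v = np.linalg.eigh(a + b)
    return proj([v[:, i] for i in range(len(w)) if w[i] > 1e-10])

def meet(a, b):
    # a ∧ b: projector onto range(a) ∩ range(b) = kernel of (1 - a) + (1 - b)
    w, v = np.linalg.eigh((np.eye(2) - a) + (np.eye(2) - b))
    return proj([v[:, i] for i in range(len(w)) if w[i] < 1e-10])

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
p, q, r = proj([ket0]), proj([ket1]), proj([plus])

lhs = meet(r, join(p, q))              # r ∧ (p ∨ q)  -> equals r
rhs = join(meet(r, p), meet(r, q))     # (r ∧ p) ∨ (r ∧ q)  -> equals 0
print(np.allclose(lhs, r), np.allclose(rhs, 0))
```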

For Schrödinger, by contrast, the key is the behavior of compound quantum systems, described by the tensor product. While the quantum information endeavor is to a great extent the result of exploiting this important insight, the language of the field is still very much that of strings of complex numbers, which is akin to the strings of 0’s and 1’s of the early days of computer programming. If the manner in which we describe compound quantum systems captures so much of the essence of quantum theory, then it should be at the forefront of the presentation of the theory, and not preceded by continuum structure, the field of complex numbers, a vector space over the latter, etc., only to pop up later as some secondary construct. How much of quantum phenomena can be derived from ‘compoundness + epsilon’? It turned out that epsilon can be taken to be very little, surely not involving anything like a continuum, fields or vector spaces, but merely a ‘2D space’ of temporal composition and compoundness, together with some very natural, purely operational assertions, including one which in a constructive manner asserts entanglement; among many other things, trace structure then follows.

Quantum Entanglement, Post-Selection and Time Travel

If the Copenhagen interpretation of quantum mechanics is to be believed, nothing exists in reality until a measurement is carried out. In the delayed-choice double-slit experiment proposed by John Wheeler, post-selection can be made to work after the experiment is finished, by delaying the observation until after the photon has purportedly passed through the slits. Now, if post-selection is to work, there must be a change in properties in the past. This has been experimentally demonstrated by physicists such as Jean-François Roch at the Ecole Normale Supérieure in Cachan, France. This is weird, but invoking quantum entanglement and throwing it up against the philosophical principle of causality is what surprises. If the experimental set-up impacts the future course of outcomes, quantum particles, in a most whimsical manner, are apt to negate it. This happens due to the mathematics governing these particles, which enables – or rather disables – them to differentiate between the courses they are supposed to undertake. In short, what happens in the future could determine the past….

….If particles are caught up in quantum entanglement, the measurement of one immediately affects the other – some kind of Einsteinian spooky action at a distance.

A weird connection was what sprang up in my mind this morning, and the vestibule comes from French theoretical considerations. Without any kind of specificity, the knower and the known are crafted together by a meditation that rides on an instability populated by discursive and linguistic norms and forms, something derided as secondary in the analytical tradition. The autonomy of the knower as against the known is questionable, and derives significance only when its trajectory is mapped by a simultaneity put forth by the known.
Does this not imply that French theory is getting close to interpreting quantum mechanics? Just shockingly weird….
Anyways, adieu to this and still firmed up in this post from last week.

Philosophy of Quantum Entanglement and Topology


Many-body entanglement is essential for the existence of topological order in condensed matter systems, and understanding many-body entanglement provides a promising approach to understanding which topological orders can exist in general. It also leads to tensor network descriptions of many-body wave functions, opening the way to a classification of the phases of quantum matter. Here, generic many-body entanglement is reduced to the bipartite case: the entanglement between a chosen region and the rest of the system. Consider the equation,

S(A) ≡ −tr(ρA log2(ρA)) —– (1)

where ρA ≡ trB|ΨAB⟩⟨ΨAB| is the density matrix for part A, and where we assumed that the whole system is in a pure state |ΨAB⟩.

Specializing |ΨAB⟩ to the ground state of a local Hamiltonian in D spatial dimensions, the central observation is that the entanglement between a region A of size L^D and the (much larger) rest B of the lattice is then often proportional to the size |σ(A)| of the boundary σ(A) of region A,

S(A) ≈ |σ(A)| ≈ L^(D−1) —– (2)

where a constant correction (equal to −1 for the toric code) can appear in (2) due to topological order. This is the boundary law observed in the ground state of gapped local Hamiltonians in arbitrary dimension D, as well as in some gapless systems in D > 1 dimensions. In gapless systems in D = 1 dimensions, on the other hand, as well as in certain gapless systems in D > 1 dimensions (namely systems with a Fermi surface of dimension D − 1), the ground state entanglement displays a logarithmic correction to the boundary law,

S(A) ≈ |σ(A)| log2(|σ(A)|) ≈ L^(D−1) log2(L) —– (3)

At an intuitive level, the boundary law (2) is understood as resulting from entanglement that involves degrees of freedom located near the boundary between regions A and B. Also intuitively, the logarithmic correction in (3) is argued to have its origin in contributions to the entanglement from degrees of freedom that are further away from the boundary between A and B. Given the entanglement between A and B, one introduces an entanglement contour sA that assigns a real number sA(i) ≥ 0 to each lattice site i contained in region A, such that the sum of sA(i) over all the sites i ∈ A is equal to the entanglement entropy S(A),

S(A) = Σi∈A sA(i) —– (4) 

and that aims to quantify how much the degrees of freedom at site i participate in, or contribute to, the entanglement between A and B. As Chen and Vidal put it, the entanglement contour sA(i) is not equivalent to the von Neumann entropy S(i) ≡ −tr ρ(i) log2 ρ(i) of the reduced density matrix ρ(i) at site i. Notice that, indeed, the von Neumann entropy of individual sites in region A is not additive in the presence of correlations between the sites, and therefore generically

S(A) ≠ Σi∈A S(i)

whereas the entanglement contour sA(i) is required to fulfil (4). Relatedly, when site i is only entangled with neighboring sites contained within region A, and it is thus uncorrelated with region B, the entanglement contour sA(i) will be required to vanish, whereas the one-site von Neumann entropy S(i) still takes a non-zero value due to the presence of local entanglement within region A.
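The non-additivity just mentioned is easy to exhibit numerically. In the sketch below (an illustrative example, not one of Chen and Vidal’s), A consists of the first two qubits of a three-qubit GHZ state and B of the third, so that S(A) is 1 bit while the one-site entropies add up to 2 bits.

```python
# Sketch: one-site entropies do not add up to S(A) for a correlated state.
# Take A = qubits 0,1 of the GHZ state (|000> + |111>)/sqrt(2), B = qubit 2.
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
t = ghz.reshape(2, 2, 2)                                   # qubit indices (0, 1, 2)

rho_A = np.tensordot(t, t.conj(), axes=([2], [2])).reshape(4, 4)   # trace out B
rho_1 = np.tensordot(t, t.conj(), axes=([1, 2], [1, 2]))           # site 0 of A
rho_2 = np.tensordot(t, t.conj(), axes=([0, 2], [0, 2]))           # site 1 of A

print(entropy(rho_A))                     # 1.0 bit  = S(A)
print(entropy(rho_1) + entropy(rho_2))    # 2.0 bits = sum of one-site entropies
```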

As an aside, in the traditional approach to quantum mechanics a physical system is described in a Hilbert space: observables correspond to self-adjoint operators and statistical operators are associated with the states. In fact, a statistical operator describes a mixture of pure states. Pure states are the really physical states and they are given by rank-one statistical operators, or equivalently by rays of the Hilbert space. Von Neumann associated an entropy quantity to a statistical operator, and his argument was a gedanken experiment on the grounds of phenomenological thermodynamics. Let us consider a gas of N (≫ 1) molecules in a rectangular box K. Suppose that the gas behaves like a quantum system and is described by a statistical operator D, which is a mixture λ|φ1⟩⟨φ1| + (1 − λ)|φ2⟩⟨φ2|, where |φi⟩ (i = 1, 2) are state vectors. We may take λN molecules in the pure state φ1 and (1−λ)N molecules in the pure state φ2. On the basis of phenomenological thermodynamics, we assume that if φ1 and φ2 are orthogonal, then there is a wall that is completely permeable for the φ1-molecules and isolating for the φ2-molecules. We add an equally large empty rectangular box K′ to the left of the box K and we replace the common wall with two new walls. Wall (a), the one to the left, is impenetrable, whereas the one to the right, wall (b), lets through the φ1-molecules but keeps back the φ2-molecules. We add a third wall (c), opposite to (b), which is semipermeable, transparent for the φ2-molecules and impenetrable for the φ1-ones. Then we push (a) and (c) slowly to the left, maintaining their distance. During this process the φ1-molecules are pressed through (b) into K′ and the φ2-molecules diffuse through wall (c) and remain in K. No work is done against the gas pressure, no heat is developed. Replacing the walls (b) and (c) with a rigid, absolutely impenetrable wall and removing (a), we restore the boxes K and K′ and succeed in separating the φ1-molecules from the φ2-ones without any work being done, without any temperature change and without evolution of heat. The entropy of the original D-gas (with density N/V) must be the sum of the entropies of the φ1- and φ2-gases (with densities λN/V and (1 − λ)N/V, respectively). If we compress the gases in K and K′ to the volumes λV and (1 − λ)V, respectively, keeping the temperature T constant by means of a heat reservoir, the entropy change amounts to κλN log λ and κ(1 − λ)N log(1 − λ), respectively. Indeed, we have to add heat in the amount of λiNκT log λi (< 0) when the φi-gas is compressed, and dividing by the temperature T we get the change of entropy. Finally, mixing the φ1- and φ2-gases of identical density we obtain a D-gas of N molecules in a volume V at the original temperature. If S0(ψ,N) denotes the entropy of a ψ-gas of N molecules (in a volume V and at the given temperature), we conclude that

S0(φ1,λN)+S0(φ2,(1−λ)N) = S0(D, N) + κλN log λ + κ(1 − λ)N log(1 − λ) —– (5)

must hold, where κ is Boltzmann’s constant. Assuming that S0(ψ,N) is proportional to N and dividing by N we have

λS(φ1) + (1 − λ)S(φ2) = S(D) + κλ log λ + κ(1 − λ) log(1 − λ) —– (6)

where S is a certain thermodynamical entropy quantity (relative to the fixed temperature and molecule density). We arrived at the mixing property of entropy, but we should not forget the initial assumption: φ1 and φ2 are supposed to be orthogonal. Instead of a two-component mixture, von Neumann worked with an infinite mixture, which does not make a big difference, and he concluded that

S (Σiλi|φi⟩⟨φi|) = ΣiλiS(|φi⟩⟨φi|) − κ Σiλi log λi —– (7)
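With κ set to 1 and natural logarithms (an assumption of this sketch), (7) for a mixture of orthogonal pure states reduces to S(D) = −Σi λi log λi, since the pure-state terms vanish; the following illustrative check uses arbitrary dimensions and weights.

```python
# Sketch: numerical check of Eq. (7) with kappa = 1 for a mixture of
# orthogonal pure states -- S(D) should equal -sum_i lambda_i log(lambda_i).
import numpy as np

rng = np.random.default_rng(4)
d = 5
lam = rng.random(d); lam /= lam.sum()                 # mixture weights lambda_i

# random orthonormal |phi_i> via QR of a random complex matrix
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
phi, _ = np.linalg.qr(M)

D = sum(lam[i] * np.outer(phi[:, i], phi[:, i].conj()) for i in range(d))

w = np.linalg.eigvalsh(D)
S_D = -np.sum(w[w > 1e-12] * np.log(w[w > 1e-12]))
print(S_D, -np.sum(lam * np.log(lam)))                # the two values agree
```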

Von Neumann’s argument does not require that the statistical operator D be a mixture of pure states. What is really needed is the property D = λD1 + (1 − λ)D2 in such a way that the possibly mixed states D1 and D2 are disjoint. D1 and D2 are disjoint in the thermodynamical sense when there is a wall which is completely permeable for the molecules of a D1-gas and isolating for the molecules of a D2-gas. In other words, if the mixed states D1 and D2 are disjoint, then this should be demonstrated by a certain filter. Mathematically, the disjointness of D1 and D2 is expressed by the orthogonality of the eigenvectors corresponding to the nonzero eigenvalues of the two density matrices. The essential point is the remark that (6) must hold also in the more general situation when the states possibly do not correspond to density matrices, but the orthogonality of the states still makes sense:

λS(D1) + (1 − λ)S(D2) = S(D) + κλ log λ + κ(1 − λ) log(1 − λ) —– (8)

(7) reduces the determination of the (thermodynamical) entropy of a mixed state to that of pure states. The so-called Schatten decomposition Σi λi|φi⟩⟨φi| of a statistical operator is not unique even if ⟨φi , φj ⟩ = 0 is assumed for i ≠ j . When λi is an eigenvalue with multiplicity, then the corresponding eigenvectors can be chosen in many ways. If we expect the entropy S(D) to be independent of the Schatten decomposition, then we are led to the conclusion that S(|φ⟩⟨φ|) must be independent of the state vector |φ⟩. This argument assumes that there are no superselection sectors, that is, any vector of the Hilbert space can be a state vector. On the other hand, von Neumann wanted to avoid degeneracy of the spectrum of a statistical operator. Von Neumann’s proof of the property that S(|φ⟩⟨φ|) is independent of the state vector |φ⟩ was different. He did not want to refer to a unitary time development sending one state vector to another, because that argument requires great freedom in choosing the energy operator H. Namely, for any |φ1⟩ and |φ2⟩ we would need an energy operator H such that

e^{itH}|φ1⟩ = |φ2⟩

This process would be reversible. Anyways, that was quite a digression.

Entanglement between A and B is naturally described by the coefficients {pα} appearing in the Schmidt decomposition of the state |ΨAB⟩,

|ΨAB⟩ = Σα √pα |ΨAα⟩ ⊗ |ΨBα⟩ —– (9)

These coefficients {pα} correspond to the eigenvalues of the reduced density matrix ρA, whose spectral decomposition reads

ρA = Σα pα |ΨAα⟩⟨ΨAα| —– (10)

defining a probability distribution, pα ≥ 0, Σα pα = 1, in terms of which the von Neumann entropy S(A) is

S(A) = −Σα pα log2(pα) —– (11)
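A short sketch of (9)-(11) for a random bipartite pure state (the dimensions are arbitrary): the squared singular values of the coefficient matrix are the Schmidt coefficients {pα}, they coincide with the spectrum of ρA, and they give S(A).

```python
# Sketch: Schmidt coefficients of a random bipartite pure state via SVD.
# The squared singular values are the p_alpha of Eq. (9); they match the
# eigenvalues of rho_A in Eq. (10) and give S(A) of Eq. (11).
import numpy as np

rng = np.random.default_rng(5)
dA, dB = 3, 4
C = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
C /= np.linalg.norm(C)                        # coefficient matrix of |Psi_AB>

p = np.linalg.svd(C, compute_uv=False) ** 2   # Schmidt coefficients p_alpha
rho_A = C @ C.conj().T                        # reduced density matrix of A
eig = np.sort(np.linalg.eigvalsh(rho_A))[::-1]

S_A = -np.sum(p * np.log2(p))
print(np.allclose(np.sort(p)[::-1], eig))     # True: same spectrum
print(S_A)                                    # von Neumann entropy S(A)
```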

On the other hand, the Hilbert space VA of region A factorizes as the tensor product

VA = ⊗ i∈A V(i) —– (12)

where V(i) describes the local Hilbert space of site i. The reduced density matrix ρA in (10) and the factorization in (12) define two inequivalent structures within the vector space VA of region A. The entanglement contour sA is a function from the set of sites i ∈ A to the real numbers,

sA : A → ℜ —– (13)

that attempts to relate these two structures by distributing the von Neumann entropy S(A) of (11) among the sites i ∈ A. According to Chen and Vidal, there are five conditions/requirements that an entanglement contour needs to satisfy.

a. Positivity: sA(i) ≥ 0

b. Normalization: Σi∈AsA(i) = S(A) 

These constraints amount to defining a probability distribution pi ≡ sA(i)/S(A) over the sites i ∈ A, with pi ≥ 0 and Σi pi = 1, such that sA(i) = pi S(A); they do not, however, require sA to inform us about the spatial structure of entanglement in A, but only relate it to the density matrix ρA through its total von Neumann entropy S(A).

c. Symmetry: if T is a symmetry of ρA, that is, TρAT† = ρA, and T exchanges site i with site j, then sA(i) = sA(j).

This condition ensures that the entanglement contour is the same on two sites i and j of region A that, as far as entanglement is concerned, play an equivalent role in region A. It uses the (possible) presence of a spatial symmetry, such as invariance under space reflection or under discrete translations/rotations, to define an equivalence relation on the set of sites of region A, and requires that the entanglement contour be constant within each resulting equivalence class. Notice, however, that this condition does not tell us whether the entanglement contour should be large or small on a given site (or equivalence class of sites). In particular, the three conditions above are satisfied by the canonical choice sA(i) = S(A)/|A|, that is, a flat entanglement contour over the |A| sites contained in region A, which once more tells us nothing about the spatial structure of the von Neumann entropy in ρA.

The remaining conditions refer to subregions within region A, instead of referring to single sites. It is therefore convenient to (trivially) extend the definition of entanglement contour to a set X of sites in region A, X ⊆ A, with vector space

VX = ⊗i∈X V(i) —– (14)

as the sum of the contour over the sites in X,

sA(X) ≡  Σi∈XsA(i) —– (15)

It follows from this extension that for any two disjoint subsets X1, X2 ⊆ A, with X1 ∩ X2 = ∅, the contour is additive,

sA(X1 ∪ X2) = sA(X1) + sA(X2—– (16)

In particular, condition b can now be recast as sA(A) = S(A). Similarly, if X1, X2 ⊆ A are such that all the sites of X1 are also contained in X2, X1 ⊆ X2, then the contour must be larger on X2 than on X1 (monotonicity of sA(X)),

sA(X1) ≤ sA(X2) if X1 ⊆ X2 —– (17)

d. Invariance under local unitary transformations: if the state |Ψ′AB⟩ is obtained from the state |ΨAB⟩ by means of a unitary transformation UX that acts on a subset X ⊆ A of the sites of region A, that is, |Ψ′AB⟩ ≡ UX|ΨAB⟩, then the entanglement contour sA(X) must be the same for the state |ΨAB⟩ and for the state |Ψ′AB⟩.

That is, the contribution of region X to the entanglement between A and B is not affected by a redefinition of the sites or a change of basis within region X. Notice that it follows that UX can also not change sA(X′), where X′ ≡ A − X is the complement of X in A.

To motivate our last condition, let us consider a state |ΨAB⟩ that factorizes as the product

|ΨAB⟩ = |ΨXXB⟩ ⊗ |ΨX′X′B⟩ —– (18)

where X ⊆ A and XB ⊆ B are subsets of sites in regions A and B, respectively, and X’ ⊆ A and X’B ⊆ B are their complements within A and B, so that

VA = VX ⊗ VX’, —– (19)

VB = VXB ⊗ VX’B —– (20)

in this case the reduced density matrix ρA factorizes as ρA = ρX ⊗ ρX’ and the entanglement entropy is additive,

S(A) = S(X) + S(X’) —– (21)

Since the entanglement entropy S(X) of subregion X is well-defined, we require the entanglement contour over X to be equal to it,

sA(X) = S(X) —– (22)

The last condition refers to a more general situation where, instead of obeying (18), the state |ΨAB⟩ factorizes as the product

|ΨAB⟩ = |ΨΩAΩB⟩ ⊗ |ΨΩ′AΩ′B⟩, —– (23)

with respect to some decomposition of VA and VB as tensor products of factor spaces,

VA = VΩA ⊗ VΩ’A, —– (24)

VB = VΩB ⊗ VΩ’B —– (25)

Let S(ΩA) denote the entanglement entropy supported on the first factor space VΩA of  VA, that is

S(ΩA) = −tr(ρΩA log2(ρΩA)) —– (26)

ρΩA ≡ trΩB |Ψ ΩA ΩB⟩⟨Ψ ΩA ΩB| —– (27)

and let X ⊆ A be a subset of sites whose vector space VX is completely contained in VΩA , meaning that VΩA can be further decomposed as

VΩA ≈ VX ⊗ VX′ —– (28)

e. Upper bound: if a subregion X ⊆ A is contained in a factor space ΩA (24 and 28) then the entanglement contour of subregion X cannot be larger than the entanglement entropy S(ΩA) (26)

sA(X) ≤ S(ΩA) —– (29)

This condition says that whenever we can ascribe a concrete value S(ΩA) of the entanglement entropy to a factor space ΩA within region A (that is, whenever the state |ΨAB⟩ factorizes as in (23)), the entanglement contour has to be consistent with this fact, meaning that the contour sA(X) on any subregion X contained in the factor space ΩA is upper bounded by S(ΩA).

Let us consider a particular case of condition e. When a region X ⊆ A is not at all correlated with B, that is, ρXB = ρX ⊗ ρB, then it can be seen that X is contained in some factor space ΩA such that the state |ΨΩAΩB⟩ itself further factorizes as |ΨΩA⟩ ⊗ |ΨΩB⟩, so that (23) becomes

|ΨAB⟩ = |ΨΩA⟩ ⊗ |ΨΩB⟩ ⊗ |ΨΩ′AΩ′B⟩, —– (30)

and S(ΩA) = 0. Condition e then requires that sA(X) = 0, that is

ρXB = ρX ⊗ ρB ⇒ sA(X) = 0, —– (31)

reflecting the fact that a region X ⊆ A that is not correlated with B does not contribute at all to the entanglement between A and B. Finally, the upper bound in e can alternatively be stated as a lower bound. Let Y ⊆ A be a subset of sites whose vector space VY completely contains VΩA in (24), meaning that VY can be further decomposed as

VY ≈ VΩA ⊗ VΩ′A —– (32)

e’. Lower bound: The entanglement contour of subregion Y is at least equal to the entanglement entropy S(ΩA) in (26),

sA(Y) ≥ S(ΩA) —– (33)

Conditions a-e (e′) are not expected to determine the entanglement contour completely. In other words, there probably are inequivalent functions sA : A → ℜ that conform to all the conditions above. So, where do we get philosophical from here? It is by following the entanglement contour of selected states, for instance through the time evolution ensuing a global or a local quantum quench, that one characterizes entanglement between regions rather than within regions, revealing a detailed real-space structure of the entanglement of a region A and of its dynamics, well beyond what is accessible from the entanglement entropy alone. But that isn’t all. Questions of how to quantify entanglement and non-locality, and the need to clarify the relationship between them, are important not only conceptually but also practically, insofar as entanglement and non-locality seem to be different resources for the performance of quantum information processing tasks. Whether in a given quantum information protocol (cryptography, teleportation, an algorithm . . .) it is better to look for the largest amount of entanglement or the largest amount of non-locality becomes decisive. The ever-evolving field of quantum information theory is devoted to using the principles and laws of quantum mechanics to aid in the acquisition, transmission, and processing of information. In particular, it seeks to harness the peculiarly quantum phenomena of entanglement, superposition, and non-locality to perform all sorts of novel tasks, such as enabling computations that operate exponentially faster or more efficiently than their classical counterparts (via quantum computers) and providing unconditionally secure cryptographic systems for the transfer of secret messages over public channels (via quantum key distribution). By contrast, classical information theory is concerned with the storage and transfer of information in classical systems. It uses the “bit” as the fundamental unit of information, where the system capable of representing a bit can take on one of two values (typically 0 or 1). Classical information theory is based largely on the concept of information formalized by Claude Shannon in the late 1940s. Quantum information theory, which was later developed in analogy with classical information theory, is concerned with the storage and processing of information in quantum systems, such as the photon, electron, quantum dot, or atom. Instead of using the bit, however, it defines the fundamental unit of quantum information as the “qubit.” What makes the qubit different from a classical bit is that the smallest system capable of storing a qubit, the two-level quantum system, not only can take on the two distinct values |0⟩ and |1⟩, but can also be in a state of superposition of these two states: |ψ⟩ = α0|0⟩ + α1|1⟩.
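As a minimal illustration of the closing definition (nothing more than the Born rule for a single two-level system), the sketch below fixes a particular choice of α0, α1 and simulates a few measurement outcomes; the amplitudes are purely illustrative.

```python
# Sketch: a qubit state alpha0|0> + alpha1|1>, with Born-rule probabilities
# |alpha0|^2 and |alpha1|^2 summing to one.  Illustrative only.
import numpy as np

alpha = np.array([1.0, 1.0j]) / np.sqrt(2)     # |psi> = (|0> + i|1>)/sqrt(2)
probs = np.abs(alpha) ** 2
print(probs, probs.sum())                      # [0.5 0.5] 1.0

rng = np.random.default_rng(6)
print(rng.choice([0, 1], size=8, p=probs))     # simulated measurement outcomes
```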

Quantum information theory has opened up a whole new range of philosophical and foundational questions in quantum cryptography or quantum key distribution, which involves using the principles of quantum mechanics to ensure secure communication. Some quantum cryptographic protocols make use of entanglement to establish correlations between systems that would be lost upon eavesdropping. Moreover, a quantum principle known as the no-cloning theorem prohibits making identical copies of an unknown quantum state. In the context of a C∗-algebraic formulation,  quantum theory can be characterized in terms of three information-theoretic constraints: (1) no superluminal signaling via measurement, (2) no cloning (for pure states) or no broadcasting (mixed states), and (3) no unconditionally secure bit commitment.

Entanglement does not refute the principle of locality. A sketch of the sort of experiment commonly said to refute locality runs as follows. Suppose that you have two electrons with entangled spin. For each electron you can measure the spin along the X, Y or Z direction. If you measure X on both electrons, then you get opposite values, and likewise for measuring Y or Z on both electrons. If you measure X on one electron and Y or Z on the other, then you have a 50% probability of a match; and if you measure Y on one and Z on the other, the probability of a match is again 50%. The crucial issue is that whether you find a correlation when you do the comparison depends on whether you measure the same quantity on each electron. Bell’s theorem explains that the extent of this correlation is greater than a local theory would allow if the measured quantities were represented by stochastic variables (i.e. numbers picked out of a hat). This fact is often misrepresented as implying that quantum mechanics is non-local. But in quantum mechanics, systems are not characterised by stochastic variables but rather by Hermitian operators, and there is an entirely local explanation of how the correlations arise in terms of properties of systems represented by such operators. Another answer to such violations of the principle of locality could also be “Yes, unless you get really obsessive about it.” It has been formally proven that one can have determinacy in a model of quantum dynamics, or one can have locality, but not both. If one gives up the determinacy of the theory in various ways, one can imagine all kinds of ‘planned flukes’, such as the notion that the experiments that demonstrate entanglement leak information and pre-determine the environment to make the coordinated behavior seem real. Since this kind of information shaping through distributed uncertainty remains a possibility, one can cling to locality until someone actually manages something like what those authors are attempting, or we find it impossible. If one gives up locality instead, entanglement does not present a problem, but the theory of relativity does, because the notion of a frame of reference is local. Experiments on quantum tunneling that appear to violate the constraints of the speed of light have been explained with the idea that probabilistic partial information can ‘lead’ real information faster than light by pushing at the vacuum underneath via the Casimir effect. If both of these make sense, then the information carried by the entanglement when it is broken would be limited as the particles get farther apart – entanglements would have to break down spontaneously over time or distance of separation so that the probabilities line up. This bodes ill for our ability to find entangled particles from the Big Bang, which seems to be the only prospect in progress for debunking the excessively locality-focused view.
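For concreteness, the correlations described at the start of this passage can be computed directly for the spin singlet (the state implicitly assumed here): same-axis measurements are perfectly anticorrelated, different axes match half the time, and the CHSH combination reaches 2√2, the quantum value usually contrasted with the classical bound of 2. The measurement settings below are the standard optimal ones; the code is an illustrative sketch, not part of the original text.

```python
# Sketch: correlations of the spin singlet (|01> - |10>)/sqrt(2) for X/Y/Z
# measurements, plus the CHSH value 2*sqrt(2) beyond the classical bound 2.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(n):                          # spin operator along the unit vector n
    return n[0] * sx + n[1] * sy + n[2] * sz

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):                          # correlation <sigma_a (x) sigma_b>
    O = np.kron(spin(a), spin(b))
    return float(np.real(singlet.conj() @ O @ singlet))

x, y, z = np.eye(3)
print(E(z, z))                        # -1: same axis, always opposite values
print((1 + E(x, z)) / 2)              # 0.5: X vs Z, 50% match probability
print((1 + E(y, z)) / 2)              # 0.5: Y vs Z, 50% match probability

# CHSH with the standard optimal settings in the x-z plane
a, ap = z, x
b, bp = (z + x) / np.sqrt(2), (x - z) / np.sqrt(2)
print(abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)))   # 2*sqrt(2) ~ 2.83
```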

But, much of the work remains undone and this is to be continued…..