Quantum Informational Biochemistry. Thought of the Day 71.0


A natural extension of the information-theoretic Darwinian approach for biological systems is obtained by taking into account that biological systems are, at their fundamental level, constituted by physical systems. It is therefore through the interaction of elementary physical systems that the biological level is reached, once the size of the system grows by several orders of magnitude, and only for certain associations of molecules – biochemistry.

In particular, this viewpoint lies at the foundation of the “quantum brain” project established by Hameroff and Penrose (Shadows of the Mind). They tried to lift quantum physical processes associated with microsystems composing the brain to the level of consciousness. Microtubules were considered the basic quantum information processors. This project, as well as the general project of reducing biology to quantum physics, has its strong and weak sides. One of the main problems is that decoherence should quickly wash out quantum features such as superposition and entanglement. (Hameroff and Penrose would disagree with this statement. They try to develop models of the hot, macroscopic brain that preserve the quantum features of its elementary micro-components.)

However, even if we assume that microscopic quantum physical behavior disappears with increasing size and number of atoms due to decoherence, it seems that the basic quantum features of information processing can survive in macroscopic biological systems (operating on temporal and spatial scales which are essentially different from the scales of the quantum micro-world). The associated information processor for a mesoscopic or macroscopic biological system would be a network of increasing complexity formed by the elementary probabilistic classical Turing machines of its constituents. Such a composite network of processors can exhibit special behavioral signatures similar to quantum ones. We call such biological systems quantum-like. In a series of works, Asano and coauthors (Quantum Adaptivity in Biology: From Genetics to Cognition) developed an advanced formalism for modeling the behavior of quantum-like systems, based on the theory of open quantum systems and the more general theory of adaptive quantum systems. This formalism is known as quantum bioinformatics.

The present quantum-like model of biological behavior is of the operational type (as is the standard quantum mechanical model endowed with the Copenhagen interpretation). It cannot explain the physical and biological processes behind quantum-like information processing. Clarifying the origin of quantum-like biological behavior is related, in particular, to understanding the nature of entanglement and its role in the processes of interaction and cooperation in physical and biological systems. Qualitatively, the information-theoretic Darwinian approach supplies an interesting possibility of explaining the generation of quantum-like information processors in biological systems. Hence, it can serve as the bio-physical background for quantum bioinformatics. There is an intriguing point here: if the information-theoretic Darwinian approach is right, then it would be possible to produce quantum information from optimal flows of past, present and anticipated classical information in any classical information processor endowed with a complex enough program. The unified evolutionary theory would thus supply a physical basis for Quantum Information Biology.

Black Hole Complementarity: The Case of the Infalling Observer

The four postulates of black hole complementarity are:

Postulate 1: The process of formation and evaporation of a black hole, as viewed by a distant observer, can be described entirely within the context of standard quantum theory. In particular, there exists a unitary S-matrix which describes the evolution from infalling matter to outgoing Hawking-like radiation.

Postulate 2: Outside the stretched horizon of a massive black hole, physics can be described to good approximation by a set of semi-classical field equations.

Postulate 3: To a distant observer, a black hole appears to be a quantum system with discrete energy levels. The dimension of the subspace of states describing a black hole of mass M is the exponential of the Bekenstein entropy S(M).

We take as implicit in postulate 2 that the semi-classical field equations are those of a low energy effective field theory with local Lorentz invariance. These postulates do not refer to the experience of an infalling observer, but state a ‘certainty,’ which for uniformity we label as a further postulate:

Postulate 4: A freely falling observer experiences nothing out of the ordinary when crossing the horizon.

To be more specific, we will assume that postulate 4 means both that any low-energy dynamics this observer can probe near his worldline is well-described by familiar Lorentz-invariant effective field theory and also that the probability for an infalling observer to encounter a quantum with energy E ≫ 1/rs (measured in the infalling frame) is suppressed by an exponentially decreasing adiabatic factor as predicted by quantum field theory in curved spacetime. We will argue that postulates 1, 2, and 4 are not consistent with one another for a sufficiently old black hole.

Consider a black hole that forms from collapse of some pure state and subsequently decays. Dividing the Hawking radiation into an early part and a late part, postulate 1 implies that the state of the Hawking radiation is pure,

|Ψ⟩ = ∑i |ψi⟩E ⊗ |i⟩L —– (1)

Here we have taken an arbitrary complete basis |i⟩L for the late radiation. We use postulates 1, 2, and 3 to make the division after the Page time, when the black hole has emitted half of its initial Bekenstein-Hawking entropy; we will refer to this as an ‘old’ black hole. The number of states in the early subspace will then be much larger than that in the late subspace and, as a result, for typical states |Ψ⟩ the reduced density matrix describing the late-time radiation is close to proportional to the identity. We can therefore construct operators acting on the early radiation, whose action on |Ψ⟩ is equal to that of a projection operator onto any given subspace of the late radiation.
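This typicality statement can be illustrated with a toy numerical sketch (an assumption-laden model, not the actual black hole Hilbert space): draw a Haar-random pure state on a bipartite space with dim(E) ≫ dim(L) and check that the reduced state of the small late factor is close to maximally mixed.

```python
# Toy illustration: for a random pure state on H_E ⊗ H_L with
# dim(H_E) >> dim(H_L), the late-radiation density matrix is nearly
# maximally mixed. Dimensions here are arbitrary choices for the demo.
import numpy as np

rng = np.random.default_rng(0)
d_E, d_L = 512, 4  # early subspace much larger than late subspace

# random pure state |Psi> on the composite system
psi = rng.normal(size=d_E * d_L) + 1j * rng.normal(size=d_E * d_L)
psi /= np.linalg.norm(psi)

# reduced density matrix of the late radiation: rho_L = Tr_E |Psi><Psi|
psi = psi.reshape(d_E, d_L)
rho_L = psi.conj().T @ psi          # (d_L x d_L), trace 1

# Frobenius distance from the maximally mixed state I/d_L
dist = np.linalg.norm(rho_L - np.eye(d_L) / d_L)
print(dist < 0.1)                   # small when d_E >> d_L
```

The distance scales roughly like 1/√d_E, which is why operators on the early radiation can mimic projections on the late radiation for typical states.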

To simplify the discussion, we treat gray-body factors by taking the transmission coefficients T to have unit magnitude for a few low partial waves and to vanish for higher partial waves. Since the total radiated energy is finite, this allows us to think of the Hawking radiation as defining a finite-dimensional Hilbert space.

Now, consider an outgoing Hawking mode in the later part of the radiation. We take this mode to be a localized packet with width of order rs, corresponding to a superposition of frequencies O(rs−1). Note that postulate 2 allows us to assign a unique observer-independent lowering operator b to this mode. We can project onto eigenspaces of the number operator b†b. In other words, an observer making measurements on the early radiation can know the number of photons that will be present in a given mode of the late radiation.

Following postulate 2, we can now relate this Hawking mode to one at earlier times, as long as we stay outside the stretched horizon. The earlier mode is blue-shifted, and so may have frequency ω* much larger than O(rs−1) though still sub-Planckian.

Next consider an infalling observer and the associated set of infalling modes with lowering operators a. Hawking radiation arises precisely because

b = ∫0∞ dω (B(ω)aω + C(ω)aω†) —– (2)

so that the full state cannot be both an a-vacuum (a|Ψ⟩ = 0) and a b†b eigenstate. Here again we have used our simplified gray-body factors.
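A single-mode caricature of equation (2) makes this concrete. In the sketch below (an assumption for illustration: one pair of real Bogoliubov coefficients with B² − C² = 1 on a truncated Fock space) the a-vacuum has a nonzero spread in b†b, so it cannot be a b†b eigenstate.

```python
# Single-mode caricature of the Bogoliubov relation b = B a + C a^dag.
# The parameter r and the truncation size are arbitrary demo choices.
import numpy as np

N = 40                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator a
r = 0.5                                       # squeezing-like parameter (assumed)
B, C = np.cosh(r), np.sinh(r)                 # satisfies B^2 - C^2 = 1
b = B * a + C * a.conj().T                    # caricature of eq. (2)

# canonical commutator [b, b^dag] = 1 holds away from the truncation edge
comm = b @ b.conj().T - b.conj().T @ b
assert np.allclose(np.diag(comm)[:N - 5], 1.0)

# the a-vacuum |0> is NOT an eigenstate of the number operator b^dag b
vac = np.zeros(N); vac[0] = 1.0
n_b = b.conj().T @ b
mean = vac @ n_b @ vac                  # <0|b^dag b|0> = C^2 = sinh^2(r) > 0
var = vac @ n_b @ n_b @ vac - mean**2   # nonzero spread: not an eigenstate
print(round(float(mean), 3), float(var) > 0)  # 0.272 True
```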

The application of postulates 1 and 2 has thus led to the conclusion that the infalling observer will encounter high-energy modes. Note that the infalling observer need not have actually made the measurement on the early radiation: to guarantee the presence of the high energy quanta it is enough that it is possible, just as shining light on a two-slit experiment destroys the fringes even if we do not observe the scattered light. Here we make the implicit assumption that the measurements of the infalling observer can be described in terms of an effective quantum field theory. Instead we could simply suppose that if he chooses to measure b†b he finds the expected eigenvalue, while if he measures the noncommuting operator a†a instead he finds the expected vanishing value. But this would be an extreme modification of the quantum mechanics of the observer, and does not seem plausible.

Figure below gives a pictorial summary of our argument, using ingoing Eddington-Finkelstein coordinates. The support of the mode b is shaded. At large distance it is a well-defined Hawking photon, in a predicted eigenstate of b†b by postulate 1. The observer encounters it when its wavelength is much shorter: the field must be in the ground state, aω†aω = 0, by postulate 4, and so cannot be in an eigenstate of b†b. But by postulate 2, the evolution of the mode outside the horizon is essentially free, so this is a contradiction.


Figure: Eddington-Finkelstein coordinates, showing the infalling observer encountering the outgoing Hawking mode (shaded) at a time when its size is ω*−1 ≪ rs. If the observer’s measurements are given by an eigenstate of a†a, postulate 1 is violated; if they are given by an eigenstate of b†b, postulate 4 is violated; if the result depends on when the observer falls in, postulate 2 is violated.

To restate our paradox in brief, the purity of the Hawking radiation implies that the late radiation is fully entangled with the early radiation, and the absence of drama for the infalling observer implies that it is fully entangled with the modes behind the horizon. This is tantamount to cloning. For example, it violates strong subadditivity of the entropy,

SAB + SBC ≥ SB + SABC —– (3)

Let A be the early Hawking modes, B the outgoing Hawking mode, and C its interior partner mode. For an old black hole, the entropy is decreasing and so SAB < SA. The absence of infalling drama means that SBC = 0 and so SABC = SA. Subadditivity then gives SA ≥ SB + SA, which fails substantially since the density matrix for system B by itself is thermal.

Actually, assuming the Page argument, the inequality is violated even more strongly: for an old black hole the entropy decrease is maximal, SAB = SA − SB, so that we get from subadditivity that SA ≥ 2SB + SA.
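The force of this argument is that strong subadditivity, inequality (3), is a theorem for genuine density matrices, so no quantum state can realize all three assumed entropy values at once. A numerical sketch (random tripartite qubit states, a generic illustration rather than a black-hole model) checks the inequality directly:

```python
# Check strong subadditivity S_AB + S_BC >= S_B + S_ABC on random
# tripartite mixed states. Subsystem labels A, B, C are qubits here,
# an arbitrary demo choice.
import numpy as np

rng = np.random.default_rng(1)

def entropy(rho):
    """von Neumann entropy of a density matrix, in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def partial_trace(rho, keep, dims):
    """Trace out every subsystem whose index is not in `keep`."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    nk = n
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=q, axis2=q + nk)
        nk -= 1
    d = int(np.prod([dims[k] for k in keep]))
    return rho.reshape(d, d)

dims = [2, 2, 2]                   # qubit subsystems A, B, C
for _ in range(200):
    X = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
    rho = X @ X.conj().T
    rho /= np.trace(rho).real      # random full-rank density matrix on A⊗B⊗C
    S = lambda keep: entropy(partial_trace(rho, keep, dims))
    lhs = S([0, 1]) + S([1, 2])    # S_AB + S_BC
    rhs = S([1]) + S([0, 1, 2])    # S_B + S_ABC
    assert lhs >= rhs - 1e-9       # strong subadditivity, eq. (3)
print("strong subadditivity holds on all samples")
```

Since the inequality holds for every sample, the assumed combination (SBC = 0 with SB thermal and SAB decreasing) is simply not realizable by any state.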

Note that the measurement of Nb takes place entirely outside the horizon, while the measurement of Na (real excitations above the infalling vacuum) must involve a region that extends over both sides of the horizon. These are noncommuting measurements, but by measuring Nb the observer can infer something about what would have happened if Na had been measured instead. For an analogy, consider a set of identically prepared spins. If each is measured along the x-axis and found to be +1/2, we can infer that a measurement along the z-axis would have had equal probability to return +1/2 and −1/2. The multiple spins are needed to reduce statistical variance; similarly in our case the observer would need to measure several modes Nb to have confidence that he was actually entangled with the early radiation.

One might ask if there could be a loophole in the argument: a physical observer will have a nonzero mass, and so the mass and entropy of the black hole will increase after he falls in. However, we may choose to consider a particular Hawking wavepacket which is already separated from the stretched horizon by a finite amount when it is encountered by the infalling observer. Thus by postulate 2 the further evolution of this mode is semiclassical and not affected by the subsequent merging of the observer with the black hole. In making this argument we are also assuming that the dynamics of the stretched horizon is causal.

Thus far, the asymptotically flat discussion applies to a black hole that is older than the Page time; we needed this in order to frame a sharp paradox using the entanglement with the Hawking radiation. However, we are discussing what should be intrinsic properties of the black hole, not dependent on its entanglement with some external system. After the black hole scrambling time, almost every small subsystem of the black hole is in an almost maximally mixed state. So if the degrees of freedom sampled by the infalling observer can be considered typical, then they are ‘old’ in an intrinsic sense. Our conclusions should then hold. If the black hole is a fast scrambler, the scrambling time is rs ln(rs/lP), after which we have to expect either drama for the infalling observer or novel physics outside the black hole.

We note that the three postulates that are in conflict – purity of the Hawking radiation, absence of infalling drama, and semiclassical behavior outside the horizon — are widely held even by those who do not explicitly label them as ‘black hole complementarity.’ For example, one might imagine that if some tunneling process were to cause a shell of branes to appear at the horizon, an infalling observer would just go ‘splat,’ and of course Postulate 4 would not hold.

Derivability from Relational Logic of Charles Sanders Peirce to Essential Laws of Quantum Mechanics


Charles Sanders Peirce made important contributions to logic, where he invented and elaborated a novel system of logical syntax and fundamental logical concepts. The starting point is the binary relation SiRSj between the two ‘individual terms’ (subjects) Si and Sj. In shorthand notation we represent this relation by Rij. Relations may be composed: whenever we have relations of the form Rij and Rjl, a third, transitive relation Ril emerges, following the rule

RijRkl = δjkRil —– (1)

In ordinary logic the individual subject is the starting point and it is defined as a member of a set. Peirce considered the individual as the aggregate of all its relations

Si = ∑j Rij —– (2)

The individual Si thus defined is an eigenstate of the Rii relation

RiiSi = Si —– (3)

The relations Rii are idempotent

Rii² = Rii —– (4)

and they span the identity

∑i Rii = 1 —– (5)

The Peircean logical structure bears resemblance to category theory. In categories the concept of transformation (transition, map, morphism or arrow) enjoys an autonomous, primary and irreducible role. A category consists of objects A, B, C,… and arrows (morphisms) f, g, h,… . Each arrow f is assigned an object A as domain and an object B as codomain, indicated by writing f : A → B. If g is an arrow g : B → C with domain B, the codomain of f, then f and g can be “composed” to give an arrow g∘f : A → C. The composition obeys the associative law h∘(g∘f) = (h∘g)∘f. For each object A there is an arrow 1A : A → A called the identity arrow of A. The analogy with the relational logic of Peirce is evident: Rij stands as an arrow, the composition rule is manifested in equation (1), and the identity arrow for A ≡ Si is Rii.

Rij may receive multiple interpretations: as a transition from the j state to the i state, as a measurement process that rejects all impinging systems except those in the state j and permits only systems in the state i to emerge from the apparatus, as a transformation replacing the j state by the i state. We proceed to a representation of Rij

Rij = |ri⟩⟨rj| —– (6)

where the state ⟨ri| is the dual of the state |ri⟩, and they obey the orthonormality condition

⟨ri |rj⟩ = δij —– (7)

It is immediately seen that our representation satisfies the composition rule equation (1). The completeness, equation (5), takes the form

∑n |rn⟩⟨rn| = 1 —– (8)

All relations remain satisfied if we replace the state |ri⟩ by |ξi⟩ where

|ξi⟩ = 1/√N ∑n |ri⟩⟨rn| —– (9)

with N the number of states. Thus we verify Peirce’s suggestion, equation (2), and the state |ri⟩ is derived as the sum of all its interactions with the other states. Rij acts as a projection, transferring from one r state to another r state

Rij |rk⟩ = δjk |ri⟩ —– (10)

We may think also of another property characterizing our states and define a corresponding operator

Qij = |qi⟩⟨qj | —– (11)

with

Qij |qk⟩ = δjk |qi⟩ —– (12)

and

∑n |qn⟩⟨qn| = 1 —– (13)

Successive measurements of the q-ness and r-ness of the states are provided by the operator

RijQkl = |ri⟩⟨rj |qk⟩⟨ql | = ⟨rj |qk⟩ Sil —– (14)

with

Sil = |ri⟩⟨ql | —– (15)

Considering the matrix elements of an operator A as Anm = ⟨rn |A |rm⟩ we find for the trace

Tr(Sil) = ∑n ⟨rn |Sil |rn⟩ = ⟨ql |ri⟩ —– (16)

From the above relation we deduce

Tr(Rij) = δij —– (17)

Any operator can be expressed as a linear superposition of the Rij

A = ∑i,j AijRij —– (18)

with

Aij =Tr(ARji) —– (19)
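The algebra built up in equations (1) to (19) can be verified in a few lines. In the sketch below (an assumption for concreteness: the |ri⟩ are taken as computational basis vectors, so the Rij become the familiar matrix units):

```python
# Matrix-unit representation of the Peircean relations R_ij = |r_i><r_j|,
# with |r_i> chosen as computational basis vectors for illustration.
import numpy as np

N = 4
basis = np.eye(N)                                # |r_0>, ..., |r_{N-1}>
R = lambda i, j: np.outer(basis[i], basis[j])    # R_ij = |r_i><r_j|

# composition rule (1): R_ij R_kl = delta_jk R_il
assert np.allclose(R(0, 1) @ R(1, 2), R(0, 2))
assert np.allclose(R(0, 1) @ R(2, 3), 0)

# idempotency (4), completeness (5), projection action (10)
assert np.allclose(R(2, 2) @ R(2, 2), R(2, 2))
assert np.allclose(sum(R(i, i) for i in range(N)), np.eye(N))
assert np.allclose(R(0, 1) @ basis[1], basis[0])

# trace relation (17): Tr(R_ij) = delta_ij
assert np.isclose(np.trace(R(0, 0)), 1) and np.isclose(np.trace(R(0, 1)), 0)

# expansion (18)-(19): A = sum_ij A_ij R_ij with A_ij = Tr(A R_ji)
A = np.arange(16.0).reshape(N, N)
A_rec = sum(np.trace(A @ R(j, i)) * R(i, j) for i in range(N) for j in range(N))
assert np.allclose(A, A_rec)
print("all relational-logic identities verified")
```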

The individual states could be redefined

|ri⟩ → ei |ri⟩ —– (20)

|qi⟩ → ei |qi⟩ —– (21)

without affecting the corresponding composition laws. However the overlap number ⟨ri |qj⟩ changes and therefore we need an invariant formulation for the transition |ri⟩ → |qj⟩. This is provided by the trace of the closed operation RiiQjjRii

Tr(RiiQjjRii) ≡ p(qj, ri) = |⟨ri |qj⟩|2 —– (22)

The completeness relation, equation (13), guarantees that p(qj, ri) may assume the role of a probability since

∑j p(qj, ri) = 1 —– (23)
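A small sketch of equations (22) and (23) for a qubit, with an assumed choice of bases (r as the computational basis, q as the Hadamard basis; any two orthonormal bases would do):

```python
# Invariant transition probability p(q_j, r_i) = Tr(R_ii Q_jj R_ii)
# = |<r_i|q_j>|^2 for two bases of a qubit; the basis choice is a demo
# assumption, not fixed by the relational formalism itself.
import numpy as np

r = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]              # r-basis
q = [np.array([1.0, 1.0]) / np.sqrt(2),                        # q-basis
     np.array([1.0, -1.0]) / np.sqrt(2)]

Rii = lambda i: np.outer(r[i], r[i])
Qjj = lambda j: np.outer(q[j], q[j])

for i in range(2):
    probs = [np.trace(Rii(i) @ Qjj(j) @ Rii(i)).real for j in range(2)]
    # eq. (22): the trace reproduces the Born-rule overlap
    assert np.allclose(probs, [abs(r[i] @ q[j]) ** 2 for j in range(2)])
    # eq. (23): probabilities sum to one
    assert np.isclose(sum(probs), 1.0)
print("probabilities:", probs)  # each 0.5 for these two bases
```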

We discover that, starting from the relational logic of Peirce, we obtain all the essential laws of Quantum Mechanics. Our derivation underlines the utmost relational nature of Quantum Mechanics and runs parallel to the analysis of the quantum algebra of microscopic measurement.

Quantum Music

Human neurophysiology suggests that artistic beauty cannot easily be disentangled from sexual attraction. It is, for instance, very difficult to appreciate Sandro Botticelli’s Primavera, the arguably “most beautiful painting ever painted,” when a beautiful woman or man is standing in front of that picture. Indeed so strong may be the distraction, and so deep the emotional impact, that it might not be unreasonable to speculate whether aesthetics, in particular beauty and harmony in art, could be best understood in terms of surrogates for natural beauty. This might be achieved through the process of artistic creation, idealization and “condensation.”


In this line of thought, in Hegelian terms, artistic beauty is the sublimation, idealization, completion, condensation and augmentation of natural beauty. Very differently from Hegel, who asserts that artistic beauty is “born of the spirit and born again, and the higher the spirit and its productions are above nature and its phenomena, the higher, too, is artistic beauty above the beauty of nature,” what is believed here is that human neurophysiology can hardly be disregarded in the human creation and perception of art, and in particular of beauty in art. Stated differently, we are inclined to believe that humans are so invariably determined by (or at least intertwined with) their natural basis that any neglect of it results in a humbling experience of irritation or even outright ugliness, no matter what social pressure groups or secret services may want to promote.

Thus, when it comes to the intensity of the experience, the human perception of artistic beauty, as sublime and refined as it may be, can hardly transcend natural beauty in its full exposure. In that way, art represents both the capacity and the humbling ineptitude of its creators and audiences.

Leaving these idealistic realms, let us come back to the quantization of musical systems. The universe of music consists of an infinity – indeed a continuum – of tones and of ways to compose, correlate and arrange them. It is not evident how to quantize sounds, and in particular music, in general. One way to proceed would be microphysical: start with the frequencies of sound waves in air and quantize the spectral modes of these (longitudinal) vibrations, very much as for phonons in solid state physics.

For the sake of relating to music, however, we take a different approach, not dissimilar to the Deutsch-Turing approach to universal (quantum) computability, or to Moore’s automata analogues of complementarity: we quantize a musical instrument restricted to one octave, realized by the eight white keyboard keys typically written c, d, e, f, g, a, b, c′ (in the C major scale).

In analogy to quantum information, a quantization of tones is considered, with a nomenclature analogous to the classical musical representation, to be followed up by introducing typical quantum mechanical features such as the coherent superposition of classically distinct tones, as well as entanglement and complementarity in music.
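One minimal way to make the coherent superposition of tones concrete is the following hypothetical toy model (my own sketch, not an established “quantum music” formalism): a note as a normalized amplitude vector over the octave, “performed” by Born-rule sampling.

```python
# Hypothetical toy model: a "quantum note" as a superposition over the
# eight white keys; listening to it collapses it via the Born rule.
# The amplitudes are random for the demo.
import numpy as np

keys = ["c", "d", "e", "f", "g", "a", "b", "c'"]   # one octave of white keys

rng = np.random.default_rng(42)
amps = rng.normal(size=8) + 1j * rng.normal(size=8)
amps /= np.linalg.norm(amps)       # normalized superposition over the octave

probs = np.abs(amps) ** 2          # Born probabilities for "hearing" each key
assert np.isclose(probs.sum(), 1.0)

# "performing" the note: repeated preparation and measurement
samples = rng.choice(keys, size=10_000, p=probs)
freqs = {k: float(np.mean(samples == k)) for k in keys}
print(max(freqs, key=freqs.get))   # the classically most likely key
```

Entanglement would then correspond to amplitude vectors over pairs of notes that do not factor into single-note states.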

Priest’s Razor: Metaphysics. Note Quote.


The very idea that some mathematical piece employed to develop an empirical theory may furnish us information about unobservable reality requires some care and philosophical reflection. The greatest difficulty for the scientifically minded metaphysician consists in furnishing the means for a “reading off” of ontology from science. What can come in, and what can be left out? Different strategies may provide for different results, and, as we know, science does not wear its metaphysics on its sleeves. The first worry may be making the metaphysical piece compatible with the evidence furnished by the theory.

The strategy adopted by da Costa and de Ronde may be called top-down: investigate higher science and, judging from the features of the objects described by the theory, look for the appropriate logic to endow it with just those features. In this case (quantum mechanics), there is the theory, apparently attributing contradictory properties to entities, so that a logic that does cope with such features of objects is called forth. Now, even though we believe that this is in great measure the right methodology for pursuing metaphysics within scientific theories, there are some further methodological principles that also play an important role in this kind of investigation, principles that seem to lessen the preferability of the paraconsistent approach over its alternatives.

To begin with, let us focus on the paraconsistent property attribution principle. According to this principle, the properties corresponding to the vectors in a superposition are all attributable to the system; they are all real. The first problem with this rendering of properties (whether they are taken to be actual or just potential) is that such a superabundance of properties may not be justified: not every bit of the mathematical formulation of a theory needs to be reified. Some parts of the theory are just that: mathematics required to make things work; others may correspond to genuine features of reality. The greatest difficulty is to distinguish them, but we should not assume that every bit corresponds to an entity in reality. So, in the absence of any justified reason to assume superpositions as further entities in the realm of properties of quantum systems, we may keep them as not representing actual properties (even if merely possible or potential ones).

That is, when one takes into account other virtues of a metaphysical theory, such as economy and simplicity, the paraconsistent approach seems to inflate too much the population of our world. In the presence of more economical candidates doing the same job and absence of other grounds on which to choose the competing proposals, the more economical approaches take advantage. Furthermore, considering economy and the existence of theories not postulating contradictions in quantum mechanics, it seems reasonable to employ Priest’s razor – the principle according to which one should not assume contradictions beyond necessity – and stick with the consistent approaches. Once again, a useful methodological principle seems to deem the interpretation of superposition as contradiction as unnecessary.

The paraconsistent approach could take advantage over its competitors, despite its disadvantage in accommodating such theoretical virtues, if it could endow quantum mechanics with a better understanding of quantum phenomena, or even if it could add some explanatory power to the theory. In the face of some such gain, we could allow for some ontological extravagance: in most cases explanatory power rules over matters of economy. However, it does not seem that the approach is going to achieve any such result.

Besides this lack of additional explanatory power or enlightenment about the theory, there are further difficulties here. There is a complete lack of symmetry with the standard case of property attribution in quantum mechanics. As it is usually understood, by adopting the minimal property attribution principle, it is not contentious that when a system is in an eigenstate of an observable, we may reasonably infer that the system has the property represented by the associated observable, so that the probability of obtaining the associated eigenvalue is 1. In the case of superpositions, if they represented properties of their own, there is a complete disanalogy with that situation: probabilities play a different role, and a system has the contradictory property attributed by a superposition irrespective of probability attribution and of the role of probabilities in determining measurement outcomes. In a superposition, according to the proposal we are analyzing, probabilities play no role; the system simply has a given contradictory property by the simple fact of being in a (certain) superposition.

For another disanalogy with the usual case, one does not expect to observe a system in such a contradictory state: every measurement gives us a system in a particular state, never in a superposition. If that is a property on the same footing as any other, why can’t we measure it? Obviously, this does not mean that we take measurement as the sign of the real, but when doubt strikes, it may be good advice not to assume too much on the unobservable side. As we have observed before, a new problem is created by this interpretation: besides explaining what it is that makes a measurement give a specific result when the system measured is in a superposition (a problem usually addressed by the collapse postulate, which seems to be out of fashion now), one must also explain why and how the contradictory properties that do not get actualized vanish. That is, besides explaining how one particular property becomes actual, one must explain how the properties possessed by the system that did not become actual vanish.

Furthermore, even if states like 1/√2 (| ↑x ⟩ + | ↓x ⟩) may provide an example of a candidate for a contradictory property, because the system seems to have both spin up and spin down in a given direction, there are doubts when the distribution of probabilities is different, in cases such as 2/√7 | ↑x ⟩ + √(3/7) | ↓x ⟩. What are we to think about that? Perhaps there is still a contradiction, but one a little more inclined to | ↓x⟩ than to | ↑x⟩? It is difficult to see how a contradiction arises in such cases. Or should we just ignore the probabilities and take the states composing the superposition as somehow opposed, forming a contradiction anyway? That would put metaphysics way ahead of science, leaving the role of probabilities in quantum mechanics unexplained in order to let a metaphysical view of properties in.
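For reference, the Born weights of the tilted state just mentioned can be computed directly; the sketch below (writing |↑x⟩ and |↓x⟩ as basis vectors, purely for illustration) shows the asymmetry that the probabilities encode and that the "contradiction" reading leaves unexplained.

```python
# Born weights of the state (2/sqrt(7))|up_x> + sqrt(3/7)|down_x>.
# Representing |up_x>, |down_x> as basis vectors is a demo convention.
import numpy as np

up = np.array([1.0, 0.0])    # |up_x>
down = np.array([0.0, 1.0])  # |down_x>

psi = (2 / np.sqrt(7)) * up + np.sqrt(3 / 7) * down
assert np.isclose(np.linalg.norm(psi), 1.0)   # properly normalized

p_up, p_down = np.abs(psi) ** 2
print(round(p_up, 4), round(p_down, 4))       # 0.5714 0.4286 (= 4/7 and 3/7)
```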

von Neumann & Dis/belief in Hilbert Spaces

I would like to make a confession which may seem immoral: I do not believe absolutely in Hilbert space any more.

— John von Neumann, letter to Garrett Birkhoff, 1935.


The mathematics: Let us consider the raison d’être of the Hilbert space formalism. Why would one need all this ‘Hilbert space stuff’, i.e. the continuum structure, the field structure of the complex numbers, a vector space over it, inner-product structure, etc.? According to von Neumann, he simply used it because it happened to be ‘available’. The use of linear algebra and complex numbers in so many different scientific areas, as well as results in model theory, clearly shows that quite a bit of modeling can be done using Hilbert spaces. On the other hand, we can also model any movie by means of the data stream that runs through your cables when watching it. But does this mean that these data streams make up the stuff a movie is made of? Clearly not; we should rather turn our attention to the stuff that is being taught at drama schools and directing schools. Similarly, von Neumann turned his attention to the actual physical concepts behind quantum theory, more specifically, the notion of a physical property and the structure imposed on these by the peculiar nature of quantum observation. His quantum logic gave the resulting ‘algebra of physical properties’ a privileged role.

All of this leads us to the physics of it. Birkhoff and von Neumann crafted quantum logic in order to emphasize the notion of quantum superposition. In terms of states of a physical system and properties of that system, superposition means that the strongest property which is true for two distinct states is also true for states other than the two given ones. In order-theoretic terms this means, representing states by the atoms of a lattice of properties, that the join p ∨ q of two atoms p and q is also above other atoms. From this it easily follows that the distributive law breaks down: given an atom r ≠ p, q with r < p ∨ q, we have r ∧ (p ∨ q) = r while (r ∧ p) ∨ (r ∧ q) = 0 ∨ 0 = 0. Birkhoff and von Neumann, as well as many others, believed that understanding the deep structure of superposition is the key to obtaining a better understanding of quantum theory as a whole.
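The order-theoretic argument can be checked concretely for one-dimensional subspaces (atoms) of C². The sketch below represents subspaces by projectors and uses the fact that, for projectors P and Q, a vector lies in both ranges exactly when it is an eigenvector of P + Q with eigenvalue 2 (an implementation choice, not part of the quantum-logic formalism):

```python
# Failure of distributivity in the property lattice of a qubit:
# p = span|0>, q = span|1>, r = span|+>, so p v q is the whole space.
import numpy as np

def meet_dim(P, Q):
    """Dimension of the intersection of two subspaces given as projectors."""
    w = np.linalg.eigvalsh(P + Q)        # intersection = eigenvalue-2 space
    return int(np.sum(w > 2 - 1e-9))

def join_dim(P, Q):
    """Dimension of the join (span of the union of the two subspaces)."""
    return int(np.linalg.matrix_rank(np.hstack([P, Q]), tol=1e-9))

proj = lambda v: np.outer(v, v.conj()) / (v.conj() @ v)
p = proj(np.array([1.0, 0.0]))
q = proj(np.array([0.0, 1.0]))
r = proj(np.array([1.0, 1.0]))           # a third atom, r != p, q

assert join_dim(p, q) == 2               # p v q is the whole space
assert meet_dim(r, np.eye(2)) == 1       # so r ^ (p v q) = r
assert meet_dim(r, p) == 0 and meet_dim(r, q) == 0
# hence (r ^ p) v (r ^ q) = 0 while r ^ (p v q) = r: distributivity fails
print("distributive law fails in the qubit property lattice")
```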

For Schrödinger, by contrast, the key is the behavior of compound quantum systems, described by the tensor product. While the quantum information endeavor is to a great extent the result of exploiting this important insight, the language of the field is still very much that of strings of complex numbers, which is akin to the strings of 0’s and 1’s in the early days of computer programming. If the manner in which we describe compound quantum systems captures so much of the essence of quantum theory, then it should be at the forefront of the presentation of the theory, and not be preceded by continuum structure, the field of complex numbers, a vector space over the latter, etc., only to pop up later as some secondary construct. How much quantum phenomena can be derived from ‘compoundness + epsilon’? It turns out that epsilon can be taken to be very little, surely not involving anything like a continuum, fields, or vector spaces, but merely a ‘2D space’ of temporal composition and compoundness, together with some very natural, purely operational assertions, including one which in a constructive manner asserts entanglement; among many other things, trace structure then follows.