Quantum Informational Biochemistry. Thought of the Day 71.0


A natural extension of the information-theoretic Darwinian approach to biological systems is obtained by taking into account that biological systems are, at their fundamental level, constituted by physical systems. It is therefore through the interaction of elementary physical systems that the biological level is reached, after the size of the system has grown by several orders of magnitude and only for certain associations of molecules – biochemistry.

In particular, this viewpoint lies at the foundation of the “quantum brain” project established by Hameroff and Penrose (Shadows of the Mind). They tried to lift quantum physical processes associated with microsystems composing the brain to the level of consciousness. Microtubules were considered the basic quantum information processors. This project, as well as the general project of reducing biology to quantum physics, has its strong and weak sides. One of the main problems is that decoherence should quickly wash out quantum features such as superposition and entanglement. (Hameroff and Penrose would disagree with this statement; they try to develop models of the hot and macroscopic brain that preserve the quantum features of its elementary micro-components.)

However, even if we assume that microscopic quantum physical behavior disappears with increasing size and number of atoms due to decoherence, it seems that the basic quantum features of information processing can survive in macroscopic biological systems (operating on temporal and spatial scales which are essentially different from the scales of the quantum micro-world). The associated information processor for a mesoscopic or macroscopic biological system would be a network of increasing complexity formed by the elementary probabilistic classical Turing machines of its constituents. Such a composite network of processors can exhibit behavioral signatures similar to quantum ones. We call such biological systems quantum-like. In a series of works, Asano and coauthors (Quantum Adaptivity in Biology: From Genetics to Cognition) developed an advanced formalism for modeling the behavior of quantum-like systems, based on the theory of open quantum systems and the more general theory of adaptive quantum systems. This formalism is known as quantum bioinformatics.

The present quantum-like model of biological behavior is of the operational type (as is the standard quantum mechanical model endowed with the Copenhagen interpretation). It cannot explain the physical and biological processes behind quantum-like information processing. Clarifying the origin of quantum-like biological behavior is related, in particular, to understanding the nature of entanglement and its role in the processes of interaction and cooperation in physical and biological systems. Qualitatively, the information-theoretic Darwinian approach supplies an interesting possibility of explaining the generation of quantum-like information processors in biological systems. Hence it can serve as the bio-physical background for quantum bioinformatics. There is an intriguing point here: if the information-theoretic Darwinian approach is right, then it would be possible to produce quantum information from optimal flows of past, present and anticipated classical information in any classical information processor endowed with a complex enough program. Thus the unified evolutionary theory would supply a physical basis to Quantum Information Biology.

Evolutionary Game Theory. Note Quote


In classical evolutionary biology the fitness landscape for possible strategies is considered static. Optimization theory is therefore the usual tool for analyzing the evolution of strategies, which consequently tend to climb the peaks of the static landscape. In more realistic scenarios, however, the evolution of populations modifies the environment, so that the fitness landscape becomes dynamic. In other words, the maxima of the fitness landscape depend on the number of specimens that adopt each strategy (a frequency-dependent landscape). In this case, when the evolution depends on the agents’ actions, game theory is the adequate mathematical tool to describe the process. But this is precisely the scheme in which the evolving physical laws (i.e. algorithms or strategies) are generated from agent-agent interactions (a bottom-up process) submitted to natural selection.
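As a purely illustrative sketch (the game, the payoff values and the code are mine, not the source’s), frequency-dependent fitness can be made concrete with the textbook replicator dynamics for a hawk-dove game: the fitness of each strategy, and hence the location of the “peak”, shifts with the current composition of the population.

```python
import numpy as np

# Illustrative hawk-dove payoffs: V = value of the contested resource, C = cost of escalation.
V, C = 2.0, 3.0
A = np.array([[(V - C) / 2.0, V],        # row 0: payoff to Hawk against (Hawk, Dove)
              [0.0,           V / 2.0]]) # row 1: payoff to Dove against (Hawk, Dove)

x = np.array([0.1, 0.9])  # initial frequencies of (Hawk, Dove)
dt = 0.01
for _ in range(20000):
    f = A @ x                    # frequency-dependent fitness of each strategy
    phi = x @ f                  # mean fitness of the population
    x = x + dt * x * (f - phi)   # replicator equation: dx_i/dt = x_i (f_i - phi)
    x = x / x.sum()              # re-normalize to guard against numerical drift

print(x)  # ≈ [0.667, 0.333]: the mixed equilibrium at hawk frequency V/C when C > V
```

With C > V the two fitnesses cross at a hawk frequency of V/C, so the population settles at a mixed composition rather than at a fixed peak of a static landscape.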

The concept of evolutionarily stable strategy (ESS) is central to evolutionary game theory. An ESS is defined as a strategy that cannot be displaced by any alternative strategy when it is followed by the great majority – almost all – of the systems in a population. In general, an ESS is not necessarily optimal; however, it might be assumed that in the last stages of evolution – before the quantum equilibrium is achieved – the fitness landscape of possible strategies could be considered static, or at least slowly varying. In this simplified case an ESS would be one with the highest payoff, therefore satisfying an optimizing criterion. Different ESSs could exist in other regions of the fitness landscape.
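For reference, the standard Maynard Smith condition (not spelled out in the text above): a strategy s* is an ESS if, for every alternative strategy s ≠ s*, either

E(s*, s*) > E(s, s*),  or  E(s*, s*) = E(s, s*) and E(s*, s) > E(s, s),

where E(a, b) denotes the payoff to a player using strategy a against a player using strategy b; the first clause says s* cannot be invaded outright, the second that any payoff-neutral mutant loses in encounters with its own kind.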

In the information-theoretic Darwinian approach it seems plausible to adopt, as the optimization criterion, the optimization of the information flows of the system. A set of three regulating principles could be:

Structure: The complexity of the system is optimized (maximized). The definition adopted for complexity is Bennett’s logical depth, which for a binary string is the time needed to execute the minimal program that generates that string. There is no generally accepted definition of complexity, nor is there a consensus on the relation between the increase of complexity – for a given definition – and Darwinian evolution. However, there seems to be some agreement on the fact that, in the long term, Darwinian evolution should drive an increase in complexity in the biological realm, for an adequate natural definition of this concept. The complexity of a system at time t in this theory would then be the Bennett logical depth of the program stored at time t in its Turing machine. The increase of complexity is a characteristic of Lamarckian evolution, and it is also admitted that the trend of evolution in the Darwinian theory is in the direction in which complexity grows, although whether this tendency depends on the timescale – or on other factors – is still not very clear.

Dynamics: The information outflow of the system is optimized (minimized). The information here is the Fisher information measure for the probability density function of the position of the system. According to S. A. Frank, natural selection acts by maximizing the Fisher information within a Darwinian system. As a consequence, assuming that the flow of information between a system and its surroundings can be modeled as a zero-sum game, Darwinian systems would follow minimum Fisher information dynamics (the standard Fisher information formula is recalled just after these three principles).

Interaction: The interaction between two subsystems optimizes (maximizes) the complexity of the total system. The complexity is again equated to Bennett’s logical depth. The role of Interaction is central in the generation of composite systems, and therefore in the structure of the information processor of composite systems, which results from the logical interconnections among the processors of the constituents. There is an enticing option of defining the complexity of a system in contextual terms, as the capacity of a system for anticipating the behavior at t + ∆t of the surrounding systems included in the sphere of radius r centered at the position X(t) occupied by the system. This definition would directly drive the maximization of the predictive power of the systems that maximize their complexity. However, this magnitude would be very difficult even to estimate, in principle much more so than the usual definitions of complexity.
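For reference, the Fisher information invoked in the Dynamics principle above is a standard quantity, recalled here for clarity (this formula is a textbook definition, not taken from the source). For a one-dimensional position density p(x) it reads

I[p] = ∫ (1/p(x)) (dp(x)/dx)² dx,

and the Cramér-Rao inequality bounds the variance of any unbiased estimate of position by Var ≥ 1/I[p]. Minimizing I[p] and maximizing the Cramér-Rao bound are therefore two readings of the same extremal principle, which is how the “minimum Fisher information” and “maximum Cramér-Rao bound” formulations used below come to the same thing.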

Quantum behavior of microscopic systems should now emerge from the ESS. In other words, the postulates of quantum mechanics should be deducible from the application of the three regulating principles to our physical systems endowed with an information processor.

Let us apply Structure. It is reasonable to consider that maximizing the complexity of a system would in turn maximize its predictive power. And this optimal statistical inference capacity would plausibly induce the complex Hilbert space structure for the system’s space of states. Let us now consider Dynamics. This is basically the application of the principle of minimum Fisher information, or maximum Cramér-Rao bound, to the probability distribution function for the position of the system. The concept of entanglement seems to be determinant for studying the generation of composite systems, in this theory in particular through the application of Interaction. The theory admits a simple model that characterizes the entanglement between two subsystems as the mutual exchange of randomizers (R1, R2), programs (P1, P2) – with their respective anticipation modules (A1, A2) – and wave functions (Ψ1, Ψ2). In this way, each subsystem can anticipate not only the behavior of its own surrounding systems, but also that of the environment of its entangled partner. In addition, entanglement can be considered a natural phenomenon in this theory, a consequence of the tendency to increase complexity, and therefore, in a certain sense, an experimental support for the theory.
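A minimal toy sketch of the exchange model just described, under my own naming conventions (the Subsystem class, the entangle function and the toy update rules are all illustrative assumptions, not the authors’ formalism): each subsystem carries a randomizer R, a program P, an anticipation module A and a wave function Ψ, and entangling two subsystems gives each a copy of the partner’s quadruple, so that each can also anticipate its partner’s environment.

```python
from dataclasses import dataclass
from typing import Callable, Optional
import random

@dataclass
class Subsystem:
    """Toy elementary system: randomizer R, program P, anticipation module A,
    and a crude stand-in for a wave function Psi (all names illustrative)."""
    R: random.Random                       # randomizer (intrinsic randomness)
    P: Callable[[float], float]            # program: deterministic update rule
    A: Callable[[float], float]            # anticipation module: predicts surroundings
    Psi: list                              # stand-in for the wave function
    partner: Optional["Subsystem"] = None  # copy of the entangled partner's quadruple

def entangle(s1: Subsystem, s2: Subsystem) -> None:
    """Mutual exchange of (R, P, A, Psi): afterwards each subsystem can also run
    its partner's anticipation module, i.e. anticipate the partner's environment."""
    s1.partner = Subsystem(s2.R, s2.P, s2.A, list(s2.Psi))
    s2.partner = Subsystem(s1.R, s1.P, s1.A, list(s1.Psi))

# Usage sketch: after entangling, subsystem a anticipates with its own module and with b's.
a = Subsystem(random.Random(0), P=lambda s: s + 1.0, A=lambda s: 2.0 * s, Psi=[1.0, 0.0])
b = Subsystem(random.Random(1), P=lambda s: s - 1.0, A=lambda s: s ** 2, Psi=[0.0, 1.0])
entangle(a, b)
print(a.A(3.0), a.partner.A(3.0))  # 6.0 9.0
```

Nothing quantum is computed here; the point is only the bookkeeping of what is exchanged when two subsystems become “entangled” in the sense of the model.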

In addition, the information-theoretic Darwinian approach is a minimalist realist theory – every system follows a continuous trajectory in time, as in Bohmian mechanics – and a local theory in physical space. In this theory apparent nonlocality, as in violations of Bell’s inequality, would be an artifact of the anticipation module in the information space, although randomness would necessarily be intrinsic to nature through the random number generator methodologically associated with every fundamental system at t = 0, an essential ingredient to start and fuel – through variation – Darwinian evolution. As time increases, random events determined by the random number generators would progressively be replaced by causal events determined by the evolving programs that gradually take control of the elementary systems. Randomness would be displaced by causality as physical Darwinian evolution gave rise to the quantum equilibrium regime, but not completely, since randomness would play a crucial role in the optimization of strategies – and thus of information flows – as game theory states.

Acausal Propagation. Thought of the Day 62.1


Whereas the Proca theory is the unique local linear massive variant of Maxwell’s electromagnetism, the most famous massive gravity with 6∞³ degrees of freedom, the Freund-Maheshwari-Schonberg massive gravity, is just one member (albeit the best in some respects) of a 2-parameter family of massive theories of gravity, all of which satisfy universal coupling. Adding a mass term involves adding a term quadratic in the potential; higher-order (cubic, quartic, etc.) self-interaction terms might also be present. The nonlinearity of the Einstein tensor implies, in contrast to the electromagnetic case, that there is no obviously best choice for defining the gravitational potential. While any such definition requires a background metric η_{μν} (typically flat space-time) in order that the potential vanish when gravity is turned off, thus making massive theories bimetric, one can still choose among g_{μν} − η_{μν}, √−g g^{μν} − √−η η^{μν}, g^{μν} − η^{μν} and so on, as well as various nonlinear choices such as g_{μα}η^{αβ}g_{βν} − η_{μν}. In some cases the availability of nonlinear field redefinitions might make some expressions that look like a mass term + interaction term with one definition of the gravitational potential appear as a pure quadratic mass term with another definition; nonetheless the Einstein tensor remains nonlinear, no matter what definition of the potential is used. By contrast, the linearity of the Maxwell field strength tensor makes it natural to have a mass term that is also linear in A_μ in the field equations (and hence quadratic in A_μ in the Lagrangian density). While one can explore introducing nonlinear algebraic terms in A_μ describing self-interactions in electromagnetism, such terms induce acausal propagation if not chosen carefully.
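For concreteness, the standard Proca expressions are worth recalling here (textbook results, added for reference rather than drawn from the source). With F_{μν} = ∂_μA_ν − ∂_νA_μ,

L = −(1/4) F_{μν}F^{μν} + (1/2) m² A_μA^μ,   ∂_μF^{μν} + m² A^ν = 0,

so the mass term is quadratic in A_μ in the Lagrangian density and linear in A_μ in the field equations. Taking the divergence of the field equation gives m² ∂_νA^ν = 0, so the Lorenz condition is forced and only the three massive spin-1 modes propagate; nonlinear algebraic self-interaction terms can spoil this automatic constraint, which is one route to the acausal propagation mentioned above.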

Proca’s Abelian Sector: Approximate Equivalence. Thought of the Day 62.0


The underdetermination between the quantized Maxwell theory and the lower-mass quantized Proca theories is permanent (at least unless a photon mass is detected, in which case Proca wins). It does not immediately follow that our best science leaves the photon mass unspecified apart from empirical bounds, however. Electromagnetism can be unified with an SU(2) Yang-Mills field describing the weak nuclear force into the electroweak theory. The resulting electroweak unification of course is not simply a logical conjunction of the electromagnetic and weak theories; the theories undergoing unification are modified in the process. Maxwell’s theory can participate in this unification; can Proca theories participate while preserving renormalizability and unitarity? Probably they can. Thus evidently the underdetermination between Maxwell and Proca persists even in electroweak theory, though this unresolved rivalry is not widely noticed. There is some non-uniqueness in the photon mass term, partly due to the rotation by the weak mixing angle between the original fields in the SU(2) × U(1) group and the mass eigenstates after spontaneous symmetry breaking. Thus the physical photon is not simply the field corresponding to the original U(1) group, contrary to naive expectations. There are also various empirically negligible but perhaps conceptually important effects that can arise in such theories. Among these are charge dequantization – the charges of charged particles are no longer integral multiples of a smallest charge – and perhaps charge non-conservation. Crucial to the possibility of including a Proca-type mass term (as opposed to merely getting mass by spontaneous symmetry breaking) is the non-semi-simple nature of the gauge group SU(2) × U(1): this group has a subgroup U(1) that is Abelian and that commutes with the whole of the larger group. Were the electroweak theory to be embedded in some larger semi-simple group such as SU(5), then no Proca mass term could be included.

Dialectics of God: Lautman’s Mathematical Ascent to the Absolute. Paper.


For the figure and translation, visit Fractal Ontology.

The first of Lautman’s two theses (On the unity of the mathematical sciences) takes as its starting point a distinction that Hermann Weyl made in his work on group theory and quantum mechanics. Weyl distinguished between ‘classical’ mathematics, which found its highest flowering in the theory of functions of complex variables, and the ‘new’ mathematics represented by (for example) the theory of groups and abstract algebras, set theory and topology. For Lautman, the ‘classical’ mathematics of Weyl’s distinction is essentially analysis, that is, the mathematics that depends on some variable tending towards zero: convergent series, limits, continuity, differentiation and integration. It is the mathematics of arbitrarily small neighbourhoods, and it reached maturity in the nineteenth century. On the other hand, the ‘new’ mathematics of Weyl’s distinction is ‘global’; it studies the structures of ‘wholes’. Algebraic topology, for example, considers the properties of an entire surface rather than aggregations of neighbourhoods. Lautman re-draws the distinction:

In contrast to the analysis of the continuous and the infinite, algebraic structures clearly have a finite and discontinuous aspect. Though the elements of a group, field or algebra (in the restricted sense of the word) may be infinite, the methods of modern algebra usually consist in dividing these elements into equivalence classes, the number of which is, in most applications, finite.

In his other major thesis (Essay on the notions of structure and existence in mathematics), Lautman gives his dialectical thought a more philosophical and polemical expression. The thesis is composed of ‘structural schemas’ and ‘origination schemas’. The three structural schemas are: local/global, intrinsic properties/induced properties, and the ‘ascent to the absolute’. The first two of these three schemas are close to Lautman’s ‘unity’ thesis. The ‘ascent to the absolute’ is a different sort of pattern; it involves a progress from mathematical objects that are in some sense ‘imperfect’ towards an object that is ‘perfect’ or ‘absolute’. His two mathematical examples of this ‘ascent’ are class field theory, which ‘ascends’ towards the absolute class field, and the covering surfaces of a given surface, which ‘ascend’ towards a simply-connected universal covering surface. In each case there is a corresponding sequence of nested subgroups, which induces a ‘stepladder’ structure on the ‘ascent’. This dialectical pattern is rather different to the others. The earlier examples were of pairs of notions (finite/infinite, local/global, etc.), and neither member of any pair was inferior to the other. Lautman argues that on some occasions finite mathematics offers insight into infinite mathematics. In mathematics, the finite is not a somehow imperfect version of the infinite. Similarly, the ‘local’ mathematics of analysis may depend for its foundations on ‘global’ topology, but the former is not a botched or somehow inadequate version of the latter. Lautman introduces the section on the ‘ascent to the absolute’ by rehearsing Descartes’s argument that his own imperfections lead him to recognise the existence of a perfect being (God). Man (for Descartes) is not the dialectical opposite of or alternative to God; rather, man is an imperfect image of his creator. In a similar movement of thought, according to Lautman, reflection on ‘imperfect’ class fields and covering surfaces leads mathematicians up to ‘perfect’, ‘absolute’ class fields and covering surfaces respectively.
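A standard illustration of the ‘stepladder’ (my example, not Lautman’s): the connected covering spaces of the circle S¹, whose fundamental group is ℤ, correspond to the nested subgroups nℤ ⊂ ℤ. The subgroup nℤ yields the n-fold cover S¹ → S¹, z ↦ zⁿ, and the trivial subgroup yields, at the top of the ladder, the simply connected universal cover ℝ → S¹.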

Albert Lautman Dialectics in mathematics

Is General Theory of Relativity a Gauge Theory? Trajectories of Diffeomorphism.


Historically the problem of observables in classical and quantum gravity is closely related to the so-called Einstein hole problem, i.e. to some of the consequences of general covariance in general relativity (GTR).

The central question is the physical meaning of the points of the event manifold underlying GTR. In contrast to pure mathematics, this is a non-trivial point in physics. While in pure differential geometry one simply decrees the existence of, for example, a (pseudo-)Riemannian manifold with a differentiable structure (i.e., an appropriate cover with coordinate patches) plus a (pseudo-)Riemannian metric g, the relation to physics is not simply one-to-one. In popular textbooks on GTR it is frequently stated that all diffeomorphic (space-time) manifolds M are physically indistinguishable. Put differently:

S − T = Riem/Diff —– (1)

This becomes particularly virulent in the Einstein hole problem: assuming that we have a region of space-time free of matter, we can apply a local diffeomorphism which acts only within this hole, leaving the exterior invariant. We thus get, in general, two different metric tensors

g(x), g′(x) := Φ ◦ g(x) —– (2)

in the hole, while certain initial conditions lying outside of the hole are unchanged, thus yielding two different solutions of the Einstein field equations.
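The step tacitly used here is the general covariance of the field equations (a standard fact, spelled out for clarity rather than taken from the text): if g solves the vacuum Einstein equations inside the matter-free hole, so does its transform,

G_{μν}[g] = 0  ⟹  G_{μν}[Φ ◦ g] = 0,

while Φ is the identity outside the hole, so both metrics agree with the same exterior data and initial conditions yet differ inside.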

Many physicists consider this to be a violation of determinism (which it is not!) and hence argue that the class of observable quantities has to be drastically reduced in (quantum) gravity theory. They follow the line of reasoning developed by Dirac in the context of gauge theory, thus implying that GTR is essentially also a gauge theory. This then leads to the conclusion:

Dirac observables in quantum gravity are quantities which are invariant under the diffeomorphism group Diff, acting from M to M, i.e.

Φ : M → M —– (3)

One should note that, with respect to physical observations, there is no violation of determinism. An observer can never really observe two different metric fields on one and the same space-time manifold; this can only happen on paper. The observer will use a fixed measurement protocol, using rods and clocks in, e.g., a local inertial frame where special relativity locally applies, and then extend the results to general coordinate frames.

We get a certain orbit under Diff if we start from a particular manifold M with a metric tensor g and take the orbit

{(M, Φ ◦ g) : Φ ∈ Diff} —– (4)

In general we have additional fields and matter distributions on M which are transformed accordingly.

Note that not even scalars are invariant in general in the above sense, i.e., not even the Ricci scalar is observable in the Dirac sense:

R(x) ≠ Φ ◦ R(x) —– (5)

in the generic case. This would imply that the class of admissible observables can be pretty small (even empty!). Furthermore, it follows that points of M are not a priori distinguishable. On the other hand, many consider the Ricci scalar at a point to be an observable quantity.
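What (5) expresses is just the standard transformation behaviour of a scalar under an active diffeomorphism, spelled out here for clarity with the pushforward reading of Φ ◦ R:

(Φ ◦ R)(x) = R(Φ⁻¹(x)),

which at a generic coordinate point x differs from R(x) unless Φ happens to be an isometry of g. The value of the Ricci scalar at a bare manifold point is therefore not a Dirac observable, even though the scalar field as a whole is a perfectly good geometric object.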

This leads to the question whether GTR is a true gauge theory, or only apparently so at first glance while on a more fundamental level being something different. In the words of Kuchar (What is observable..),

Quantities non-invariant under the full diffeomorphism group are observable in gravity.

The reason for these apparently diverging opinions stems from the role that reference systems are assumed to play in GTR, with some arguing that the gauge property of general coordinate invariance is only of a formal nature.

In the hole argument it is, for example, argued that it is important to add some particle trajectories which cross each other, thus generating concrete events on M. As these point events transform accordingly under a diffeomorphism, the distance between the corresponding points x, y equals the distance between the transformed points Φ(x), Φ(y), and is thus a Dirac observable. The coordinates x or y themselves, on the other hand, are not observable.

One should note that this observation is somewhat tautological in the realm of Riemannian geometry, as the metric is an absolute quantity. Put differently (and somewhat sloppily), ds² is invariant under passive and, by the same token, active coordinate transformations (diffeomorphisms), because, while conceptually different, the transformation properties under the latter operations are defined exactly as in the passive case. In the case of GTR this absolute quantity enters via the equivalence principle, i.e., distances are measured for example in a local inertial frame (LIF), where special relativity holds, and are then generalized to arbitrary coordinate systems.
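The invariance appealed to is just the tensor transformation law (standard, stated here for completeness rather than drawn from the text): under a change of coordinates x → x′,

ds² = g_{μν}(x) dx^μ dx^ν = g′_{αβ}(x′) dx′^α dx′^β,  with  g′_{αβ}(x′) = (∂x^μ/∂x′^α)(∂x^ν/∂x′^β) g_{μν}(x),

and an active diffeomorphism is, by the definition recalled above, assigned exactly the same transformation behaviour, which is why ds² comes out “absolute” on either reading.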

Sellarsian Intentionality. Thought of the Day 59.0


Sellars developed a theory of intentionality that seems calculated to so construe intentional phenomena as to make them compatible with developments in the sciences.

Now if thoughts are items which are conceived in terms of the roles they play, then there is no barrier in principle to the identification of conceptual thinking with neurophysiological process. There would be no “qualitative” remainder to be accounted for. The identification, curiously enough, would be even more straightforward than the identification of the physical things in the manifest image with complex systems of physical particles. And in this key, if not decisive, respect, the respect in which both images are concerned with conceptual thinking (which is the distinctive trait of man), the manifest and scientific images could merge without clash in the synoptic view. (Philosophy and the Scientific Image of Man).

The first thing to notice is that Sellars maintains that intentionality is irreducible in the sense that we cannot define, in any of the vocabularies of the natural sciences, concepts equivalent to the concepts of intentionality. The language of intentionality is introduced as an autonomous explanatory vocabulary, tied, of course, to the vocabulary of empirical behavior, but not reducible to that language. The autonomy of mentalistic discourse surely commits us to a new ideology, a new set of basic predicates, above and beyond what can be constructed in the vocabularies of the natural sciences. What we get from the sciences can be the whole truth about the world, including intentional phenomena, then, only if there is some way to construct, using proper scientific methodology, concepts in the scientific image that are legitimate successors to the concepts of intentionality present in the manifest image. That there is such a rigorous construction of successors to the concepts of intentionality is a clear commitment on Sellars’s part. The only real alternative is some form of eliminativism, an alternative that some of his students adopted and some of his critics thought Sellars was committed to, but which never held any real attraction for Sellars.

The second thing to notice is that the concepts of intentionality, especially the concepts of agency, differ in some significant ways from the normal concepts of the natural sciences. Sellars puts it this way:

To say that a certain person desired to do A, thought it his duty to do B but was forced to do C, is not to describe him as one might describe a scientific specimen. One does, indeed, describe him, but one does something more. And it is this something more which is the irreducible core of the framework of persons.

Here the focus is explicitly on the language of agency, but the point is fundamentally the same as in Sellars’s well-known dictum from Empiricism and Philosophy of Mind:

in characterizing an episode or a state as that of knowing, we are not giving an empirical description of that episode or state; we are placing it in the logical space of reasons, of justifying and being able to justify what one says.

In both epistemic and agential language something extra-descriptive is going on. In order to accommodate this important aspect of such phenomena, Sellars tells us, we must add to the purely descriptive/explanatory vocabulary of the sciences “the language of individual and community intentions”. He points to intentions here because the point is that epistemic and agential language – mentalistic language in general – is ineluctably normative; it always contains a prescriptive, action-oriented dimension and engages in direct or indirect assessment against normative standards. In Sellars’s own theory, norms are grounded in the structure of intentions, particularly community intentions, so any truly complete image must contain the language of intentions.

HumanaMente