Bacteria’s Perception-Action Circle: Materiality of the Ontological. Thought of the Day 136.0


The unicellular organism has thin filaments protruding from its cell membrane, and in the absence of any stimuli it simply wanders around at random, switching between two characteristic movement patterns. One is performed by rotating the flagella counterclockwise: they form a bundle which pushes the cell forward along a curved path, a ‘run’ of random duration. These runs alternate with ‘tumbles’, in which the flagella shift to clockwise rotation, making them work independently and hence moving the cell erratically around with small net displacement. The biased random walk now consists in the fact that, in the presence of a chemical attractant, the runs that happen to carry the cell closer to the attractant are extended, while runs in other directions are not. The sensing of the chemical attractant is performed temporally rather than spatially, because the cell moves too rapidly for concentration comparisons between its two ends to be possible. A chemical repellant in the environment gives rise to an analogous behavioral structure – now the biased random walk takes the cell away from the repellant. The bias saturates very quickly – which is what prevents the cell from continuing in a ‘false’ direction, because a higher concentration of attractant will now be needed to repeat the bias. The reception system has three parts: one detecting repellants such as leucine, another detecting sugars, and a third detecting oxygen and oxygen-like substances.
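To make the logic concrete, here is a minimal run-and-tumble sketch in Python. It is not a model of measured E. coli parameters: the attractant field, step size, one-reading memory and the two tumble probabilities are all assumptions chosen to illustrate how extending runs up the gradient, and nothing more, produces a net drift toward the attractant (the quick saturation of the bias described above is omitted).

import numpy as np

rng = np.random.default_rng(1)

def attractant(pos):
    """Illustrative concentration field: a linear gradient increasing along x."""
    return pos[0]

pos = np.zeros(2)                          # cell position in the plane
heading = rng.uniform(0, 2 * np.pi)        # current run direction
prev_conc = attractant(pos)                # temporal sensing: remember the last reading

for _ in range(2000):
    # Run: move a short distance along the current heading.
    pos += 0.1 * np.array([np.cos(heading), np.sin(heading)])
    conc = attractant(pos)
    improving = conc > prev_conc           # compare "now" with "a moment ago", not front with back
    prev_conc = conc
    # Runs that carry the cell up the gradient are extended (low tumble probability);
    # runs in other directions are cut short (high tumble probability).
    tumble_prob = 0.1 if improving else 0.5
    if rng.random() < tumble_prob:
        heading = rng.uniform(0, 2 * np.pi)  # tumble: pick a new random direction

print(pos)  # the net displacement drifts toward higher attractant concentration (larger x)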


The cell’s behavior forms a primitive, if full-fledged, example of von Uexküll’s functional circle connecting specific perception signs and action signs. Functional circle behavior is thus not the privilege of animals equipped with central nervous systems (CNS). Both types of signs involve categorization. First, the sensory receptors of the bacterium are evidently organized around the categorization of certain biologically significant chemicals, while most chemicals that remain insignificant for the cell’s metabolism and survival are ignored. The self-preservation of metabolism and cell structure is hence the ultimate regulator, supported by the perception-action cycles described. The categorization inherent in the very structure of the sensors is mirrored in the categorization of act types. Three act types are outlined: a null-action, composed of random running and tumbling, and two mirroring biased variants triggered by attractants and repellants, respectively. Moreover, a negative feedback loop governed by quick satiation ensures that the window of concentration shifts to which the cell is able to react appropriately is large – it calibrates, so to speak, the sensory system so that it does not remain blinded by one perception and does not keep moving the cell forward in one selected direction. This adaptation of the system ensures that it works over a large range of attractant/repellant concentrations. The simple signals at stake in the cell’s functional circle display an important property: at simple biological levels, the distinction between signs and perception vanishes – that distinction is supposedly only relevant for higher, CNS-based animals. Here, the signals are based on categorical perception – a perception which immediately categorizes the entity perceived and thus remains blind to internal differences within the category.


The mechanism by which the cell identifies sugar is partly identical to what goes on in human taste buds. Sensation of sugar gradients must, of course, differ from consumption of it – while the latter destroys the sugar molecule, the former merely reads an ‘active site’ on the outside of the macromolecule. E. coli – exactly like us – may be fooled by artificial sweeteners bearing the same ‘active site’ on their outer perimeter, even if they are completely different chemicals (this is, of course, the secret behind such sweeteners: they are not sugars and hence do not enter the digestion process carrying the energy of carbohydrates). This implies that E. coli may be fooled. Bacteria may not lie, but a simpler process than lying (which presupposes two agents and the ability of being fooled) is, in fact, being fooled (presupposing, in turn, only one agent and an ambiguous environment). E. coli has the ability to categorize a series of sugars – but, by the same token, the ability to categorize a series of irrelevant substances along with them. On the one hand, the ability to recognize and categorize an object by a surface property only (due to the weak van der Waals bonds and hydrogen bonds at the ‘active site’, in contrast to the strong covalent bonds holding the molecule together) facilitates perception economy and quick action adaptability. On the other hand, the economy involved in judging objects from their surface only has an unavoidable flip side: it involves the possibility of mistake, of being fooled by allowing impostors into your categorization. So the perception-action circle of a bacterium already contains some of the self-regulatory stability of a metabolism that engages its surroundings through categorized signals and actions – a structure that extends, via intercellular communication in multicellular organisms, to the complicated perception and communication of higher animals.


Infinite Sequences and Halting Problem. Thought of the Day 76.0


In attempting to extend the notion of depth from finite strings to infinite sequences, one encounters a familiar phenomenon: the definitions become sharper (e.g. recursively invariant), but their intuitive meaning is less clear, because of distinctions (e.g. between infinitely-often and almost-everywhere properties) that do not exist in the finite case.

An infinite sequence X is called strongly deep if at every significance level s, and for every recursive function f, all but finitely many initial segments X_n have depth exceeding f(n).

It is necessary to require the initial segments to be deep almost everywhere rather than infinitely often, because even the most trivial sequence has infinitely many deep initial segments X_n (viz. the segments whose lengths n are deep numbers).
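Spelled out, with D_s(x) denoting the depth of a string x at significance level s (a notation assumed here only for the restatement), the definition reads:

X is strongly deep ⟺ for every s and every recursive f there is an N such that D_s(X_n) > f(n) for all n ≥ N.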

It is not difficult to show that the property of strong depth is invariant under truth-table equivalence (this is the same as Turing equivalence in recursively bounded time, or via a total recursive operator), and that the same notion would result if the initial segments were required to be deep in the sense of receiving less than 2^{-s} of their algorithmic probability from f(n)-fast programs. The characteristic sequence of the halting set K is an example of a strongly deep sequence.

A weaker definition of depth, also invariant under truth-table equivalence, is perhaps more analogous to that adopted for finite strings:

An infinite sequence X is weakly deep if it is not computable in recursively bounded time from any algorithmically random infinite sequence.

Computability in recursively bounded time is equivalent to two other properties, viz. truth-table reducibility and reducibility via a total recursive operator.
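In symbols, writing ≤_tt for truth-table reducibility (equivalently, computability in recursively bounded time), this is just a restatement of the definition above:

X is weakly deep ⟺ there is no algorithmically random sequence R with X ≤_tt R.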

In contrast to the situation with truth-table reducibility, Péter Gács has shown that every sequence is computable from (i.e. Turing reducible to) an algorithmically random sequence if no bound is imposed on the time. This is the infinite analog of the far more obvious fact that every finite string is computable from an algorithmically random string (e.g. its minimal program).

Every strongly deep sequence is weakly deep, but by intermittently padding K with large blocks of zeros, one can construct a weakly deep sequence with infinitely many shallow initial segments.

Truth-table reducibility to an algorithmically random sequence is equivalent to the property studied by Levin et al. of being random with respect to some recursive measure. Levin calls sequences with this property “proper” or “complete” sequences, and views them as more realistic and interesting than other sequences because they are the typical outcomes of probabilistic or deterministic effective processes operating in recursively bounded time.

Weakly deep sequences arise with finite probability when a universal Turing machine (with one-way input and output tapes, so that it can act as a transducer of infinite sequences) is given an infinite coin-toss sequence as input. These sequences are necessarily produced very slowly: the time to output the n-th digit is bounded by no recursive function, and the output sequence contains evidence of this slowness. Because they are produced with finite probability, such sequences can contain only finite information about the halting problem.

Quantum Informational Biochemistry. Thought of the Day 71.0


A natural extension of the information-theoretic Darwinian approach for biological systems is obtained by taking into account that biological systems are constituted, at their fundamental level, by physical systems. It is therefore through the interaction among elementary physical systems that the biological level is reached – after the size of the system has increased by several orders of magnitude, and only for certain associations of molecules: biochemistry.

In particular, this viewpoint lies at the foundation of the “quantum brain” project established by Hameroff and Penrose (Shadows of the Mind). They tried to lift quantum physical processes associated with microsystems composing the brain to the level of consciousness. Microtubules were considered the basic quantum information processors. This project, as well as the general project of reducing biology to quantum physics, has its strong and weak sides. One of the main problems is that decoherence should quickly wash out quantum features such as superposition and entanglement. (Hameroff and Penrose would disagree with this statement. They try to develop models of the hot and macroscopic brain that preserve quantum features of its elementary micro-components.)

However, even if we assume that microscopic quantum physical behavior disappears with increasing size and number of atoms due to decoherence, it seems that the basic quantum features of information processing can survive in macroscopic biological systems (operating on temporal and spatial scales which are essentially different from the scales of the quantum micro-world). The associated information processor for a mesoscopic or macroscopic biological system would be a network of increasing complexity formed by the elementary probabilistic classical Turing machines of its constituents. Such a composed network of processors can exhibit special behavioral signatures which are similar to quantum ones. We call such biological systems quantum-like. In a series of works, Asano and others (Quantum Adaptivity in Biology: From Genetics to Cognition) developed an advanced formalism for modeling the behavior of quantum-like systems, based on the theory of open quantum systems and the more general theory of adaptive quantum systems. This formalism is known as quantum bioinformatics.

The present quantum-like model of biological behavior is of the operational type (as is the standard quantum mechanical model endowed with the Copenhagen interpretation). It cannot explain the physical and biological processes behind the quantum-like information processing. Clarification of the origin of quantum-like biological behavior is related, in particular, to understanding the nature of entanglement and its role in the processes of interaction and cooperation in physical and biological systems. Qualitatively, the information-theoretic Darwinian approach supplies an interesting possibility for explaining the generation of quantum-like information processors in biological systems. Hence, it can serve as the bio-physical background for quantum bioinformatics. There is an intriguing point in the fact that if the information-theoretic Darwinian approach is right, then it would be possible to produce quantum information from optimal flows of past, present and anticipated classical information in any classical information processor endowed with a complex enough program. Thus the unified evolutionary theory would supply a physical basis for Quantum Information Biology.


Suspicion on Consciousness as an Immanent Derivative


The category of the subject (like that of the object) has no place in an immanent world. There can be no transcendent, subjective essence. What, then, is the ontological status of a body and its attendant instance of consciousness? In what would it exist? Sanford Kwinter (conjuncted here) offers:

It would exist precisely in the ever-shifting pattern of mixtures or composites: both internal ones – the body as a site marked and traversed by forces that converge upon it in continuous variation; and external ones – the capacity of any individuated substance to combine and recombine with other bodies or elements (ensembles), both influencing their actions and undergoing influence by them. The ‘subject’ … is but a synthetic unit falling at the midpoint or interface of two more fundamental systems of articulation: the first composed of the fluctuating microscopic relations and mixtures of which the subject is made up, the second of the macro-blocs of relations or ensembles into which it enters. The image produced at the interface of these two systems – that which replaces, yet is too often mistaken for, subjective essence – may in turn have its own individuality characterized with a certain rigor. For each mixture at this level introduces into the bloc a certain number of defining capacities that determine both what the ‘subject’ is capable of bringing to pass outside of itself and what it is capable of receiving (undergoing) in terms of effects.

This description is sufficient to explain the immanent nature of the subjective bloc as something entirely embedded in and conditioned by its surroundings. What it does not offer – and what is not offered in any detail in the entirety of the work – is an in-depth account of what, exactly, these “defining capacities” are. To be sure, it would be unfair to demand a complete description of these capacities. Kwinter himself has elsewhere referred to the states of the nervous system as “magically complex”. Regardless of the specificity with which these capacities can presently be defined, we must nonetheless agree that it is at this interface, as he calls it, at this location where so many systems are densely overlaid, that consciousness is produced. We may be convinced that this consciousness, this apparent internal space of thought, is derived entirely from immanent conditions and can only be granted the ontological status of an effect, but this effect still manages to produce certain difficulties when attempting to define modes of behavior appropriate to an immanent world.

There is a palpable suspicion of the role of consciousness throughout Kwinter’s work, at least insofar as it is equated with some kind of internal, subjective space. (In one text he optimistically awaits the day when this space will “be left utterly in shreds.”) The basis of this suspicion is multiple and obvious. Among the capacities of consciousness is the ability to attribute to itself the (false) image of a stable and transcendent essence. The workings of consciousness are precisely what allow the subjective bloc to orient itself in a sequence of time, separating itself from an absolute experience of the moment. It is within consciousness that limiting and arbitrary moral categories seem to most stubbornly lodge themselves. (To be sure this is the location of all critical thought.) And, above all, consciousness may serve as the repository for conditioned behaviors which believe themselves to be free of external determination. Consciousness, in short, contains within itself an enormous number of limiting factors which would retard the production of novelty. Insofar as it appears to possess the capacity for self-determination, this capacity would seem most productively applied by turning on itself – that is, precisely by making the choice not to make conscious decisions and instead to permit oneself to be seized by extra-subjective forces.


Potential Synapses. Thought of the Day 52.0

For a neuron to recognize a pattern of activity, it requires a set of co-located synapses (typically fifteen to twenty) that connect to a subset of the cells that are active in the pattern to be recognized. Learning to recognize a new pattern is accomplished by the formation of a set of new synapses co-located on a dendritic segment.


Figure: Learning by growing new synapses. Learning in an HTM neuron is modeled by the growth of new synapses from a set of potential synapses. A “permanence” value is assigned to each potential synapse and represents the growth of the synapse. Learning occurs by incrementing or decrementing permanence values. The synapse weight is a binary value set to 1 if the permanence is above a threshold.

The figure shows how we model the formation of new synapses in a simulated Hierarchical Temporal Memory (HTM) neuron. For each dendritic segment we maintain a set of “potential” synapses between the dendritic segment and other cells in the network that could potentially form a synapse with the segment. The number of potential synapses is larger than the number of actual synapses. We assign each potential synapse a scalar value called “permanence”, which represents the stage of growth of the synapse. A permanence value close to zero represents an axon and dendrite with the potential to form a synapse but that have not commenced growing one. A permanence value of 1.0 represents an axon and dendrite with a large, fully formed synapse.

The permanence value is incremented and decremented using a Hebbian-like rule. If the permanence value exceeds a threshold, such as 0.3, then the weight of the synapse is 1; if the permanence value is at or below the threshold, then the weight of the synapse is 0. The threshold represents the establishment of a synapse, albeit one that could easily disappear. A synapse with a permanence value of 1.0 has the same effect as a synapse with a permanence value at threshold, but is not as easily forgotten. Using a scalar permanence value enables on-line learning in the presence of noise. A previously unseen input pattern could be noise or it could be the start of a new trend that will repeat in the future. By growing new synapses, the network can start to learn a new pattern when it is first encountered, but only act differently after several presentations of the new pattern. Increasing permanence beyond the threshold means that patterns experienced more often than others will take longer to forget.
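The rule can be sketched in a few lines of Python. This is a minimal illustration of permanence-based learning on a single dendritic segment, not the reference HTM implementation; the constants (the 0.3 connection threshold, the increment and decrement sizes, the pool of forty potential synapses) are assumptions chosen for the example.

import numpy as np

# Illustrative constants; real HTM implementations tune these differently.
PERM_THRESHOLD = 0.3   # permanence above this value means the synapse is connected
PERM_INC = 0.05        # increment for potential synapses whose presynaptic cell was active
PERM_DEC = 0.02        # decrement for potential synapses whose presynaptic cell was inactive

class DendriticSegment:
    """One dendritic segment with a fixed pool of potential synapses."""
    def __init__(self, presynaptic_cells, rng=None):
        rng = rng or np.random.default_rng()
        self.presynaptic_cells = np.array(list(presynaptic_cells))
        # Each potential synapse starts with a small permanence near zero (unconnected).
        self.permanence = rng.uniform(0.0, 0.2, size=len(self.presynaptic_cells))

    def weights(self):
        """Binary synapse weights: 1 if permanence exceeds the threshold, else 0."""
        return (self.permanence > PERM_THRESHOLD).astype(int)

    def overlap(self, active_cells):
        """Number of connected synapses whose presynaptic cell is currently active."""
        active = np.isin(self.presynaptic_cells, list(active_cells)).astype(int)
        return int(np.dot(self.weights(), active))

    def learn(self, active_cells):
        """Hebbian-like update: reinforce synapses aligned with the active pattern, weaken the rest."""
        active = np.isin(self.presynaptic_cells, list(active_cells))
        self.permanence = np.clip(
            self.permanence + np.where(active, PERM_INC, -PERM_DEC), 0.0, 1.0)

# A sparse pattern of roughly twenty active cells is learned over several presentations;
# the segment responds strongly only after permanences have crossed the threshold.
segment = DendriticSegment(presynaptic_cells=range(40))
pattern = set(range(0, 40, 2))
for _ in range(10):
    segment.learn(pattern)
print(segment.overlap(pattern))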

HTM neurons and HTM networks rely on distributed patterns of cell activity; thus the activation strength of any one neuron or synapse is not very important. Therefore, in HTM simulations we model neuron activations and synapse weights with binary states. Additionally, it is well known that biological synapses are stochastic, so a neocortical theory cannot require precision of synaptic efficacy. Although scalar states and weights might improve performance, they are not required from a theoretical point of view.


US Stock Market Interaction Network as Learned by the Boltzmann Machine


Price formation on a financial market is a complex problem: it reflects the opinions of investors about the true value of the asset in question, the policies of producers, external regulation and many other factors. Given the large number of factors influencing price, many of which are unknown to us, describing price formation essentially requires probabilistic approaches. In the last decades, the synergy of methods from various scientific areas has opened new horizons in understanding the mechanisms that underlie related problems. One popular approach is to consider a financial market as a complex system, in which not only the great number of constituents plays a crucial role but also the non-trivial interactions between them. For example, interdisciplinary studies of complex financial systems have revealed their enhanced sensitivity to fluctuations and external factors near critical events, with an overall change of internal structure. This can be complemented by research devoted to equilibrium and non-equilibrium phase transitions.

In general, statistical modeling of the state space of a complex system requires writing down the probability distribution over this space using real data. In a simple version of the modeling, the probability of an observable configuration (state of the system), described by a vector of variables s, can be given in the exponential form

p(s) = Z^{-1} exp {−βH(s)} —– (1)

where H is the Hamiltonian of the system, β is the inverse temperature (further β ≡ 1 is assumed) and Z is the statistical sum. The physical meaning of the model’s components depends on the context; for instance, in the case of financial systems, s can represent a vector of stock returns and H can be interpreted as the inverse utility function. Generally, H has parameters defined by its series expansion in s. Based on the maximum entropy principle, an expansion up to the quadratic terms is usually used, leading to pairwise interaction models. In the equilibrium case, the Hamiltonian has the form

H(s) = −h^T s − s^T J s —– (2)

where h is a vector of size N of external fields and J is a symmetric N × N matrix of couplings (T denotes transpose). The energy-based models represented by (1) play an essential role not only in statistical physics but also in neuroscience (models of neural networks) and machine learning (generative models, also known as Boltzmann machines). Given the topological similarities between neural and financial networks, these systems can be considered examples of complex adaptive systems, which are characterized by the ability to adapt to a changing environment while trying to stay in equilibrium with it. From this point of view, market structural properties, e.g. clustering and networks, play an important role in modeling the distribution of stock prices. Adaptation (or learning) in these systems implies a change of the parameters of H as financial and economic systems evolve. Using statistical inference for the model’s parameters, the main goal is to have a model capable of reproducing the same statistical observables, given time series for a particular historical period. In the pairwise case, the objective is to have

⟨s_i⟩_data = ⟨s_i⟩_model —– (3a)

⟨s_i s_j⟩_data = ⟨s_i s_j⟩_model —– (3b)

where angular brackets denote statistical averaging over time. Having specified the general mathematical model, one can also discuss similarities between financial and infinite-range magnetic systems in terms of related phenomena, e.g. extensivity, order parameters, phase transitions, etc. These features can be captured even in the simplified case when s_i is a binary variable taking only two discrete values. Consider the effect of mapping to a binarized system, where the values s_i = +1 and s_i = −1 correspond to profit and loss respectively. In this case, the diagonal elements of the coupling matrix, J_ii, are zero, because the s_i^2 = 1 terms do not contribute to the Hamiltonian….
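A minimal sketch in Python of how the moment-matching conditions (3a) and (3b) can be enforced is given below. It uses exact enumeration of all 2^N configurations, which is feasible only for a handful of assets; practical Boltzmann-machine learning on real return series replaces the exact model averages with Monte Carlo estimates. The synthetic data, the learning rate and the number of steps are assumptions for illustration, not values taken from the study described here.

import itertools
import numpy as np

# Sign convention follows (1) and (2) with beta = 1: p(s) is proportional to exp(h^T s + s^T J s).

def model_moments(h, J):
    """Exact <s_i> and <s_i s_j> under the pairwise model, by enumerating all states."""
    N = len(h)
    states = np.array(list(itertools.product([-1, 1], repeat=N)))
    energies = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(energies)
    p /= p.sum()                                     # normalization by the statistical sum Z
    m = p @ states                                   # <s_i>_model
    C = np.einsum('k,ki,kj->ij', p, states, states)  # <s_i s_j>_model
    return m, C

def fit_pairwise(data, lr=0.1, steps=2000):
    """Gradient ascent on the log-likelihood, i.e. matching (3a) and (3b)."""
    N = data.shape[1]
    m_data = data.mean(axis=0)
    C_data = data.T @ data / len(data)
    h, J = np.zeros(N), np.zeros((N, N))
    for _ in range(steps):
        m_model, C_model = model_moments(h, J)
        h += lr * (m_data - m_model)
        dJ = lr * (C_data - C_model)
        np.fill_diagonal(dJ, 0.0)                    # J_ii = 0, since s_i^2 = 1
        J += (dJ + dJ.T) / 2                         # keep J symmetric
    return h, J

# Synthetic binarized "returns" for five assets: +1 for profit, -1 for loss.
rng = np.random.default_rng(0)
data = np.sign(rng.standard_normal((500, 5)) + 0.2)
h, J = fit_pairwise(data)
print(h)
print(J)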

Figure: US stock market interaction network as learned by the Boltzmann machine.


Algorithmic Randomness and Complexity


How random is a real? Given two reals, which is more random? How should we even try to quantify these questions, and how do various choices of measurement relate? Once we have reasonable measuring devices, and, using these devices, divide the reals into equivalence classes of the same “degree of randomness”, what do the resulting structures look like? Once we measure the level of randomness, how does the level of randomness relate to classical measures of complexity, such as Turing degrees of unsolvability? Should it be the case that high levels of randomness mean high levels of complexity in terms of computational power, or low levels of complexity? Conversely, should the structures of computability, such as the degrees and the computably enumerable sets, have anything to say about randomness for reals?
