Bacteria’s Perception-Action Circle: Materiality of the Ontological. Thought of the Day 136.0


The unicellular organism has thin filaments protruding from its cell membrane, and in the absence of any stimuli it simply wanders around at random, alternating between two characteristic movement patterns. One is produced by rotating the flagella counterclockwise: they form a bundle which pushes the cell forward along a curved path, a ‘run’ of random duration. These runs alternate with ‘tumbles’, in which the flagella shift to clockwise rotation, work independently, and move the cell erratically around with small net displacement. The biased random walk consists in the fact that, in the presence of a chemical attractant, the runs which happen to carry the cell closer to the attractant are extended, while runs in other directions are not. The attractant is sensed temporally rather than spatially, because the cell moves too rapidly for concentration comparisons between its two ends to be possible. A chemical repellent in the environment gives rise to an analogous behavioral structure – now the biased random walk takes the cell away from the repellent. The bias saturates very quickly, which is what prevents the cell from continuing in a ‘false’ direction: a higher concentration of attractant is then needed to repeat the bias. The reception system has three parts: one detects repellents such as leucine, another detects sugars, and a third detects oxygen and oxygen-like substances.
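The run-and-tumble mechanism described above can be sketched as a minimal simulation. The concentration field, step lengths, and tumble probabilities below are illustrative assumptions, not measured E. coli parameters; the point is only the temporal comparison that extends favorable runs.

```python
import math
import random

def attractant(x, y):
    """Illustrative attractant concentration, peaking at the origin (an assumption)."""
    return math.exp(-(x * x + y * y) / 200.0)

def run_and_tumble(steps=2000, seed=1):
    """Biased random walk: tumble rarely when concentration rises, often when it falls."""
    rng = random.Random(seed)
    x, y = 30.0, 30.0                      # start away from the attractant peak
    angle = rng.uniform(0, 2 * math.pi)
    last_c = attractant(x, y)
    for _ in range(steps):
        x += math.cos(angle)               # one unit step of the current run
        y += math.sin(angle)
        c = attractant(x, y)
        # Temporal (not spatial) sensing: compare now with a moment ago.
        p_tumble = 0.1 if c > last_c else 0.5
        if rng.random() < p_tumble:
            angle = rng.uniform(0, 2 * math.pi)  # tumble: pick a new random heading
        last_c = c
    return math.hypot(x, y)                # final distance from the peak

# The biased walker drifts toward the attractant despite never measuring a direction.
print(run_and_tumble())
```

Extending runs that improve the sensed concentration is enough to produce net drift toward the source, even though each tumble is completely random.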

Fig. 4. Uexküll’s model of the functional cycle.

The cell’s behavior forms a primitive, if full-fledged, example of von Uexküll’s functional circle connecting specific perception signs and action signs. Functional-circle behavior is thus not the privilege of animals equipped with central nervous systems (CNS). Both types of signs involve categorization. First, the sensory receptors of the bacterium are evidently organized around the categorization of certain biologically significant chemicals, while the many chemicals that remain insignificant for the cell’s metabolism and survival are ignored. The self-preservation of metabolism and cell structure is hence the ultimate regulator, supported by the perception-action cycles described. The categorization inherent in the very structure of the sensors is mirrored in a categorization of act types. Three act types are outlined: a null action, composed of random running and tumbling, and two mirroring biased variants triggered by attractants and repellents, respectively. Moreover, a negative feedback loop governed by quick saturation ensures that the window of concentration shifts to which the cell can react appropriately is large: it calibrates, so to speak, the sensory system so that it is not blinded by one perception and does not keep moving the cell forward in one selected direction. This adaptation ensures that the system works over a large range of attractant/repellent concentrations. The simple signals at stake in the cell’s functional circle display an important property: at simple biological levels, the distinction between signs and perception vanishes – that distinction is presumably only relevant for higher, CNS-based animals. Here, the signals are based on categorical perception – a perception which immediately categorizes the entity perceived and thus remains blind to internal differences within the category.


The mechanism by which the cell identifies sugar is partly identical to what goes on in human taste buds. The sensing of sugar gradients must, of course, differ from the consumption of sugar: while the latter destroys the sugar molecule, the former merely reads an ‘active site’ on the outside of the macromolecule. E. coli – exactly like us – may be fooled by artificial sweeteners bearing the same ‘active site’ on their outer perimeter, even though they are completely different chemicals (this is, of course, the secret behind such sweeteners: they are not sugars and hence do not enter the digestive process carrying the energy of carbohydrates). This implies that E. coli may be fooled. Bacteria may not lie, but a simpler process than lying (which presupposes two agents and the ability to be fooled) is, in fact, being fooled (presupposing, in turn, only one agent and an ambiguous environment). E. coli has the ability to categorize a series of sugars – but, by the same token, the ability to categorize a series of irrelevant substances along with them. On the one hand, the ability to recognize and categorize an object by a surface property alone (due to the weak van der Waals bonds and hydrogen bonds at the ‘active site’, in contrast to the strong covalent bonds holding the molecule together) facilitates perceptual economy and quick adaptability of action. On the other hand, the economy involved in judging objects from their surface alone has an unavoidable flip side: it involves the possibility of mistake, of being fooled by admitting impostors into one’s categories. So already in the perception-action circle of a bacterium we find the self-regulatory stability of a metabolism whose categorized signal and action involvement with the surroundings develops, via intercellular communication in multicellular organisms, into the complicated perception and communication of higher animals.

Simultaneity


Let us introduce the concept of space using the notion of reflexive action (or reflex action) between two things. Intuitively, a thing x acts on another thing y if the presence of x disturbs the history of y. Events in the real world seem to happen in such a way that it takes some time for the action of x to propagate up to y. This fact can be used to construct a relational theory of space à la Leibniz, that is, by taking space as a set of equitemporal things. It is necessary then to define the relation of simultaneity between states of things.

Let x and y be two things with histories h(x_τ) and h(y_τ), respectively, and suppose that the action of x on y starts at τx0. The history of y will be modified starting from τy0. The proper times are still not related, but we can introduce the reflex action to define the notion of simultaneity. The action of y on x, started at τy0, will modify x from τx1 on. The relation “the action of x on y is reflected back to x” is the reflex action. Historically, Galileo introduced the reflection of a light pulse on a mirror to measure the speed of light. With this relation we will define the concept of simultaneity for events that happen on different basic things.


Besides, we have a second important fact: observation and experiment suggest that gravitation, whose source is energy, is a universal interaction, carried by the gravitational field.

Let us now state the above hypothesis axiomatically.

Axiom 1 (Universal interaction): Any pair of basic things interact. This extremely strong axiom states not only that there exist no completely isolated things but also that all things are interconnected.

This universal interconnection of things should not be confused with the “universal interconnection” claimed by several mystical schools: the present interconnection is possible only through physical agents and has no mystical content. It is possible to model two noninteracting things in Minkowski space by assuming they are accelerated for an infinite proper time. It is easy to see that infinite energy is necessary to maintain a constant acceleration, so the model does not represent real things, which have a limited energy supply.

Now consider the time interval (τx1 − τx0). Special Relativity suggests that it is nonzero, since any action propagates with a finite speed. We then state

Axiom 2 (Finite speed axiom): Given two different and separated basic things x and y, such as in the above figure, there exists a minimum positive bound for the interval (τx1 − τx0) defined by the reflex action.

Now we can define simultaneity: the event at τy0 on y is simultaneous with the event on x at τx1/2 =Df (1/2)(τx1 + τx0).
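Written out, this midpoint definition is the radar (Einstein) convention for distant simultaneity; a minimal sketch of the bookkeeping, with the distance estimate added as an illustration under the assumption of a propagation speed c:

```latex
% Signal leaves x at \tau_{x0}, acts on y at \tau_{y0},
% and the reflex action returns to x at \tau_{x1}.
% The event at \tau_{y0} on y is declared simultaneous with the
% midpoint event on x:
\tau_{x1/2} \;=_{\mathrm{Df}}\; \tfrac{1}{2}\,(\tau_{x1} + \tau_{x0}),
\qquad \tau_{y0} \;\sim\; \tau_{x1/2}.
% If the action propagates at finite speed c (Axiom 2 guarantees
% \tau_{x1} - \tau_{x0} > 0), the separation can be estimated as
d \;=\; \tfrac{c}{2}\,(\tau_{x1} - \tau_{x0}).
```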

The local times on x and y can be synchronized by the simultaneity relation. However, as we know from General Relativity, the simultaneity relation is transitive only in special reference frames called synchronous, thus prompting us to include the following axiom:

Axiom 3 (Synchronizability): Given a set of separated basic things {xi} there is an assignment of proper times τi such that the relation of simultaneity is transitive.

With this axiom, the simultaneity relation is an equivalence relation. Now we can define a first approximation to physical space: the ontic space EO is the set of equivalence classes of states defined by the relation of simultaneity on the set of things.
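Because Axiom 3 makes simultaneity transitive, the ontic space can be pictured as a partition of states into equivalence classes. A minimal sketch with a toy simultaneity relation (the things, the assigned proper times, and the relation itself are invented for illustration):

```python
def partition(states, simultaneous):
    """Group states into equivalence classes under a transitive
    simultaneity relation; the classes are the 'points' of the ontic space."""
    classes = []
    for s in states:
        for cls in classes:
            if simultaneous(s, cls[0]):   # transitivity: one representative check suffices
                cls.append(s)
                break
        else:
            classes.append([s])           # s starts a new equivalence class
    return classes

# Toy states: (thing, synchronized proper time). In a synchronous frame
# (Axiom 3), simultaneity is simply equality of the assigned times.
states = [("x1", 0), ("x2", 0), ("x1", 1), ("x3", 1), ("x2", 2)]
same_time = lambda a, b: a[1] == b[1]
print(partition(states, same_time))
# Three classes: the states at times 0, 1, and 2.
```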

The notion of simultaneity allows the analysis of the notion of a clock. A thing y ∈ Θ is a clock for the thing x if there exists an injective function ψ : SL(y) → SL(x) such that τ < τ′ ⇒ ψ(τ) < ψ(τ′), i.e. the proper time of the clock grows in the same way as the time of the thing. The name universal time applies to the proper time of a reference thing that is also a clock. From this we see that “universal time” is frame dependent, in agreement with the results of Special Relativity.

Economics is the Science which Studies Human Behaviour as a Relationship Between Ends and Scarce Means which have Alternative Uses. Is Equilibrium a Choice? Note Quote.

What is the place of choice in equilibrium theory? Alfred Marshall and Léon Walras, who introduced competitive equilibrium theory, employed the theory of choice in terms of utility, analogously to the Austrian school. Enrico Barone and Karl Gustav Cassel (the latter introduced general equilibrium theory to the German-speaking world, hence the Walras-Cassel system) used demand and supply functions as starting data, disregarding the theory of choice. Pareto, on the one hand, argued that the two approaches are compatible. However, he discarded cardinal utility, introducing the notion of preferences, i.e. ordinal utility, as a sufficient foundation for the theory of choice, thus starting the modern analysis of choice. Pareto also suggested that these data can be derived directly from choices, thus short-cutting the theory of choice (since choices are not to be explained) and anticipating the theory of revealed preferences. This theory is, perhaps, the point of maximal distance between equilibrium theory and the Austrian school. On the other hand, Pareto’s theory of economic efficiency, or Pareto optimality, and all the analysis connected with it (such as, for example, the theory of the core of an economy) requires at least individual preferences, an element which underlies choices and helps to explain them.

What was presented above is the present state of competitive equilibrium theory. Demand and supply functions are sufficient for determining prices and equilibrium allocations. These functions represent choices. In other words, theory of choice is not necessarily an integral element of competitive equilibrium theory, only a prerequisite. However, individual preferences and the theory of choice are required in order to define Pareto-optimal allocations and demonstrate the two theorems of welfare economics. Competitive equilibrium studies the compatibility of price-taking agents’ choices. Thus, it concerns choices without representing a theory about them. Nevertheless, such a theory is required if statements on Pareto-optimality and other relevant characteristics of competitive equilibrium are to be made.

In similar terms, the theory of choice is required by non-competitive equilibrium theory as well. For instance, game theory deeply analyzes strategic choices, and in every non-competitive market equilibrium the choices of price-making agents have to be analyzed to a certain extent. However, this analysis differs from that given by the Austrian school. The difference lies in the aim of the two approaches. While the Austrian school is interested mainly in individual choices and their implications – inasmuch as, according to Robbins’s famous definition, “economics is the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses” – equilibrium theory, including game theory, is interested mainly in the compatibility of choices. That is, equilibrium theory is not so much a theory of intentional actions as a theory of intentional interactions. The two approaches overlap but do not coincide, even if they share the same vision of society and the same main assumptions about the behavior of its components.

Maffeo Pantaleoni did not accept Pareto’s new theory of choice. He continued to follow the Jevons-Menger-Walras hedonistic approach to utility, and he identified economic theory with the theory of subjective value. Probably, there was a curious exchange of positions between Pantaleoni and Pareto about the political significance of economic theory. On the one hand, Pareto was initially reluctant to accept Walras’s economic theory (introduced to him by Pantaleoni) because he was too liberal to share Walras’s socialism. In fact, he accepted Walras’s economic theory but not at all Walras’s political and philosophical views. On the other hand, Pantaleoni seems to have rejected Pareto’s new theory also because of its focus on equilibrium instead of individual choice, which would limit the liberal doctrine he envisaged as strictly connected to economics understood as the theory of individual choice.

For instance, let us consider the theory of non-cooperative games. Its focus is on strategic interdependence, i.e. those situations in which each agent chooses an action knowing that the result depends also on the actions of other individuals, and that those actions in turn generally depend on his own. An individual action (more precisely, a plan of actions, or strategy) is not simply determined by selecting, from the set of actions available to the agent, the option that maximizes his utility. The agent under consideration knows that the actions of other agents could prevent him from performing his optimal action and reaching the desired result. Individual equilibrium actions are determined simultaneously, i.e. we can generally determine the choice of an individual only by determining also the choices of all other individuals. Both the Austrian school and (competitive) equilibrium theory isolate the individual agent’s choice. However, while the Austrian school does not analyze the compatibility of the actions chosen by individuals (compatibility is presumed), competitive equilibrium theory analyzes interactions, although those among price-taking agents are rather weak. Interaction is predominant in strategic situations, where choices cannot be analyzed without taking interdependence explicitly into account.
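The simultaneous determination of equilibrium actions can be made concrete with a toy 2×2 coordination game. The payoff numbers below are invented for illustration; the sketch just enumerates profiles and keeps those where each choice is a best response to the other, i.e. the pure-strategy Nash equilibria:

```python
import itertools

# Toy payoff matrices (row player, column player); values are invented.
# A coordination game: each player prefers matching the other's choice.
payoff_row = [[2, 0],
              [0, 1]]
payoff_col = [[2, 0],
              [0, 1]]

def best_response_row(col_action):
    """Row's optimal action, *given* the column player's action."""
    return max(range(2), key=lambda r: payoff_row[r][col_action])

def best_response_col(row_action):
    """Column's optimal action, *given* the row player's action."""
    return max(range(2), key=lambda c: payoff_col[row_action][c])

def pure_nash():
    """A profile is an equilibrium only when both choices are mutually
    optimal -- neither can be determined without the other."""
    return [(r, c) for r, c in itertools.product(range(2), repeat=2)
            if r == best_response_row(c) and c == best_response_col(r)]

print(pure_nash())  # both coordinated profiles, (0, 0) and (1, 1)
```

Note that neither agent's choice is optimal in isolation: (0, 0) and (1, 1) are equilibria only as joint profiles, which is exactly the interdependence the paragraph describes.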

Complexity Wrapped Uncertainty in the Bazaar


One could conceive a financial market as a set of N agents, each of them taking a binary decision at every time step. This is an extremely crude representation, but it captures the essential feature that decisions can be coded by binary symbols (buy = 0, sell = 1, for example). Despite the extreme simplification, the above setup allows a “stylized” definition of price.

Let Nt0 and Nt1 be the number of agents taking the decision 0 and 1, respectively, at time t. Obviously, N = Nt0 + Nt1 for every t. Then, with the above definition of the binary code, the price can be defined as:

pt = f(Nt0/Nt1)

where f is an increasing and concave function (conditions b and c below force f to flatten, so it cannot be convex) which also satisfies:

a) f(0)=0

b) limx→∞ f(x) = ∞

c) limx→∞ f'(x) = 0
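One concrete choice satisfying (a)-(c) is f(x) = log(1 + x); this is an assumption for illustration, not a function stipulated by the text, but it is increasing, vanishes at 0, grows without bound, and has derivative tending to 0:

```python
import math

def f(x):
    """An illustrative price map satisfying (a)-(c):
    f(0) = 0, f(x) -> infinity, f'(x) = 1/(1+x) -> 0."""
    return math.log(1.0 + x)

def price(n0, n1):
    """Stylized price from the counts of buy (0) and sell (1) decisions."""
    return f(n0 / n1)

# Few buyers, many sellers -> low price; many buyers, few sellers -> high price.
print(price(10, 90), price(90, 10))
```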

The above definition agrees perfectly with the common belief about how supply and demand work. If Nt0 is small and Nt1 large, then there are few agents willing to buy and many agents willing to sell, hence the price should be low. If, on the contrary, Nt0 is large and Nt1 is small, then there are many agents willing to buy and just a few willing to sell, hence the price should be high. Notice that the winning choice is related to the minority choice. We exploit the above analogy to construct a binary time series associated with each real time series of financial markets. Let {pt}t∈N be the original real time series. Then we construct a binary time series {at}t∈N by the rule:

at = 1 if pt > pt−1

at = 0 if pt < pt−1
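The rule above can be sketched directly. The sample price series is invented, and since the rule leaves ties pt = pt−1 unspecified, this sketch maps them to 0 as an arbitrary assumption:

```python
def binarize(prices):
    """Map a price series {p_t} to the binary series {a_t}:
    a_t = 1 if p_t > p_(t-1), else 0 (ties mapped to 0 by assumption)."""
    return [1 if p > q else 0 for q, p in zip(prices, prices[1:])]

prices = [100.0, 101.5, 101.0, 101.0, 103.2]   # invented sample series
print(binarize(prices))  # [1, 0, 0, 1]
```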

Physical complexity is defined as the number of binary digits in a string η that are explainable (or meaningful) with respect to the environment. In reference to our problem, the only physical record one gets is the binary string built up from the original real time series, and we consider it as the environment ε. We study the physical complexity of substrings of ε; the comprehension of their complex features is of high practical importance. The amount of data agents take into account in order to elaborate their choice is finite and of short range. For every time step t, the binary digits at−l, at−l+1, …, at−1 carry some information about the behavior of agents. Hence, the complexity of these finite strings is a measure of how complex the information agents face is. The Kolmogorov-Chaitin complexity is defined as the length of the shortest program π producing the sequence η when run on a universal Turing machine T:

K(η) = min {|π|: η = T(π)}

where |π| represents the length of π in bits, T(π) the result of running π on the Turing machine T, and K(η) the Kolmogorov-Chaitin complexity of the sequence η. In the framework of this theory, a string is said to be regular if K(η) < |η|; it means that η can be described by a program π whose length is smaller than the length of η. The interpretation of a string should be done in the framework of an environment. Hence, let us imagine a Turing machine that takes the string ε as input. We can define the conditional complexity K(η / ε) as the length of the smallest program that computes η on a Turing machine having ε as input:

K(η / ε) = min {|π|: η = CT(π, ε)}

We want to stress that K(η / ε) represents those bits in η that are random with respect to ε. Finally, the physical complexity can be defined as the number of bits that are meaningful in η with respect to ε :

K(η : ε) = |η| – K(η / ε)

K(η) also represents the unconditional complexity of the string η, i.e. the value of the complexity if the input were ε = ∅. Of course, the measure K(η : ε) as defined in the above equation has few practical applications, mainly because it is impossible to know the way in which information about ε is encoded in η. However, if a statistical ensemble of strings is available to us, then the determination of complexity becomes an exercise in information theory. It can be proved that the average value C(|η|) of the physical complexity K(η : ε) taken over an ensemble Σ of strings of length |η| can be approximated by:

C(|η|) = 〈K(η : ε)〉 ≅ |η| − K(η / ε), where

K(η / ε) ≅ −∑η∈Σ p(η / ε) log2 p(η / ε)

and the sum is taken over all the strings η in the ensemble Σ. In a population of N strings in environment ε, the quantity n(η)/N, where n(η) denotes the number of strings equal to η in Σ, approximates p(η / ε) as N → ∞.
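The ensemble estimate can be sketched as follows: slide a window of length l over the binary string, estimate p(η / ε) by counting window patterns, and set C(l) = l − H, where H is the Shannon entropy of the pattern distribution. The sample string is invented for illustration:

```python
import math
from collections import Counter

def physical_complexity(bits, l):
    """Estimate C(l) = l - H, where H is the Shannon entropy (in bits)
    of the ensemble of length-l substrings read off by a moving window."""
    windows = [tuple(bits[i:i + l]) for i in range(len(bits) - l + 1)]
    counts = Counter(windows)                      # n(eta) for each pattern
    total = len(windows)                           # ensemble size N
    entropy = -sum((n / total) * math.log2(n / total)
                   for n in counts.values())       # H ~ K(eta / epsilon)
    return l - entropy

# A period-2 string yields only two window patterns, so H is about 1 bit
# and C is close to l - 1: most digits are 'explainable' regularity.
periodic = [0, 1] * 50
print(physical_complexity(periodic, 4))
```

A maximally random string would instead give H close to l and C close to 0, which is the sense in which C counts the meaningful, rather than random, bits.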

Let ε = {at}t∈N and let l be a positive integer, l ≥ 2. Let Σl be the ensemble of sequences of length l built up by a moving window of length l, i.e. if η ∈ Σl then η = aiai+1…ai+l−1 for some value of i. The selection of strings from ε is related to periods before crashes and, in contrast, to periods with low uncertainty in the market…