The Statistical Physics of Stock Markets. Thought of the Day 143.0

The externalist view argues that we can make sense of, and profit from, stock markets’ behavior, or at least a few crucial properties of it, by crunching numbers and looking for patterns and regularities in certain sets of data. The notion of data is hence a key element in such an understanding, and the quantitative side of the problem is prominent, even if this does not mean that qualitative analysis is ignored. The point here is that the outside view maintains that it provides a better understanding than the internalist view. To this end, it endorses a functional perspective on finance and stock markets in particular.

The basic idea of the externalist view is that there are general properties and behaviors of stock markets that can be detected and studied through a mathematical lens, and that they do not depend so much on contextual or domain-specific factors. The point at stake here is that financial systems can be studied and approached at different scales, and it is virtually impossible to produce all the equations describing, at a micro level, all the objects of the system and their relations. So, in response, this view focuses on those properties that allow us to get an understanding of the behavior of the system at a global level without having to produce a detailed conceptual and mathematical account of the inner ‘machinery’ of the system. Hence the two roads: the first is to embrace an emergentist view of stock markets, that is, a specific metaphysical, ontological, and methodological thesis, while the second is to embrace a heuristic view, that is, the idea that the choice to focus on those properties that are tractable by mathematical models is a pure problem-solving option.

A typical view of the externalist approach is the one provided, for instance, by statistical physics. In describing collective behavior, this discipline neglects all the conceptual and mathematical intricacies deriving from a detailed account of the inner, individual, micro-level functioning of a system. Concepts such as stochastic dynamics, self-similarity, correlations (both short- and long-range), and scaling are tools to this end. Econophysics is a stock example in this sense: it employs methods taken from mathematics and mathematical physics in order to detect and forecast the driving forces of stock markets and their critical events, such as bubbles, crashes and their tipping points. In this respect, markets are not ‘dark boxes’: you can see their characteristics from the outside, or better, you can see specific dynamics that shape the trends of stock markets deeply and for a long time. Moreover, these dynamics are complex in the technical sense. This means that this class of behavior encompasses timescales, ontologies, types of agents, ecologies, regulations, laws, etc., and can be detected, even if not strictly predicted. We can focus on the stock markets as a whole, or on a few of their critical events, looking at the data of prices (or other indexes) and ignoring all the other details and factors, since they will be absorbed in these global dynamics. So this view provides a look at stock markets such that not only do they not appear as an unintelligible casino where wild gamblers face each other, but it also shows the reasons and the properties of a system that serves mostly as a means of fluid transactions enabling and easing the functioning of free markets.
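
To make this ‘view from outside’ concrete, here is a minimal sketch (my own illustration, not drawn from the text) of the kind of pattern-hunting meant here: given a price series, compute log-returns and check two standard statistical-physics-flavored diagnostics, namely the autocorrelation of returns versus that of their absolute values (short- versus long-range correlations, i.e. volatility clustering) and a crude excess-kurtosis measure of fat tails. The synthetic random-walk series is only a stand-in for real price data and will, by construction, show none of these signatures.

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of a series at a given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

def diagnostics(prices, lags=(1, 5, 20)):
    """Externalist-style diagnostics on a price series: correlations and tails."""
    r = np.diff(np.log(prices))              # log-returns
    return {
        "acf_returns": {k: autocorr(r, k) for k in lags},             # ~0 if returns are uncorrelated
        "acf_abs_returns": {k: autocorr(np.abs(r), k) for k in lags}, # slow decay signals volatility clustering
        "kurtosis_excess": float(((r - r.mean())**4).mean() / r.var()**2 - 3.0),  # >0 means fat tails
    }

# Stand-in data: a Gaussian random walk (real price data would go here instead).
rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal(5000)))
print(diagnostics(prices))
```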

Moreover, complex systems theory and the study of stock markets seem to offer mutual benefits. On one side, complex systems theory seems to offer a key to understand and break through some of the most salient properties of stock markets. On the other side, stock markets seem to provide a ‘stress test’ for complexity theory. Didier Sornette expresses how the analogies between stock markets and phase transitions, statistical mechanics, nonlinear dynamics, and disordered systems mold this view from outside:

Take our personal life. We are not really interested in knowing in advance at what time we will go to a given store or drive to a highway. We are much more interested in forecasting the major bifurcations ahead of us, involving the few important things, like health, love, and work, that count for our happiness. Similarly, predicting the detailed evolution of complex systems has no real value, and the fact that we are taught that it is out of reach from a fundamental point of view does not exclude the more interesting possibility of predicting phases of evolutions of complex systems that really count, like the extreme events. It turns out that most complex systems in natural and social sciences do exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their posterior evolution. Such extreme events express more than anything else the underlying “forces” usually hidden by almost perfect balance and thus provide the potential for a better scientific understanding of complex systems.

Phase transitions, critical points and extreme events seem to be so pervasive in stock markets that they are the crucial concepts to explain and, where possible, foresee. And complexity theory provides us with a fruitful reading key to understand their dynamics, namely their generation, growth and occurrence. Such a reading key proposes a clear-cut interpretation of them, which can be explained again by means of an analogy with physics, precisely with the unstable position of an object. Complexity theory suggests that critical or extreme events occurring at large scale are the outcome of interactions occurring at smaller scales. In the case of stock markets, this means that, unlike many approaches that attempt to account for crashes by searching for ‘mechanisms’ that work at very short time scales, complexity theory indicates that crashes have causes that date back months or years before them. This reading suggests that it is the increasing, inner interaction between the agents inside the markets that builds up the unstable dynamics (typically the financial bubbles) that eventually end up in a critical event, the crash. But here the specific, final step that triggers the critical event, the collapse of prices, is not the key to its understanding: a crash occurs because the markets are in an unstable phase and any small interference or event may trigger it. The bottom line: the trigger can be virtually any event external to the markets. The real cause of the crash is its overall unstable position; the proximate ‘cause’ is secondary and accidental. Or, in other words, a crash may be fundamentally endogenous in nature, whilst an exogenous, external shock is simply its occasional triggering factor. The instability is built up by a cooperative behavior among traders, who imitate each other (in this sense it is an endogenous process) and contribute to forming and reinforcing trends that converge up to a critical point.
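
The buildup of instability through imitation can be caricatured with a textbook mean-field sketch (my own toy, not Sornette’s model): let m be the average ‘sentiment’ of traders who each tend to copy the prevailing opinion with imitation strength J. Below a critical strength the balanced state m = 0 is stable; above it, an arbitrarily small bias is amplified into a collectively polarized state, a cartoon of the bubble regime in which any small shock can then trigger the crash.

```python
import numpy as np

def equilibrium_sentiment(J, m0=1e-3, steps=10_000):
    """Fixed point of the mean-field imitation map m -> tanh(J*m).
    J is the imitation strength; J > 1 is the polarized, bubble-like regime."""
    m = m0
    for _ in range(steps):
        m = np.tanh(J * m)
    return m

for J in (0.5, 0.9, 1.1, 1.5, 2.0):
    print(f"J = {J:3.1f}  ->  equilibrium sentiment m = {equilibrium_sentiment(J):+.3f}")
# J < 1: m stays ~0 (no collective trend); J > 1: m jumps to a finite value
# (herding amplifies an arbitrarily small bias into a self-reinforcing trend).
```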

The main advantage of this approach is that the system (the market) would anticipate the crash by releasing precursory fingerprints observable in the stock market prices: the market prices contain information on impending crashes and this implies that:

if the traders were to learn how to decipher and use this information, they would act on it and on the knowledge that others act on it; nevertheless, the crashes would still probably happen. Our results suggest a weaker form of the “weak efficient market hypothesis”, according to which the market prices contain, in addition to the information generally available to all, subtle information formed by the global market that most or all individual traders have not yet learned to decipher and use. Instead of the usual interpretation of the efficient market hypothesis in which traders extract and consciously incorporate (by their action) all information contained in the market prices, we propose that the market as a whole can exhibit “emergent” behavior not shared by any of its constituents.

In a nutshell, the critical events emerge in a self-organized and cooperative fashion as the macro result of the internal and micro interactions of the traders, their imitation and mirroring.
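
One concrete instance of such precursory fingerprints in Sornette’s work is the log-periodic power-law (LPPL) signature: a super-exponential run-up of the log-price decorated by oscillations whose peaks compress geometrically toward the critical time tc. A minimal sketch with synthetic data (the functional form is quoted from memory and the parameter values are mine, purely illustrative):

```python
import numpy as np

# Log-periodic power-law form used in Sornette-style bubble diagnostics:
#   ln p(t) = A + B*(tc - t)**m + C*(tc - t)**m * cos(omega*ln(tc - t) + phi)
# Parameter values below are arbitrary, chosen only to make the signature visible.
A, B, C = 8.0, -0.5, 0.05
m, omega, phi, tc = 0.33, 6.0, 0.0, 1000.0

def log_price(t):
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) + phi)

t = np.linspace(0.0, 999.0, 100_000)
lp = log_price(t)

# Times where the log-periodic phase completes a full cycle: omega*ln(tc - t_n) = 2*pi*n.
n = np.arange(1, 6)
t_n = tc - np.exp(2 * np.pi * n / omega)[::-1]   # oscillation markers, ordered in time
gaps = np.diff(t_n)                              # spacings shrink toward tc
print("markers t_n              :", np.round(t_n, 1))
print("spacings                 :", np.round(gaps, 1))
print("spacing ratios ~ 1/lambda:", np.round(gaps[1:] / gaps[:-1], 3),
      " with lambda =", round(np.exp(2 * np.pi / omega), 3))
print("log-price accelerates toward tc:", round(lp[0], 2), "->", round(lp[-1], 2))
```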

 


Momentum Space Topology Generates Massive Fermions. Thought of the Day 142.0

Topological quantum phase transitions: The vacua at b₀ ≠ 0 and b > M have Fermi surfaces. At b² > b₀² + M², these Fermi surfaces have nonzero global topological charges N3 = +1 and N3 = −1. At the quantum phase transition occurring on the line b₀ = 0, b > M (thick horizontal line) the Fermi surfaces shrink to the Fermi points with nonzero N3. At M² < b² < b₀² + M² the global topology of the Fermi surfaces is trivial, N3 = 0. At the quantum phase transition occurring on the line b = M (thick vertical line), the Fermi surfaces shrink to points; and since their global topology is trivial, the zeroes disappear at b < M, where the vacuum is fully gapped. The quantum phase transition between the Fermi surfaces with and without topological charge N3 occurs at b² = b₀² + M² (dashed line). At this transition, the Fermi surfaces touch each other, and their topological charges annihilate each other.

What we have assumed here is that the Fermi point in the Standard Model above the electroweak energy scale is marginal, i.e. its total topological charge is N3 = 0. Since topology does not protect such a point, everything depends on symmetry, which is a more subtle matter. In principle, one may expect that the vacuum is always fully gapped. This is supported by Monte Carlo simulations, which suggest that in the Standard Model there is no second-order phase transition at finite temperature; instead one has either a first-order electroweak transition or a crossover, depending on the ratio of the masses of the Higgs and gauge bosons. This would actually mean that the fermions are always massive.

Such a scenario does not contradict the momentum-space topology, provided the total topological charge N3 is zero. However, from the point of view of momentum-space topology there is another scheme for the description of the Standard Model. Let us assume that the Standard Model follows from a GUT with SO(10) group. Here, the 16 Standard Model fermions form at high energy the 16-plet of the SO(10) group. All the particles of this multiplet are left-handed fermions. These are: four left-handed SU(2) doublets (neutrino-electron and 3 doublets of quarks) + eight left SU(2) singlets of anti-particles (antineutrino, positron and 6 anti-quarks). The total topological charge of the Fermi point at p = 0 is N3 = −16, and thus such a vacuum is topologically stable and is protected against the mass of fermions. This topological protection works even if the SU(2) × U(1) symmetry is violated perturbatively, say, due to the mixing of different species of the 16-plet. Mixing of the left leptonic doublet with the left singlets (antineutrino and positron) violates SU(2) × U(1) symmetry, but this does not lead to annihilation of the Fermi points and mass formation, since the topological charge N3 is conserved.
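
To make the invariant N3 a little less abstract, here is a small numerical sketch (my own illustration, with the usual caveat about sign conventions). Near a chiral Fermi point a two-band Hamiltonian can be written as H(p) = d(p)·σ, and N3 then reduces to the winding number of the unit vector d̂ over a small sphere enclosing the node: a right-handed Weyl fermion contributes +1, a left-handed one −1, so sixteen left-handed species add up to the −16 quoted above.

```python
import numpy as np

def winding_number(d_of_p, n_theta=200, n_phi=400, radius=1.0):
    """Degree of the map p -> d(p)/|d(p)| over a sphere enclosing a point node.
    For a two-band Hamiltonian H(p) = d(p)·sigma this coincides (up to an overall
    sign convention) with the Fermi-point charge N3 discussed in the text."""
    theta = np.linspace(1e-4, np.pi - 1e-4, n_theta)
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    dth, dph = theta[1] - theta[0], phi[1] - phi[0]
    T, P = np.meshgrid(theta, phi, indexing="ij")
    px, py, pz = radius*np.sin(T)*np.cos(P), radius*np.sin(T)*np.sin(P), radius*np.cos(T)
    d = d_of_p(px, py, pz)                       # shape (3, n_theta, n_phi)
    d_hat = d / np.linalg.norm(d, axis=0)
    d_th = np.gradient(d_hat, dth, axis=1)       # derivatives of the unit vector field
    d_ph = np.gradient(d_hat, dph, axis=2)
    integrand = np.einsum("ijk,ijk->jk", d_hat, np.cross(d_th, d_ph, axis=0))
    return np.sum(integrand) * dth * dph / (4 * np.pi)

# Right-handed Weyl point, H = +sigma·p, and left-handed Weyl point, H = -sigma·p:
print(f"N3(right-handed) ~ {winding_number(lambda x, y, z: np.array([x, y, z])):+.3f}")
print(f"N3(left-handed)  ~ {winding_number(lambda x, y, z: np.array([-x, -y, -z])):+.3f}")
```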

What this means in a nutshell is that if the total topological charge of the Fermi surfaces is non-zero, the gap cannot appear perturbatively. It can only arise due to a crucial reconstruction of the fermionic spectrum with an effective doubling of fermions. In the same manner, in the SO(10) GUT model mass generation can only occur non-perturbatively. The mixing of the left and right fermions requires the introduction of the right fermions, and thus the effective doubling of the number of fermions. The corresponding Gor’kov’s Green’s function in this case will be a (16 × 2) × (16 × 2) matrix. The nullification of the topological charge N3 = −16 occurs in exactly the same manner as in superconductors. In the extended (Gor’kov) Green’s function formalism appropriate below the transition, the topological charge of the original Fermi point is annihilated by the opposite charge N3 = +16 of the Fermi point of “holes” (right-handed particles).

This demonstrates that the mechanism of generation of mass of fermions essentially depends on the momentum space topology. If the Standard Model originates from the SO(10) group, the vacuum belongs to the universality class with the topologically non-trivial chiral Fermi point (i.e. with N3 ≠ 0), and the smooth crossover to the fully-gapped vacuum is impossible. On the other hand, if the Standard Model originates from the left-right symmetric Pati–Salam group such as SU(2)L × SU(2)R × SU(4), and its vacuum has the topologically trivial (marginal) Fermi point with N3 = 0, the smooth crossover to the fully-gapped vacuum is possible.

Black Hole Analogue: Extreme Blue Shift Disturbance. Thought of the Day 141.0

One major contribution of the theoretical study of black hole analogues has been to help clarify the derivation of the Hawking effect, which leads to a study of Hawking radiation in a more general context, one that involves, among other features, two horizons. There is an apparent contradiction in Hawking’s semiclassical derivation of black hole evaporation, in that the radiated fields undergo arbitrarily large blue-shifting in the calculation, thus acquiring arbitrarily large masses, which contravenes the underlying assumption that the gravitational effects of the quantum fields may be ignored. This is known as the trans-Planckian problem. A similar issue arises in condensed matter analogues such as the sonic black hole.

Sonic horizons in a moving fluid, in which the speed of sound is 1. The velocity profile of the fluid, v(z), attains the value −1 at two values of z; these are horizons for sound waves that are right-moving with respect to the fluid. At the right-hand horizon right-moving waves are trapped, with waves just to the left of the horizon being swept into the supersonic flow region v < −1; no sound can emerge from this region through the horizon, so it is reminiscent of a black hole. At the left-hand horizon right-moving waves become frozen and cannot enter the supersonic flow region; this is reminiscent of a white hole.

Consider the sonic horizons in a one-dimensional fluid flow with the velocity profile depicted in the figure above: two horizons are formed for sound waves that propagate to the right with respect to the fluid. The horizon on the right of the supersonic flow region v < −1 behaves like a black hole horizon for right-moving waves, while the horizon on the left of the supersonic flow region behaves like a white hole horizon for these waves. In such a system, the equation for a small perturbation φ of the velocity potential is

(∂t + ∂z v)(∂t + v ∂z)φ − ∂z²φ = 0 —– (1)

In terms of a new coordinate τ defined by

dτ := dt + v/(1 − v²) dz

(1) is the equation □φ = 0 of a scalar field in the black-hole-type metric

ds² = (1 − v²)dτ² − dz²/(1 − v²)
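
As a cross-check of the step from (1) to this metric (a standard computation, reconstructed here rather than quoted from the text): (1) is the massless wave equation of the 1+1-dimensional acoustic line element ds² = (1 − v²)dt² + 2v dt dz − dz², whose metric determinant is −1, and the coordinate τ simply completes the square. A short sympy verification of both claims:

```python
import sympy as sp

t, z, v, dt, dz, dtau = sp.symbols('t z v dt dz dtau')
V = sp.Function('v')(z)                  # flow profile v(z)
phi = sp.Function('phi')(t, z)           # velocity-potential perturbation

# Left-hand side of (1): (d_t + d_z v)(d_t + v d_z) phi - d_zz phi
psi = phi.diff(t) + V*phi.diff(z)
eq1 = psi.diff(t) + sp.diff(V*psi, z) - phi.diff(z, 2)

# Wave operator of ds^2 = (1 - v^2) dt^2 + 2 v dt dz - dz^2  (det g = -1, so sqrt(-g) = 1):
# box(phi) = d_t(g^tt d_t phi + g^tz d_z phi) + d_z(g^zt d_t phi + g^zz d_z phi)
box = (sp.diff(phi.diff(t) + V*phi.diff(z), t)
       + sp.diff(V*phi.diff(t) - (1 - V**2)*phi.diff(z), z))
print(sp.simplify(eq1 - box))            # -> 0: (1) is the wave equation of this metric

# Completing the square with dtau = dt + v/(1 - v^2) dz brings it to the static form:
ds2_pg = (1 - v**2)*dt**2 + 2*v*dt*dz - dz**2
ds2_static = ds2_pg.subs(dt, dtau - v/(1 - v**2)*dz)
print(sp.simplify(ds2_static - ((1 - v**2)*dtau**2 - dz**2/(1 - v**2))))  # -> 0
```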

Each horizon will produce a thermal spectrum of phonons with a temperature determined by the quantity that corresponds to the surface gravity at the horizon, namely the absolute value of the slope of the velocity profile:

kBT = ħα/2π,  α := |dv/dz| at v = −1 —– (2)

Hawking phonons in the fluid flow: Real phonons have positive frequency in the fluid-element frame; for right-moving phonons this frequency (ω − vk) equals k = ω/(1 + v). Thus in the subsonic-flow regions ω (conserved for each ray) is positive, whereas in the supersonic-flow region it is negative; k is positive for all real phonons. The frequency in the fluid-element frame diverges at the horizons – the trans-Planckian problem.

The trajectories of the created phonons are formally deduced from the dispersion relation of the sound equation (1). Geometrical acoustics applied to (1) gives the dispersion relation

ω − vk = ±k —– (3)

and the Hamiltonian ray equations

dz/dt = ∂ω/∂k = v ± 1 —– (4)

dk/dt = −∂ω/∂z = −v′k —– (5)

The left-hand side of (3) is the frequency in the frame co-moving with a fluid element, whereas ω is the frequency in the laboratory frame; the latter is constant for a time-independent fluid flow (“time-independent Hamiltonian” dω/dt = ∂ω/∂t = 0). Since the Hawking radiation is right-moving with respect to the fluid, we clearly must choose the positive sign in (3) and hence in (4) also. By approximating v(z) as a linear function near the horizons we obtain from (4) and (5) the ray trajectories. The disturbing feature of the rays is the behavior of the wave vector k: at the horizons the radiation is exponentially blue-shifted, leading to a diverging frequency in the fluid-element frame. These runaway frequencies are unphysical since (1) asserts that sound in a fluid element obeys the ordinary wave equation at all wavelengths, in contradiction with the atomic nature of fluids. Moreover the conclusion that this Hawking radiation is actually present in the fluid also assumes that (1) holds at all wavelengths, as exponential blue-shifting of wave packets at the horizon is a feature of the derivation. Similarly, in the black-hole case the equation does not hold at arbitrarily high frequencies because it ignores the gravity of the fields. For the black hole, a complete resolution of this difficulty will require inputs from the gravitational physics of quantum fields, i.e. quantum gravity, but for the dumb hole the physics is available for a more realistic treatment.

 

Adjacency of the Possible: Teleology of Autocatalysis. Thought of the Day 140.0

Given a network of catalyzed chemical reactions, a (sub)set R of such reactions is called:

  1. Reflexively autocatalytic (RA) if every reaction in R is catalyzed by at least one molecule involved in any of the reactions in R;
  2. F-generated (F) if every reactant in R can be constructed from a small “food set” F by successive applications of reactions from R;
  3. Reflexively autocatalytic and F-generated (RAF) if it is both RA and F.

The food set F contains molecules that are assumed to be freely available in the environment. Thus, an RAF set formally captures the notion of “catalytic closure”, i.e., a self-sustaining set supported by a steady supply of (simple) molecules from some food set….
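
A minimal computational sketch of definitions 1–3 (the toy molecules, reactions and catalysts are invented for illustration, and the reduction loop follows the standard max-RAF style procedure of Hordijk and Steel rather than anything specified in the text):

```python
# Toy catalytic reaction system.  Each reaction: (reactants, products, catalysts).
# All names and the example network below are hypothetical, for illustration only.
reactions = {
    "r1": ({"a", "b"},  {"ab"},  {"abb"}),
    "r2": ({"ab", "b"}, {"abb"}, {"ab"}),
    "r3": ({"c", "d"},  {"cd"},  {"x"}),    # catalyst "x" is never producible
}
food = {"a", "b", "c", "d"}

def closure(food_set, rxns):
    """Molecules constructible from the food set by repeatedly applying rxns."""
    produced = set(food_set)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in rxns.values():
            if reactants <= produced and not products <= produced:
                produced |= products
                changed = True
    return produced

def max_raf(food_set, rxns):
    """Largest subset that is reflexively autocatalytic and food-generated."""
    current = dict(rxns)
    while True:
        mols = closure(food_set, current)
        kept = {name: r for name, r in current.items()
                if r[0] <= mols and r[2] & mols}   # reactants reachable, catalyst reachable
        if kept.keys() == current.keys():
            return kept
        current = kept

print(sorted(max_raf(food, reactions)))   # -> ['r1', 'r2'];  r3 drops out
```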

Stuart Kauffman begins with the Darwinian idea of the origin of life in a biological ‘primordial soup’ of organic chemicals and investigates the possibility that one chemical substance catalyzes the reaction of two others, forming new reagents in the soup. Such catalyses may, of course, form chains, so that one reagent catalyzes the formation of another catalyzing another, etc., and self-sustaining loops of reaction chains are an evident possibility in the appropriate chemical environment. A statistical analysis would reveal that such catalytic reactions may form interdependent networks when the rate of catalyzed reactions per molecule approaches one, creating a self-organizing chemical cycle which he calls an ‘autocatalytic set’. When the rate of catalyses per reagent is low, only small local reaction chains form, but as the rate approaches one, the reaction chains in the soup suddenly ‘freeze’, so that what was a group of chains or islands in the soup now connects into one large interdependent network, constituting an ‘autocatalytic set’. Such an interdependent reaction network constitutes the core of the body definition unfolding in Kauffman, and its cyclic character forms the basic precondition for self-sustainment. An ‘autonomous agent’ is an autocatalytic set able to reproduce and to undertake at least one thermodynamic work cycle.
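
The sudden ‘freezing’ into one large interdependent network is the signature of a percolation-type transition, which even a generic random-graph toy (not Kauffman’s actual chemistry, just the bare statistics) displays: when the mean number of connections per node, standing in here for catalyses per molecule, crosses one, the largest connected cluster jumps from a vanishing to a finite fraction of the soup.

```python
import numpy as np

def largest_cluster_fraction(n, mean_degree, rng):
    """Largest connected component of a random graph with n nodes and
    n*mean_degree/2 random edges (a stand-in for 'catalytic connections')."""
    parent = np.arange(n)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    m = int(n * mean_degree / 2)
    edges = rng.integers(0, n, size=(m, 2))
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    roots = np.array([find(i) for i in range(n)])
    return np.bincount(roots).max() / n

rng = np.random.default_rng(1)
for c in (0.5, 0.8, 1.0, 1.2, 1.5, 2.0):
    print(f"catalyses per molecule ~ {c:3.1f}  ->  largest cluster fraction "
          f"{largest_cluster_fraction(20_000, c, rng):.3f}")
# Below c = 1 the fraction is tiny; above it a giant interconnected cluster appears.
```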

This definition implies two things: reproduction possibility, and the appearance of completely new, interdependent goals in work cycles. The latter idea requires the ability of the autocatalytic set to save energy in order to spend it in its own self-organization, in its search for reagents necessary to uphold the network. These goals evidently introduce a – restricted, to be sure – teleology defined simply by the survival of the autocatalytic set itself: actions supporting this have a local teleological character. Thus, the autocatalytic set may, as it evolves, enlarge its cyclic network by recruiting new subcycles supporting and enhancing it in a developing structure of subcycles and sub-sub-cycles. 

Kauffman proposes that the concept of ‘autonomous agent’ implies a whole new cluster of interdependent concepts. Thus, the autonomy of the agent is defined by ‘catalytic closure’ (any reaction in the network demanding catalysis will get it) which is a genuine Gestalt property in the molecular system as a whole – and thus not in any way derivable from the chemistry of single chemical reactions alone.

Kauffman’s definitions on the basis of speculative chemistry thus entail not only the Kantian cyclic structure, but also the primitive perception and action phases of Uexküll’s functional circle. Thus, Kauffman’s definition of the organism in terms of an ‘autonomous agent’ basically builds on an Uexküllian intuition, namely the idea that the most basic property in a body is metabolism: the constrained, organizing processing of high-energy chemical material and the correlated perception and action performed to localize and utilize it – all of this constituting a metabolic cycle coordinating the organism’s in- and outside, defining teleological action. Perception and action phases are so to speak the extension of the cyclical structure of the closed catalytical set to encompass parts of its surroundings, so that the circle of metabolism may only be completed by means of successful perception and action parts.

The evolution of autonomous agents is taken as the empirical basis for the hypothesis of a general thermodynamic regularity based on non-ergodicity: the Big Bang universe (and, consequently, the biosphere) is not at equilibrium and will not reach equilibrium during the lifetime of the universe. This gives rise to Kauffman’s idea of the ‘adjacent possible’. At a given point in evolution, one can define the set of chemical substances which do not exist in the universe – but which are at a distance of only one chemical reaction from a substance already existing in the universe. Biological evolution has, evidently, led to an enormous growth of types of organic macromolecules, and new such substances come into being every day. Maybe there is a sort of chemical potential leading from the actually realized substances into the adjacent possible, which is in some sense driving the evolution? In any case, Kauffman advances the hypothesis that the biosphere as such is supercritical in the sense that there is, in general, more than one reaction catalyzed by each reagent. Cells, in order not to be destroyed by this chemical storm, must be internally subcritical (even if close to the critical boundary). But if the biosphere as such is, in fact, supercritical, then this distinction seemingly a priori necessitates the existence of a boundary of the agent, protecting it against the environment.

BASEL III: The Deflationary Symbiotic Alliance Between Governments and Banking Sector. Thought of the Day 139.0

The Bank for International Settlements (BIS) is steering the banks to deal with government debt, since governments have been running large deficits to deal with the catastrophe of the BASEL 2-inspired collapse of mortgage-backed securities. The deficits range anywhere between 3 and 7 per cent of GDP, and in some cases even higher. These deficits were being used to create a floor under growth by stimulating the economy and bailing out financial institutions that got carried away by the wholesale funding of real estate. And this is precisely what BASEL 2 promulgated, i.e. encouraging financial institutions to hold mortgage-backed securities as investments.

In come the BASEL 3 rules, which require that banks be in compliance with these regulations. But who gets to decide these regulations? Actually, banks do, since they then come on board for discussions with the governments, and such negotiations are geared toward bailing banks out with government deficits in order to oil the engine of economic growth. The logic here underlines the fact that governments can continue to find a godown of sorts for their deficits, while the banks can buy government debt without any capital commitment and make a good spread without the risk, thus serving the interests of both parties. Moreover, for the government, the process is political, as no government would find it acceptable to stand by and let a bubble deflate, because any process of deleveraging would cause the banks to curtail their lending orgy, which is detrimental to the engineered economic growth. Importantly, without these deficits, the financial system could go down a deflationary spiral, which might turn out to be a difficult proposition to recover from if there isn’t any complicity in rhyme and reason accorded to this particular dysfunctional and symbiotic relationship. So, what’s the implication of all this? The more government debt banks hold, the less overall capital they need. And who says so? BASEL 3.
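
The claim that more government debt means less required capital is just risk-weighting arithmetic. A stylized sketch using standardized-approach-style numbers (an 8 per cent minimum ratio, a 0 per cent weight for highly rated sovereigns, 100 per cent for an unrated corporate; the balance sheets themselves are invented):

```python
# Stylized Basel-style capital arithmetic (illustrative numbers only).
# required capital = minimum ratio * sum(risk_weight * exposure)
MIN_RATIO = 0.08
RISK_WEIGHTS = {            # standardized-approach style weights (simplified)
    "AA_sovereign": 0.00,   # highly rated government debt carries a zero weight
    "residential_mortgage": 0.35,
    "unrated_corporate": 1.00,
}

def required_capital(balance_sheet):
    return MIN_RATIO * sum(RISK_WEIGHTS[a] * amt for a, amt in balance_sheet.items())

bank_a = {"AA_sovereign": 100.0, "unrated_corporate": 0.0}
bank_b = {"AA_sovereign": 0.0,   "unrated_corporate": 100.0}
print("capital needed holding 100 of AA sovereign debt:", required_capital(bank_a))  # 0.0
print("capital needed holding 100 of corporate loans  :", required_capital(bank_b))  # 8.0
```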

But the mesh just seems to be building up here. In the same way that banks engineered counterfeit AAA-backed securities that were in fact an improbable financial hoax, how can countries with government debt/GDP ratios to the tune of 90 to 120 per cent get a Standard & Poor’s rating of double-A? They have these ratings because they belong to an apical club that gives its members exclusive rights to a high rating even if they are irresponsible with their issuing of debt. Well, is it this simple? Yes and no. Yes, as above; and no merely clothes itself in a bit of economic jargon, in that these are the countries whose government debt can be held without any capital against it. In other words, if a debt cannot be held, it cannot be issued, and that is the reason why countries are striving to issue debt that has a zero weighting.

Let us take snippets across the gradations of BASEL 1, 2 and 3. In BASEL 1, the unintended consequence was that banks were all buying equity in cross-owned companies. When the unwinding happened, equity just fell apart, since any beginning of a financial crisis is tailored to smash bank equities to begin with. That’s the first wound to rationality. In BASEL 2, banks were told they could hold as much AAA-rated paper as they wanted with no capital against it. What would happen if these ratings were downgraded? It would trigger a tsunami cutting through pension and insurance schemes to begin with, forcing them to sell their paper and pile up huge losses meant to be absorbed by capital, which doesn’t exist against these papers. So whatever gets sold is politically cushioned and buffered by the governments, for the risks cannot be allowed to get any denser, as that explosion would sound the catastrophic death knell for the economy. BASEL 3 doesn’t really help, even if it mandates holding a concentrated portfolio of government debt without any capital against it, for the absorption of losses, in case a crisis hits, would have to be exhumed through government bail-outs in scenarios where government debt is at a hundred per cent of GDP and more. So, are the banks in stability, or given to more instability, via BASEL 3? The incentives to hold ever more government securities increase bank exposure to sovereign bonds, adding to the existing exposure to government securities via repurchase transactions, investments and trading inventories. A ratings downgrade results in a fall in the value of bonds, triggering losses. Banks would then face calls for additional collateral, which would drain liquidity, and which would then require additional capital by way of compensation. Where would this capital come from, if not from the governments? One way out would be recapitalization through government debt. On the other hand, the markets are required to hedge against the large holdings of government securities, and so stocks, currencies and insurance companies are shorted and made to stare in the face of the volatility that rips through them, the net resultant of which is falling liquidity. So, this vicious cycle would continue to cycle its way through any downgrades. And that’s why the deflationary symbiotic alliance between the governments and the banking sector isn’t anything more than high-fatigue tolerance….

Hypostatic Abstraction. Thought of the Day 138.0

Hypostatic abstraction is linguistically defined as the process of making a noun out of an adjective; logically as making a subject out of a predicate. The idea here is that in order to investigate a predicate – which other predicates it is connected to, which conditions it is subjected to, in short to test its possible consequences using Peirce’s famous pragmatic maxim – it is necessary to posit it as a subject for investigation.

Hypostatic abstraction is supposed to play a crucial role in the reasoning process for several reasons. The first is that by making a thing out of a thought, it facilitates the possibility for thought to reflect critically upon the distinctions with which it operates, to control them, reshape them, combine them. Thought becomes emancipated from the prison of the given, in which abstract properties exist only as Husserlian moments, and even if prescission may isolate those moments and induction may propose regularities between them, the road for thought to the possible establishment of abstract objects and the relations between them seems barred. The object created by a hypostatic abstraction is a thing, but it is of course no actually existing thing; rather it is a scholastic ens rationis, a figment of thought. It is a second intention, a thought about a thought – but this does not, in Peirce’s realism, imply that it is necessarily fictitious. In many cases it may indeed be, but in other cases we may hit upon an abstraction having real existence:

Putting aside precisive abstraction altogether, it is necessary to consider a little what is meant by saying that the product of subjectal abstraction is a creation of thought. (…) That the abstract subject is an ens rationis, or creation of thought does not mean that it is a fiction. The popular ridicule of it is one of the manifestations of that stoical (and Epicurean, but more marked in stoicism) doctrine that existence is the only mode of being which came in shortly before Descartes, in consequence of the disgust and resentment which progressive minds felt for the Dunces, or Scotists. If one thinks of it, a possibility is a far more important fact than any actuality can be. (…) An abstraction is a creation of thought; but the real fact which is important in this connection is not that actual thinking has caused the predicate to be converted into a subject, but that this is possible. The abstraction, in any important sense, is not an actual thought but a general type to which thought may conform.

The seemingly scepticist pragmatic maxim never ceases to surprise: if we take all possible effects we can conceive an object to have, then our conception of those effects is identical with our conception of that object, the maxim claims – but if we can conceive of the abstract properties of an object as having effects, then they are part of our conception of it, and hence they must possess reality as well. An abstraction is a possible way for an object to behave – and if certain objects do in fact conform to this behavior, then that abstraction is real; it is a ‘real possibility’ or a general object. If not, it may still retain its character of possibility. Peirce’s definitions of hypostatic abstraction now and then confuse this point. When he claims that

An abstraction is a substance whose being consists in the truth of some proposition concerning a more primary substance,

then the abstraction’s existence depends on the truth of some claim concerning a less abstract substance. But if the less abstract substance in question does not exist, and the claim in question consequently will be meaningless or false, then the abstraction will – following that definition – cease to exist. The problem is only that Peirce does not sufficiently clearly distinguish between the really existing substances which abstractive expressions may refer to, on the one hand, and those expressions themselves, on the other. It is the same confusion which may make one shuttle between hypostatic abstraction as a deduction and as an abduction. The first case corresponds to there actually existing a thing with the quality abstracted, and where we consequently may expect the existence of a rational explanation for the quality, and, correlatively, the existence of an abstract substance corresponding to the supposed ens rationis – the second case corresponds to the case – or the phase – where no such rational explanation and corresponding abstract substance has yet been verified. It is of course always possible to make an abstraction symbol, given any predicate – whether that abstraction corresponds to any real possibility is an issue for further investigation to estimate. And Peirce’s scientific realism makes him demand that the connections to actual reality of any abstraction should always be estimated (The Essential Peirce):

every kind of proposition is either meaningless or has a Real Secondness as its object. This is a fact that every reader of philosophy should carefully bear in mind, translating every abstractly expressed proposition into its precise meaning in reference to an individual experience.

This warning is directed, of course, towards empirical abstractions, which require the support of particular instances to be pragmatically relevant; it could hardly hold for mathematical abstraction. But in any case hypostatic abstraction is necessary for the investigation, be it in pure or empirical scenarios.

Bacteria’s Perception-Action Circle: Materiality of the Ontological. Thought of the Day 136.0

The unicellular organism has thin filaments protruding from its cell membrane, and in the absence of any stimuli it simply wanders around randomly by alternating between two characteristic movement patterns. One is performed by rotating the flagella counterclockwise; in that case they form a bundle which pushes the cell forward along a curved path, a ‘run’ of random duration. These runs interchange with ‘tumbles’, where the flagella shift to clockwise rotation, making them work independently and hence moving the cell around erratically with small net displacement. The biased random walk now consists in the fact that in the presence of a chemical attractant, the runs happening to carry the cell closer to the attractant are extended, while runs in other directions are not. The sensing of the chemical attractant is performed temporally rather than spatially, because the cell moves too rapidly for concentration comparisons between its two ends to be possible. A chemical repellant in the environment gives rise to an analogous behavioral structure – now the biased random walk takes the cell away from the repellant. The bias saturates very quickly – which is what prevents the cell from continuing in a ‘false’ direction, because a higher concentration of attractant will now be needed to repeat the bias. The reception system has three parts: one detecting repellants such as leucine, another detecting sugars, and a third detecting oxygen and oxygen-like substances.
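
A minimal one-dimensional caricature of this biased random walk (parameter values are mine and the saturation of the bias is ignored): the cell runs at constant speed, senses the attractant only temporally by comparing the current concentration with the one just visited, and tumbles less often when the comparison is favourable. Even this crude rule produces a net drift up the gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

def chemotaxis_1d(steps=20_000, speed=1.0, base_tumble=0.2, biased_tumble=0.05):
    """Run-and-tumble caricature: tumble less often when the attractant
    concentration has just increased (temporal, not spatial, sensing)."""
    attractant = lambda x: 0.01 * x          # linear attractant gradient (illustrative)
    x, direction = 0.0, 1.0
    last_c = attractant(x)
    for _ in range(steps):
        x += speed * direction               # a 'run' step
        c = attractant(x)
        moving_up = c > last_c
        last_c = c
        # runs up the gradient are extended: lower tumbling probability
        p_tumble = biased_tumble if moving_up else base_tumble
        if rng.random() < p_tumble:
            direction = rng.choice([-1.0, 1.0])   # a 'tumble': new random direction
    return x

drifts = [chemotaxis_1d() for _ in range(20)]
print("mean displacement up the gradient:", round(float(np.mean(drifts)), 1))
# An unbiased walker (equal tumble probabilities) would show mean displacement ~ 0.
```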

Figure: Uexküll’s model of the functional circle.

The cell’s behavior forms a primitive, if full-fledged, example of von Uexküll’s functional circle connecting specific perception signs and action signs. Functional circle behavior is thus no privilege of animals equipped with central nervous systems (CNS). Both types of signs involve categorization. First, the sensory receptors of the bacterium are evidently organized around the categorization of certain biologically significant chemicals, while most chemicals that remain insignificant for the cell’s metabolism and survival are ignored. The self-preservation of metabolism and cell structure is hence the ultimate regulator, supported by the perception-action cycles described. The categorization inherent in the very structure of the sensors is mirrored in the categorization of act types. Three act types are outlined: a null-action, composed of random running and tumbling, and two mirroring biased variants triggered by attractants and repellants, respectively. Moreover, a negative feedback loop governed by quick satiation grants that the window of concentration shifts to which the cell is able to react appropriately is large – it so to speak calibrates the sensory system so that it does not remain blinded by one perception and does not keep moving the cell forward in one selected direction. This adaptation of the system grants that it works across a large range of different attractant/repellant concentrations. The simple signals at stake in the cell’s functional circle display an important property: at simple biological levels, the distinction between signs and perception vanishes – that distinction is supposedly only relevant for higher, CNS-based animals. Here, the signals are based on categorical perception – a perception which immediately categorizes the entity perceived and thus remains blind to internal differences within the category.

The mechanism by which the cell identifies sugar is partly identical to what goes on in human taste buds. Sensing sugar gradients must, of course, differ from consuming sugar – while the latter destroys the sugar molecule, the former merely reads an ‘active site’ on the outside of the macromolecule. E. coli – exactly like us – may be fooled by artificial sweeteners bearing the same ‘active site’ on their outer perimeter, even though they are completely different chemicals (this is, of course, the secret behind such sweeteners: they are not sugars and hence do not enter the digestion process carrying the energy of carbohydrates). This implies that E. coli may be fooled. Bacteria may not lie, but a simpler process than lying (which presupposes two agents and the ability of being fooled) is, in fact, being fooled (presupposing, in turn, only one agent and an ambiguous environment). E. coli has the ability to categorize a series of sugars – but, by the same token, the ability to categorize a series of irrelevant substances along with them. On the one hand, the ability to recognize and categorize an object by a surface property only (due to the weak van der Waals bonds and hydrogen bonds to the ‘active site’, in contrast to the strong covalent bonds holding the molecule together) facilitates perceptual economy and quick action adaptability. On the other hand, the economy involved in judging objects from their surface only has an unavoidable flip side: it involves the possibility of mistake, of being fooled by allowing impostors into your categorization. So the perception-action circle of a bacterium, with the self-regulatory stability of a metabolism built on categorized signal and action engagement with the surroundings, already contains the germ of what develops into intercellular communication in multicellular organisms and reaches out to complicated perception and communication in higher animals.