Revisiting Catastrophes. Thought of the Day 134.0

The most explicit influence from mathematics in semiotics is probably René Thom’s controversial theory of catastrophes, with philosophical and semiotic support from Jean Petitot. Catastrophe theory is but one of several formalisms in the broad field of qualitative dynamics (comprising also chaos theory, complexity theory, self-organized criticality, etc.). In all these cases, the theories in question are in a certain sense phenomenological because the focus is on different types of qualitative behavior of dynamic systems grasped on a purely formal level, bracketing their causal determination on the deeper level. A widespread tool in these disciplines is phase space – a space defined by the variables governing the development of the system, so that this development may be mapped as a trajectory through phase space, each point on the trajectory mapping one global state of the system. This space may be inhabited by different types of attractors (attracting trajectories), repellors (repelling them), attractor basins around attractors, and borders between such basins characterized by different types of topological saddles which may have a complicated topology.

Catastrophe theory has its basis in differential topology, that is, the branch of topology keeping various differential properties in a function invariant under transformation. It is, more specifically, the so-called Whitney topology whose invariants are points where the nth derivative of a function takes the value 0, graphically corresponding to minima, maxima, turning tangents, and, in higher dimensions, different complicated saddles. Catastrophe theory takes its point of departure in singularity theory whose object is the shift between types of such functions. It thus erects a distinction between an inner space – where the function varies – and an outer space of control variables charting the variation of that function including where it changes type – where, e.g. it goes from having one minimum to having two minima, via a singular case with turning tangent. The continuous variation of control parameters thus corresponds to a continuous variation within one subtype of the function, until it reaches a singular point where it discontinuously, ‘catastrophically’, changes subtype. The philosophy-of-science interpretation of this formalism now conceives the stable subtype of function as representing the stable state of a system, and the passage of the critical point as the sudden shift to a new stable state. The configuration of control parameters thus provides a sort of map of the shift between continuous development and discontinuous ‘jump’. Thom’s semiotic interpretation of this formalism entails that typical catastrophic trajectories of this kind may be interpreted as stable process types phenomenologically salient for perception and giving rise to basic verbal categories.

Figure: the cusp catastrophe and its nested diagrams (a)–(c).

One of the simpler catastrophes is the so-called cusp (a). It constitutes a meta-diagram, namely a diagram of the possible type-shifts of a simpler diagram (b), that of the equation ax⁴ + bx² + cx = 0. The upper part of (a) shows the so-called fold, charting the manifold of solutions to the equation in the three dimensions a, b and c. By projecting the fold onto the a, b-plane, the pointed figure of the cusp (lower a) is obtained. The cusp now charts the type-shift of the function: inside the cusp, the function has two minima; outside it, only one minimum. Different paths through the cusp thus correspond to different variations of the equation under variation of the external variables a and b. One such typical path is the one indicated by the left-right arrow on all four diagrams, which crosses the cusp from inside out, giving rise to a diagram of a further level (c) – depending on the interpretation of the minima as simultaneous states. Here, then, we find diagram transformations on three different, nested levels.
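
As an illustration (not part of Thom’s own presentation), here is a minimal numerical sketch using the standard normal form of the cusp potential, V(x) = x⁴/4 + ax²/2 + bx, a reparametrization of the quartic above: counting the minima of V over the (a, b) control plane reproduces the type-shift that the cusp charts, with two minima inside the cusp region 4a³ + 27b² < 0 and one outside. The function name, grid and sampled parameter values are ours.

```python
import numpy as np

def num_minima(a, b):
    """Count local minima of the cusp potential V(x) = x**4/4 + a*x**2/2 + b*x.
    Critical points solve V'(x) = x**3 + a*x + b = 0; a critical point is a
    minimum when V''(x) = 3*x**2 + a > 0."""
    roots = np.roots([1.0, 0.0, a, b])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return int(np.sum(3 * real**2 + a > 0))

# Scan a few points of the (a, b) control plane: inside the cusp region the
# potential has two minima, outside only one.  The analytic boundary of the
# cusp is 4*a**3 + 27*b**2 = 0.
for a in (-2.0, -1.0, 0.5):
    for b in (0.0, 0.5, 1.5):
        inside = 4 * a**3 + 27 * b**2 < 0
        print(f"a={a:+.1f} b={b:+.1f}  minima={num_minima(a, b)}  "
              f"inside cusp: {inside}")
```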

The concept of transformation plays several roles in this formalism. The most spectacular one refers, of course, to the change in external control variables, determining a trajectory through phase space along which the function controlled changes type. This transformation thus probes the possibility of a change of subtype of the function in question, that is, it plays the role of an eidetic variation mapping how the function is ‘unfolded’ (the basic theorem of catastrophe theory refers to such unfoldings of simple functions). Another transformation finds stable classes of such local trajectory pieces, including such shifts – making possible the recognition of such types of shifts in different empirical phenomena. On the most empirical level, finally, one run of such a trajectory piece provides, in itself, a transformation of one state into another, whereby the two states are rationally interconnected. Generally, it is possible to make a given transformation the object of a higher-order transformation which by abstraction may investigate aspects of the lower one’s type and conditions. Thus, the central unfolding of a function germ in catastrophe theory constitutes a transformation having the character of an eidetic variation, making clear which possibilities lie in the function germ in question. As an abstract formalism, the higher of these transformations may determine the lower one as invariant in a series of empirical cases.

Complexity theory is a broader and more inclusive term covering the general study of the macro-behavior of composite systems, also using phase space representation. The theoretical biologist Stuart Kauffman argues that in a phase space of all possible genotypes, biological evolution must unfold in a rather small and specifically qualified sub-space characterized by many, closely located, stable states (corresponding to the possibility for a species to ‘jump’ to another and better genotype in the face of environmental change) – as opposed to phase space areas with few, very stable states (which will only be optimal in certain, very stable environments and thus fragile when exposed to change), and also opposed, on the other hand, to sub-spaces with a high plurality of only metastable states (here, the species will tend to merge into neighboring species and hence never stabilize). On the basis of this argument, only a small subset of the set of virtual genotypes possesses ‘evolvability’, this special combination of plasticity and stability. The overall argument thus goes that order in biology is not a pure product of evolution; the possibility of order must be present in certain types of organized matter before selection begins – conversely, selection requires already organized material on which to work. The identification of a species with a co-localized group of stable states in genome space thus provides a (local) invariance for the transformation taking a trajectory through that space, and larger groups of neighboring stabilities – lineages – again provide invariants defined by various more or less general transformations. Species, in this view, are in a certain limited sense ‘natural kinds’ and thus naturally signifying entities. Kauffman’s speculations about genotypical phase space have a crucial bearing on a transformation concept central to biology, namely mutation. On this basis, far from all virtual mutations are actually possible – even apart from their degree of environmental relevance. A mutation into a stable but remotely placed species in phase space will be impossible (evolution cannot cross the distance in phase space), just as a mutation in an area with many, unstable proto-species will not allow for any stabilization of species at all and will thus fall prey to arbitrarily small environmental variations. Kauffman takes a spontaneous and non-formalized transformation concept (mutation) and attempts a formalization by investigating its conditions of possibility as movement between stable genomes in genotype phase space. A series of constraints turns out to determine type formation on a higher level (the three different types of local geography in phase space). If the trajectory of mutations must obey the possibility of walking between stable species, then the space of possible trajectories is highly limited. Self-organized criticality, as developed by Per Bak (How Nature Works: The Science of Self-Organized Criticality), belongs to the same type of theories. Criticality is here defined as that state of a complicated system in which sudden developments of all sizes spontaneously occur.
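
A toy sketch, and not Kauffman’s actual NK model: assign random fitness values to short binary genotypes and count the one-mutation local optima. A landscape with many such optima separated by modest Hamming distances is the kind of sub-space the argument singles out as ‘evolvable’; widely separated optima correspond to the mutations that single-step evolution cannot reach. All names and parameters below are illustrative.

```python
import itertools, random

random.seed(0)
N = 10                                              # toy genome length
genotypes = list(itertools.product((0, 1), repeat=N))
fitness = {g: random.random() for g in genotypes}   # fully random, rugged landscape

def neighbours(g):
    """All genotypes one point-mutation away."""
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(N)]

# A genotype is a local optimum if no single mutation improves fitness:
# these play the role of the 'stable states' between which evolution must walk.
optima = [g for g in genotypes
          if all(fitness[g] >= fitness[n] for n in neighbours(g))]
print(f"{len(optima)} local optima among {len(genotypes)} genotypes")

# Mean Hamming distance between optima: large gaps are the mutations that
# single-step evolution cannot cross.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

pairs = list(itertools.combinations(optima, 2))
print("mean distance between optima:",
      round(sum(hamming(a, b) for a, b in pairs) / len(pairs), 2))
```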


Cryptocurrency and Efficient Market Hypothesis. Drunken Risibility.

According to the traditional definition, a currency has three main properties: (i) it serves as a medium of exchange, (ii) it is used as a unit of account and (iii) it serves as a store of value. Throughout economic history, monies were related to political power. In the beginning, coins were minted in precious metals, so the value of a coin was intrinsically determined by the value of the metal itself. Later, money was printed on paper bank notes, but its value was still linked to a quantity of gold guarded in the vault of a central bank. Nation states have used their political power to regulate the use of currencies and to impose one currency (usually the one issued by the same nation state) as legal tender for obligations within their territory. In the twentieth century a major change took place: the abandonment of the gold standard. The detachment of currencies (especially the US dollar) from the gold standard meant a recognition that the value of a currency (especially in a world of fractional-reserve banking) was not related to its content or representation in gold, but to a broader notion: the confidence in the economy in which such currency is based. At this point, the value of a currency reflects the best judgment about the monetary policy and the “health” of its economy.

In recent years, a new type of currency, a synthetic one, emerged. We call this new type “synthetic” because it is not the creation of a nation state, nor does it represent any underlying asset or tangible wealth source. It appears as a new tradable asset resulting from a private agreement and facilitated by the anonymity of the internet. Among these synthetic currencies, Bitcoin (BTC) emerges as the most important one, with a market capitalization a few hundred million short of $80 billion.


Bitcoin Price Chart from Bitstamp

There are other cryptocurrencies based on blockchain technology, such as Litecoin (LTC), Ethereum (ETH) and Ripple (XRP). The website https://coinmarketcap.com/currencies/ lists up to 641 such monies. However, as we can observe in the figure below, Bitcoin represents 89% of the capitalization of the market of all cryptocurrencies.


Cryptocurrencies. Share of market capitalization of each currency.

One open question today is whether Bitcoin is in fact, or may be considered, a currency. Until now, we cannot observe that Bitcoin fulfills the main properties of a standard currency. It is barely (though increasingly!) accepted as a medium of exchange (e.g. to buy some products online), it is not used as a unit of account (there are no financial statements valued in Bitcoin), and we can hardly believe that, given the great swings in price, anyone can consider Bitcoin a suitable option to store value. Given these characteristics, Bitcoin could fit as an ideal asset for speculative purposes. There is no underlying asset to relate its value to, and there is an open platform to operate on round the clock.


Bitcoin returns, sampled every 5 hours.

Speculation has a long history and seems inherent to capitalism. One common feature of speculative assets in history has been the difficulty of valuation. Tulipmania, the South Sea bubble and many others reflect, on one side, human greed and, on the other, the difficulty of setting an objective value for an asset. All these speculative episodes were reflected in a super-exponential growth of the price time series.
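
One hedged way to make “super-exponential” operational: fit the log-price with and without a quadratic term in time; a clearly positive quadratic coefficient indicates faster-than-exponential growth. The series below is synthetic, purely for illustration, not actual bubble data.

```python
import numpy as np

t = np.arange(200, dtype=float)
# Synthetic bubble-like series: log-price grows faster than linearly in time.
log_p = 0.01 * t + 0.0004 * t**2 + np.random.default_rng(1).normal(0, 0.05, t.size)

lin = np.polyfit(t, log_p, 1)     # pure exponential growth: log p linear in t
quad = np.polyfit(t, log_p, 2)    # super-exponential: convex log-price

print("linear fit slope:", round(lin[0], 4))
print("quadratic coefficient:", round(quad[0], 6),
      "(> 0 indicates super-exponential growth)")
```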

Cryptocurrencies can be seen as the libertarian response to the failure of central banks to manage financial crises, such as the one that occurred in 2008. Cryptocurrencies can also bypass national restrictions on international transfers, probably at a cheaper cost. Bitcoin was created by a person or group of persons under the pseudonym Satoshi Nakamoto. The discussion of Bitcoin has several perspectives. The computer-science perspective deals with the strengths and weaknesses of blockchain technology. In fact, according to R. Ali et al., the introduction of a “distributed ledger” is the key innovation. Traditional means of payment (e.g. a credit card) rely on a central clearing house that validates operations, acting as a “middleman” between buyer and seller. On the contrary, the payment validation system of Bitcoin is decentralized. There is a growing army of miners, who put their computing power at the disposal of the network, validating transactions by gathering them together into blocks, adding these to the ledger and forming a ‘block chain’. This work is remunerated by giving the miners Bitcoins, which (until now) has made the validation costs cheaper than in a centralized system. The validation is made by solving a kind of algorithmic puzzle. With time, solving it becomes harder, since the whole ledger must be validated, and consequently it takes more time to solve. Contrary to traditional currencies, the total number of Bitcoins to be issued is fixed beforehand: 21 million. In fact, the issuance rate of Bitcoins is expected to diminish over time. According to Laursen and Kyed, validating the public ledger was initially rewarded with 50 Bitcoins, but the protocol foresaw halving this quantity every four years. At the current pace, the maximum number of Bitcoins will be reached in 2140. Taking into account its decentralized character, Bitcoin transactions seem secure. All transactions are recorded in several computer servers around the world. In order to commit fraud, a person would have to change and validate (simultaneously) several ledgers, which is almost impossible. In addition, ledgers are public, with encrypted identities of the parties, making transactions “pseudonymous, not anonymous”. The legal perspective on Bitcoin is fuzzy. Bitcoin is not issued, nor endorsed, by a nation state. It is not an illegal substance. As such, transactions in it are not regulated.
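
The 21-million cap is a direct consequence of the halving schedule just described (a 50 BTC reward, halved roughly every four years, i.e. every 210,000 blocks); a short check, ignoring the protocol’s satoshi-level rounding:

```python
# Cumulative Bitcoin issuance under the halving schedule: 210,000 blocks per
# era, starting at 50 BTC per block and halving each era.  Rewards below one
# satoshi (1e-8 BTC) are ignored here, so issuance stops after ~33 halvings.
blocks_per_era = 210_000
reward = 50.0
total = 0.0
era = 0
while reward >= 1e-8:
    total += blocks_per_era * reward
    reward /= 2
    era += 1
print(f"eras: {era}, total supply ≈ {total:,.0f} BTC")   # ≈ 21,000,000 BTC
```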

In particular, the nonexistence of savings accounts in Bitcoin, and consequently the absence of a Bitcoin interest rate, precludes the idea of studying price behavior in relation to cash flows generated by Bitcoins. As a consequence, the study of the underlying dynamics of the price signal turns to the Efficient Market Hypothesis as a theoretical framework. The Efficient Market Hypothesis (EMH) is the cornerstone of financial economics. One of the seminal works on the stochastic dynamics of speculative prices is due to L. Bachelier, who in his doctoral thesis developed the first mathematical model concerning the behavior of stock prices. The systematic study of informational efficiency began in the 1960s, when financial economics was born as a new area within economics. The classical definition due to Eugene Fama (Foundations of Finance: Portfolio Decisions and Securities Prices, 1976) says that a market is informationally efficient if it “fully reflects all available information”. Therefore, the key element in assessing efficiency is to determine the appropriate set of information that impels prices. Following Efficient Capital Markets, informational efficiency can be divided into three categories: (i) weak efficiency, if prices reflect the information contained in the past series of prices, (ii) semi-strong efficiency, if prices reflect all public information and (iii) strong efficiency, if prices reflect all public and private information. As a corollary of the EMH, one cannot accept the presence of long memory in financial time series, since its existence would allow a riskless profitable trading strategy. If markets are informationally efficient, arbitrage prevents the possibility of such strategies. If we consider the financial market as a dynamical structure, short-term memory can exist (to some extent) without contradicting the EMH. In fact, the presence of some mispriced assets is the necessary stimulus for individuals to trade and reach an (almost) arbitrage-free situation. However, the presence of long-range memory is at odds with the EMH, because it would allow stable trading rules to beat the market.

The presence of long-range dependence in financial time series has generated a vivid debate. Whereas the presence of short-term memory can stimulate investors to exploit small extra returns, making them disappear, long-range correlations pose a challenge to the established financial model. As recognized by Ciaian et al., the Bitcoin price is not driven by macro-financial indicators. Consequently, a detailed analysis of the underlying dynamics (the Hurst exponent) becomes important in order to understand its emerging behavior. There are several methods (both parametric and non-parametric) for calculating the Hurst exponent, which becomes a mandatory framework for tackling BTC trading.
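
A minimal, non-parametric sketch of one such method, rescaled-range (R/S) estimation of the Hurst exponent: H ≈ 0.5 is consistent with no long memory, H > 0.5 with persistent long-range dependence. It is run here on white noise only as a sanity check; the window sizes and the bias caveat are ours.

```python
import numpy as np

def hurst_rs(returns, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent by the rescaled-range (R/S) method:
    E[R/S] ~ c * n**H, so H is the slope of log(R/S) against log(n)."""
    rs_means = []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(returns) - n + 1, n):
            chunk = returns[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()          # range of cumulative deviations
            s = chunk.std(ddof=1)              # standard deviation of the chunk
            if s > 0:
                rs_vals.append(r / s)
        rs_means.append(np.mean(rs_vals))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return slope

rng = np.random.default_rng(42)
white_noise = rng.normal(size=4096)
# Roughly 0.5-0.6 for i.i.d. noise (small-sample R/S is biased upward).
print("Hurst exponent of white noise ≈", round(hurst_rs(white_noise), 2))
```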

Dynamics of Point Particles: Orthogonality and Proportionality


Let γ be a smooth, future-directed, timelike curve with unit tangent field ξa in our background spacetime (M, gab). We suppose that some massive point particle O has (the image of) this curve as its worldline. Further, let p be a point on the image of γ and let λa be a vector at p. Then there is a natural decomposition of λa into components proportional to, and orthogonal to, ξa:

λa = (λbξb)ξa + (λa − (λbξb)ξa) —– (1)

Here, the first part of the sum is proportional to ξa, whereas the second one is orthogonal to ξa.

These are standardly interpreted, respectively, as the “temporal” and “spatial” components of λa relative to ξa (or relative to O). In particular, the three-dimensional vector space of vectors at p orthogonal to ξa is interpreted as the “infinitesimal” simultaneity slice of O at p. If we introduce the tangent and orthogonal projection operators

kab = ξa ξb —– (2)

hab = gab − ξa ξb —– (3)

then the decomposition can be expressed in the form

λa = kab λb + hab λb —– (4)

We can think of kab and hab as the relative temporal and spatial metrics determined by ξa. They are symmetric and satisfy

kabkbc = kac —– (5)

habhbc = hac —– (6)

Many standard textbook assertions concerning the kinematics and dynamics of point particles can be recovered using these decomposition formulas. For example, suppose that the worldline of a second particle O′ also passes through p and that its four-velocity at p is ξ′a. (Since ξa and ξ′a are both future-directed, they are co-oriented; i.e., ξa ξ′a > 0.) We compute the speed of O′ as determined by O. To do so, we take the spatial magnitude of ξ′a relative to O and divide by its temporal magnitude relative to O:

v = speed of O′ relative to O = ∥hab ξ′b∥ / ∥kab ξ′b∥ —– (7)

For any vector μa, ∥μa∥ is (μaμa)^1/2 if μ is causal, and it is (−μaμa)^1/2 otherwise.

We have from equations 2, 3, 5 and 6

∥kab ξ′b∥ = (kab ξ′b kac ξ′c)^1/2 = (kbc ξ′bξ′c)^1/2 = (ξ′bξb)

and

∥hab ξ′b∥ = (−hab ξ′b hac ξ′c)^1/2 = (−hbc ξ′bξ′c)^1/2 = ((ξ′bξb)² − 1)^1/2

so

v = ((ξ′bξb)² − 1)^1/2 / (ξ′bξb) < 1 —– (8)

Thus, as measured by O, no massive particle can ever attain the maximal speed 1. We note that equation (8) implies that

(ξ′bξb) = 1/√(1 − v²) —– (9)

It is a basic fact of relativistic life that there is associated with every point particle, at every event on its worldline, a four-momentum (or energy-momentum) vector Pa that is tangent to its worldline there. The length ∥Pa∥ of this vector is what we would otherwise call the mass (or inertial mass or rest mass) of the particle. So, in particular, if Pa is timelike, we can write it in the form Pa =mξa, where m = ∥Pa∥ > 0 and ξa is the four-velocity of the particle. No such decomposition is possible when Pa is null and m = ∥Pa∥ = 0.

Suppose a particle O with positive mass has four-velocity ξa at a point, and another particle O′ has four-momentum Pa there. The latter can either be a particle with positive mass or mass 0. We can recover the usual expressions for the energy and three-momentum of the second particle relative to O if we decompose Pa in terms of ξa. By equations (4) and (2), we have

Pa = (Pbξb) ξa + habPb —– (10)

The first part of the sum is the energy component, while the second is the three-momentum. The energy relative to O is the coefficient in the first term: E = Pbξb. If O′ has positive mass and Pa = mξ′a, this yields, by equation (9),

E = m (ξ′bξb) = m/√(1 − v²) —– (11)

(If we had not chosen units in which c = 1, the numerator in the final expression would have been mc² and the denominator √(1 − (v²/c²)).) The three-momentum relative to O is the second term habPb in the decomposition of Pa, i.e., the component of Pa orthogonal to ξa. It follows from equations (8) and (9) that it has magnitude

p = ∥hab mξ′b∥ = m((ξ′bξb)² − 1)^1/2 = mv/√(1 − v²) —– (12)
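
A small numeric sanity check of equations (7)–(12), assuming Minkowski coordinates with signature (+, −, −, −) and c = 1: build a unit four-velocity ξ for O at rest and a boosted ξ′ for O′, apply the projections kab and hab, and recover v, E and p. The variable names and the chosen speed are ours.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric (+,-,-,-); g is its own inverse

def norm(mu_low):
    """||mu_a|| = sqrt(|mu^a mu_a|), raising the index with g."""
    return np.sqrt(abs((g @ mu_low) @ mu_low))

xi = np.array([1.0, 0.0, 0.0, 0.0])                  # four-velocity of O (at rest)
v_true = 0.6
gamma = 1.0 / np.sqrt(1.0 - v_true**2)
xi_p = np.array([gamma, gamma * v_true, 0.0, 0.0])   # four-velocity of O'

xi_low = g @ xi
k = np.outer(xi_low, xi_low)                         # k_ab = xi_a xi_b   (eq. 2)
h = g - k                                            # h_ab = g_ab - xi_a xi_b   (eq. 3)

v = norm(h @ xi_p) / norm(k @ xi_p)                  # eq. (7)
m = 1.0
E = m * (xi_low @ xi_p)                              # eq. (11): E = m (xi'^b xi_b)
p = m * v / np.sqrt(1.0 - v**2)                      # eq. (12)
print(round(v, 6), round(E, 6), round(p, 6))         # 0.6, 1.25 = m/sqrt(1-v^2), 0.75
```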

The interpretive principle asserts that the worldlines of free particles with positive mass are the images of timelike geodesics. It can be thought of as a relativistic version of Newton’s first law of motion. Now we consider acceleration and a relativistic version of the second law. Once again, let γ : I → M be a smooth, future-directed, timelike curve with unit tangent field ξa. Just as we understand ξa to be the four-velocity field of a massive point particle (that has the image of γ as its worldline), so we understand ξn∇nξa – the directional derivative of ξa in the direction ξa – to be its four-acceleration field (or just acceleration field). The four-acceleration vector at any point is orthogonal to ξa. (This is so, since ξan∇nξa) = 1/2 ξn∇naξa) = 1/2 ξn∇n(1) = 0.) The magnitude ∥ξn∇nξa∥ of the four-acceleration vector at a point is just what we would otherwise describe as the curvature of γ there. It is a measure of the rate at which γ “changes direction.” (And γ is a geodesic precisely if its curvature vanishes everywhere.)

The notion of spacetime acceleration requires attention. Consider an example. Suppose you decide to end it all and jump off the tower. What would your acceleration history be like during your final moments? One is accustomed in such cases to think in terms of acceleration relative to the earth. So one would say that you undergo acceleration between the time of your jump and your calamitous arrival. But on the present account, that description has things backwards. Between jump and arrival, you are not accelerating. You are in a state of free fall and moving (approximately) along a spacetime geodesic. But before the jump, and after the arrival, you are accelerating. The floor of the observation deck, and then later the sidewalk, push you away from a geodesic path. The all-important idea here is that we are incorporating the “gravitational field” into the geometric structure of spacetime, and particles traverse geodesics iff they are acted on by no forces “except gravity.”

The acceleration of our massive point particle – i.e., its deviation from a geodesic trajectory – is determined by the forces acting on it (other than “gravity”). If it has mass m, and if the vector field Fa on I represents the vector sum of the various (non-gravitational) forces acting on it, then the particle’s four-acceleration ξn∇nξa satisfies

Fa = mξn∇nξa —– (13)

This is Newton’s second law of motion. Consider an example. Electromagnetic fields are represented by smooth, anti-symmetric fields Fab. If a particle with mass m > 0, charge q, and four-velocity field ξa is present, the force exerted by the field on the particle at a point is given by qFabξb. If we use this expression for the left side of equation (13), we arrive at the Lorentz law of motion for charged particles in the presence of an electromagnetic field:

qFabξb = mξb∇bξa —– (14)

This equation makes geometric sense. The acceleration field on the right is orthogonal to ξa. But so is the force field on the left, since ξa(Fabξb) = ξaξbFab = ξaξbF(ab), and F(ab) = 0 by the anti-symmetry of Fab.

Evolutionary Game Theory. Note Quote

Untitled

In classical evolutionary biology the fitness landscape for possible strategies is considered static. Therefore optimization theory is the usual tool for analyzing the evolution of strategies, which consequently tend to climb the peaks of the static landscape. However, in more realistic scenarios the evolution of populations modifies the environment, so that the fitness landscape becomes dynamic. In other words, the maxima of the fitness landscape depend on the number of specimens that adopt each strategy (a frequency-dependent landscape). In this case, when the evolution depends on agents’ actions, game theory is the adequate mathematical tool to describe the process. But this is precisely the scheme in which the evolving physical laws (i.e. algorithms or strategies) are generated from agent-agent interactions (a bottom-up process) submitted to natural selection.

The concept of evolutionarily stable strategy (ESS) is central to evolutionary game theory. An ESS is defined as a strategy that cannot be displaced by any alternative strategy when it is followed by the great majority – almost all – of the systems in a population. In general, an ESS is not necessarily optimal; however, it might be assumed that in the last stages of evolution – before achieving the quantum equilibrium – the fitness landscape of possible strategies could be considered static, or at least slowly varying. In this simplified case an ESS would be the strategy with the highest payoff, therefore satisfying an optimizing criterion. Different ESSs could exist in other regions of the fitness landscape.
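
A minimal sketch of how an ESS shows up in the simplest non-quantum setting: replicator dynamics for the Hawk-Dove game, whose mixed ESS (a Hawk share of V/C when V < C) is exactly the strategy that no rare alternative can displace. The payoff values, step size and iteration count are illustrative.

```python
import numpy as np

# Hawk-Dove payoff matrix with resource value V = 2 and fight cost C = 4.
# Rows = focal strategy, columns = opponent; entry = payoff to the row player.
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],      # Hawk vs (Hawk, Dove)
              [0.0,         V / 2]]) # Dove vs (Hawk, Dove)

x = np.array([0.1, 0.9])             # initial population: 10% Hawks
dt = 0.01
for _ in range(20000):
    fitness = A @ x                  # expected payoff of each pure strategy
    mean_fit = x @ fitness
    x = x + dt * x * (fitness - mean_fit)   # replicator equation
    x = x / x.sum()                  # guard against numerical drift

print("long-run Hawk share:", round(x[0], 3), "   ESS prediction V/C =", V / C)
```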

In the information-theoretic Darwinian approach it seems plausible to assume as optimization criterion the optimization of information flows for the system. A set of three regulating principles could be:

Structure: The complexity of the system is optimized (maximized). The definition adopted for complexity is Bennett’s logical depth, which for a binary string is the time needed to execute the minimal program that generates that string. There is no general acceptance of any definition of complexity, nor is there a consensus on the relation between the increase of complexity – for a given definition – and Darwinian evolution. However, there seems to be some agreement on the fact that, in the long term, Darwinian evolution should drive an increase in complexity in the biological realm for an adequate natural definition of this concept. The complexity of a system at time t in this theory would then be the Bennett logical depth of the program stored at time t in its Turing machine. The increase of complexity is a characteristic of Lamarckian evolution, and it is also admitted that the trend of evolution in the Darwinian theory is in the direction in which complexity grows, although whether this tendency depends on the timescale – or on some other factors – is still not very clear.

Dynamics: The information outflow of the system is optimized (minimized). The information is the Fisher information measure for the probability density function of the position of the system. According to S. A. Frank, natural selection acts by maximizing the Fisher information within a Darwinian system. As a consequence, assuming that the flow of information between a system and its surroundings can be modeled as a zero-sum game, Darwinian systems would follow the corresponding dynamics.

Interaction: The interaction between two subsystems optimizes (maximizes) the complexity of the total system. The complexity is again equated with Bennett’s logical depth. The role of Interaction is central in the generation of composite systems, and therefore in the structure of the information processor of composite systems that results from the logical interconnections among the processors of the constituents. There is an enticing option of defining the complexity of a system in contextual terms, as the capacity of a system for anticipating the behavior at t + ∆t of the surrounding systems included in the sphere of radius r centered on the position X(t) occupied by the system. This definition would directly lead to the maximization of the predictive power of the systems that maximized their complexity. However, this magnitude would definitely be very difficult even to estimate, in principle much more so than the usual definitions of complexity.

Quantum behavior of microscopic systems should now emerge from the ESS. In other terms, the postulates of quantum mechanics should be deduced from the application of the three regulating principles on our physical systems endowed with an information processor.

Let us apply Structure. It is reasonable to consider that the maximization of the complexity of a system would in turn maximize the predictive power of such system. And this optimal statistical inference capacity would plausibly induce the complex Hilbert space structure for the system’s space of states. Let us now consider Dynamics. This is basically the application of the principle of minimum Fisher information or maximum Cramer-Rao bound on the probability distribution function for the position of the system. The concept of entanglement seems to be determinant to study the generation of composite systems, in particular in this theory through applying Interaction. The theory admits a simple model that characterizes the entanglement between two subsystems as the mutual exchange of randomizers (R1, R2), programs (P1, P2) – with their respective anticipation modules (A1, A2) – and wave functions (Ψ1, Ψ2). In this way, both subsystems can anticipate not only the behavior of their corresponding surrounding systems, but also that of the environment of its partner entangled subsystem. In addition, entanglement can be considered a natural phenomenon in this theory, a consequence of the tendency to increase the complexity, and therefore, in a certain sense, an experimental support to the theory.

In addition, the information-theoretic Darwinian approach is a minimalist realist theory – every system follows a continuous trajectory in time, as in Bohmian mechanics – and a local theory in physical space: apparent nonlocality, as in Bell-inequality violations, would be an artifact of the anticipation module in the information space, although randomness would necessarily be intrinsic to nature through the random number generator methodologically associated with every fundamental system at t = 0, and an essential ingredient to start and fuel – through variation – Darwinian evolution. As time increases, random events determined by the random number generators would progressively be replaced by causal events determined by the evolving programs that gradually take control of the elementary systems. Randomness would be displaced by causality as physical Darwinian evolution gave rise to the quantum equilibrium regime, but not completely, since randomness would play a crucial role in the optimization of strategies – and thus of information flows – as game theory states.

Nomological Possibility and Necessity


An event E is nomologically possible in history h at time t if the initial segment of that history up to t admits at least one continuation in Ω that lies in E; and E is nomologically necessary in h at t if every continuation of the history’s initial segment up to t lies in E.

More formally, we say that one history, h’, is accessible from another, h, at time t if the initial segments of h and h’ up to time t coincide, i.e., ht = h’t. We then write h’Rth. The binary relation Rt on possible histories is in fact an equivalence relation (reflexive, symmetric, and transitive). Now, an event E ⊆ Ω is nomologically possible in history h at time t if some history h’ in Ω that is accessible from h at t is contained in E. Similarly, an event E ⊆ Ω is nomologically necessary in history h at time t if every history h’ in Ω that is accessible from h at t is contained in E.

In this way, we can define two modal operators, ♦t and ¤t, to express possibility and necessity at time t. We define each of them as a mapping from events to events. For any event E ⊆ Ω,

♦t E = {h ∈ Ω : for some h’ ∈ Ω with h’Rth, we have h’ ∈ E},

¤t E = {h ∈ Ω : for all h’ ∈ Ω with h’Rth, we have h’ ∈ E}.

So, ♦t E is the set of all histories in which E is possible at time t, and ¤t E is the set of all histories in which E is necessary at time t. Accordingly, we say that “♦t E” holds in history h if h is an element of ♦t E, and “¤t E” holds in h if h is an element of ¤t E. As one would expect, the two modal operators are duals of each other: for any event E ⊆ Ω, we have ¤t E = ~♦t ~E and ♦t E = ~¤t ~E.
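
The two operators admit a direct finite implementation. In the sketch below, Ω is a toy set of histories (tuples of states over T = {0, 1, 2}), Rt is the accessibility relation just defined, and ♦t and ¤t are read off from it; the duality ¤t E = ~♦t ~E is checked along the way. The example event and state alphabet are ours.

```python
from itertools import product

T = range(3)
Omega = set(product("ab", repeat=3))           # toy set of nomologically possible histories

def accessible(h1, h2, t):
    """h1 R_t h2: the initial segments up to (and including) time t coincide."""
    return h1[:t + 1] == h2[:t + 1]

def diamond(E, t):
    """Histories in which E is possible at t."""
    return {h for h in Omega if any(hp in E for hp in Omega if accessible(hp, h, t))}

def box(E, t):
    """Histories in which E is necessary at t."""
    return {h for h in Omega if all(hp in E for hp in Omega if accessible(hp, h, t))}

E = {h for h in Omega if h[2] == "a"}          # the event "state 'a' at the final time"

for t in T:
    assert box(E, t) == Omega - diamond(Omega - E, t)      # duality
    print(f"t={t}: |possible|={len(diamond(E, t))}, |necessary|={len(box(E, t))}")
# As t grows, the 'possible' set shrinks and the 'necessary' set grows,
# matching the monotonicity claims below.
```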

First, although we have here defined nomological possibility and necessity, we can analogously define logical possibility and necessity. To do this, we must simply replace every occurrence of the set Ω of nomologically possible histories in our definitions with the set H of logically possible histories. Second, by defining the operators ♦t and ¤t as functions from events to events, we have adopted a semantic definition of these modal notions. However, we could also define them syntactically, by introducing an explicit modal logic. For each point in time t, the logic corresponding to the operators ♦t and ¤t would then be an instance of a standard S5 modal logic.

The analysis shows how nomological possibility and necessity depend on the dynamics of the system. In particular, as time progresses, the notion of possibility becomes more demanding: fewer events remain possible at each time. And the notion of necessity becomes less demanding: more events become necessary at each time, for instance due to having been “settled” in the past. Formally, for any t and t’ in T with t < t’ and any event E ⊆ Ω,

if ♦t’ E then ♦t E,

if ¤t E then ¤t’ E.

Furthermore, in a deterministic system, for every event E and any time t, we have ♦t E = ¤t E. In other words, an event is possible in any history h at time t if and only if it is necessary in h at t. In an indeterministic system, by contrast, necessity and possibility come apart.

Let us say that one history, h’, is accessible from another, h, relative to a set T’ of time points, if the restrictions of h and h’ to T’ coincide, i.e., h’T’ = hT’. We then write h’RT’h. Accessibility at time t is the special case where T’ is the set of points in time up to time t. We can define nomological possibility and necessity relative to T’ as follows. For any event E ⊆ Ω,

♦T’ E = {h ∈ Ω : for some h’ ∈ Ω with h’RT’h, we have h’ ∈ E},

¤T’ E = {h ∈ Ω : for all h’ ∈ Ω with h’RT’h, we have h’ ∈ E}.

Although these modal notions are much less familiar than the standard ones (possibility and necessity at time t), they are useful for some purposes. In particular, they allow us to express the fact that the states of a system during a particular period of time, T’ ⊆ T, render some events E possible or necessary.

Finally, our definitions of possibility and necessity relative to some general subset T’ of T also allow us to define completely “atemporal” notions of possibility and necessity. If we take T’ to be the empty set, then the accessibility relation RT’ becomes the universal relation, under which every history is related to every other. An event E is possible in this atemporal sense (i.e., ♦E) iff E is a non-empty subset of Ω, and it is necessary in this atemporal sense (i.e., ¤E) if E coincides with all of Ω. These notions might be viewed as possibility and necessity from the perspective of some observer who has no temporal or historical location within the system and looks at it from the outside.

Production Function as a Growth Model

Figure: Cobb-Douglas production function.

Any science is tempted by the naive attitude of describing its object of enquiry by means of input-output representations, regardless of state. Typically, microeconomics describes the behavior of firms by means of a production function:

y = f(x) —– (1)

where x ∈ R^p is a p × 1 vector of production factors (the input) and y ∈ R^q is a q × 1 vector of products (the output).

Both y and x are flows expressed in terms of physical magnitudes per unit time. Thus, they may refer to both goods and services.

Clearly, (1) is independent of state. In economics, the state variable is capital, which may take the form of financial capital (the financial assets owned by a firm), physical capital (the machinery owned by a firm) and human capital (the skills of its employees). These variables should appear as arguments in (1).

This is done in the Georgescu-Roegen production function, which may be expressed as follows:

y= f(k,x) —– (2)

where k ∈ R^m is an m × 1 vector of capital endowments, measured in physical magnitudes. Without loss of generality, we may assume that the first mp elements represent physical capital, the subsequent mh elements represent human capital and the last mf elements represent financial capital, with mp + mh + mf = m.

Contrary to input and output flows, capital is a stock. Physical capital is measured by physical magnitudes such as the number of machines of a given type. Human capital is generally proxied by educational degrees. Financial capital is measured in monetary terms.

Georgescu-Roegen called the stocks of capital funds, to be contrasted to the flows of products and production factors. Thus, Georgescu-Roegen’s production function is also known as the flows-funds model.

Georgescu-Roegen’s production function is little known and seldom used, but macroeconomics often employs aggregate production functions of the following form:

Y = f(K,L) —– (3)

where Y ∈ R is aggregate income, K ∈ R is aggregate capital and L ∈ R is aggregate labor. Though this connection is never made, (3) is a special case of (2).

The examination of (3) highlighted a fundamental difficulty. In fact, general equilibrium theory requires that the remunerations of production factors are proportional to the corresponding partial derivatives of the production function. In particular, the wage must be proportional to ∂f/∂L and the interest rate must be proportional to ∂f/∂K. These partial derivatives are uniquely determined if df is an exact differential.

If the production function is (1), this translates into requiring that:

∂²f/∂xi∂xj = ∂²f/∂xj∂xi ∀i, j —– (4)

which are surely satisfied because all the xi are flows, so they can easily be reversed. If the production function is expressed by (2) but m = 1, the following conditions must be added to (4):

∂²f/∂k∂xi = ∂²f/∂xi∂k ∀i —– (5)

Conditions 5 are still surely satisfied because there is only one capital good. However, if m > 1 the following conditions must be added to conditions 4:

∂²f/∂ki∂xj = ∂²f/∂xj∂ki ∀i, j —– (6)

∂²f/∂ki∂kj = ∂²f/∂kj∂ki ∀i, j —– (7)

Conditions 6 and 7 are not necessarily satisfied because each derivative depends on all stocks of capital ki. In particular, conditions 6 and 7 do not hold if, after capital ki has been accumulated in order to use the technique i, capital kj is accumulated in order to use the technique j but, subsequently, production reverts to technique i. This possibility, known as reswitching of techniques, undermines the validity of general equilibrium theory.

For many years, the reswitching of techniques was regarded as a theoretical curiosum. However, the recent resurgence of coal as a source of energy may be regarded as an instance of reswitching.

Finally, it should be noted that as any input-state-output representation, (2) must be complemented by the dynamics of the state variables:

k̇ = g(k, x, y) —– (8)

which updates the vector k in (2) making it dependent on time. In the case of aggregate production function (3), (8) combines with (3) to constitute a growth model.
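
A minimal sketch of such a growth model, assuming a Solow-style specialization: a Cobb-Douglas aggregate production function f(K, L) = K^α L^(1−α) for (3) and capital dynamics K̇ = sY − δK for (8), integrated to its steady state. Parameter values are illustrative.

```python
alpha, s, delta = 0.3, 0.2, 0.05     # capital share, saving rate, depreciation
L = 1.0                              # constant labor for simplicity

def f(K, L):
    """Cobb-Douglas aggregate production function, eq. (3)."""
    return K**alpha * L**(1 - alpha)

K, dt = 1.0, 0.1
for _ in range(5000):                # Euler-integrate the state dynamics, eq. (8)
    Y = f(K, L)
    K = K + dt * (s * Y - delta * K) # K_dot = s*Y - delta*K
print("steady-state capital:", round(K, 3),
      "  analytic (s/delta)^(1/(1-alpha)) =", round((s / delta)**(1 / (1 - alpha)), 3))
```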

Dissipations – Bifurcations – Synchronicities. Thought of the Day 29.0

Deleuze’s thinking expounds on Bergson’s adaptation of multiplicities in step with the catastrophe theory, chaos theory, dissipative systems theory, and quantum theory of his era. For Bergson, hybrid scientific/philosophical methodologies were not viable. He advocated tandem explorations, the two “halves” of the Absolute “to which science and metaphysics correspond” as a way to conceive the relations of parallel domains. The distinctive creative processes of these disciplines remain irreconcilable differences-in-kind, commonly manifesting in lived experience. Bergson: Science is abstract, philosophy is concrete. Deleuze and Guattari: Science thinks the function, philosophy the concept. Bergson’s Intuition is a method of division. It differentiates tendencies, forces. Division bifurcates. Bifurcations are integral to contingency and difference in systems logic.

A bifurcation is the branching of a solution into multiple solutions as a system is varied. This bifurcating principle is also known as contingency. Bifurcations mark a point or an event at which a system divides into two alternative behaviours. Each trajectory is possible. The line of flight actually followed is often indeterminate. This is the site of a contingency, were it a positionable “thing.” It is at once a unity, a dualism and a multiplicity:

Bifurcations are the manifestation of an intrinsic differentiation between parts of the system itself and the system and its environment. […] The temporal description of such systems involves both deterministic processes (between bifurcations) and probabilistic processes (in the choice of branches). There is also a historical dimension involved […] Once we have dissipative structures we can speak of self-organisation.


Figure: In a dynamical system, a bifurcation is a period doubling, quadrupling, etc., that accompanies the onset of chaos. It represents the sudden appearance of a qualitatively different solution for a nonlinear system as some parameter is varied. The illustration above shows bifurcations (occurring at the location of the blue lines) of the logistic map as the parameter r is varied. Bifurcations come in four basic varieties: flip bifurcation, fold bifurcation, pitchfork bifurcation, and transcritical bifurcation. 
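
A minimal sketch that makes the caption concrete: iterate the logistic map x → rx(1 − x), discard the transient, and count the distinct values the orbit settles onto; the count doubles at each period-doubling bifurcation as r is varied and explodes in the chaotic regime. The sampled r values and rounding tolerance are ours.

```python
def attractor_period(r, n_transient=2000, n_sample=256):
    """Iterate the logistic map x -> r*x*(1-x), drop the transient,
    and count the distinct values the orbit settles onto."""
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = set()
    for _ in range(n_sample):
        x = r * x * (1 - x)
        orbit.add(round(x, 6))
    return len(orbit)

for r in (2.8, 3.2, 3.5, 3.55, 3.9):
    print(f"r = {r}: attractor visits {attractor_period(r)} distinct value(s)")
# Expected: 1, 2, 4, 8, and a large count in the chaotic regime near r = 3.9.
```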

A bifurcation, according to Prigogine and Stengers, exhibits determinacy and choice. It pertains to critical points, to singular intensities and their division into multiplicities. The scientific term, bifurcation, can be substituted for differentiation when exploring processes of thought or as Massumi explains affect:

Affect and intensity […] is akin to what is called a critical point, or bifurcation point, or singular point, in chaos theory and the theory of dissipative structures. This is the turning point at which a physical system paradoxically embodies multiple and normally mutually exclusive potentials… 

The endless bifurcating division of progressive iterations, the making of multiplicities by continually differentiating binaries, by multiplying divisions of dualities – this is the ontological method of Bergson and Deleuze after him. Bifurcations diagram multiplicities, from monisms to dualisms, from differentiation to differenciation, creatively progressing. Manuel Delanda offers this account, which describes the additional technicality of control parameters, analogous to higher-level computer technologies that enable dynamic interaction. These protocols and variable control parameters are later discussed in detail in terms of media objects in the metaphorical state space of an in situ technology:

[…] for the purpose of defining an entity to replace essences, the aspect of state space that mattered was its singularities. One singularity (or set of singularities) may undergo a symmetry-breaking transition and be converted into another one. These transitions are called bifurcations and may be studied by adding to a particular state space one or more ‘control knobs’ (technically control parameters) which determine the strength of external shocks or perturbations to which the system being modeled may be subject.

Another useful example of bifurcation with respect to research in the neurological and cognitive sciences is Francesco Varela’s theory of the emergence of microidentities and microworlds. The ready-for-action neuronal clusters that produce microidentities, from moment to moment, are what he calls bifurcating “breakdowns”. These critical events in which a path or microidentity is chosen are, by implication, creative:

The Semiotic Theory of Autopoiesis, OR, New Level Emergentism


The dynamics of all the life-cycle meaning processes can be described in terms of basic semiotic components, algebraic constructions of the following forms:

Pn(мn : fn[Ξn] → Ξn+1)

where Ξn is a sign system corresponding to a representation of a (design) problem at time t1, Ξn+1 is a sign system corresponding to a representation of the problem at time t2, t2 > t1, fn is a composition of semiotic morphisms that specifies the interaction of variation and selection under the condition of information closure, which requires no external elements be added to the current sign system; мn is a semiotic morphism, and Pn is the probability associated with мn, ΣPn = 1, n=1,…,M, where M is the number of the meaningful transformations of the resultant sign system after fn. There is a partial ranking – importance ordering – on the constraints of A in every Ξn, such that lower ranked constraints can be violated in order for higher ranked constraints to be satisfied. The morphisms of fn preserve the ranking.
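
Purely as a schematic rendering of the shape of a basic semiotic component, and not the theory’s formal apparatus: a sign system is modeled as a set of signs with ranked constraints, fn recombines existing signs only (information closure), and one of M candidate morphisms мn is drawn with probabilities Pn summing to 1. Every name and number below is ours.

```python
import random

random.seed(3)

# Schematic rendering: a sign system is a set of signs plus a ranked list of
# constraints (higher rank = harder to violate).  All names here are ours.
Xi_n = {"signs": {"s1", "s2", "s3"}, "constraints": [("C_high", 2), ("C_low", 1)]}

def f_n(xi):
    """Information-closed variation/selection: recombine existing signs only,
    preserving the constraint ranking (no external elements are added)."""
    recombined = {a + b for a in xi["signs"] for b in xi["signs"] if a != b}
    return {"signs": xi["signs"] | set(random.sample(sorted(recombined), 2)),
            "constraints": xi["constraints"]}

# M candidate semiotic morphisms m_n with probabilities P_n, sum(P_n) = 1.
morphisms = [lambda xi: xi,                                        # identity reading
             lambda xi: {**xi, "signs": xi["signs"] | {"s_new"}}]  # meaning extension
P = [0.4, 0.6]

m_n = random.choices(morphisms, weights=P, k=1)[0]
Xi_next = m_n(f_n(Xi_n))     # one basic semiotic component: P_n(m_n : f_n[Xi_n] -> Xi_n+1)
print(sorted(Xi_next["signs"]))
```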

The Semiotic Theory of Self-Organizing Systems postulates that in the scale hierarchy of dynamical organization, a new level emerges if and only if a new level in the hierarchy of semiotic interpretance emerges. As the development of a new product always and naturally causes the emergence of a new meaning, the above-cited Principle of Emergence directly leads us to the formulation of the first law of life-cycle semiosis as follows:

I. The semiosis of a product life cycle is represented by a sequence of basic semiotic components, such that at least one of the components is well defined in the sense that not all of its morphisms of м and f are isomorphisms, and at least one м in the sequence is not level-preserving in the sense that it does not preserve the original partial ordering on levels.

For the present (i.e. for an on-going process), there exists a probability distribution over the possible мn for every component in the sequence. For the past (i.e. retrospectively), each of the distributions collapses to a single mapping with Pn = 1, while the sequence of basic semiotic components is degenerated to a sequence of functions. For the future, the life-cycle meaning-making

Industrial Semiosis. Note Quote.

rUNdh

The concept of Industrial Semiosis categorizes the product life-cycle processes along three semiotic levels of meaning emergence: 1) the ontogenic level that deals with the life history data and future expectations about a single occurrence of a product; 2) the typogenic level that holds the processes related to a product type or generation; and 3) the phylogenic level that embraces the meaning-affecting processes common to all of the past and current types and occurrences of a product. The three levels naturally differ by the characteristic durational times of the grouped semiosis processes: as one moves from the lowest, ontogenic level to the higher levels, the objects become larger and more complicated and have slower dynamics in both original interpretation and meaning change. The semantics of industrial semiosis investigates the relationships that hold between the syntactical elements – the signs in language, models, data – and the objects that matter in industry, such as customers, suppliers, work-pieces, products, processes, resources, tools, time, space, investments, costs, etc. The pragmatics of industrial semiosis deals with the expression and appeal functions of all kinds of languages, data and models and their interpretations in the setting of any possible enterprise context, as part of the enterprise realising its mission by enterprising, engineering, manufacturing, servicing, re-engineering, competing, etc. The relevance of the presented definitions for information systems engineering is still limited and vague: the definitions are very general and hardly reflect any knowledge about the industrial domain and its objects, nor do they reflect knowledge about the ubiquitous information infrastructure and the sign systems it accommodates.

A product (as concept) starts its development with initially coinciding onto-, typo-, and phylogenesis processes but distinct and pre-existing semiotic levels of interpretation. The concept is evolved, and typogenesis works to reorganize the relationships between the onto- and phylogenesis processes, as the variety of objects involved in product development increases. Product types and their interactions mediate – filter and buffer – between the levels above and below: not all variety of distinctions remains available for re-organization as phylos, nor does every lowest-level object have a material relevance there. The phylogenic level is buffered against variations at the ontogenic level by the stabilizing mediations at the typogenic level.

The dynamics of the interactions between the semiotic levels can well be described in terms of the basic processes of variation and selection. In complex system evolution, variation stands for the generation of a variety of simultaneously present, distinct entities (synchronic variety), or of subsequent, distinct states of the same entity (diachronic variety). Variation makes variety increase and produces more distinctions. Selection means, in essence, the elimination of certain distinct entities and/or states, and it reduces the number of remaining entities and/or states.

From a semiotic point of view, the variety of a product intended to operate in an environment is determined by the devised product structure (i.e. the relations established between product parts – its synchronic variety) and the possible relations between the product and the anticipated environment (i.e. the product’s feasible states – its potential diachronic variety), which together aggregate the product’s possible configurations. The variety is defined on the ontogenic level, which includes elements for describing both the structure and the environment. The ontogenesis is driven by variation that goes through different configurations of the product and eventually discovers (by distinction selection at every stage of the product life cycle) configurations which are stable on one or another time-scale. A constraint on the configurations is then imposed, resulting in selective retention – the emergence of a new meaning for a (not necessarily new) sign – at the typogenic level. The latter decreases the variety but specializes the ontogenic level so that only those distinctions ultimately remain which fit the environment (i.e. only dynamically stable relation patterns are preserved). Analogously, but on a slower time-scale, the typogenesis results in the emergence of a new meaning on the phylogenic level that consecutively specializes the lower levels. Thus, the main semiotic principle of product development is that the dynamics of the meaning-making processes always seeks to decrease the number of possible relations between the product and its environment, and hence the semiosis of the product life cycle is naturally simplified. At the same time, however, the ‘natural’ dynamics is such that it augments the evolutive potential of the product concept for increasing its organizational richness: the emergence of new signs (which may lead to the emergence of new levels of interpretation) requires a new kind of information, and new descriptive categories must be given to deal with the still same product.

Conjuncted Again: Noise Traders, Chartists and Fundamentalists: The In-Betweeners


Interesting questions are whether the rational traders (fundamentalists) will drive the irrational traders (chartists and noise traders) out of the market, or whether the irrational traders (chartists and noise traders) will drive the rational traders (fundamentalists) out of the market. As in many studies on heterogeneous interacting agent models, it may seem natural that switching between different trading strategies plays an important role. Now the question is how a trader makes his choice between the fundamentalist and chartist strategies. The basic idea is that he chooses according to the accuracy of prediction. More precisely, the proportion of chartists κt is updated according to the difference between the squared prediction errors of each strategy. Formally, we write the dynamics of the proportion of chartists κ as

κt = (1 − ξ)/(1 + exp(ψ(Ect − Eft))) —– (1)

Ect = (pt − pct)², Eft = (pt − pft)²

where ψ measures how sensitively the mass of traders selects the optimal prediction strategy at period t. This parameter was introduced as the intensity of choice to switch trading strategies. Equation 1 shows that if the chartists’ squared prediction error Ect is smaller than that of the fundamentalists Eft, some fraction of fundamentalists will become chartists, and vice versa.
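
A minimal sketch of the switching rule in equation (1), reading ξ as the fixed mass of noise traders (an assumption suggested by the section title) and ψ as the intensity of choice: the logistic form shifts the remaining mass 1 − ξ toward whichever strategy produced the smaller squared prediction error. Parameter values are illustrative.

```python
import numpy as np

def chartist_share(E_c, E_f, psi=2.0, xi=0.1):
    """Proportion of chartists, eq. (1): kappa_t = (1 - xi) / (1 + exp(psi*(E_c - E_f))).
    xi is the (assumed fixed) mass of noise traders, psi the intensity of choice."""
    return (1.0 - xi) / (1.0 + np.exp(psi * (E_c - E_f)))

# If chartists predicted better (smaller squared error), their share rises above (1-xi)/2:
print(round(chartist_share(E_c=0.2, E_f=0.8), 3))   # > 0.45
# If fundamentalists predicted better, the chartist share falls:
print(round(chartist_share(E_c=0.8, E_f=0.2), 3))   # < 0.45
```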