Catastrophe Revisited. Note Quote.

Transversality and structural stability are the topics of Thom’s important transversality and isotopy theorems; the first says that transversality is a stable property, the second that transverse crossings are themselves stable. These theorems can be extended to families of functions: if f: R^n × R^r → R is equivalent to any family f + p: R^n × R^r → R, where p is a sufficiently small family R^n × R^r → R, then f is structurally stable. There may be individual functions with degenerate critical points in such a family, but these exceptions to the rule are in a sense “checked” by the other family members. Such families can be obtained e.g. by parametrizing the original function with one or several extra variables. Thom’s classification theorem comes in at this level.

So, for a given state function, catastrophe theory distinguishes two pieces: a “Morse” piece, containing the nondegenerate critical points, and a piece where the (parametrized) family contains at least one degenerate critical point. The second piece has two sets of variables: the state variables (denoted x, y, …), responsible for the critical points, and the control variables or parameters (denoted a, b, c, …), capable of stabilizing a degenerate critical point or steering away from it to nondegenerate members of the same function family. Each control parameter can control the degenerate point in only one direction; the more degenerate a singular point is (the number of independent directions equals the corank), the more control parameters are needed. The number of control parameters needed to stabilize a degenerate point (“the universal unfolding of the singularity”, with the same dimension as the number of control parameters) is called the codimension of the system. With these considerations in mind, and keeping close to surveyable, four-dimensional spacetime, Thom defined an “elementary catastrophe theory” with seven elementary catastrophes, where the number of state variables is one or two (x, y) and the number of control parameters, equal to the codimension, is at most four (a, b, c, d). (With five parameters there are eleven catastrophes.) The tool used here is the above-mentioned classification theorem, which lists all possible organizing centres (quadratic, cubic forms etc.) for which there are stable unfoldings (by means of control parameters acting on state variables).
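For reference, the seven elementary catastrophes with their organizing centres and universal unfoldings, in the normal forms usually given in the literature (the labels and signs follow the standard convention, not anything specific to this text):

fold: germ x^3, unfolding x^3 + ax (codimension 1)
cusp: germ x^4, unfolding x^4 + ax^2 + bx (codimension 2)
swallowtail: germ x^5, unfolding x^5 + ax^3 + bx^2 + cx (codimension 3)
butterfly: germ x^6, unfolding x^6 + ax^4 + bx^3 + cx^2 + dx (codimension 4)
hyperbolic umbilic: germ x^3 + y^3, unfolding x^3 + y^3 + axy + bx + cy (codimension 3)
elliptic umbilic: germ x^3 – 3xy^2, unfolding x^3 – 3xy^2 + a(x^2 + y^2) + bx + cy (codimension 3)
parabolic umbilic: germ x^2y + y^4, unfolding x^2y + y^4 + ax^2 + by^2 + cx + dy (codimension 4)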

Two elementary catastrophes: fold and cusp

1. In the first place the classification theorem points out the simple potential function y = x^3 as a candidate for study. It has a degenerate critical point at (0, 0) and is monotone – increasing as written, declining with a minus sign – needing an addition from outside in order to develop local extrema. All possible perturbations of this function are essentially of type x^3 + x or type x^3 – x (more generally x^3 + ax), which means that the critical point at x = 0 is of codimension one. The Fig. below shows the universal unfolding of the organizing centre y = x^3, the fold:

[Figure: the universal unfolding of the organizing centre y = x^3 – the fold]

This catastrophe, says Thom, can be interpreted as “the start of something” or “the end of something”, in other words as a “limit”, temporal or spatial. In this particular case (and only in this case) the complete graph in internal (x) and external (y) space, with the control parameter a running from positive to negative values, can be shown in a three-dimensional graph (Fig. below); it is evident why this catastrophe is called “fold”:

[Figure: three-dimensional graph of the fold in (x, y, a), with a running from positive to negative values]
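As a concrete sketch of this unfolding (an illustration of my own, not Thom’s): for a > 0 the perturbed function x^3 + ax has no real critical points, at a = 0 only the degenerate organizing centre remains, and for a < 0 a maximum–minimum pair appears – the two sheets of the fold.

```python
import numpy as np

def critical_points_fold(a):
    """Critical points of V(x) = x^3 + a*x, i.e. real roots of V'(x) = 3x^2 + a."""
    if a > 0:
        return []                        # no real critical points
    if a == 0:
        return [0.0]                     # the degenerate organizing centre
    x = np.sqrt(-a / 3.0)
    return [-x, x]                       # local maximum at -x, local minimum at +x

for a in (1.0, 0.0, -1.0):
    print(f"a = {a:+.1f} -> critical points {critical_points_fold(a)}")
```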

One point should be stressed already at this stage (it will be repeated later on). In “Topological models…”, Thom remarks on the “informational content” of the degenerate critical point:

This notion of universal unfolding plays a central role in our biological models. To some extent, it replaces the vague and misused term of ‘information’, so frequently found in the writings of geneticists and molecular biologists. The ‘information’ symbolized by the degenerate singularity V(x) is ‘transcribed’, ‘decoded’ or ‘unfolded’ into the morphology appearing in the space of external variables which span the universal unfolding family of the singularity V(x). 

2. Next, let us pick as organizing centre the second potential function pointed out by the classification theorem: y = x^4. It has a unique minimum at (0, 0), but it is not generic, since nearby potentials can be of a different qualitative type, e.g. they can have two minima. But the two-parameter function x^4 + ax^2 + bx is generic and contains all possible unfoldings of y = x^4. The graph of this function, with four variables y, x, a, b, cannot be shown; the display must be restricted to three dimensions. The obvious way out is to study the derivative f'(x) = 4x^3 + 2ax + b for y = 0 and in the proximity of x = 0. It turns out that this derivative has the qualities of the fold, shown in the Fig. below; the catastrophes are like Chinese boxes, each one contained within the next of the hierarchy.

[Figure: the derivative f'(x) = 4x^3 + 2ax + b, exhibiting the fold]

Finally we look for the position of the degenerate critical points projected on (a, b)-space; this projection has given the catastrophe its name: the “cusp” (Fig. below). (An arrowhead or a spearhead is a cusp.) The edges of the cusp, the bifurcation set, mark out the catastrophe zone: above the area between these limits the potential has two Morse minima and one maximum; outside the cusp limits there is one single Morse minimum. With the given configuration (the parameter a perpendicular to the axis of the cusp), a is called the normal factor – since x will increase continuously with a if b < 0 – while b is called the splitting factor, because the fold surface is split into two sheets if b > 0. If the control axes are instead located on either side of the cusp (A = b + a and B = b – a), A and B are called conflicting factors; A tries to push the result to the upper sheet (attractor), B to the lower sheet of the fold. (Here is an “inlet” for truly external factors; it is well known how e.g. shadow or excessive light affects the morphogenetic process of plants.)

[Figure: the cusp – projection of the degenerate critical points onto the (a, b)-plane]
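A minimal sketch of this projection (not from the text; the discriminant below is obtained by eliminating x from V'(x) = 0 and V''(x) = 0 for V(x) = x^4 + ax^2 + bx, which gives 8a^3 + 27b^2 = 0): counting the real critical points shows two minima and one maximum inside the cusp, a single minimum outside.

```python
import numpy as np

def classify_cusp_point(a, b):
    """For V(x) = x^4 + a*x^2 + b*x, count minima/maxima and test the cusp discriminant."""
    roots = np.roots([4.0, 0.0, 2.0 * a, b])       # real roots of V'(x) = 4x^3 + 2ax + b
    real = roots[np.abs(roots.imag) < 1e-9].real
    curvature = 12.0 * real**2 + 2.0 * a           # V''(x) at each critical point
    minima = int(np.sum(curvature > 0))
    maxima = int(np.sum(curvature < 0))
    discriminant = 8.0 * a**3 + 27.0 * b**2        # < 0 inside the cusp, > 0 outside
    return minima, maxima, discriminant

for a, b in [(-3.0, 0.5), (1.0, 0.5), (-3.0, 5.0)]:
    print((a, b), classify_cusp_point(a, b))
```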

Thom states that the cusp is a pleat, a fault; its temporal interpretation is “to separate, to unite, to capture, to generate, to change”. Countless attempts to model bimodal distributions are connected with the cusp; it is the most used (and maybe the most misused) of the elementary catastrophes.

Zeeman has treated stock exchange and currency behaviour with one and the same model, namely what he terms the cusp catastrophe with a slow feedback. Here the rate of change of indexes (or currencies), X, is considered as the dependent variable, while different buying patterns (“fundamental”, N in the figure below, and “chartist”, S in the figure below) serve as normal and splitting parameters. Zeeman argues that the response time of X to changes in N and S is much faster than the feedback of X on N and S, so the flow lines will be almost vertical everywhere. If we fix N and S, X will seek a stable equilibrium position on an attractor surface (or rather two attractor surfaces, separated by a repellor sheet and “connected” by catastrophes; one sheet is inflation/bull market, the other deflation/bear market, and one catastrophe is the collapse of a market or currency. Note that the second catastrophe is absent with the given flow direction – this is important, for it tells us that the whole pattern can be manipulated, “adapted”, by means of feedbacks/flow directions). Close to the attractor surface, N and S become increasingly important; there will be two horizontal components, representing the (slow) feedback effects of N and S on X. The whole sheet (the fold) is given by the equation X^3 – (S – S0)X – N = 0, the edge of the cusp by 3X^2 + S0 = S, which gives the equation 4(S – S0)^3 = 27N^2 for the bifurcation curve.


Figure: “Cusp with a slow feedback”, according to Zeeman (1977). X, the state variable, measures the rate of change of an index; N = normal parameter, S = splitting parameter; the catastrophic behaviour begins at S0. On the back part of the upper sheet, N is assumed constant and dS/dt positive; on the fore part dN/dt is assumed to be negative and dS/dt positive; this gives the flow direction of the feedback. On the fore part of the lower sheet both dN/dt and dS/dt are assumed to be negative; on the back part dN/dt is assumed to be positive and dS/dt still negative; this gives the flow direction of the feedback on this sheet. The cusp projection on the (N, S)-plane is shaded grey, the visible part of the repellor sheet black. (The reductionist character of these models must always be kept in mind; here two obvious key parameters are considered, while others of a weaker or more ephemeral kind – e.g. interest levels – are ignored.)
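A small numerical sketch of Zeeman’s equations (the sample values of N, S and S0 are my own): the equilibrium sheet consists of the real roots of X^3 – (S – S0)X – N = 0, and a point (N, S) lies inside the bifurcation curve exactly when 4(S – S0)^3 > 27N^2, i.e. when two attractors and a repellor coexist.

```python
import numpy as np

S0 = 1.0  # hypothetical onset of catastrophic behaviour

def equilibria(N, S, s0=S0):
    """Real equilibrium values of X solving X^3 - (S - s0)*X - N = 0."""
    roots = np.roots([1.0, 0.0, -(S - s0), -N])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

def inside_cusp(N, S, s0=S0):
    """True when (N, S) lies inside the bifurcation curve 4(S - s0)^3 = 27 N^2."""
    return 4.0 * (S - s0) ** 3 > 27.0 * N ** 2

for N, S in [(0.1, 3.0), (0.1, 1.2), (2.0, 3.0)]:
    print((N, S), "inside cusp:", inside_cusp(N, S), "equilibria:", equilibria(N, S))
```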


Austrian Economics. Some Further Ruminations. Part 2.

There are two Austrian theories of capital, with – at least on the surface – two completely different objectives. The first one concentrates on the physical side of roundabout, time-consuming production processes, which are common to all economic systems, and it defines capital as a factor of production. This theory is considered universal and ahistorical, and the present connotation of the Austrian Theory of Capital coincides with this view. The capital so defined is often called physical capital and consists of concrete and heterogeneous capital goods, which is just an alternative expression for production goods. The second and relatively lesser known theory is the starting point for a historically specific theory; it turns away from the production process and is tied to the economic system called capitalism. Capital here no longer refers to production processes but exclusively to the amount of money invested in a business venture. It is regarded as the central tool of economic calculation by profit-oriented enterprises, and rests on the social role of financial accounting. This historically specific notion of capital is termed business capital and is, in a sense, simply money invested in business assets.

A deeper analysis, however, suggests that this division is unnecessary, and that the theory of physical capital is not really a theory of physical capital at all. Its tacit research object is always the specific framework of the market economy, where production is carried out almost exclusively by profit-oriented enterprises calculating in monetary terms. Austrian capital theory is used as an element of the Austrian theory of the business cycle. This business cycle theory, if expounded consistently, deals with the way the monetary calculations of enterprises are distorted by changes in the rate of interest, not with the production process as such. In a long and rather unnoticed essay on the theory of capital, Menger (1888, in German) recanted what he had said in his Principles about the role of capital theory in economics. He criticized his fellow economists for creating artificial definitions of capital only because they dovetailed with their personal vision of the task of economics. With respect to the Austrian theory of capital as expounded by himself in his Principles and elaborated by Böhm-Bawerk, he declared that the division of goods into production goods and consumption goods, important as it may be, cannot serve as a basis for the definition of capital and therefore cannot be used as a foundation of a theory of capital. For entrepreneurs and lawyers, according to Menger, the word denotes only sums of money dedicated to the acquisition of income. Of course, Menger’s real-life oriented notion of capital does not only comprise concrete pieces of money but

all assets of a business, of whichever technical nature they may be, in so far as their monetary value is the object of our economic calculations, i.e., when they calculatorily constitute sums of money for us that are dedicated to the acquisition of income.

An analysis of capital presupposes the historically specific framework of capitalism, characterized by profit-oriented enterprises.

Some economists concluded therefrom that “capital” is a category of all human production, that it is present in every thinkable system of the conduct of production processes—i.e., no less in Robinson Crusoe’s involuntary hermitage than in a socialist society—and that it does not depend upon the practice of monetary calculation. This is, however, a confusion (Mises).

Capital, for Mises, is a device that stems from and belongs to financial accounting of businesses under conditions of capitalism. For him, the term “capital” does not signify anything peculiar to the production process as such. It belongs to the sphere of acquisition, not to the sphere of production.  Accordingly, there is no theory of physical capital as an element or factor in the production process. There is rather a theory of capitalism. For him, the existence of financial accounting on the basis of (business) capital invested in an enterprise is the defining characteristic of this economic system. Capital is “the fundamental notion of economic calculation” which is the foremost mental tool used in the conduct of affairs in the market economy. A more elaborate historically specific theory of capital that expands upon Mises’s thoughts would analyze the function of economic calculation based on business capital in the coordination of plans and the allocation of resources in capitalism. It would not deal with the production process as such but, generally, would concern itself with the allocation and distribution of goods and resources by a system of profit-oriented enterprises.

Conjuncted: Austrian Economics. Some Ruminations. Part 1.

Ludwig von Mises’ argument concerning the impossibility of economic calculation under socialism provides a hint as to what a historical specific theory of capital could look like. He argues that financial accounting based on business capital is an indispensable tool when it comes to the allocation and distribution of resources in the economy. Socialism, which has to do without private ownership of means of production and, therefore, also must sacrifice the concepts of (business) capital and financial accounting, cannot rationally appraise the value of the production factors. Without such an appraisal, production must necessarily result in chaos. 

The Politics of War on Coal. Drunken Risibility.

Coal is slated to be phased out, but the transition is going to be a slow process – an evolution and devolution at once – and would depend largely on market conditions, for it is the latter that could act as the brake on phasing out. The War on Coal is a political line that needs to be trodden carefully, for it sits on a liminal threshold and can slip to either side: war on coal as a source of energy, or war on coal as a policy calling for its phase-out. This political line ceases to hold the moment markets start dictating priorities, as is evident in the case of the largest sovereign fund (Norway), or even in the US, where phasing out entails repairing the economic, employment and geological depression left behind, the costs of which are astronomical – and thus revoking any such decrees becomes a trap that ends in eating a little bit of crow.


Precisely how public money is channeled from source to destination in the journey of coal needs to be looked at in depth, for merely hedging such a source would be an economic disaster rippling into a sociological/ecological stalemate. Coal is cheap and dirty without doubt, but it becomes burdensome due to a host of factors, chief among which is its financialisation. By this is meant capital taking on garbs which we honestly are not too well equipped to understand, but are equally adept at underestimating, for every ill is supposedly a result of economic liberalisation or neoliberalism (right? pun intended!) – the latter a term I personally detest using, since economies have long transcended the notion.
(See the Fund’s annual report and coal criterion.)

Comment (Monotone Selection) on What’s a Market Password Anyway? Towards Defining a Financial Market Random Sequence. Note Quote.


Definition:

A selection rule is a partial function r : 2^{<ω} → {yes, no}, where 2^{<ω} denotes the set of finite binary strings.

The subsequence of a sequence A selected by a selection rule r consists of those positions n with r(A|(n − 1)) = yes; the sequence of selected places are those n_i such that r(A|(n_i − 1)) = yes. Then, for a given selection rule r and a given real A, we generate a sequence n_0, n_1, . . . of selected places, and we say that a real is stochastic with respect to admissible selection rules iff, for any such selection rule, either the sequence of selected places is finite, or the selected subsequence has limiting relative frequency 1/2, i.e. lim_{k→∞} (1/k) |{i < k : A(n_i) = 1}| = 1/2.
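To make the definition concrete, a small hypothetical sketch (the particular rule and the sequence length are my own choices): a computable selection rule that selects the position following every 1; on a sequence that is stochastic with respect to such rules, the relative frequency of 1s among the selected bits should tend to 1/2.

```python
import random

def selection_rule(prefix):
    """A computable selection rule: select the next position whenever the prefix ends in a 1."""
    return "yes" if prefix and prefix[-1] == 1 else "no"

random.seed(0)
A = [random.randint(0, 1) for _ in range(100_000)]   # pseudo-random stand-in for a real

selected, prefix = [], []
for bit in A:
    if selection_rule(prefix) == "yes":               # decision uses only the bits seen so far
        selected.append(bit)
    prefix.append(bit)

print("selected places:", len(selected))
print("relative frequency of 1s:", sum(selected) / len(selected))
```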

What’s a Market Password Anyway? Towards Defining a Financial Market Random Sequence. Note Quote.

From the point of view of cryptanalysis, the algorithmic view based on frequency analysis may be taken as a hacker approach to the financial market. While the goal is clearly to find a sort of password unveiling the rules governing the price changes, what we claim is that the password may not be immune to a frequency analysis attack, because it is not the result of a true random process but rather the consequence of the application of a set of (mostly simple) rules. Yet that doesn’t mean one can crack the market once and for all, since for our system to find the said password it would have to outperform the unfolding processes affecting the market – which, as Wolfram’s PCE suggests, would require at least the same computational sophistication as the market itself, with at least one variable modelling the information being assimilated into prices by the market at any given moment. In other words, the market password is partially safe not because of the complexity of the password itself but because it reacts to the cracking method.

[Figure 6: extracting a normal distribution from the market distribution reveals the long tail]

Whichever kind of financial instrument one looks at, the sequences of prices at successive times show some overall trends and varying amounts of apparent randomness. However, despite the fact that there is no contingent necessity of true randomness behind the market, it can certainly look that way to anyone ignoring the generative processes, anyone unable to see what other, non-random signals may be driving market movements.

Von Mises’ approach to the definition of a random sequence, which seemed at the time of its formulation to be quite problematic, contained some of the basics of the modern approach adopted by Per Martin-Löf. It is during this time that the Keynesian kind of induction may have been resorted to as a starting point for Solomonoff’s seminal work (1 and 2) on algorithmic probability.

Per Martin-Löf gave the first suitable definition of a random sequence. Intuitively, an algorithmically random sequence (or random sequence) is an infinite sequence of binary digits that appears random to any algorithm. This contrasts with the idea of randomness in probability. In that theory, no particular element of a sample space can be said to be random. Martin-Löf randomness has since been shown to admit several equivalent characterisations in terms of compression, statistical tests, and gambling strategies.
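As an informal illustration of the compression characterisation (a crude sketch only – a general-purpose compressor is far from a universal Martin-Löf test, and the choice of zlib and of the inputs is mine): a regular sequence compresses well, a pseudo-random one hardly at all.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size; values near 1 suggest incompressibility."""
    return len(zlib.compress(data, 9)) / len(data)

patterned = b"01" * 50_000                 # highly regular sequence
pseudo_random = os.urandom(100_000)        # pseudo-random bytes as a stand-in

print("patterned:     ", round(compression_ratio(patterned), 3))
print("pseudo-random: ", round(compression_ratio(pseudo_random), 3))
```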

The predictive aim of economics is profoundly related to the concepts of predicting and betting. Imagine a random walk that goes up, down, left or right by one step, each with the same probability. If the expected time at which the walk ends is finite, then the expected stopping position equals the initial position; such a process is called a martingale. This is because the chances of going up, down, left or right are the same, so that one ends up close to one’s starting position, if not exactly at that position. In economics, this can be translated into a trader’s experience: the conditional expected assets of a trader are equal to his present assets if the sequence of events is truly random.
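A minimal simulation sketch (the step count and the number of trials are arbitrary choices of mine) illustrating the martingale property: the average final position of the symmetric walk stays at the starting point.

```python
import random

random.seed(1)
STEPS, TRIALS = 1_000, 5_000
MOVES = [(0, 1), (0, -1), (-1, 0), (1, 0)]     # up, down, left, right with equal probability

mean_x = mean_y = 0.0
for _ in range(TRIALS):
    x = y = 0
    for _ in range(STEPS):
        dx, dy = random.choice(MOVES)
        x, y = x + dx, y + dy
    mean_x += x / TRIALS
    mean_y += y / TRIALS

# The expected stop position equals the initial position (0, 0): the walk is a martingale.
print(f"average final position over {TRIALS} walks: ({mean_x:.3f}, {mean_y:.3f})")
```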

If market price differences accumulated in a normal distribution, a rounding would produce sequences of 0 differences only. The mean and the standard deviation of the market distribution are used to create a normal distribution, which is then subtracted from the market distribution.
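A rough sketch of that subtraction (toy data only – heavy-tailed pseudo-returns standing in for market price differences; nothing here comes from an actual market series): fit a normal with the sample mean and standard deviation, histogram both, and inspect the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=100_000)          # heavy-tailed stand-in for price differences

mu, sigma = returns.mean(), returns.std()
bins = np.linspace(-10, 10, 201)
empirical, _ = np.histogram(returns, bins=bins, density=True)

centers = (bins[:-1] + bins[1:]) / 2
normal = np.exp(-((centers - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

residual = empirical - normal                          # what the Gaussian fails to account for
print("largest positive residual (long tail):", residual.max())
print("total absolute residual mass:", np.abs(residual).sum() * (centers[1] - centers[0]))
```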

Schnorr provided another equivalent definition in terms of martingales. The martingale characterisation of randomness says that no betting strategy implementable by any computer (even in the weak sense of constructive strategies, which are not necessarily computable) can make money betting on a random sequence. In a true random memoryless market, no betting strategy can improve the expected winnings, nor can any option cover the risks in the long term.
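An informal sketch (a toy strategy of my own, nowhere near Schnorr’s formal martingales): a simple computable betting strategy – bet a fixed fraction that the next bit repeats the previous one – makes no systematic gain on pseudo-random bits.

```python
import random

random.seed(42)
bits = [random.randint(0, 1) for _ in range(200_000)]

capital, fraction = 1.0, 0.01          # start with unit capital, bet 1% each round
for previous, current in zip(bits, bits[1:]):
    stake = fraction * capital
    # bet that the next bit repeats the previous one (a computable strategy)
    capital += stake if current == previous else -stake

print(f"final capital after {len(bits) - 1} fair bets: {capital:.4f}")
```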

Over the last few decades, several systems have shifted towards ever greater levels of complexity and information density. The result has been a shift towards Paretian outcomes, particularly within any event that contains a high percentage of informational content: when one plots the frequency rank of the words contained in a large corpus of text against the number of occurrences or actual frequencies, as Zipf showed, one obtains a power-law distribution.
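A quick sketch of such a rank–frequency check (hypothetical: “corpus.txt” is a placeholder for any large text file, and the slope estimate between two ranks is my own shortcut for a full log–log fit):

```python
import collections
import math
import re

# Any large text file will do; "corpus.txt" is a placeholder path.
text = open("corpus.txt", encoding="utf-8").read().lower()
counts = collections.Counter(re.findall(r"[a-z]+", text))
freqs = sorted(counts.values(), reverse=True)

# Zipf's law: frequency ~ rank^(-s), so log(frequency) vs log(rank) is roughly a straight line.
r1, r2 = 10, min(1000, len(freqs))
slope = (math.log(freqs[r2 - 1]) - math.log(freqs[r1 - 1])) / (math.log(r2) - math.log(r1))
print(f"estimated power-law exponent between ranks {r1} and {r2}: {slope:.2f}")
```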

Departures from normality could be accounted for by the algorithmic component acting in the market, as is consonant with some empirical observations and common assumptions in economics, such as rule-based markets and agents. The paper.

Stephen Wolfram and Stochasticity of Financial Markets. Note Quote.

The most obvious feature of essentially all financial markets is the apparent randomness with which prices tend to fluctuate. Nevertheless, the very idea of chance in financial markets clashes with our intuitive sense of the processes regulating the market. All processes involved seem deterministic. Traders do not only follow hunches but act in accordance with specific rules, and even when they do appear to act on intuition, their decisions are not random but instead follow from the best of their knowledge of the internal and external state of the market. For example, traders copy other traders, or take the same decisions that have previously worked, sometimes reacting against information and sometimes acting in accordance with it. Furthermore, nowadays a greater percentage of the trading volume is handled algorithmically rather than by humans. Computing systems are used for entering trading orders, for deciding on aspects of an order such as the timing, price and quantity, all of which cannot but be algorithmic by definition.

Algorithmic, however, does not necessarily mean predictable. Several types of irreducibility – from non-computability to intractability to unpredictability – are entailed in most non-trivial questions about financial markets.

Wolfram asks

whether the market generates its own randomness, starting from deterministic and purely algorithmic rules. Wolfram points out that the fact that apparent randomness seems to emerge even in very short timescales suggests that the randomness (or a source of it) that one sees in the market is likely to be the consequence of internal dynamics rather than of external factors. In economists’ jargon, prices are determined by endogenous effects peculiar to the inner workings of the markets themselves, rather than (solely) by the exogenous effects of outside events.

Wolfram points out that pure speculation, where trading occurs without the possibility of any significant external input, often leads to situations in which prices tend to show more, rather than less, random-looking fluctuations. He also suggests that there is no better way to find the causes of this apparent randomness than by performing an almost step-by-step simulation, with little chance of besting the time it takes for the phenomenon to unfold – the time scales of real world markets being simply too fast to beat. It is important to note that the intrinsic generation of complexity proves the stochastic notion to be a convenient assumption about the market, but not an inherent or essential one.

Economists may argue that the question is irrelevant for practical purposes. They are interested in decomposing time-series into a non-predictable and a presumably predictable signal in which they have an interest, what is traditionally called a trend. Whether one, both or none of the two signals is deterministic may be considered irrelevant as long as there is a part that is random-looking, hence most likely unpredictable and consequently worth leaving out.

What Wolfram’s simplified models show is that, despite being so simple and completely deterministic, they are capable of generating great complexity and exhibit (the lack of) patterns similar to the apparent randomness found in price movements in financial markets. Whether one can get the kind of crashes into which financial markets seem to fall cyclically depends on whether the generating rule is capable of producing them from time to time. Economists dispute whether crashes reflect the intrinsic instability of the market, or whether they are triggered by external events. Wolfram’s proposal for modeling market prices would have a simple program generate the randomness that occurs intrinsically: a plausible, if simple and idealized, behaviour is shown in the aggregate to produce intrinsically random behaviour similar to that seen in price changes.

[Figure: evolution of a simple rule-based system (cellular automaton) from a random-looking initial configuration]

In the figure above, one can see that even in some of the simplest possible rule-based systems, structures emerge from a random-looking initial configuration with low information content. Trends and cycles are to be found amidst apparent randomness.

An example of a simple model of the market is one where each cell of a cellular automaton corresponds to an entity buying or selling at each step. The behaviour of a given cell is determined by the behaviour of its two neighbours on the previous step, according to a rule. A rule like rule 90 is additive, hence reversible, which means that it does not destroy any information and has ‘memory’, unlike the random walk model. Yet, due to its random-looking behaviour, it is not trivial to shortcut the computation or to foresee any successive step. There is some randomness in the initial condition of the cellular automaton rule that comes from outside the model, but the subsequent evolution of the system is fully deterministic.
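A minimal sketch of such a toy model (the mapping of cell values to buying/selling and the price aggregation are illustrative assumptions of mine, not taken from the text): rule 90 updates each cell as the XOR of its two neighbours, and the net excess of buyers over sellers at each step is accumulated into a “price”.

```python
import random

random.seed(7)
WIDTH, STEPS = 101, 200
cells = [random.randint(0, 1) for _ in range(WIDTH)]      # random initial configuration

price, history = 0.0, []
for _ in range(STEPS):
    # Rule 90: the next state of a cell is the XOR of its two neighbours (periodic boundary).
    cells = [cells[(i - 1) % WIDTH] ^ cells[(i + 1) % WIDTH] for i in range(WIDTH)]
    buyers = sum(cells)                                    # 1 = buy, 0 = sell
    price += (buyers - (WIDTH - buyers)) / WIDTH           # net demand moves the price
    history.append(round(price, 3))

print(history[:10], "...", history[-1])
```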

If sudden large changes are internally generated, this suggests that large changes are more predictable – both in magnitude and in direction – as the result of various interactions between agents. If Wolfram’s intrinsic randomness is what drives the market, one might think one could then easily predict its behaviour; but, as suggested by Wolfram’s Principle of Computational Equivalence, it is reasonable to expect that the overall collective behaviour of the market would look complicated to us, as if it were random, and hence quite difficult to predict, despite having a large (or even entirely) deterministic component.

Wolfram’s Principle of Computational Irreducibility says that the only way to determine the answer to a computationally irreducible question is to perform the computation. According to Wolfram, it follows from his Principle of Computational Equivalence (PCE) that

almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication: when a system reaches a threshold of computational sophistication often reached by non-trivial systems, the system will be computationally irreducible.