Gauge Theory of Arbitrage, or Financial Markets Resembling Screening in Electrodynamics


When a mispricing appears in a market, speculators and arbitrageurs rectify the mistake by extracting a profit from it. In the case of profitable fluctuations they move into the profitable assets, leaving comparably less profitable ones. This affects prices in such a way that all assets of similar risk become equally attractive, i.e. the speculators restore the equilibrium. If this process occurred infinitely rapidly, the market would correct the mispricing instantly and current prices would fully reflect all relevant information; in that case one says that the market is efficient. This is clearly an idealization, however, and it breaks down at sufficiently short timescales.

The general picture sketched above, of the restoration of equilibrium in financial markets, resembles screening in electrodynamics. In electrodynamics, negative charges move into a region of positive electric field while positive charges move out of it, and thus screen the field. Comparing this with the financial market, a local virtual arbitrage opportunity with a positive excess return plays the role of the positive electric field, speculators in the long position behave as negative charges, whilst speculators in the short position behave as positive ones. The movements of positive and negative charges screen out the profitable fluctuation and restore the equilibrium so that no arbitrage opportunity remains, i.e. the speculators have eliminated it.

Though the analogy may appear superficial, it is not. It emerges naturally in the framework of the Gauge Theory of Arbitrage (GTA). The theory treats the calculation of net present values and the buying and selling of assets as a parallel transport of money in some curved space, and interprets the interest rate, exchange rates and asset prices as the corresponding connection components. This structure is exactly equivalent to the geometrical structure underlying electrodynamics, where the components of the vector potential are connection components responsible for the parallel transport of the charges. The components of the corresponding curvature tensors are the electromagnetic field in the case of electrodynamics and the excess rate of return in the case of GTA. The presence of uncertainty is equivalent to the introduction of noise in electrodynamics, i.e. quantization of the theory. This allows one to map the theory of the capital market onto the theory of a quantized gauge field interacting with matter (money flow) fields. The gauge transformations of the matter field correspond to a change of the par value of the asset units, whose effect is eliminated by a gauge tuning of the prices and rates. Free quantum gauge field dynamics (in the absence of money flows) is described by a geometrical random walk for the asset prices with a log-normal probability distribution. In the general case the construction maps the capital market onto Quantum Electrodynamics, where the price walks are affected by money flows.
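
As a numerical aside (not part of the GTA formalism itself), the log-normal statement can be illustrated with a short simulation of a geometric random walk; the drift and volatility values below are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch: a geometric (multiplicative) random walk for an asset price.
# In the absence of money flows, log-returns are i.i.d. Gaussian, so the price
# at any horizon is log-normally distributed. Parameters below are illustrative.
rng = np.random.default_rng(0)

mu, sigma, dt = 0.05, 0.2, 1.0 / 252   # assumed annual drift, volatility, daily step
n_paths, n_steps = 10_000, 252
s0 = 100.0

log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
prices = s0 * np.exp(log_returns.cumsum(axis=1))

# Log of terminal prices should be (approximately) normal: mean and standard
# deviation agree with the analytic log-normal values.
terminal_log = np.log(prices[:, -1])
print("sample mean of log S_T :", terminal_log.mean())
print("theoretical mean       :", np.log(s0) + (mu - 0.5 * sigma**2))
print("sample std of log S_T  :", terminal_log.std())
print("theoretical std        :", sigma)
```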

Electrodynamical model of quasi-efficient financial market


Portfolio Optimization, When the Underlying Asset is Subject to a Multiplicative Continuous Brownian Motion With Gaussian Price Fluctuations


Imagine that you are an investor with some starting capital, which you can invest in just one risky asset. You have decided to use the following simple strategy: you always maintain a given fraction 0 < r < 1 of your total current capital invested in this asset, while the rest (the fraction 1 − r) you wisely keep in cash. You select a unit of time (say a week, a month, a quarter, or a year, depending on how closely you follow your investment and what transaction costs are involved) at which you check the asset’s current price and sell or buy some shares of the asset. By this transaction you readjust the current money equivalent of your investment to the pre-selected fraction of your total capital.
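
A minimal sketch of this rebalancing rule, with an assumed two-point distribution of price factors and no interest or transaction costs, makes the bookkeeping explicit:

```python
import numpy as np

def rebalance_strategy(price_factors, r, capital0=1.0):
    """Grow capital by keeping a fraction r in the risky asset.

    At every step the capital invested in the asset is multiplied by the
    random price factor, the cash part is left unchanged (no interest,
    no transaction costs), and the portfolio is then rebalanced back to
    the fraction r.  price_factors is any iterable of multiplicative
    returns, e.g. 1.05 for a 5% rise.
    """
    capital = capital0
    for eta in price_factors:
        capital = capital * (1.0 - r) + capital * r * eta
    return capital

# Example with an assumed two-point distribution of weekly factors.
rng = np.random.default_rng(1)
factors = rng.choice([0.8, 1.3], size=520)      # ten "years" of weekly steps
print(rebalance_strategy(factors, r=0.5))
```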

The interesting question is: which investment fraction provides the optimal typical long-term growth rate of the investor’s capital? By typical we mean the growth rate that occurs, at large time horizons, in the majority of realizations of the multiplicative process. By extending the time horizon, one can make this rate occur with probability arbitrarily close to one. In contrast to the traditional economics approach, in which the expectation value of an artificial “utility function” of an investor is optimized, the optimization of the typical growth rate contains no ambiguity.

Let us assume that during the time interval at which the investor checks and readjusts his holding to the selected investment fraction, the asset’s price changes by a random factor drawn from some probability distribution, uncorrelated with the price dynamics at earlier intervals. In other words, the price of the asset follows a multiplicative random walk with some known probability distribution of steps. This assumption is known to hold in real financial markets beyond a certain time scale. Contrary to continuum theories popular among economists, our approach is not limited to Gaussian-distributed returns: indeed, we were able to formulate our strategy for a general probability distribution of returns per unit of capital (the elementary steps of the multiplicative random walk).

Thus the risk-free interest rate, the asset’s dividends, and transaction costs are ignored (when volatility is large they are indeed negligible); including these effects in the formalism is, however, rather straightforward. The quest for a strategy that optimizes the long-term growth rate of capital is by no means new: it was first discussed by Daniel Bernoulli in about 1730 in connection with the St. Petersburg game. In the early days of information science, C. E. Shannon considered the application of the concept of information entropy to designing optimal strategies in games such as gambling. Building on Shannon’s foundations, J. L. Kelly Jr. designed an optimal gambling strategy for placing bets when a gambler has some incomplete information about the winning outcome (a “noisy information channel”). In modern-day finance, investment in very risky assets in particular is no different from gambling. The point Shannon and Kelly wanted to make is that, when the odds are slightly in your favor albeit with large uncertainty, the gambler should not bet his whole capital at every time step; rather, he achieves the largest long-term capital growth by betting some specially optimized fraction of his whole capital in every game. This cautious approach to investment is recommended in situations where the volatility is very large. For instance, in many emerging markets the volatility is huge, yet they are still swarming with investors, since the long-term return rate of a suitably cautious investment strategy is favorable.
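
As a hedged illustration of the Kelly-style reasoning (not the authors' own derivation), the growth-optimal fraction for any assumed discrete step distribution can be found numerically by maximizing the expected logarithm of the one-step capital multiplier, E[ln(1 − r + rη)]:

```python
import numpy as np

def typical_growth_rate(r, outcomes, probs):
    """Expected log growth per step for investment fraction r,
    given a discrete distribution of price factors eta."""
    return np.sum(probs * np.log(1.0 - r + r * outcomes))

# Assumed example: the asset doubles with probability 0.5 or halves with probability 0.5.
outcomes = np.array([2.0, 0.5])
probs = np.array([0.5, 0.5])

fractions = np.linspace(0.0, 1.0, 1001)
rates = [typical_growth_rate(r, outcomes, probs) for r in fractions]
r_opt = fractions[int(np.argmax(rates))]
print("growth-optimal fraction r* ≈", r_opt)   # ≈ 0.5 for this symmetric example
```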

Later on, Kelly’s approach was expanded and generalized in the works of Breiman. Our results for multi-asset optimal investment are in agreement with his exact but non-constructive equations. For some special cases, Merton considered the problem of portfolio optimization when the underlying asset is subject to a multiplicative continuous Brownian motion with Gaussian price fluctuations.

Forward, Futures Contracts and Options: Top-Down or Bottom-Up Modeling?


Financial markets can be modeled, from a theoretical viewpoint, according to two separate approaches: a bottom-up approach and/or a top-down approach. For instance, modeling financial markets by starting from diffusion equations and adding a noise term to the evolution of a function of a stochastic variable is a top-down approach. This type of description is, effectively, a statistical one.

A bottom-up approach, instead, models artificial markets using complex data structures (agent-based simulations) with general updating rules to describe the collective state of the market. The number of procedures implemented in the simulations can be quite large, although the computational cost of the simulation becomes prohibitive as the size of each agent increases. Readers familiar with Sugarscape models and the computational strategies of Growing Artificial Societies probably have an idea of the enormous potential of the field. All Sugarscape models include the agents (inhabitants), the environment (a two-dimensional grid) and the rules governing the interaction of the agents with each other and with the environment. The original model presented by J. Epstein & R. Axtell (considered to be the first large-scale agent model) is based on a 51 x 51 cell grid, where every cell can contain different amounts of sugar (or spice). In every step agents look around, find the closest cell filled with sugar, move and metabolize. They can leave pollution, die, reproduce, inherit resources, transfer information, trade or borrow sugar, gain immunity or transmit diseases, depending on the specific scenario and variables defined at the set-up of the model. Sugar in the simulation can be seen as a metaphor for resources in an artificial world through which the examiner can study the effects of social dynamics such as evolution, marital status and inheritance on populations. Exact simulation of the original rules provided by J. Epstein & R. Axtell in their book can be problematic, and it is not always possible to recreate the same results as those presented in Growing Artificial Societies. However, one would expect the bottom-up description to become comparable to the top-down description for a very large number of simulated agents.
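
A heavily stripped-down sketch of such a bottom-up update loop, in the Sugarscape spirit but not a faithful reproduction of Epstein & Axtell's rules, might look as follows:

```python
import numpy as np

# Minimal Sugarscape-flavoured sketch: agents on a toroidal grid look at their
# four neighbours, move to the richest visible cell, harvest it and metabolize.
# This illustrates the bottom-up style only; it is not Epstein & Axtell's rule set.
rng = np.random.default_rng(2)
SIZE, N_AGENTS, METABOLISM = 51, 200, 1

sugar = rng.integers(0, 5, size=(SIZE, SIZE)).astype(float)
agents = [{"pos": (int(rng.integers(SIZE)), int(rng.integers(SIZE))), "wealth": 5.0}
          for _ in range(N_AGENTS)]

def step(agents, sugar):
    survivors = []
    for agent in agents:
        x, y = agent["pos"]
        # Candidate cells: stay put or move to one of the four von Neumann neighbours.
        candidates = [(x, y), ((x + 1) % SIZE, y), ((x - 1) % SIZE, y),
                      (x, (y + 1) % SIZE), (x, (y - 1) % SIZE)]
        best = max(candidates, key=lambda c: sugar[c])
        agent["pos"] = best
        agent["wealth"] += sugar[best] - METABOLISM   # harvest, then metabolize
        sugar[best] = 0.0
        if agent["wealth"] > 0:                       # starved agents are removed
            survivors.append(agent)
    sugar += 0.5                                      # sugar slowly grows back
    return survivors

for _ in range(50):
    agents = step(agents, sugar)
print("surviving agents after 50 steps:", len(agents))
```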

The bottom-up approach should also provide a better description of extreme events, such as crashes, collectively conditioned behaviour and market incompleteness, this approach being of a purely algorithmic nature. A top-down approach is, therefore, a model of reduced complexity which follows a statistical description of the dynamics of complex systems.

Forward, Futures Contracts and Options: Let the price at time t of a security be S(t). A specific good can be traded at time t at the price S(t) between a buyer and a seller. The seller (short position) agrees to sell the good to the buyer (long position) at some time T in the future at a price F(t,T) (the contract price). Notice that contract prices have a two-time dependence (actual time t and maturity time T). Their difference τ = T − t is usually called the time to maturity. Equivalently, the actual price of the contract is determined by the prevailing actual prices and interest rates and by the time to maturity. Entering into a forward contract requires no money, and the value of the contract for long and short position holders at maturity T will be

(−1)^p (S(T) − F(t,T)) —– (1)

where p = 0 for long positions and p = 1 for short positions. Futures contracts are similar, except that after the contract is entered, any changes in the market value of the contract are settled by the parties. Hence, cashflows occur all the way to expiry, unlike in the case of the forward, where only one cashflow occurs. They are also highly regulated and involve a third party (a clearing house). Forwards, futures contracts and options go under the name of derivative products, since their contract price F(t,T) depends on the value of the underlying security S(T). Options are derivatives that can be written on any security and have a more complicated payoff function than futures or forwards. For example, a call (put) option gives the buyer (long position) the right, but not the obligation, to buy (sell) the security at some predetermined strike price at maturity. The payoff function specifies the precise form of this value at maturity. Path-dependent options are derivative products whose value depends on the actual path followed by the underlying security up to maturity. In the case of path-dependent options, since the payoff may not be directly linked to an explicit right, they must be settled in cash. This is sometimes true for futures and plain options as well, as it is more efficient.
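
These payoffs can be written down directly; the sketch below encodes eq. (1) together with the standard call and put payoffs at maturity (the strike value is an arbitrary example):

```python
import numpy as np

def forward_payoff(s_T, f_tT, p=0):
    """Payoff of eq. (1): p = 0 for the long position, p = 1 for the short."""
    return (-1) ** p * (s_T - f_tT)

def call_payoff(s_T, strike):
    """Long call: right (not obligation) to buy at the strike."""
    return np.maximum(s_T - strike, 0.0)

def put_payoff(s_T, strike):
    """Long put: right (not obligation) to sell at the strike."""
    return np.maximum(strike - s_T, 0.0)

s_T = np.array([80.0, 100.0, 120.0])
print(forward_payoff(s_T, f_tT=100.0))        # [-20.   0.  20.]
print(call_payoff(s_T, strike=100.0))         # [ 0.   0.  20.]
print(put_payoff(s_T, strike=100.0))          # [20.   0.   0.]
```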

Purely Random Correlations of the Matrix, or Studying Noise in Neural Networks


Expressed in the most general form, in essentially all cases of practical interest, the n × n matrices W used to describe a complex system are by construction of the form

W = XY^T —– (1)

where X and Y denote rectangular n × m matrices. Such, for instance, are the correlation matrices, whose standard form corresponds to Y = X. In this case one thinks of n observations or cases, each represented by an m-dimensional row vector x_i (y_i), (i = 1, …, n), and typically m is larger than n. In the limit of purely random correlations the matrix W is then said to be a Wishart matrix. The resulting density ρ_W(λ) of eigenvalues is known analytically in this case, with the limits (λ_min ≤ λ ≤ λ_max) prescribed by

λ_max,min = 1 + 1/Q ± 2√(1/Q) and Q = m/n ≥ 1.

The variance of the elements of xi is here assumed unity.
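
For purely random data the prescribed limits can be checked numerically: the sketch below draws an n × m matrix of unit-variance Gaussian entries (sizes chosen arbitrarily), forms W = XX^T/m, and compares its eigenvalue range with λ_min and λ_max.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 1000                     # assumed sizes, Q = m/n = 5
Q = m / n

# Purely random data with unit variance: the Wishart (random correlation) case Y = X.
X = rng.standard_normal((n, m))
W = X @ X.T / m                      # empirical correlation matrix
eigvals = np.linalg.eigvalsh(W)

lam_max = 1 + 1 / Q + 2 * np.sqrt(1 / Q)
lam_min = 1 + 1 / Q - 2 * np.sqrt(1 / Q)
print("empirical range :", eigvals.min(), eigvals.max())
print("prescribed range:", lam_min, lam_max)   # the two should roughly agree
```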

The more general case, of X and Y different, results in asymmetric correlation matrices with complex eigenvalues λ. In this more general case a limiting distribution corresponding to purely random correlations seems not yet to be known analytically as a function of m/n. Quite generically, however, in the case of no correlations one may expect a largely uniform distribution of λ bounded by an ellipse in the complex plane.

Further examples of matrices of similar structure, of great interest from the point of view of complexity, include the Hamiltonian matrices of strongly interacting quantum many-body systems such as atomic nuclei. This holds true at the level of bound states, where the problem is described by Hermitian matrices, as well as for excitations embedded in the continuum. This latter case can be formulated in terms of an open quantum system, which is represented by a complex non-Hermitian Hamiltonian matrix. Several neural network models also belong to this category of matrix structure. In this domain the reference is provided by the Gaussian (orthogonal, unitary, symplectic) ensembles of random matrices, with the semi-circle law for the eigenvalue distribution. For irreversible processes there exists their complex version, with a special case, the so-called scattering ensemble, which accounts for S-matrix unitarity.

As has already been expressed above, several variants of ensembles of random matrices provide an appropriate and natural reference for quantifying various characteristics of complexity. The bulk of such characteristics is expected to be consistent with Random Matrix Theory (RMT), and in fact there exists strong evidence that it is. Once this is established, the more interesting features are the deviations, especially those signaling the emergence of synchronous or coherent patterns, i.e., effects connected with a reduction of dimensionality. In matrix terminology such patterns can thus be associated with a significantly reduced rank k (k ≪ n) of a leading component of W. A satisfactory structure of the matrix that would allow the coexistence of chaos or noise with collectivity thus reads:

W = W_r + W_c —– (2)

Of course, in the absence of W_r, the second term (W_c) of W generates k nonzero eigenvalues, and all the remaining ones (n − k) constitute the zero modes. When W_r enters as a noise (random-like matrix) correction, a trace of the above effect is expected to remain, i.e., k large eigenvalues and a bulk composed of n − k small eigenvalues whose distribution and fluctuations are consistent with an appropriate version of a random matrix ensemble. One likely mechanism that may lead to such a segregation of eigenspectra is that m in eq. (1) is significantly smaller than n, or that the number of large components makes it effectively small at the level of the large entries w of W. Such an effective reduction of m (M = m_eff) is then expressed by the following distribution P(w) of the large off-diagonal matrix elements, in the case that they are still generated by random-like processes

P(w) = |w|^{(M−1)/2} K_{(M−1)/2}(|w|) / (2^{(M−1)/2} Γ(M/2) √π) —– (3)

where K stands for the modified Bessel function. Asymptotically, for large w, this leads to P(w) ∼ e^{−|w|} |w|^{M/2−1}, and thus reflects an enhanced probability of appearance of a few large off-diagonal matrix elements compared to a Gaussian distribution. Consistent with the central limit theorem, the distribution quickly converges to a Gaussian with increasing M.
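
One way to see eq. (3) at work (my reading of the effective-m argument, offered as an assumption) is to sample w as a sum of M products of independent unit Gaussians, which is how an off-diagonal element of XY^T with M effective terms would be built, and compare the histogram with the Bessel-function density:

```python
import numpy as np
from scipy.special import kv, gamma

def bessel_density(w, M):
    """Density of eq. (3) for a sum of M products of independent unit Gaussians."""
    w = np.abs(w)
    return (w ** ((M - 1) / 2) * kv((M - 1) / 2, w)
            / (2 ** ((M - 1) / 2) * gamma(M / 2) * np.sqrt(np.pi)))

rng = np.random.default_rng(4)
M, n_samples = 3, 200_000
w = np.sum(rng.standard_normal((n_samples, M)) * rng.standard_normal((n_samples, M)), axis=1)

hist, edges = np.histogram(w, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max deviation from eq. (3):", np.max(np.abs(hist - bessel_density(centers, M))))
```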

Based on several examples of natural complex dynamical systems, like the strongly interacting Fermi systems, the human brain and the financial markets, one could systematize evidence that such effects are indeed common to all the phenomena that intuitively can be qualified as complex.

What’s a Market Password Anyway? Towards Defining a Financial Market Random Sequence. Note Quote.

From the point of view of cryptanalysis, the algorithmic view based on frequency analysis may be taken as a hacker approach to the financial market. While the goal is clearly to find a sort of password unveiling the rules governing the price changes, what we claim is that the password may not be immune to a frequency analysis attack, because it is not the result of a true random process but rather the consequence of the application of a set of (mostly simple) rules. Yet that doesn’t mean one can crack the market once and for all, since for our system to find the said password it would have to outperform the unfolding processes affecting the market – which, as Wolfram’s PCE suggests, would require at least the same computational sophistication as the market itself, with at least one variable modelling the information being assimilated into prices by the market at any given moment. In other words, the market password is partially safe not because of the complexity of the password itself but because it reacts to the cracking method.

[Figure 6: By extracting a normal distribution from the market distribution, the long tail…]

Whichever kind of financial instrument one looks at, the sequences of prices at successive times show some overall trends and varying amounts of apparent randomness. However, despite the fact that there is no contingent necessity of true randomness behind the market, it can certainly look that way to anyone ignoring the generative processes, anyone unable to see what other, non-random signals may be driving market movements.

Von Mises’ approach to the definition of a random sequence, which seemed quite problematic at the time of its formulation, contained some of the basics of the modern approach adopted by Per Martin-Löf. It is also during this period that the Keynesian kind of induction may have served as a starting point for Solomonoff’s seminal work (1 and 2) on algorithmic probability.

Per Martin-Löf gave the first suitable definition of a random sequence. Intuitively, an algorithmically random sequence (or random sequence) is an infinite sequence of binary digits that appears random to any algorithm. This contrasts with the idea of randomness in probability. In that theory, no particular element of a sample space can be said to be random. Martin-Löf randomness has since been shown to admit several equivalent characterisations in terms of compression, statistical tests, and gambling strategies.

The predictive aim of economics is in fact profoundly related to the concepts of prediction and betting. Imagine a random walk that goes up, down, left or right by one, with each step having the same probability. If the expected time at which the walk ends is finite, the expected stopping position equals the initial position; such a process is called a martingale. This is because the chances of going up, down, left or right are the same, so that one ends up close to one’s starting position, if not exactly at that position. In economics, this can be translated into a trader’s experience: the conditional expected assets of a trader are equal to his present assets if the sequence of events is truly random.
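
A quick Monte Carlo check of this martingale property, a sketch with arbitrarily chosen walk lengths, shows the average final position staying at the origin:

```python
import numpy as np

# 2D symmetric random walk: up, down, left or right by one with equal probability.
# Stopping after a fixed (finite) number of steps, the expected final position
# equals the initial position -- the martingale property described above.
rng = np.random.default_rng(5)
moves = np.array([(0, 1), (0, -1), (-1, 0), (1, 0)])
idx = rng.integers(0, 4, size=(100_000, 250))   # random move index at each step
paths = moves[idx]                               # shape (walks, steps, 2)
final_positions = paths.sum(axis=1)
print("mean final position:", final_positions.mean(axis=0))   # ≈ [0, 0]
```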

If market price differences accumulated in a normal distribution, a rounding would produce sequences of 0 differences only. The mean and the standard deviation of the market distribution are used to create a normal distribution, which is then subtracted from the market distribution.

Schnorr provided another equivalent definition in terms of martingales. The martingale characterisation of randomness says that no betting strategy implementable by any computer (even in the weak sense of constructive strategies, which are not necessarily computable) can make money betting on a random sequence. In a true random memoryless market, no betting strategy can improve the expected winnings, nor can any option cover the risks in the long term.

Over the last few decades, several systems have shifted towards ever greater levels of complexity and information density. The result has been a shift towards Paretian outcomes, particularly within any event that contains a high percentage of informational content. For example, when one plots the frequency rank of words contained in a large corpus of text data against the number of occurrences (actual frequencies), Zipf showed that one obtains a power-law distribution.
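
The rank-frequency observation is easy to reproduce on any large text; the sketch below assumes a placeholder file corpus.txt and fits the log-log slope, which Zipf's law puts at roughly −1:

```python
import collections
import numpy as np

# Minimal Zipf check: rank the words of a corpus by frequency and fit the
# slope of log(frequency) against log(rank).  "corpus.txt" is a placeholder
# for any large text file.
with open("corpus.txt", encoding="utf-8") as fh:
    words = fh.read().lower().split()

counts = np.array(sorted(collections.Counter(words).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(counts) + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
print("fitted power-law exponent:", slope)   # Zipf's law corresponds to roughly -1
```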

Departures from normality could be accounted for by the algorithmic component acting in the market, as is consonant with some empirical observations and common assumptions in economics, such as rule-based markets and agents. The paper.

Stephen Wolfram and Stochasticity of Financial Markets. Note Quote.

The most obvious feature of essentially all financial markets is the apparent randomness with which prices tend to fluctuate. Nevertheless, the very idea of chance in financial markets clashes with our intuitive sense of the processes regulating the market. All processes involved seem deterministic. Traders do not only follow hunches but act in accordance with specific rules, and even when they do appear to act on intuition, their decisions are not random but instead follow from the best of their knowledge of the internal and external state of the market. For example, traders copy other traders, or take the same decisions that have previously worked, sometimes reacting against information and sometimes acting in accordance with it. Furthermore, nowadays a greater percentage of the trading volume is handled algorithmically rather than by humans. Computing systems are used for entering trading orders, for deciding on aspects of an order such as the timing, price and quantity, all of which cannot but be algorithmic by definition.

Algorithmic, however, does not necessarily mean predictable. Several types of irreducibility, from non-computability to intractability to unpredictability, are entailed in most non-trivial questions about financial markets.

Wolfram asks

whether the market generates its own randomness, starting from deterministic and purely algorithmic rules. Wolfram points out that the fact that apparent randomness seems to emerge even in very short timescales suggests that the randomness (or a source of it) that one sees in the market is likely to be the consequence of internal dynamics rather than of external factors. In economists’ jargon, prices are determined by endogenous effects peculiar to the inner workings of the markets themselves, rather than (solely) by the exogenous effects of outside events.

Wolfram points out that pure speculation, where trading occurs without the possibility of any significant external input, often leads to situations in which prices tend to show more, rather than less, random-looking fluctuations. He also suggests that there is no better way to find the causes of this apparent randomness than by performing an almost step-by-step simulation, with little chance of besting the time it takes for the phenomenon to unfold – the time scales of real world markets being simply too fast to beat. It is important to note that the intrinsic generation of complexity proves the stochastic notion to be a convenient assumption about the market, but not an inherent or essential one.

Economists may argue that the question is irrelevant for practical purposes. They are interested in decomposing time-series into a non-predictable and a presumably predictable signal in which they have an interest, what is traditionally called a trend. Whether one, both or none of the two signals is deterministic may be considered irrelevant as long as there is a part that is random-looking, hence most likely unpredictable and consequently worth leaving out.

What Wolfram’s simplified models show is that, despite being so simple and completely deterministic, they are capable of generating great complexity, exhibiting (the lack of) patterns similar to the apparent randomness found in price movements in financial markets. Whether one can get the kind of crashes into which financial markets seem to cyclically fall depends on whether the generating rule is capable of producing them from time to time. Economists dispute whether crashes reflect the intrinsic instability of the market, or whether they are triggered by external events. Wolfram’s proposal for modeling market prices would have a simple program generate the randomness that occurs intrinsically: a plausible, if simple and idealized, behavior is shown in the aggregate to produce intrinsically random behavior similar to that seen in price changes. Sudden large changes would then be internally generated, suggesting that large changes are more predictable, both in magnitude and in direction, as the result of various interactions between agents.

[Figure: evolution of simple rule-based systems from a random-looking initial configuration.]

In the figure above, one can see that even in some of the simplest possible rule-based systems, structures emerge from a random-looking initial configuration with low information content. Trends and cycles are to be found amidst apparent randomness.

An example of a simple model of the market is one where each cell of a cellular automaton corresponds to an entity buying or selling at each step. The behaviour of a given cell is determined by the behaviour of its two neighbors on the previous step, according to a rule. A rule like rule 90 is additive, hence reversible, which means that it does not destroy any information and has ‘memory’, unlike the random walk model. Yet, due to its random-looking behaviour, it is not trivial to shortcut the computation or to foresee any successive step. There is some randomness in the initial condition of the cellular automaton rule that comes from outside the model, but the subsequent evolution of the system is fully deterministic.
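
Such an elementary cellular automaton takes only a few lines to reproduce; the sketch below evolves rule 90 from a random initial row and reads each row as an aggregate of buy/sell decisions (a simplified stand-in, not Wolfram's exact market program):

```python
import numpy as np

def evolve(rule_number, initial_row, n_steps):
    """Evolve an elementary cellular automaton with periodic boundaries."""
    rule = np.array([(rule_number >> i) & 1 for i in range(8)], dtype=np.uint8)
    rows = [np.asarray(initial_row, dtype=np.uint8)]
    for _ in range(n_steps):
        row = rows[-1]
        left, right = np.roll(row, 1), np.roll(row, -1)
        # Each neighbourhood (left, centre, right) indexes one bit of the rule.
        rows.append(rule[4 * left + 2 * row + right])
    return np.array(rows)

rng = np.random.default_rng(6)
history = evolve(90, rng.integers(0, 2, size=101), n_steps=50)

# Each row can be read as the aggregate of buy (0) / sell (1) decisions; a crude
# "price" series is then the running imbalance between the two.
price = np.cumsum(1 - 2 * history.mean(axis=1))
print(price[:10])
```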

If Wolfram’s intrinsic randomness is what drives the market, one might think one could then easily predict its behaviour; but, as suggested by Wolfram’s Principle of Computational Equivalence, it is reasonable to expect that the overall collective behaviour of the market would look complicated to us, as if it were random, and hence be quite difficult to predict despite being, or having, a large deterministic component.

Wolfram’s Principle of Computational Irreducibility says that the only way to determine the answer to a computationally irreducible question is to perform the computation. According to Wolfram, it follows from his Principle of Computational Equivalence (PCE) that

almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication: when a system reaches a threshold of computational sophistication often reached by non-trivial systems, the system will be computationally irreducible.

Complexity Wrapped Uncertainty in the Bazaar


One could conceive of a financial market as a set of N agents, each of them taking a binary decision at every time step. This is an extremely crude representation, but it captures the essential feature that decisions can be coded by binary symbols (buy = 0, sell = 1, for example). Despite the extreme simplification, the above setup allows a “stylized” definition of price.

Let N_t^0, N_t^1 be the number of agents taking the decision 0, 1, respectively, at time t. Obviously, N = N_t^0 + N_t^1 for every t. Then, with the above definition of the binary code, the price can be defined as:

p_t = f(N_t^0 / N_t^1)

where f is an increasing and concave function which also satisfies:

a) f(0) = 0

b) lim_{x→∞} f(x) = ∞

c) lim_{x→∞} f′(x) = 0

The above definition agrees perfectly with the common belief about how supply and demand work. If N_t^0 is small and N_t^1 is large, then there are few agents willing to buy and many agents willing to sell, hence the price should be low. If, on the contrary, N_t^0 is large and N_t^1 is small, then there are many agents willing to buy and just a few agents willing to sell, hence the price should be high. Notice that the winning choice is related to the minority choice. We exploit the above analogy to construct a binary time series associated with each real time series of financial markets. Let {p_t}_{t∈N} be the original real time series. Then we construct a binary time series {a_t}_{t∈N} by the rule:

a_t = 1 if p_t > p_{t−1}

a_t = 0 if p_t < p_{t−1}
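
A concrete choice satisfying conditions (a)-(c), for instance f(x) = log(1 + x) (my illustrative assumption), together with the binarization rule above, can be coded directly:

```python
import numpy as np

def price(n_buy, n_sell, f=np.log1p):
    """Stylized price p_t = f(N_t^0 / N_t^1) with f(0) = 0, f increasing,
    f(x) -> infinity and f'(x) -> 0; log(1 + x) is one illustrative choice."""
    return f(n_buy / n_sell)

def binarize(prices):
    """a_t = 1 if p_t > p_{t-1}, a_t = 0 if p_t < p_{t-1} (ties are dropped here)."""
    diffs = np.diff(np.asarray(prices, dtype=float))
    return (diffs[diffs != 0] > 0).astype(int)

# Example: random buy/sell splits among N = 1000 agents.
rng = np.random.default_rng(7)
n_buy = rng.integers(1, 1000, size=500)
p = price(n_buy, 1000 - n_buy)
a = binarize(p)
print(a[:20])
```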

Physical complexity is defined as the number of binary digits in a string η that are explainable (or meaningful) with respect to the environment. In reference to our problem, the only physical record one gets is the binary string built up from the original real time series, and we consider it as the environment ε. We study the physical complexity of substrings of ε. The comprehension of their complex features has high practical importance. The amount of data agents take into account in order to make their choice is finite and of short range: for every time step t, the binary digits a_{t−l}, a_{t−l+1}, …, a_{t−1} carry some information about the behavior of agents. Hence, the complexity of these finite strings is a measure of how complex the information agents face is. The Kolmogorov-Chaitin complexity is defined as the length of the shortest program π producing the sequence η when run on a universal Turing machine T:

K(η) = min {|π|: η = T(π)}

where |π| represents the length of π in bits, T(π) the result of running π on the Turing machine T, and K(η) the Kolmogorov-Chaitin complexity of the sequence η. In the framework of this theory, a string is said to be regular if K(η) < |η|, meaning that η can be described by a program π whose length is smaller than the length of η. The interpretation of a string should be done in the framework of an environment. Hence, let us imagine a Turing machine that takes the string ε as input. We can define the conditional complexity K(η / ε) as the length of the smallest program that computes η on a Turing machine having ε as input:

K(η / ε) = min {|π|: η = CT(π, ε)}

We want to stress that K(η / ε) represents those bits in η that are random with respect to ε. Finally, the physical complexity can be defined as the number of bits that are meaningful in η with respect to ε :

K(η : ε) = |η| – K(η / ε)

|η| also represents the unconditional complexity of the string η, i.e., the value of the complexity if the input were ε = ∅. Of course, the measure K(η : ε) as defined in the above equation has few practical applications, mainly because it is impossible to know the way in which information about ε is encoded in η. However, if a statistical ensemble of strings is available to us, then the determination of complexity becomes an exercise in information theory. It can be proved that the average value C(|η|) of the physical complexity K(η : ε), taken over an ensemble Σ of strings of length |η|, can be approximated by:

C(|η|) = 〈K(η : ε)〉 ≅ |η| − K(η / ε), where

K(η / ε) ≅ −∑_{η∈Σ} p(η / ε) log₂ p(η / ε)

and the sum is taken over all the strings η in the ensemble Σ. In a population of N strings in environment ε, the quantity n(η)/N, where n(η) denotes the number of strings equal to η in Σ, approximates p(η / ε) as N → ∞.

Let ε = {a_t}_{t∈N} and let l be a positive integer, l ≥ 2. Let Σ_l be the ensemble of sequences of length l built up by a moving window of length l, i.e., if η ∈ Σ_l then η = a_i a_{i+1} … a_{i+l−1} for some value of i. The selection of strings from ε is related to periods before crashes and, in contrast, to periods of low uncertainty in the market.
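
One straightforward reading of the construction above is to estimate K(η / ε) by the Shannon entropy of the ensemble of length-l windows, so that the physical complexity of a purely random series stays near zero while a structured series scores higher; the sketch below follows this reading:

```python
import collections
import numpy as np

def physical_complexity(binary_series, l):
    """Approximate C(l) = l - K(eta / epsilon), estimating K(eta / epsilon) as the
    Shannon entropy (in bits) of the ensemble of length-l windows of the series."""
    s = "".join(map(str, binary_series))
    windows = [s[i:i + l] for i in range(len(s) - l + 1)]
    counts = collections.Counter(windows)
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    entropy = -np.sum(probs * np.log2(probs))
    return l - entropy

# On a purely random binary series the entropy saturates near l bits, so the
# physical complexity stays near zero; structured series give larger values.
rng = np.random.default_rng(8)
random_bits = rng.integers(0, 2, size=20_000)
print("random series  :", physical_complexity(random_bits, l=8))
print("constant series:", physical_complexity(np.zeros(20_000, dtype=int), l=8))   # ≈ 8
```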

Top-down Causation in Financial Markets. Note Quote.


Regulators attempt to act on a financial market through the intelligent and reasonable formulation of rules. For example, changing the market micro-structure at the lowest level in the hierarchy can change the way that asset prices assimilate changes in the information variables Z_{k,t} or θ_{i,m,t}. Similarly, changes in accounting rules could change the meaning and behaviour of the bottom-up information variables θ_{i,m,t}, and changes in economic policy and policy implementation can change the meaning of the top-down information variables Z_{k,t} and influence the shared risk factors r_{p,t}.

In hierarchical analysis, theories and plans may be embodied in a symbolic system to build effective and robust models to be used for detecting deeper dependencies and emergent phenomena. Mechanisms for the transmission of information, and asymmetric information, have impacts on market quality. Thus, Regulators can impact the activity and success of all the other actors, either directly or indirectly through knock-on effects. Examples include the following: Investor behaviour could change the goal selection of Traders; a change in the latter could in turn impact variables coupled to Trader activity in such a way that Profiteers are able to benefit from changes in liquidity or use leverage as a means to achieve profit targets and overcome noise.

Idealistically, Regulators may aim at increasing productivity, managing inflation, reducing unemployment and eliminating malfeasance. However, the circumvention of rules, usually in the name of innovation or through claims of greater insight into optimality, is as much a part of a complex system in which participants can respond to rules. Tax arbitrages are examples of actions which manipulate reporting in order to reduce levies paid to a profit-facilitating system. In regulatory arbitrage, rules may be followed technically, but nevertheless exploit relevant new information which has not been accounted for in the system’s rules. Such activities are consistent with the goals of profiteering but are not necessarily in agreement with the longer-term optimality of reliable and fair markets.

Rulers, i.e. agencies which control populations more generally, also impact markets and economies. Examples of top-down causation here include the segregation of workers and the differential assignment of economic rights to market participants, as in the evolution of local miners’ rights in the late 1800s in South Africa and the national Natives Land Act of 1913 in South Africa, as well as international agreements such as the Bretton Woods system, the Marshall Plan of 1948, the lifting of the gold standard in 1973 and the regulation of capital allocations and capital flows between individual and aggregated participants. Ideas on target-based goal selection are already in circulation in the literature on applications of viability theory and stochastic control in economics. Such approaches provide alternatives to the Laplacian ideal of attaining perfect prediction by offering analysable future expectations to regulators and rulers.

Causation in Financial Markets. Note Quote.


The algorithmic top-down view of causation in financial markets is essentially a deterministic, dynamical-systems view. It can serve as an interpretation of financial markets whereby markets are understood through asset prices, representing information in the market, which can be described by a dynamical-systems model. This is the ideal encapsulated in the Laplacian vision:

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence.

Here boundary and initial conditions of variables uniquely determine the outcome for the effective dynamics at the level of the hierarchy where it is being applied. This implies that higher levels in the hierarchy can drive broad macro-economic behavior; for example, at the highest level there could exist some set of differential equations that describe the behavior of adjustable quantities, such as interest rates, and how they impact measurable quantities such as gross domestic product and aggregate consumption.

The literature on the Lucas critique addresses limitations of this approach. Nevertheless, from a completely ad hoc perspective, a dynamical systems model may offer a best approximation to relationships at a particular level in a complex hierarchy.

Predictors: This system actor views causation in terms of uniquely determined outcomes, based on known boundary and initial conditions. Predictors may be successful when mechanistic dependencies in economic realities become pervasive or dominant. An example of a prediction-based argument since the Global Financial Crisis (2007-2009) is the bipolar Risk-On/Risk-Off description of preferences, whereby investors shift to higher-risk portfolios when the global assessment of riskiness is established to be low and shift to low-risk portfolios when global riskiness is considered to be high. Mathematically, a simple approximation of the dynamics can be described by a Lotka-Volterra (or predator-prey) model which, in economics, offers a way to model the dynamics of various industries by introducing trophic functions between various sectors and ignoring smaller sectors by considering the interactions of only two industrial sectors. The excess liquidity due to quantitative easing and the prevalence and ease of trading in exchange-traded funds and currencies, combined with low interest rates and the increased use of automation, provided a basis for the risk-on/risk-off analogy for analysing large capital flows in the global arena. In the Ising-Potts hierarchy, top-down causation is filtered down to the rest of the market through all the shared risk factors and the top-down information variables, which dominate the bottom-up information variables. At higher levels, bottom-up variables are effectively noise terms. Nevertheless, the behaviour of traders at lower levels can still become driven by correlations across assets, based on perceived global riskiness. Thus, risk-on/risk-off transitions can have amplified effects.
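
As a purely illustrative sketch of the Lotka-Volterra analogy (coefficients are arbitrary assumptions, not calibrated to any market), the two-sector predator-prey system can be integrated numerically, with x read as the 'risk-on' share and y as the 'risk-off' share:

```python
import numpy as np

def lotka_volterra(x0, y0, alpha, beta, gamma, delta, dt=0.01, n_steps=5000):
    """Euler integration of dx/dt = x(alpha - beta*y), dy/dt = y(delta*x - gamma).

    Read x as the share of capital in 'risk-on' assets and y as the share in
    'risk-off' assets; the coupling terms play the role of the trophic functions
    mentioned above.  Purely illustrative coefficients.
    """
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        x, y = xs[-1], ys[-1]
        xs.append(x + dt * x * (alpha - beta * y))
        ys.append(y + dt * y * (delta * x - gamma))
    return np.array(xs), np.array(ys)

risk_on, risk_off = lotka_volterra(x0=1.0, y0=0.5, alpha=0.8, beta=1.0, gamma=0.6, delta=0.9)
print("risk-on share oscillates between", risk_on.min(), "and", risk_on.max())
```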

Helicopter Money Drop, or QE for the People or some other Unconventional Macroeconomic Tool…? Hoping the Government of India isn’t Looking to Buy into and Sell this Rhetoric in Defense of Demonetisation.

Let us suppose now that one day a helicopter flies over this community and drops an additional $1,000 in bills from the sky, which is, of course, hastily collected by members of the community. Let us suppose further that everyone is convinced that this is a unique event which will never be repeated.

This famous quote from Milton Friedman’s “The Optimum Quantity of Money” is the underlying principle behind what is termed Helicopter Money, whose basic tenet is that if the Central Bank wants to raise inflation and output in an economy that is running below potential, the most effective tool would be simply to give everyone direct money transfers. In theory, people would see this as a permanent one-off expansion of the amount of money in circulation and would then start to spend more freely, increasing broader economic activity and pushing inflation back up to the central bank’s target. The notion was taken to a different level thanks to Ben Bernanke, former Chairman of the Fed, when he said,

A broad-based tax cut, for example, accommodated by a programme of open-market purchases to alleviate any tendency for interest rates to increase, would almost certainly be an effective stimulant to consumption and hence to prices. Even if households decided not to increase consumption but instead rebalanced their portfolios by using their extra cash to acquire real and financial assets, the resulting increase in asset values would lower the cost of capital and improve the balance sheet positions of potential borrowers. A money-financed tax cut is essentially equivalent to Milton Friedman’s famous ‘helicopter drop’ of money.

The last sentence of the quote obviously draws out the resemblance between the positions held by MF and BB, or helicopter money and quantitative easing, respectively. But there is a difference, which mainly lies in the asset swaps of the latter, where a government bond gets exchanged for bank reserves. But what about QE for the People, a dish cooked from two major ingredients in the form of financial excesses and the communist manifesto? That has a nice ring to it, and as Bloomberg discussed almost a couple of years back, if the central bank were to start sending cheques to each and every household (read citizen), then most of this money would be spent, boosting demand and thus echoing MF. But the downside would be central banks creating liabilities without corresponding assets, thus depleting equity. Well, that is QE for the People, the mix of financial excesses and the communist manifesto. It differs from QE in that, in the process of QE, liabilities are no doubt created, but the central bank gets assets in the form of the securities it buys in return. While this alleviates reserve constraints in the banking sector (one possible reason for banks to cut back lending) and lowers government borrowing costs, its transmission to the real economy could at best be indirect and underwhelming. As such, it does not provide much bang for your buck. Direct transfers into people’s accounts, or monetary-financed tax breaks or government spending, would offer one way to increase the effectiveness of the policy by directly influencing aggregate demand rather than hoping for a trickle-down effect from financial markets.


Assume Helicopter Money were to materialize in India. What this in effect brings to the fore is a confusing fusion between who enacts fiscal policy and who enacts monetary policy. If the Government of India sends Rs. 15 lakh to households, it is termed fiscal policy, and if the RBI does the same, it is termed monetary policy, and the macroeconomic policy-mix confusions pile up from here on. But is this QE for the People, or the Helicopter drop, really part of fiscal policy? It cannot be, unless one takes note of the fact that the RBI carries out reverse repurchase operations (reverse repo) and plans to expand them multiple-fold when it raises its interest-rate target in order to put a floor on how far the funds rate could fall. And that is precisely what the RBI undertook in the wake, or rather at the peak, of demonetisation. So here the difference between such drops/QE for the People and QE becomes all the more stark, for if the RBI were to undertake such drops or QE for the People, it would end up selling securities, thus driving the rates down itself, perhaps even all the way to ZIRP, which it has yet to reach. Thus, what would happen if the Government did the drops? It would appear that they spend money and sell securities, too. But in that case, people would say the security sales are financing the spending, and in their minds this is fiscal policy, while the RBI’s helicopter money is monetary policy. If the confusion is still murky, the result is probably due to the fact that the policy is hybrid in nature, or taxonomically deviant.

It seems much clearer to simply say that (a) the act of creating a deficit, raising the net financial wealth of the non-government sector, is fiscal policy, and (b) the act of announcing and then supporting an interest-rate target with security sales (or purchases, or interest on reserves), which has no effect on the net financial wealth of the non-government sector, is monetary policy. In the case of (a), whether or not it is the RBI that cuts the cheques, it is fiscal policy; and in the case of (b), whether or not it is the RBI that sells the securities, it is monetary policy. In other words, fiscal policy is about managing the net financial assets of the non-government sector relative to the state of the economy, and monetary policy is about managing interest rates (and through them, to the best of its abilities, bank lending and deposit creation) relative to the state of the economy.

So, how does this helicopter drop square with India’s DBT (Direct Benefit Transfers), or with getting the black money back and putting it into the account of every Indian? What rhetoric to begin and end with? Let us go back to the words of Raghuram Rajan when he was Governor of the RBI: “It is not absolutely clear that throwing the money out of the window, or targeted cheques to beneficiaries… will be politically feasible in many countries, or produce economically the desired effect,” he said, because fiscal spending has not achieved much elevated growth. So bury the hatchet here, or the government might well import this rhetoric to defend its botched-up move on demonetisation. Gear up, figure out.