Knowledge Limited for Dummies….Didactics.

Bertrand Russell, together with Alfred North Whitehead, aimed in Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” The goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion, when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (prompting his flight to the Institute for Advanced Study in Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, a statement like: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its converse is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that embed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, the combined accuracy of a measurement of an electron’s location and its momentum cannot vary far from a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength the greater the amount of energy that hits the electron, and the greater the energy hitting the electron the greater the impact on its velocity.
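In modern notation, the “fixed value” the text alludes to is set by Planck’s constant: for the standard deviations \(\sigma_x\) of position and \(\sigma_p\) of momentum,

```latex
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```

where \(\hbar\) is the reduced Planck constant. Tightening the measurement of one quantity necessarily loosens the other.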

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, promulgated a slight error in the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
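Lorenz’s restart experiment is easy to reproduce. The sketch below (with illustrative parameters and step size, not Lorenz’s actual code) integrates his 1963 system twice: once from a six-decimal initial state and once from the same state rounded to three decimals, mimicking the printout.

```python
# Integrate the Lorenz system from two nearly identical starting points:
# the "full" six-decimal state and the same state rounded to three
# decimals, as on Lorenz's printout. Watch the gap between them grow.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of Lorenz's 1963 equations (illustrative)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

full = (1.001001, 1.001001, 1.001001)       # six decimal places
rounded = tuple(round(v, 3) for v in full)  # three, as printed out

a, b = full, rounded
for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        gap = max(abs(p - q) for p, q in zip(a, b))
        print(f"t = {step * 0.01:5.1f}  max coordinate gap = {gap:.6f}")
```

The initial discrepancy is one part in a million; after a few dozen simulated time units the two runs bear no resemblance to each other, which is exactly the divergence Lorenz saw.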

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
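A toy version of such an artificial market can be sketched in a few lines. Everything here – the three trader rules, their coefficients, and the price-impact constant – is invented for illustration and is not taken from the Santa Fe Institute’s actual models.

```python
import random

# Toy artificial market: three simple trader types submit demands each
# period, and the price moves in proportion to the net order imbalance.

random.seed(0)
VALUE = 100.0                # perceived fundamental value (assumed)
price, prev = 100.0, 100.0
history = []

for t in range(2000):
    trend = 1.0 if price > prev else -1.0       # trend follower chases moves
    value = 0.05 * (VALUE - price)              # value trader leans toward value
    noise = random.choice([-1.0, 0.0, 1.0])     # random liquidity-driven trades
    imbalance = trend + value + noise
    prev, price = price, price + 0.2 * imbalance  # price impact of imbalance
    history.append(price)

print(f"final = {price:.2f}, min = {min(history):.2f}, max = {max(history):.2f}")
```

Even with rules this crude, the resulting price path wanders, trends, and mean-reverts in ways that look qualitatively market-like – the point of the Santa Fe exercise, not a claim of predictive power.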

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever set of equations might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Statistical Arbitrage. Thought of the Day 123.0

In the perfect market paradigm, assets can be bought and sold instantaneously with no transaction costs. For many financial markets, such as listed stocks and futures contracts, the reality of the market comes close to this ideal – at least most of the time. The commission for most stock transactions by an institutional trader is just a few cents a share, and the bid/offer spread is between one and five cents. Also implicit in the perfect market paradigm is a level of liquidity where the act of buying or selling does not affect the price. The market is composed of participants who are so small relative to the market that they can execute their trades, extracting liquidity from the market as they demand, without moving the price.

That’s where the perfect market vision starts to break down. Not only does the demand for liquidity move prices, but it also is the primary driver of the day-by-day movement in prices – and the primary driver of crashes and price bubbles as well. The relationship between liquidity and the prices of related stocks also became the primary driver of one of the most powerful trading models in the past 20 years – statistical arbitrage.

If you spend any time at all on a trading floor, it becomes obvious that something more than information moves prices. Throughout the day, the 10-year bond trader gets orders from the derivatives desk to hedge a swap position, from the mortgage desk to hedge mortgage exposure, from insurance clients who need to sell bonds to meet liabilities, and from bond mutual funds that need to invest the proceeds of new accounts. None of these orders has anything to do with information; each one has everything to do with a need for liquidity. The resulting price changes give the market no signal concerning information; the price changes are only the result of the need for liquidity. And the party on the other side of the trade who provides this liquidity will on average make money for doing so. For the liquidity demander, time is more important than price; he is willing to make a price concession to get his need fulfilled.

Liquidity needs will be manifest in the bond traders’ own activities. If their inventory grows too large and they feel overexposed, they will aggressively hedge or liquidate a portion of the position. And they will do so in a way that respects the liquidity constraints of the market. A trader who needs to sell 2,000 bond futures to reduce exposure does not say, “The market is efficient and competitive, and my actions are not based on any information about prices, so I will just put those contracts in the market and everybody will pay the fair price for them.” If the trader dumps 2,000 contracts into the market, that offer obviously will affect the price even though the trader does not have any new information. Indeed, the trade would affect the market price even if the market knew the selling was not based on an informational edge.

So the principal reason for intraday price movement is the demand for liquidity. This view of the market – a liquidity view rather than an informational view – replaces the conventional academic perspective of the role of the market, in which the market is efficient and exists solely for conveying information. Why the change in roles? For one thing, it’s harder to get an information advantage, what with the globalization of markets and the widespread dissemination of real-time information. At the same time, the growth in the number of market participants means there are more incidents of liquidity demand. They want it, and they want it now.

Investors or traders who are uncomfortable with their level of exposure will be willing to pay up to get someone to take the position. The more uncomfortable the traders are, the more they will pay. And well they should, because someone else is getting saddled with the risk of the position, someone who most likely did not want to take on that position at the existing market price. Thus the demand for liquidity not only is the source of most price movement; it is at the root of most trading strategies. It is this liquidity-oriented, tectonic market shift that has made statistical arbitrage so powerful.

Statistical arbitrage originated in the 1980s from the hedging demand of Morgan Stanley’s equity block-trading desk, which at the time was the center of risk taking on the equity trading floor. Like other broker-dealers, Morgan Stanley continually faced the problem of how to execute large block trades efficiently without suffering a price penalty. Often, major institutions discover they can clear a large block trade only at a large discount to the posted price. The reason is simple: Other traders will not know if there is more stock to follow, and the large size will leave them uncertain about the reason for the trade. It could be that someone knows something they don’t and they will end up on the wrong side of the trade once the news hits the street. The institution can break the block into a number of smaller trades and put them into the market one at a time. Though that’s a step in the right direction, after a while it will become clear that there is persistent demand on one side of the market, and other traders, uncertain who it is and how long it will continue, will hesitate.

The solution to this problem is to execute the trade through a broker-dealer’s block-trading desk. The block-trading desk gives the institution a price for the entire trade, and then acts as an intermediary in executing the trade on the exchange floor. Because the block traders know the client, they have a pretty good idea if the trade is a stand-alone trade or the first trickle of a larger flow. For example, if the institution is a pension fund, it is likely it does not have any special information, but it simply needs to sell the stock to meet some liability or to buy stock to invest a new inflow of funds. The desk adjusts the spread it demands to execute the block accordingly. The block desk has many transactions from many clients, so it is in a good position to mask the trade within its normal business flow. And it also might have clients who would be interested in taking the other side of the transaction.

The block desk could end up having to sit on the stock because there is simply no demand and because throwing the entire position onto the floor will cause prices to run against it. Or some news could suddenly break, causing the market to move against the position held by the desk. Or, in yet a third scenario, another big position could hit the exchange floor that moves prices away from the desk’s position and completely fills existing demand. A strategy evolved at some block desks to reduce this risk by hedging the block with a position in another stock. For example, if the desk received an order to buy 100,000 shares of General Motors, it might immediately go out and buy 10,000 or 20,000 shares of Ford Motor Company against that position. If news moved the stock price prior to the GM block being acquired, Ford would also likely be similarly affected. So if GM rose, making it more expensive to fill the customer’s order, a position in Ford would also likely rise, partially offsetting this increase in cost.
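The arithmetic of the hedge can be made concrete with invented numbers; the prices, share counts, and co-movement below are illustrative only.

```python
# The desk owes a client 100,000 shares of GM it has not yet bought, so a
# rise in GM raises its cost of filling the order. Holding some Ford, which
# tends to move with GM, offsets part of that rise. All figures are made up.

gm_owed = 100_000        # shares still to be bought for the client
ford_hedge = 20_000      # shares of Ford bought as a hedge

gm_move = 0.50           # suppose news lifts GM by 50 cents...
ford_move = 0.45         # ...and Ford, moving in sympathy, by 45 cents

extra_cost = gm_owed * gm_move        # filling the client order got pricier
hedge_gain = ford_hedge * ford_move   # but the Ford position gained

print(f"unhedged extra cost: ${extra_cost:,.0f}")
print(f"hedge offsets:       ${hedge_gain:,.0f}")
print(f"net extra cost:      ${extra_cost - hedge_gain:,.0f}")
```

The hedge is partial – it offsets only part of the move – but it buys the desk time to work out of the block without being fully exposed to market-wide news.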

This was the case at Morgan Stanley, where the desk maintained a list of pairs of closely related stocks – stocks that tracked one another, especially in the short term – in order to have at the ready a solution for partially hedging positions. By reducing risk, the pairs trade also gave the desk more time to work out of the trade. This helped to lessen the liquidity-related movement of a stock price during a big block trade. As a result, this strategy increased the profit for the desk.

The pairs increased profits. Somehow that lightbulb didn’t go on in the world of equity trading, which was largely devoid of principal transactions and systematic risk taking. Instead, the block traders epitomized the image of cigar-chewing gamblers, playing market poker with millions of dollars of capital at a clip while working the phones from one deal to the next, riding in a cloud of trading mayhem. They were too busy to exploit the fact, or it never occurred to them, that the pairs hedging they routinely used held the secret to a revolutionary trading strategy that would dwarf their desk’s operations and make a fortune for a generation of less flamboyant, more analytical traders. Used on a different scale and applied for profit making rather than hedging, their pairwise hedges became the genesis of statistical arbitrage trading. The pairwise stock trades that form the elements of statistical arbitrage trading in the equity market are just one more flavor of spread trades. On an individual basis, they’re not very good spread trades. It is the diversification that comes from holding many pairs that makes this strategy a success. But even then, although its name suggests otherwise, statistical arbitrage is a spread trade, not a true arbitrage trade.
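A minimal sketch of the signal this idea grew into: track the spread between two related stocks and, when it strays far from its recent average in standard-deviation units, bet on reversion. The spread series and the two-standard-deviation threshold below are invented for illustration.

```python
import statistics

# Z-score signal on a single pair's spread. A real statistical-arbitrage
# book would run hundreds of such pairs and rely on diversification.

spread_history = [1.2, 1.1, 1.3, 1.2, 1.0, 1.25, 1.15, 1.2, 1.1, 2.0]
mean = statistics.mean(spread_history[:-1])   # history before today
sd = statistics.stdev(spread_history[:-1])
z = (spread_history[-1] - mean) / sd          # today's spread, standardized

if z > 2.0:
    signal = "short the spread (sell the rich leg, buy the cheap leg)"
elif z < -2.0:
    signal = "long the spread"
else:
    signal = "no trade"

print(f"z-score = {z:.2f} -> {signal}")
```

Any single pair is a mediocre spread trade, as the text notes; the strategy’s edge comes from averaging the reversion bet across many pairs at once.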

Fragmentation – Lit and Dark Electronic Exchanges. Thought of the Day 116.0

Exchanges also control the amount and degree of granularity of the information you receive (e.g., you can use the consolidated/public feed at a low cost or pay a relatively much larger cost for direct/proprietary feeds from the exchanges). They also monetise the need for speed by renting out computer/server space next to their matching engines, a process called colocation. Through colocation, exchanges can provide uniform service to trading clients at competitive rates. Having the traders’ trading engines at a common location owned by the exchange simplifies the exchange’s ability to provide uniform service as it can control the hardware connecting each client to the trading engine, the cable (so all have the same cable of the same length), and the network. This ensures that all traders in colocation have the same fast access, and are not disadvantaged (at least in terms of exchange-provided hardware). Naturally, this imposes a clear distinction between traders who are colocated and those who are not. Those not colocated will always have a speed disadvantage. It then becomes an issue for regulators who have to ensure that exchanges keep access to colocation sufficiently competitive.

The issue of distance from the trading engine brings us to another key dimension of trading nowadays, especially in US equity markets, namely fragmentation. A trader in US equities markets has to be aware that there are up to 13 lit electronic exchanges and more than 40 dark ones. Together with this wide range of trading options, there is also specific regulation (the so-called ‘trade-through’ rules) which affects what happens to market orders sent to one exchange if there are better execution prices at other exchanges. The interaction of multiple trading venues, latency when moving between these venues, and regulation introduces additional dimensions to keep in mind when designing successful trading strategies.

The role of time is fundamental in the usual price-time priority electronic exchange, and in a fragmented market, the issue becomes even more important. Traders need to be able to adjust their trading positions fast in response to or in anticipation of changes in market circumstances, not just at the local exchange but at other markets as well. The race to be the first in or out of a certain position is one of the focal points of the debate on the benefits and costs of ‘high-frequency trading’.

The importance of speed permeates the whole process of designing trading algorithms, from the actual code, to the choice of programming language, to the hardware it is implemented on, to the characteristics of the connection to the matching engine, and the way orders are routed within an exchange and between exchanges. Exchanges, being aware of the importance of speed, have adapted and, amongst other things, moved well beyond the basic two types of orders (Market Orders and Limit Orders). Any trader should be very well-informed regarding all the different order types available at the exchanges, what they are and how they may be used.

When coding an algorithm one should be very aware of all the possible types of orders allowed, not just in one exchange, but in all competing exchanges where one’s asset of interest is traded. Being uninformed about the variety of order types can lead to significant losses. Since some of these order types allow changes and adjustments at the trading-engine level, they cannot be beaten in terms of latency by a trader’s own engine, regardless of how efficiently one’s algorithms are coded or what hardware they run on.


Another important issue to be aware of is that trading in an exchange is not free, but the cost is not the same for all traders. For example, many exchanges run what is referred to as a maker-taker system of fees whereby a trader sending an MO (and hence taking liquidity away from the market) pays a trading fee, while a trader whose posted LO is filled by the MO (that is, the LO with which the MO is matched) will pay a much lower trading fee, or even receive a payment (a rebate) from the exchange for providing liquidity (making the market). On the other hand, there are markets with an inverted fee schedule, a taker-maker system where the fee structure is the reverse: those providing liquidity pay a higher fee than those taking liquidity (who may even get a rebate). The issue of exchange fees is quite important as fees distort observed market prices (when you make a transaction the relevant price for you is the net price you pay/receive, which is the published price net of fees).
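A toy Python sketch of this price distortion, with invented fee levels (real maker-taker schedules vary by exchange and change often; the numbers below are assumptions for illustration only):

```python
# Hypothetical maker-taker fee illustration. Fee levels are invented;
# a negative fee represents a rebate paid by the exchange.

def net_price(published_price, fee_per_share, side):
    """Effective per-share price after fees: a buyer's cost rises by the
    fee, a seller's receipt falls by it (and rises with a rebate)."""
    if side == "buy":
        return published_price + fee_per_share
    return published_price - fee_per_share

TAKE_FEE = 0.0030    # aggressive MO pays 0.30 cents/share (assumed)
MAKE_FEE = -0.0025   # resting LO receives a 0.25 cents/share rebate (assumed)

px = 10.00  # published trade price

buyer_mo = net_price(px, TAKE_FEE, "buy")    # taker's effective cost
seller_lo = net_price(px, MAKE_FEE, "sell")  # maker's effective receipt

print(buyer_mo, seller_lo)
```

The published price of 10.00 is the one printed on the tape, but neither counterparty actually trades at it: the taker effectively pays 10.0030 and the maker effectively receives 10.0025.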

Conjuncted: Bank Recapitalization – Some Scattered Thoughts on Efficacies.


In response to this article by Joe.

Some scattered thoughts could be found here.

With demonetization, banks got surplus liquidity to the tune of Rs. 4 trillion, which was largely responsible for call rates becoming tepid. However, there was no commensurate demand for credit, as most corporates with a good credit rating managed to raise funds in the bond market at much lower yields. The result was that banks ended up investing most of this liquidity in government securities, with the Statutory Liquidity Ratio (SLR) bond holdings of banks exceeding the minimum requirement by up to 700 basis points. This combination of a surfeit of liquidity and weak credit demand can be used to design a recapitalization bond to address the capital problem. Since the banks are anyway sitting on surplus liquidity and investing in G-Secs, recapitalization bonds can be used to convert that liquidity into actual bank capital. Firstly, the government of India, through the RBI, will issue recapitalization bonds. Banks, who are sitting on surplus liquidity, will use their resources to invest in these recapitalization bonds. With the funds raised through the issue of recapitalization bonds, the government will infuse capital into the stressed banks. This way, the surplus liquidity of the banks will be used more effectively, and in the process the banks will also be better capitalized, becoming capable of expanding their asset books as well as negotiating with stressed clients for haircuts. Recapitalization bonds are nothing new and have been used by the RBI in the past. In fact, the former RBI governor, Dr. Y V Reddy, continues to be one of the major proponents of recapitalization bonds at the current juncture. More so, considering that the capital adequacy ratio of Indian banks could dip as low as 11% by March 2018 if macroeconomic conditions worsen, the case for recap bonds is hard to counter.
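The flow of funds described above can be sketched with a stylized balance sheet; all the amounts below (in Rs. trillion) are invented for illustration, not actuals:

```python
# Stylized flow-of-funds for the recapitalization-bond scheme. Amounts
# (Rs. trillion) are hypothetical. Assets: cash, recap_bonds, loans;
# equity is the bank's capital.
bank = {"cash": 4.0, "recap_bonds": 0.0, "loans": 10.0, "equity": 1.0}

issue = 1.5  # assumed size of the recap-bond issue

# Step 1: the bank invests surplus liquidity in recap bonds issued
# by the government through the RBI.
bank["cash"] -= issue
bank["recap_bonds"] += issue

# Step 2: the government infuses the same proceeds back as fresh capital.
bank["cash"] += issue
bank["equity"] += issue

print(bank["cash"], bank["recap_bonds"], bank["equity"])
```

The point of the sketch is that the bank's cash position round-trips: the surplus liquidity is swapped for recap bonds on the asset side while equity rises by the infused amount, leaving net liquidity unchanged.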
As I have often said in many fora, when banks talk of numbers, transparency and accountability, the way these are perceived from outside isn’t how the banks themselves perceive them; moreover, this argument gets diluted a bit in the wake of demonetization, which is still haunted by a lack of credit demand. As far as the NPAs are concerned, these were lying dormant, and without the RBI’s Asset Quality Review (AQR) they would not even have surfaced had the decisions been left to the banks’ free hands. So the RBI’s intervention was a must for recognizing NPAs, rather than the political will of merely treating them as stressed assets. The real problem with recap bonds lies in the fact that bonds from the earlier such exercise in the 90s are still maturing, and unless the new bonds are made tradable, they too will stay locked up until maturity.

Demonetization – One Year Of A Rudderless Cacophony (A Booklet of Compilation of Blog Posts)


On the midnight of 8/11/16, in a single pronouncement, the Prime Minister of India made the higher denominations of Rs. 500 and Rs. 1000 illegal tender, under the pretense of curbing black money, arresting tax evasion, stopping the funding of terrorist activities and countering the counterfeiting of currency. Those who held these notes were given a time frame of less than two months to deposit them and withdraw new denominations, in different slabs of limits set by the RBI. The Indian economy, which is predominantly cash based, and the Indian people, a great section of whom are financially excluded and exist solely on hard currency, would somehow have to manage through this ‘temporary crisis’ for the greater good of the nation. This was the call of the Prime Minister: to undergo ‘temporary hardships’ to root out the ills of the Indian economy.

And so what happened? The country panicked and people rushed to banks to deposit their cash savings, exchange high denominations, and lines formed. Long lines, winding unending lines full of people waiting to deposit and get new notes. People died in those lines, many patients could not get timely medical help, and many social functions – marriages and burials – got drowned in questions of “why can’t you suffer a little for the country, when soldiers are giving their blood at the borders to protect you”. But what about people who never had a bank account? Or those too far away from a branch or ATM to withdraw or exchange? Or those whose earnings were so marginal that they could not spare losing a day’s work waiting in lines? Or women who had painstakingly collected money for emergencies over many years? What about the crores of rupees saved through the co-operative banking system, still far away from mainstream banking operations but safeguarding the money of crores of people in many states? Modi’s solution for those suffering was clearly evident on the morning of the 9th, plastered on almost every major newspaper: “abhi ATM nahin, Paytm Karo.”…


 

Econophysics: Financial White Noise Switch. Thought of the Day 115.0


What is the cause of large market fluctuations? Some economists blame irrationality for the fat-tail distribution. Others have observed that social psychology might create market fads and panics, which can be modeled by collective behavior in statistical mechanics. For example, a bi-modular distribution was discovered in empirical data on option prices. One possible mechanism of polarized behavior is collective action, studied in physics and social psychology. A sudden regime switch or phase transition may occur between the uni-modular and bi-modular distributions when a field parameter crosses some threshold. The Ising model in equilibrium statistical mechanics was borrowed to study social psychology. Its phase transition from a uni-modular to a bi-modular distribution describes the statistical features observed when a stable society turns into a divided one. The problem with the Ising model is that its key parameter, the social temperature, has no operational definition in a social system. A better alternative parameter is the intensity of social interaction in collective action.

A difficult issue in business cycle theory is how to explain the recurrent feature of business cycles that is widely observed in macro and financial indexes. The problem is that business cycles are not strictly periodic yet not truly random: their correlations are not short, as in a random walk, and they have multiple frequencies that change over time. Therefore, all kinds of mathematical models have been tried in business cycle theory, including deterministic, stochastic, linear and nonlinear models. We outline economic models in terms of their base function, including white noise with short correlations, persistent cycles with long correlations, and the color chaos model with erratic amplitude and a narrow frequency band, like a biological clock.

 


The steady state of the probability distribution function in the Ising model of collective behavior with h = 0 (without a central propaganda field). a. Uni-modular distribution with low social stress (k = 0): moderately stable behavior with weak interaction and high social temperature. b. Marginal distribution at the phase transition with medium social stress (k = 2): a behavioral phase transition between a stable and an unstable society, induced by collective behavior. c. Bi-modular distribution with high social stress (k = 2.5): the society splits into two opposing groups under low social temperature and strong social interaction in an unstable society.
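The uni-modular/bi-modular switch in the caption can be sketched with a mean-field toy model. The stationary weight used below, W(m) ∝ C(N, n) · exp(N·k·m²/4), is an assumed normalization chosen so the transition falls at k = 2, matching the caption; it is a sketch of the qualitative behavior, not the original model's exact dynamics.

```python
import math

# Mean-field toy model: N binary agents, n of whom choose +1, so the
# average choice is m = 2n/N - 1 in [-1, 1]. Assumed stationary weight:
# W(m) ∝ C(N, n) * exp(N*k*m^2/4); the quadratic term's scaling is chosen
# so the uni-modular -> bi-modular transition sits at k = 2.
def log_weight(N, k):
    out = []
    for n in range(N + 1):
        m = 2 * n / N - 1
        log_binom = (math.lgamma(N + 1) - math.lgamma(n + 1)
                     - math.lgamma(N - n + 1))
        out.append(log_binom + N * k * m * m / 4)
    return out

def num_modes(logw):
    """Count local maxima of the (unnormalized) distribution."""
    peak = max(logw)
    w = [math.exp(x - peak) for x in logw]
    modes = sum(1 for i in range(1, len(w) - 1)
                if w[i] > w[i - 1] and w[i] >= w[i + 1])
    if w[0] > w[1]:
        modes += 1
    if w[-1] > w[-2]:
        modes += 1
    return modes

N = 200
print(num_modes(log_weight(N, 0.0)))   # low social stress: one central mode
print(num_modes(log_weight(N, 2.5)))   # high social stress: two opposing modes
```

Below the threshold the binomial entropy dominates and the society clusters around m = 0; above it the interaction term wins and the distribution splits into two opposing groups, as in panel c.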

Deterministic models are used by Keynesian economists for the endogenous mechanism of business cycles, as in the accelerator-multiplier model. Stochastic models are exemplified by the Frisch model of noise-driven cycles, which attributes business fluctuations to external shocks. Since the 1980s, the discovery of economic chaos and the application of statistical mechanics have provided more advanced models for describing business cycles. Graphically,


The steady state of the probability distribution function in the socio-psychological model of collective choice. Here, “a” is the independent parameter and “b” is the interaction parameter. a. Centered distribution with b < a (short dashed curve): independent decisions rooted in individualistic orientation overcome social pressure through mutual communication. b. Horizontal flat distribution with b = a (long dashed line): the marginal case in which individualistic orientation balances the social pressure. c. Polarized distribution with b > a (solid line): social pressure through mutual communication is stronger than independent judgment.


Numerical autocorrelations from time series generated by random noise and a harmonic wave. The solid line is white noise. The broken line is a sine wave with period P = 1.

Linear harmonic cycles with a unique frequency are introduced in business cycle theory. The auto-correlations from a harmonic cycle and from white noise are shown in the above figure. The auto-correlation function of a harmonic cycle is a cosine wave, whose amplitude is slightly decayed because of the limited number of data points in the numerical experiment. The auto-correlations of a random series form an erratic series with a rapid decay from one to residual fluctuations in numerical calculation. The auto-regressive (AR) model in discrete time combines white-noise terms to simulate the short-term auto-correlations seen in empirical data.
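The comparison in the figure is easy to reproduce numerically. A minimal sketch (the series length and the period of 100 samples are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def autocorr(x, max_lag):
    """Sample autocorrelation at lags 0..max_lag, normalized so lag 0 = 1."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

n = 4000
t = np.arange(n)
noise = rng.standard_normal(n)          # white noise
sine = np.sin(2 * np.pi * t / 100)      # harmonic wave, period P = 100 samples

ac_noise = autocorr(noise, 200)
ac_sine = autocorr(sine, 200)

# White noise: correlation collapses immediately after lag 0.
# Sine wave: a cosine-shaped autocorrelation, back near 1 after one period
# and near -1 after half a period (slightly decayed from finite data).
print(ac_noise[0], ac_noise[5])
print(ac_sine[100], ac_sine[50])
```

The slight shortfall of the sine's lag-100 autocorrelation below 1 is exactly the finite-sample decay mentioned above: the lagged sum uses only n − 100 of the n data points.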

The deterministic model of chaos can be classified into white chaos and color chaos. White chaos is generated by nonlinear difference equations in discrete time, such as the one-dimensional logistic map and the two-dimensional Henon map. Its autocorrelations and power spectra look like white noise. Its correlation dimension can be less than one. The white chaos model is simple in mathematical analysis but rarely used in empirical analysis, since it needs an intrinsic time unit.
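The “looks like white noise” claim can be checked in a few lines: iterating the logistic map at its fully chaotic parameter r = 4 gives a completely deterministic series whose sample autocorrelations at nonzero lags are nevertheless close to zero (the initial condition and series length below are arbitrary choices):

```python
def logistic_series(x0, n, r=4.0, discard=1000):
    """Iterate the logistic map x -> r*x*(1-x), dropping the transient."""
    x = x0
    for _ in range(discard):
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

xs = logistic_series(0.123, 20000)
mean = sum(xs) / len(xs)
dev = [v - mean for v in xs]
denom = sum(d * d for d in dev)

def ac(lag):
    """Sample autocorrelation of the logistic series at the given lag."""
    return sum(dev[i] * dev[i + lag] for i in range(len(dev) - lag)) / denom

# Despite a deterministic one-line rule, the series is delta-correlated:
print(ac(1), ac(2), ac(5))
```

This is the practical difficulty with white chaos: looking only at autocorrelations or power spectra, the series is indistinguishable from genuine random noise.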

Color chaos is generated by nonlinear differential equations in continuous time, such as the three-dimensional Lorenz model and one-dimensional delay-differential models in biology and economics. Its autocorrelations look like a decayed cosine wave, and its power spectrum resembles a combination of harmonic cycles and white noise. The correlation dimension is between one and two for 3D differential equations, and varies for delay-differential equations.


History shows the remarkable resilience of a market that has experienced a series of wars and crises. The related issue is why the economy can recover from severe damage and from states far out of equilibrium. Mathematically speaking, we may examine regime stability under parameter change. One major weakness of the linear oscillator model is that its regime of periodic cycles is fragile, or marginally stable, under changing parameters. Only a nonlinear oscillator model is capable of generating resilient cycles within a finite area under changing parameters. The typical example of a linear model is the Samuelson model of the multiplier-accelerator. Linear stochastic models have a similar problem to linear deterministic models. For example, the so-called unit-root solution occurs only on the borderline of the unit circle; if a small parameter change crosses the unit circle, the stochastic solution falls into the damped (inside the unit circle) or explosive (outside the unit circle) regime.
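The knife-edge nature of the unit root can be sketched in a few lines with an AR(1) recursion x_t = φ·x_{t−1} + ε_t; the φ values, noise scale and horizon below are arbitrary illustrative choices:

```python
import random

def ar1_path(phi, n=300, seed=1):
    """Final value of an AR(1) recursion x_t = phi*x_{t-1} + noise,
    started at x_0 = 1 with small Gaussian shocks."""
    rng = random.Random(seed)
    x = 1.0
    for _ in range(n):
        x = phi * x + 0.01 * rng.gauss(0, 1)
    return x

# |phi| < 1: the initial condition is damped away, leaving only noise.
# |phi| > 1: the same small parameter shift makes the path explosive.
print(abs(ar1_path(0.9)))
print(abs(ar1_path(1.1)))
```

Persistent oscillation survives only exactly at φ = 1 (the unit root); any small parameter change tips the solution into the damped or explosive regime, which is why the linear stochastic regime is only marginally stable.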

Financial Fragility in the Margins. Thought of the Day 114.0


If micro-economic crisis is caused by the draining of liquidity from an individual company (or household), macro-economic crisis or instability, in the sense of a reduction in the level of activity in the economy as a whole, is usually associated with an involuntary outflow of funds from companies (or households) as a whole. Macro-economic instability is a ‘real’ economic phenomenon, rather than a monetary contrivance, which is the sense in which the term is used, for example, by the International Monetary Fund to mean price inflation in the non-financial economy. Neo-classical economics has a methodological predilection for attributing all changes in economic activity to relative price changes, specifically the price changes that undoubtedly accompany economic fluctuations. But there is sufficient evidence to indicate that falls in economic activity follow outflows of liquidity from the industrial and commercial company sector. Such outflows then lead to the deflation of economic activity that is the signal feature of economic recession and depression.

Let us start with a consideration of how vulnerable financial futures markets themselves are to illiquidity, since this would indicate whether the firms operating in the market are ever likely to need to realize claims elsewhere in order to meet their liabilities to the market. Paradoxically, the very high level of intra-broker trading is a safety mechanism for the market, since it raises the velocity of circulation of whatever liquidity there is in the market: traders with liabilities outside the market are much more likely to have claims against other traders to set against those liabilities. This may be illustrated by considering the most extreme case of a futures market dominated by intra-broker trading, namely a market in which there are only two dealers who buy and sell financial futures contracts only between each other as rentiers, in other words for a profit which may include their premium or commission. On the expiry date of the contracts, conventionally set at three-monthly intervals in actual financial futures markets, some of these contracts will be profitable, some will be loss-making. Margin trading, however, requires all the profitable contracts to be fully paid up in order for their profit to be realized. The trader whose contracts are on balance profitable therefore cannot realize his profits until he has paid up his contracts with the other broker. The other broker will return the money in paying up his contracts, leaving only his losses to be raised by an inflow of money. Thus the only net inflow of money that is required is the amount of profit (or loss) made by the traders. However, an accommodating gross inflow is needed in the first instance in order to make the initial margin payments and settle contracts so that the net profit or loss may be realized.
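A toy numeric example of the gross-versus-net point above (the contract P&L values are invented):

```python
# Two-dealer settlement sketch. Each entry is the profit (+) or loss (-)
# of a contract from dealer A's point of view; dealer B holds the mirror
# image. Values are hypothetical.
contracts = [+5.0, -3.0, +2.0, -1.0]

a_profit = sum(c for c in contracts if c > 0)   # A's winning contracts
a_loss = -sum(c for c in contracts if c < 0)    # A's losing contracts

# Only the net balance has to enter the market from outside...
net_flow = a_profit - a_loss
# ...but every contract must be fully paid up before profits are released,
# so this much money has to circulate on settlement day.
gross_flow = a_profit + a_loss

print(net_flow, gross_flow)
```

Here dealer A nets 3, but 11 has to circulate through the market to settle all the contracts; the high velocity of intra-broker circulation is what keeps that gross requirement from draining liquidity elsewhere.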

The existence of more traders, and the system for avoiding counterparty risk commonly found in most futures markets, whereby contracts are made with a central clearing house, introduce sequencing complications which may cause problems: having a central clearing house avoids the possibility that one trader’s default will cause other traders to default on their obligations. But it also denies traders the facility of giving each other credit, and thereby reduces the velocity of circulation of whatever liquidity is in the market. Having to pay all obligations in full to the central clearing house increases the money (or gross inflow) that broking firms and investors have to put into the market as margin payments or on settlement days. This increases the risk that a firm with large net liabilities in the financial futures market will be obliged to realize assets in other markets to meet those liabilities. In this way, the integrity of the market is protected by increasing the effective obligations of all traders, at the expense of potentially unsettling claims on other markets.

This risk is enhanced by the trading of rentiers, or banks and entrepreneurs operating as rentiers, hedging their futures contracts in other financial markets. However, while such incidents generate considerable excitement around the markets at the time of their occurrence, there is little evidence that they could cause involuntary outflows from the corporate sector on such a scale as to produce recession in the real economy. This is because financial futures are still used by few industrial and commercial companies, and their demand for financial derivatives instruments is limited by the relative expense of these instruments and their own exposure to changes in financial parameters (which may more easily be accommodated by holding appropriate stocks of liquid assets, i.e., liquidity preference). Therefore, the future of financial futures depends largely on the interest in them of the contemporary rentiers in pension, insurance and various other forms of investment funds. Their interest, in turn, depends on how those funds approach their ‘maturity’.

However, the decline of pension fund surpluses poses important problems for the main securities markets of the world where insurance and pension funds are now the dominant investors, as well as for more peripheral markets like emerging markets, venture capital and financial futures. A contraction in the net cash inflow of investment funds will be reflected in a reduction in the funds that they are investing, and a greater need to realize assets when a change in investment strategy is undertaken. In the main securities markets of the world, a reduction in the ‘new money’ that pension and insurance funds are putting into those securities markets will slow down the rate of growth of the prices in those markets. How such a fall in the institutions’ net cash inflow will affect the more marginal markets, such as emerging markets, venture capital and financial futures, depends on how institutional portfolios are managed in the period of declining net contribution inflows.

In general, investment managers in their own firms, or as employees of merchant or investment banks, compete to manage institutions’ funds. Such competition is likely to increase as investment funds approach ‘maturity’, i.e., as their cash outflows to investors, pensioners or insurance policyholders, rise faster than their cash inflow from contributions and premiums, so that there are fewer additional funds to be managed. In principle, this should not affect financial futures markets, in the first instance, since, as argued above, the short-term nature of their instruments and the large proportion of intra-market trade in their business makes them much less dependent on institutional cash inflows. However, this does not mean that they would be unaffected by changes in the portfolio preferences of investment funds in response to lower returns from the main securities markets. Such lower returns make financial investments like financial futures, venture capital and emerging markets, which are more marginal because they are so hazardous, more attractive to normally conservative fund managers. Investment funds typically put out sections of portfolios to specialist fund managers who are awarded contracts to manage a section according to the soundness of their reputation and the returns that they have made hitherto in portfolios under their management. A specialist fund manager reporting high, but not abnormal, profits in a fund devoted to financial futures, is likely to attract correspondingly more funds to manage when returns are lower in the main markets’ securities, even if other investors in financial futures experienced large losses. In this way, the maturing of investment funds could cause an increased inflow of rentier funds into financial futures markets.

An inflow of funds into a financial market entails an increase in liabilities to the rentiers outside the market supplying those funds. Even if profits made in the market as a whole also increase, so too will losses. While brokers commonly seek to hedge their positions within the futures market, rentiers have much greater possibilities of hedging their contracts in another market, where they have assets. An inflow into futures markets means that on any settlement day there will therefore be larger net outstanding claims against individual banks or investment funds in respect of their financial derivatives contracts. With margin trading, much larger gross financial inflows into financial futures markets will be required to settle maturing contracts. Some proportion of this will require the sale of securities in other markets. But if liquidity in integrated cash markets for securities is reduced by declining net inflows into pension funds, a failure to meet settlement obligations in futures markets is the alternative to forced liquidation of other assets. In this way futures markets will become more fragile.

Moreover, because of the hazardous nature of financial futures, high returns for an individual firm are difficult to sustain. Disappointment is more likely to be followed by the transfer of funds to management in some other peripheral market that shows a temporary high profit. While this should not affect capacity utilization in the futures market, because of intra-market trade, it is likely to cause much more volatile trading, and an increase in the pace at which new instruments are introduced (to attract investors) and fall into disuse. Pension funds whose returns fall below those required to meet future liabilities because of such instability would normally be required to obtain additional contributions from employers and employees. The resulting drain on the liquidity of the companies affected would cause a reduction in their fixed capital investment. This would be a plausible mechanism for transmitting fragility in the financial system into full-scale decline in the real economy.

The proliferation of financial futures markets has been only marginally successful in substituting futures contracts for Keynesian liquidity preference as a means of accommodating uncertainty. A closer look at the agents in those markets and their market mechanisms indicates that the price system in them is flawed and trading hazardous risks in them adds to uncertainty rather than reducing it. The hedging of financial futures contracts in other financial markets means that the resulting forced liquidations elsewhere in the financial system are a real source of financial instability that is likely to worsen as slower growth in stock markets makes speculative financial investments appear more attractive. Capital-adequacy regulations are unlikely to reduce such instability, and may even increase it by increasing the capital committed to trading in financial futures. Such regulations can also create an atmosphere of financial security around these markets that may increase unstable speculative flows of liquidity into the markets. For the economy as a whole, the real problems are posed by the involvement of non-financial companies in financial futures markets. With the exception of a few spectacular scandals, non-financial companies have been wary of using financial futures, and it is important that they should continue to limit their interest in financial futures markets. Industrial and commercial companies, which generate their own liquidity through trade and production and hence have more limited financial assets to realize in order to meet financial futures liabilities in times of distress, are more vulnerable to unexpected outflows of liquidity in proportion to their increased exposure to financial markets. The liquidity which they need to set aside to meet such unexpected liabilities inevitably means a reduced commitment to investment in fixed capital and new technology.