Fascism’s Incognito – Conjuncted

“Being asked to define fascism is probably the scariest moment for any expert of fascism,” Montague said.
Brecht’s circular circuitry is here.
Allow me to make cross-sectional (both historical and geographical) references. I start with Mussolini, who spoke of the use fascism could be put to by observing that capitalism throws itself into the protection of the state when it is in crisis; he illustrated the point by reading the Great Depression as a failure of laissez-faire capitalism, one that created an opportunity for the fascist state to offer itself as the alternative. This points to the fact that fascism springs to life economically in the event of capitalism’s deterioration. To underline this point of fascism springing to life as a reaction to capitalism’s failure, let me take recourse to Samir Amin, who calls the fascist choice for managing a capitalist society in crisis a categorical rejection of democracy, even where that stage has been reached democratically. The masses are subjected to values of submission to a unity of socio-economic, political and/or religious ideological discourses. This is one reason why I treat fascism not as a derivative category of capitalism, in the sense of the former being a historical phase of the latter, but rather as a coterminous tendency lying dormant, waiting for capitalism to deteriorate so that it can detonate. But fascism and capitalism are related in multiple ways, just as socialism is related to fascism, albeit in categorically different ways.
It is imperative for me to add a word on what I perceive as financial capitalism and bureaucracy, and on where exactly art gets sandwiched between the two, for more than anything else I firmly believe in Brecht as continuing the artistic practice of Marxian sociology and political economy.
Financial capitalism combined with impersonal bureaucracy has inverted the traditional schematic, forcing us to live in a totalitarian system of financial governance divorced from democratic polity. It is not even fascism in the older sense of the term, a collusion of state and corporate power, since the political is bankrupt and has become a mediatainment system of control, a buffer against the fact of plutocracies. The state will remain only as long as police systems are needed to fend off people claiming rights to their rights. Politicians are dramaturgists and media personalities rather than workers in law. If one were simply to study the literature and painting of the last three or four decades, it is fathomable where it is all going. The arts still speak what we do not want to hear. Most of our academics are idiots clinging to the ideological culture of a left that has put on its blinkers and has only one enemy, the right (whatever the hell that is). Instead of moving outside their straitjackets and embracing the world of the present, they remain ensconced in nineteenth-century utopianism, with the only addition to their arsenal being the dramatic affects of mass media. Remember Thomas Pynchon of Gravity’s Rainbow fame (I prefer calling him the illegitimate cousin of James Joyce for his craftiness and smoothly sailing contrite plots: there goes the first of my paroxysms!!), who likened the system of techno-politics to an extension of our inhuman core, at best autonomous, intelligent and ever willing to exist outside the control of politics altogether. This befits operational closure, echoing time and again that technology is not an alien thing, but rather a manifestation of our inhuman core, a mutation of our shared fragments sieved together in ungodly ways. This is alien technologies in gratitude.
We have never been natural, and purportedly so, building defence systems against the natural both intrinsically and extrinsically. Take, for example, Civilisation, the most artificial construct of all, which humans have busied themselves building and now busy themselves upholding. What is it? A Human Security System staving off the entropy of existence through the self-perpetuation of a cultural complex of temporal immortalisation, if nothing less, and vulnerable to editing by scores of pundits laying claim to a larger schema often overlooked in its parochiality. Haven’t we become accustomed to hibernating in an artificial time, now exposed by inhabiting the infosphere, creating dividualities by reckoning with the data we intake, partake of and give out? Isn’t analysing the part/whole dividuality really what scores our worthiness? I know the answer is yes, but it merely refuses to jump off the tongue. Democracies have made us indolent at the extremities, ever flirting with electronic knowledge that waits to be turned to digital ash when confronted with the existential threat to our locus standi.
We always think of a secret cabal conspiring to dehumanise us, but we forget the impersonality of the dataverse, the infosphere, the carnival we simply cannot avoid being a part of. Our mistaken beliefs lie in reductionism, and this is a serious detriment to causes created ex nihilo, for a fight is inevitably diluted if we treat as insignificant the global meshwork of complex systems of economics and control, systems that far outstrip our ability to pin them down with a critical apparatus. This apparatus needs to be different from one based on criticism, for the latter is prone to sciolist tendencies. Maybe one needs to admit allegiance to the perils of our position and go along in a Socratic irony, before turning against that admission at opportune times. The right deserves to be tackled through Socratic irony, lest taking offence become platitudinous. Let us not forget that the modern state is nothing but a PR firm to keep the children asleep and unthinking, believing in the dramaturgy of the political as real. And this is where Brecht comes right back in, for he considered the creation of bureaucracies an affront not just in fascist states, but in communist ones as well. The aside, or digression, above is just a reality check on how complex capitalism has become and, with it, its derivatives of fascism, for these are intertwined within bureaucratic spaces. Even in his heyday, Brecht took a deviation from his culinary-as-ever epic theatre to found a new form he called the theatre of learning, the learning-play, which resembled his political seminars and was modelled on the rejection of bureaucratic elitism in partisan politics, where theorists and functionaries issued directives and controlled activities on behalf of the masses to the point of the latter’s submission to the former. This point is highlighted not just for fascist states, but equally for socialist/communist regimes, reiterating the fact that fascism is potent enough to develop in societies other than capitalist ones.
Moving on: mentions of democracy as bourgeois democracy, made in the same breath as the claim that equality holds only for the holders of capital, are turning platitudinous. Structurally, yes, this is what it seems like, but reality goes a bit deeper and thereafter fissures into the question of whether capital is indeed what it is generally perceived to be, or whether there is more to it than meets the eye. I quip this to confront two theorists of equality with one another: Piketty and Sally Goerner. Piketty misses a great opportunity to tie the “r > g” idea (after-tax returns on capital r exceeding the growth rate of the economy g) to the “limits to growth”. With a careful look at history, there are several quite important choice points along the path from the initial hope that it won’t work out that way… to the inevitable distressing end he describes, and sees, and regrets. It is what seduces us into so foolishly believing we can maintain “g > r”, despite the very clear and hard evidence of that failing time and again. The real “central contradiction of capitalism”, then, is that it promises “g > r”, and we inevitably find the promise to be only temporary. Growth is actually nature’s universal start-up process, used to initially build every life, including the lives of every business and the lives of every society. Nature begins building things with growth. She is then also happy to destroy, with more of the same, those lives that began with healthy growth but make the fateful choice of continuing to devote their resources to driving their internal and external strains to breaking point, trying to make g > r perpetual. It cannot be. So the secret of the puzzle seems to be: once you have taken growth from “g > r” to spoiling its promise in “r > g”, you have missed the real opportunity it presented. Sally Goerner writes about how systems need to find new ways to grow through a process of rising intricacy that literally reorganizes the system into a higher level of complexity. Systems that fail to do that collapse. So smart growth is possible (a cell divides into multiple cells that then form an organ of higher complexity and greater intricacy through working cooperatively). Such smart growth is regenerative in that it manifests new potential. How different that feels from the conventional scaling up of a business, often at the expense of intricacy (in order to achieve so-called economies of scale). Leaps of complexity do satisfy growing demands for productivity, but only temporarily, as continually rising demands for productivity inevitably require ever bigger leaps of complexity. Reorganizing the system by adopting ever higher levels of intricacy eventually makes things ever more unmanageable; the system becomes organizationally unstable and collapses for that reason. So seeking a rise in productivity in exchange for a rising risk of disorderly collapse is like jumping out of the frying pan right into the fire! As a path to system longevity, then, it is tempting but risky, appearing regenerative temporarily, until the same impossible challenge of keeping up with ever-increasing demands for new productivity drives the abandonment of the next level of complexity too! The more intricacy (a tight, small-scale weave) grows horizontally, the more unmanageable it becomes. That is why all sorts of systems develop what we would call hierarchical structures. Here, however, hierarchical structures serve primarily as connective tissue that helps coordinate, facilitate and communicate across scales.
One of the reasons human societies are falling apart is that many of our hierarchical structures no longer serve this connective-tissue role, but rather fuel processes of draining and self-destruction, creating sinks where refuse could otherwise be regenerated. Capitalism in its present financial form is precisely this sink, whereas capitalism wedded to fascism as a historical alliance does not fit this purpose, proving once more that, were that alliance to materialise, the collateral damage would be lent out to fascist states.
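To make the “r > g” arithmetic above concrete, here is a minimal sketch (a stylised illustration of my own, assuming a single capital stock, full reinvestment of capital income and no taxes or shocks, so not Piketty’s or Goerner’s model): capital compounds at r, output compounds at g, and the capital-to-output ratio drifts upward whenever r > g.

```python
# Stylised illustration of the r > g dynamic: capital earning r (fully reinvested)
# outpaces an economy growing at g, so the capital-to-output ratio keeps rising.
def capital_output_path(r=0.05, g=0.02, years=50, capital=3.0, output=1.0):
    """Return the capital-to-output ratio for each year."""
    path = []
    for _ in range(years):
        capital *= 1 + r      # capital compounds at the return r
        output *= 1 + g       # output compounds at the growth rate g
        path.append(capital / output)
    return path

ratios = capital_output_path()
print(f"capital/output after 1 year  : {ratios[0]:.2f}")
print(f"capital/output after 50 years: {ratios[-1]:.2f}")  # roughly quadruples when r - g = 3%
```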
That democracy is bourgeois democracy is an idea associated with the Swedish political theorist Goran Therborn, who, as recently as the 2016 US elections, proved his point by questioning the whole edifice of the inclusive-exclusive aspects of democracy, when he said,
Even if capitalist markets do have an inclusive aspect, open to exchange with anyone…as long as it is profitable, capitalism as a whole is predominantly and inherently a system of social exclusion, dividing people by property and excluding the non-profitable. A system of this kind is, of course, incapable of allowing the capabilities of all humankind to be realized. And currently the system looks well fortified, even though new critical currents are hitting against it.
Democracy did take on a positive meaning, and ironically enough, it was through the rise of nation-states and the consolidation of popular sovereignty championed by the West that it met its two most vociferous challenges, communism and fascism, of which the latter was a reactionary response to the discontents of capitalist modernity. Its radicality lay in racism and populism. A degree of deference toward the privileged and propertied, rather than the radical opposition found in populism, went along with elite concessions affecting the welfare, social security and improvement of the working masses. This was countered, even in the programs of moderate and conservative parties, by the use of state power to curtail the most malign effects of unfettered market dynamics. It was only in the works of Hayek that such interventions began to represent the road to serfdom, paving the way for modern right-wing economies in which the state has absolutely no role to play as regards market fundamentals and dynamics. The counter to bourgeois democracy was rooted in social democratic movements, and still is: one based on negotiation, compromise, give and take, and a grudgingly given respect for the other (whether ideological or individual). The point, again, is simply to reiterate that fascism, in my opinion, is not to be seen as the nakedest form of capitalism, but is generally seen floundering on the shoals of an economic slowdown or a crisis of stagflation.
On ideal categories, I am not a Weberian at heart. I am a bit ambivalent about the role of social science as a discipline that could draft a resolution to ideal types and to the interactions between them that generate the efficacies of real life, though it does form one aspect of it. My ontologies would lie in classificatory and constructive forms on more logical grounds, leaving ample room for deviations and order-disorder dichotomies. Complexity is basically an offspring of entropy.
And here is where my student days of philosophical pessimism resurface (or were they ever dead?), for the real way out is a dark path through the world we too long pretended did not exist.

Skeletal of the Presentation on AIIB and Blue Economy in Mumbai during the Peoples’ Convention on 22nd June 2018

Main features in AIIB Financing

  1. investments in regional members
  2. supports longer tenors and appropriate grace period
  3. mobilize funding through insurance, banks, funds and sovereign wealth funds (like the China Investment Corporation (CIC) in the case of China)
  4. funds projects on economic/financial considerations and on project benefits, e.g. global climate, energy security, productivity improvement, etc.

Public Sector:

  1. sovereign-backed financing (sovereign guarantee)
  2. loan/guarantee

Private Sector:

  1. non-sovereign-backed financing (private sector, State Owned Enterprises (SOEs), sub-sovereign and municipalities)
  2. loans and equity
  3. bonds, credit enhancement, funds etc.

—— portfolio is expected to grow steadily with increasing share of standalone projects from 27% in 2016 to 39% in 2017 and 42% in 2018 (projected)

—— share of non-sovereign-backed projects has increased from 1% in 2016 to 36% of portfolio in 2017. share of non-sovereign-backed projects is projected to account for about 30% in 2018


Why would AIIB be interested in the Blue Economy?

  1. To appropriate (expropriate) the potential of hinterlands
  2. increasing industrialization
  3. increasing GDP
  4. increasing trade
  5. infrastructure development
  6. Energy and Minerals in order to bring about a changing landscape
  7. Container: regional collaboration and competition

AIIB wishes to change the landscape of infrastructure funding across its partner countries, laying emphasis on cross-country and cross-sectoral investments in the shipping sector — Yee Ean Pang, Director General, Investment Operations, AIIB.

He also opined that in the shipping sector there is a need for private players to step in, with 40-45 per cent of stake in partnership being offered to private players.


Projects aligned with Sagarmala are being considered for financial assistance by the Ministry of Shipping under two main headings:

1. Budgetary Allocations from the Ministry of Shipping

    a. up to 50% of the project cost in the form of budgetary grant

    b. Projects having high social impact but low/no Internal Rate of Return (IRR) may be provided funding, in convergence with schemes of other central line ministries. IRR is a metric used in capital budgeting to estimate the profitability of potential investments. It is a discount rate that makes the net present value (NPV) of all cash flows from a particular project equal to zero. NPV is the difference between the present value of cash inflows and the present value of cash outflows over a period of time. IRR is sometimes referred to as the “economic rate of return” or “discounted cash flow rate of return.” The use of “internal” refers to the omission of external factors, such as the cost of capital or inflation, from the calculation. (A minimal computational sketch of these two definitions follows at the end of this list.)

2. Funding in the form of equity by Sagarmala Development Co. Ltd.

    a. SDCL to provide 49% equity funding to residual projects

    b. monitoring is to be jointly done by SDCL and implementing agency at the SPV level

    c.  project proponent to bear operation and maintenance costs of the project

     i. importantly, expenses incurred for project development to be treated as part of SDCL’s equity contribution

     ii. preferences to be given to projects where land is being contributed by the project proponent
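As flagged under the budgetary allocations above, here is a minimal computational sketch of the NPV and IRR definitions (the cash flows are hypothetical, purely for illustration, not Sagarmala figures):

```python
# NPV discounts each cash flow back to the present; IRR is the rate at which NPV = 0.
def npv(rate, cashflows):
    """cashflows[0] is the upfront outlay (negative); later entries are inflows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Solve NPV(rate) = 0 by bisection (assumes NPV is decreasing in the rate)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-100.0, 30.0, 40.0, 50.0, 20.0]  # hypothetical project cash flows
print(f"NPV at a 10% discount rate: {npv(0.10, flows):.2f}")
print(f"IRR: {irr(flows):.2%}")
```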

What are the main financing issues?

  1. Role of MDBs and BDBs for promotion of shipping sector in the country
  2. provision of long-term low-cost loans to shipping companies for procurement of vessels
  3. PPPs (coastal employment zones, port connectivity projects), EPCs, ECBs (port expansion and new port development), FDI in Make in India 2.0 of which shipping is a major sector identified, and conventional bank financing for port modernization and port connectivity

the major constraining factors, however, are:

  1. uncertainty in the shipping sector, cyclical business nature
  2. immature financial markets

Knowledge Limited for Dummies….Didactics.


Bertrand Russell with Alfred North Whitehead, in the Principia Mathematica aimed to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (therefore fleeing to the Institute for Advanced Study at Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, a statement like: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its converse is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that imbed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, the combined accuracy of a measurement of an electron’s location and its momentum cannot vary far from a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength the greater the amount of energy that hits the electron, and the greater the energy hitting the electron the greater the impact on its velocity.
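For reference, the standard textbook statement of the principle (not quoted in the passage above) is

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\]

where \(\Delta x\) and \(\Delta p\) are the uncertainties in position and momentum and \(\hbar\) is the reduced Planck constant: squeezing either uncertainty toward zero forces the other to grow without bound.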

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, propagated a slight error in the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
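Lorenz’s rounding accident is easy to reproduce. The sketch below is a minimal Euler integration of the standard Lorenz equations (an illustration of the sensitivity he found, not his original weather model): two runs differ only in the fourth decimal place of one starting coordinate, and the gap between them grows by orders of magnitude.

```python
# Two runs of the Lorenz system whose initial x-values differ by 0.0001.
# Crude fixed-step Euler integration is enough to show the divergence.
def lorenz_x_path(x, y, z, steps=10000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    xs = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        xs.append(x)
    return xs

run_a = lorenz_x_path(1.0, 1.0, 1.0)
run_b = lorenz_x_path(1.0001, 1.0, 1.0)   # the "rounded" restart
for step in (200, 2000, 5000, 10000):
    gap = abs(run_a[step - 1] - run_b[step - 1])
    print(f"step {step:5d}: |x_a - x_b| = {gap:.6f}")   # the gap grows by orders of magnitude
```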

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
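As a flavour of what such an exercise looks like, here is a deliberately minimal toy market of my own construction (not the Santa Fe Institute’s actual model): trend followers chase the last price move, value traders lean against deviations from a fixed perceived value, liquidity traders buy or sell at random, and the price moves in proportion to the net order flow.

```python
import random

# Toy artificial market in the Santa Fe spirit: three simple trading rules,
# a price-impact rule, and a resulting price path that looks surprisingly market-like.
random.seed(7)

perceived_value = 100.0
prices = [100.0, 100.0]
for _ in range(1000):
    momentum = prices[-1] - prices[-2]
    trend_order = 0.5 * momentum                          # trend followers chase the last move
    value_order = 0.05 * (perceived_value - prices[-1])   # value traders fade deviations
    liquidity_order = random.gauss(0.0, 1.0)              # liquidity-driven trades arrive at random
    net_flow = trend_order + value_order + liquidity_order
    prices.append(prices[-1] + 0.5 * net_flow)            # price impact proportional to net flow

moves = [abs(prices[i + 1] - prices[i]) for i in range(len(prices) - 1)]
print(f"final price {prices[-1]:.2f}, largest one-step move {max(moves):.2f}")
```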

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever the set of equations are that might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of the deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Statistical Arbitrage. Thought of the Day 123.0


In the perfect market paradigm, assets can be bought and sold instantaneously with no transaction costs. For many financial markets, such as listed stocks and futures contracts, the reality of the market comes close to this ideal – at least most of the time. The commission for most stock transactions by an institutional trader is just a few cents a share, and the bid/offer spread is between one and five cents. Also implicit in the perfect market paradigm is a level of liquidity where the act of buying or selling does not affect the price. The market is composed of participants who are so small relative to the market that they can execute their trades, extracting liquidity from the market as they demand, without moving the price.

That’s where the perfect market vision starts to break down. Not only does the demand for liquidity move prices, but it also is the primary driver of the day-by-day movement in prices – and the primary driver of crashes and price bubbles as well. The relationship between liquidity and the prices of related stocks also became the primary driver of one of the most powerful trading models in the past 20 years – statistical arbitrage.

If you spend any time at all on a trading floor, it becomes obvious that something more than information moves prices. Throughout the day, the 10-year bond trader gets orders from the derivatives desk to hedge a swap position, from the mortgage desk to hedge mortgage exposure, from insurance clients who need to sell bonds to meet liabilities, and from bond mutual funds that need to invest the proceeds of new accounts. None of these orders has anything to do with information; each one has everything to do with a need for liquidity. The resulting price changes give the market no signal concerning information; the price changes are only the result of the need for liquidity. And the party on the other side of the trade who provides this liquidity will on average make money for doing so. For the liquidity demander, time is more important than price; he is willing to make a price concession to get his need fulfilled.

Liquidity needs will be manifest in the bond traders’ own activities. If their inventory grows too large and they feel overexposed, they will aggressively hedge or liquidate a portion of the position. And they will do so in a way that respects the liquidity constraints of the market. A trader who needs to sell 2,000 bond futures to reduce exposure does not say, “The market is efficient and competitive, and my actions are not based on any information about prices, so I will just put those contracts in the market and everybody will pay the fair price for them.” If the trader dumps 2,000 contracts into the market, that offer obviously will affect the price even though the trader does not have any new information. Indeed, the trade would affect the market price even if the market knew the selling was not based on an informational edge.

So the principal reason for intraday price movement is the demand for liquidity. This view of the market – a liquidity view rather than an informational view – replaces the conventional academic perspective of the role of the market, in which the market is efficient and exists solely for conveying information. Why the change in roles? For one thing, it’s harder to get an information advantage, what with the globalization of markets and the widespread dissemination of real-time information. At the same time, the growth in the number of market participants means there are more incidents of liquidity demand. They want it, and they want it now.

Investors or traders who are uncomfortable with their level of exposure will be willing to pay up to get someone to take the position. The more uncomfortable the traders are, the more they will pay. And well they should, because someone else is getting saddled with the risk of the position, someone who most likely did not want to take on that position at the existing market price. Thus the demand for liquidity not only is the source of most price movement; it is at the root of most trading strategies. It is this liquidity-oriented, tectonic market shift that has made statistical arbitrage so powerful.

Statistical arbitrage originated in the 1980s from the hedging demand of Morgan Stanley’s equity block-trading desk, which at the time was the center of risk taking on the equity trading floor. Like other broker-dealers, Morgan Stanley continually faced the problem of how to execute large block trades efficiently without suffering a price penalty. Often, major institutions discover they can clear a large block trade only at a large discount to the posted price. The reason is simple: Other traders will not know if there is more stock to follow, and the large size will leave them uncertain about the reason for the trade. It could be that someone knows something they don’t and they will end up on the wrong side of the trade once the news hits the street. The institution can break the block into a number of smaller trades and put them into the market one at a time. Though that’s a step in the right direction, after a while it will become clear that there is persistent demand on one side of the market, and other traders, uncertain who it is and how long it will continue, will hesitate.

The solution to this problem is to execute the trade through a broker-dealer’s block-trading desk. The block-trading desk gives the institution a price for the entire trade, and then acts as an intermediary in executing the trade on the exchange floor. Because the block traders know the client, they have a pretty good idea if the trade is a stand-alone trade or the first trickle of a larger flow. For example, if the institution is a pension fund, it is likely it does not have any special information, but it simply needs to sell the stock to meet some liability or to buy stock to invest a new inflow of funds. The desk adjusts the spread it demands to execute the block accordingly. The block desk has many transactions from many clients, so it is in a good position to mask the trade within its normal business flow. And it also might have clients who would be interested in taking the other side of the transaction.

The block desk could end up having to sit on the stock because there is simply no demand and because throwing the entire position onto the floor will cause prices to run against it. Or some news could suddenly break, causing the market to move against the position held by the desk. Or, in yet a third scenario, another big position could hit the exchange floor that moves prices away from the desk’s position and completely fills existing demand. A strategy evolved at some block desks to reduce this risk by hedging the block with a position in another stock. For example, if the desk received an order to buy 100,000 shares of General Motors, it might immediately go out and buy 10,000 or 20,000 shares of Ford Motor Company against that position. If news moved the stock price prior to the GM block being acquired, Ford would also likely be similarly affected. So if GM rose, making it more expensive to fill the customer’s order, a position in Ford would also likely rise, partially offsetting this increase in cost.

This was the case at Morgan Stanley, where a list of pairs of stocks – stocks that were closely related, especially in the short term, to other stocks – was maintained in order to have at the ready a solution for partially hedging positions. By reducing risk, the pairs trade also gave the desk more time to work out of the trade. This helped to lessen the liquidity-related movement of a stock price during a big block trade. As a result, this strategy increased the profit for the desk.

The pairs increased profits. Somehow that lightbulb didn’t go on in the world of equity trading, which was largely devoid of principal transactions and systematic risk taking. Instead, the block traders epitomized the image of cigar-chewing gamblers, playing market poker with millions of dollars of capital at a clip while working the phones from one deal to the next, riding in a cloud of trading mayhem. They were too busy to exploit the fact, or it never occurred to them, that the pairs hedging they routinely used held the secret to a revolutionary trading strategy that would dwarf their desk’s operations and make a fortune for a generation of less flamboyant, more analytical traders. Used on a different scale and applied for profit making rather than hedging, their pairwise hedges became the genesis of statistical arbitrage trading. The pairwise stock trades that form the elements of statistical arbitrage trading in the equity market are just one more flavor of spread trades. On an individual basis, they’re not very good spread trades. It is the diversification that comes from holding many pairs that makes this strategy a success. But even then, although its name suggests otherwise, statistical arbitrage is a spread trade, not a true arbitrage trade.
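A minimal sketch of the pairwise element described above (synthetic co-moving prices, a least-squares hedge ratio, and a z-scored spread; this illustrates the generic pairs logic only, not Morgan Stanley’s actual models or thresholds):

```python
import random
import statistics

# Generic pairs logic: hedge one stock with a closely related one and trade the
# z-scored spread. Prices here are synthetic, purely for illustration.
random.seed(1)

walk, level = [], 0.0
for _ in range(500):
    level += random.gauss(0, 1)      # common factor driving both stocks
    walk.append(level)

stock_a = [100 + w + random.gauss(0, 0.5) for w in walk]         # e.g. the "GM" leg
stock_b = [50 + 0.5 * w + random.gauss(0, 0.5) for w in walk]    # e.g. the "Ford" leg

# Hedge ratio: least-squares slope of A on B.
mean_a, mean_b = statistics.mean(stock_a), statistics.mean(stock_b)
beta = sum((a - mean_a) * (b - mean_b) for a, b in zip(stock_a, stock_b)) / \
       sum((b - mean_b) ** 2 for b in stock_b)

spread = [a - beta * b for a, b in zip(stock_a, stock_b)]
z = (spread[-1] - statistics.mean(spread)) / statistics.stdev(spread)
signal = "short A / long B" if z > 2 else "long A / short B" if z < -2 else "no trade"
print(f"hedge ratio ~ {beta:.2f}, latest spread z-score {z:+.2f}: {signal}")
```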

Orthodoxy of the Neoclassical Synthesis: Minsky’s Capitalism Without Capitalists, Capital Assets, and Financial Markets


During the very years when orthodoxy turned Keynesianism on its head, extolling Reaganomics and Thatcherism as adequate for achieving stabilisation in the epoch of global capitalism, Minsky (Stabilizing an Unstable Economy) pointed to the destabilising consequences of this approach. The view that instability is the result of the internal processes of a capitalist economy, he wrote, stands in sharp contrast to neoclassical theory, whether Keynesian or monetarist, which holds that instability is due to events that are outside the working of the economy. The neoclassical synthesis and the Keynes theory are different because the focus of the neoclassical synthesis is on how a decentralized market economy achieves coherence and coordination in production and distribution, whereas the focus of the Keynes theory is upon the capital development of an economy. The neoclassical synthesis emphasizes equilibrium and equilibrating tendencies, whereas Keynes’s theory revolves around bankers and businessmen making deals on Wall Street. The neoclassical synthesis ignores the capitalist nature of the economy, a fact that the Keynes theory is always aware of.

Minsky here identifies the main flaw of the neoclassical synthesis, which is that it ignores the capitalist nature of the economy, while authentic Keynesianism proceeds from precisely this nature. Minsky lays bare the preconceived approach of orthodoxy, which has mainstream economics concentrating all its focus on an equilibrium which is called upon to confirm the orthodox belief in the stability of capitalism. At the same time, orthodoxy fails to devote sufficient attention to the speculation in the area of finance and banking that is the precise cause of the instability of the capitalist economy.

Elsewhere, Minsky stresses still more firmly that from the theory of Keynes the neoclassical standard included in its arsenal only those elements which could be interpreted as confirming its preconceived position that capitalism is so perfect that it could have no innate flaws. In this connection Minsky writes:

Whereas Keynes in The General Theory proposed that economists look at the economy in quite a different way from the way they had, only those parts of The General Theory that could be readily integrated into the old way of looking at things survive in today’s standard theory. What was lost was a view of an economy always in transit because it accumulates in response to disequilibrating forces that are internal to the economy. As a result of the way accumulation takes place in a capitalist economy, Keynes’s theory showed that success in operating the economy can only be transitory; instability is an inherent and inescapable flaw of capitalism.

The view that survived is that a number of special things went wrong, which led the economy into the Great Depression. In this view, apt policy can assure that cannot happen again. The standard theory of the 1950s and 1960s seemed to assert that if policy were apt, then full employment at stable prices could be attained and sustained. The existence of internally disruptive forces was ignored; the neoclassical synthesis became the economics of capitalism without capitalists, capital assets, and financial markets. As a result, very little of Keynes has survived today in standard economics.

Here, resting on Keynes’s analysis, one can find the central idea of Minsky’s book: the innate instability of capitalism, which in time will lead the system to a new Great Depression. This forecast has now been brilliantly confirmed, but previously there were few who accepted it. Economic science was orchestrated by proponents of neoclassical orthodoxy under the direction of Nobel prizewinners, authors of popular economics textbooks, and other authorities recognized by the mainstream. These people argued that the main problems which capitalism had encountered in earlier times had already been overcome, and that before it lay a direct, sunny road to an even better future.

Robed in complex theoretical constructs, and underpinned by an abundance of mathematical formulae, these ideas of a cloudless future for capitalism interpreted the economic situation, it then seemed, in thoroughly convincing fashion. These analyses were balm for the souls of the people who had come to believe that capitalism had attained perfection. In this respect, capitalism has come to bear an uncanny resemblance to communism. There is, however, something beyond the preconceptions and prejudices innate to people in all social systems, and that is the reality of historical and economic development. This provides a filter for our ideas, and over time makes it easier to separate truth from error. The present financial and economic crisis is an example of such reality. While the mainstream was still euphoric about the future of capitalism, the post-Keynesians saw the approaching outlines of a new Great Depression. The fate of Post Keynesianism will depend very heavily on the future development of the world capitalist economy. If the business cycle has indeed been abolished (this time), so that stable, non-inflationary growth continues indefinitely under something approximating to the present neoclassical (or pseudo-monetarist) policy consensus, then there is unlikely to be a significant market for Post Keynesian ideas. Things would be very different in the event of a new Great Depression, to think one last time in terms of extreme possibilities. If it happened again, to quote Hyman Minsky, the appeal of both a radical interventionist programme and the analysis from which it was derived would be very greatly enhanced.

Neoclassical orthodoxy, that is, today’s mainstream economic thinking proceeds from the position that capitalism is so good and perfect that an alternative to it does not and cannot exist. Post-Keynesianism takes a different standpoint. Unlike Marxism it is not so revolutionary a theory as to call for a complete rejection of capitalism. At the same time, it does not consider capitalism so perfect that there is nothing in it that needs to be changed. To the contrary, Post-Keynesianism maintains that capitalism has definite flaws, and requires changes of such scope as to allow alternative ways of running the economy to be fully effective. To the prejudices of the mainstream, post-Keynesianism counterposes an approach based on an objective analysis of the real situation. Its economic and philosophical approach – the methodology of critical realism – has been developed accordingly, the methodological import of which helps post-Keynesianism answer a broad range of questions, providing an alternative both to market fundamentalism, and to bureaucratic centralism within a planned economy. This is the source of its attraction for us….

Regulating the Velocities of Dark Pools. Thought of the Day 72.0


On 22 September 2010 the SEC chair Mary Schapiro signaled US authorities were considering the introduction of regulations targeted at HFT:

…High frequency trading firms have a tremendous capacity to affect the stability and integrity of the equity markets. Currently, however, high frequency trading firms are subject to very little in the way of obligations either to protect that stability by promoting reasonable price continuity in tough times, or to refrain from exacerbating price volatility.

However, regulating an industry working towards moving as fast as the speed of light is no ordinary administrative task: Modern finance is undergoing a fundamental transformation. Artificial intelligence, mathematical models, and supercomputers have replaced human intelligence, human deliberation, and human execution…. Modern finance is becoming cyborg finance – an industry that is faster, larger, more complex, more global, more interconnected, and less human. C W Lin proposes a number of principles for regulating this cyborg finance industry:

  1. Update antiquated paradigms of reasonable investors and compartmentalised institutions, confront the emerging institutional realities, and recognise that the old paradigms of market governance may be ill-suited to the new finance industry;
  2. Enhance disclosure which recognises the complexity and technological capacities of the new finance industry;
  3. Adopt regulations to moderate the velocities of finance realising that as these approach the speed of light they may contain more risks than rewards for the new financial industry;
  4. Introduce smarter coordination harmonising financial regulation beyond traditional spaces of jurisdiction.

Electronic markets will require international coordination, surveillance and regulation. The high-frequency trading environment has the potential to generate errors and losses at a speed and magnitude far greater than that in a floor or screen-based trading environment… Moreover, issues related to risk management of these technology-dependent trading systems are numerous and complex and cannot be addressed in isolation within domestic financial markets. For example, placing limits on high-frequency algorithmic trading or restricting unfiltered sponsored access and co-location within one jurisdiction might only drive trading firms to another jurisdiction where controls are less stringent.

In these regulatory endeavours it will be vital to remember that not all innovation is intrinsically good, and some of it may be inherently dangerous; the objective is to make a more efficient and equitable financial system, not simply a faster one. Despite its fast computers and credit derivatives, the current financial system does not seem better at transferring funds from savers to borrowers than the financial system of 1910. Furthermore, as Thomas Piketty's Capital in the Twenty-First Century amply demonstrates, any thought of a democratisation of finance induced by the huge expansion of superannuation funds, together with the increased access to finance afforded by credit cards and ATMs, is something of a fantasy, since levels of structural inequality have endured through these technological transformations. The tragedy is that under the guise of technological advance and sophistication we could be destroying the capacity of financial markets to fulfil their essential purpose, as Haldane eloquently states:

An efficient capital market transfers savings today into investment tomorrow and growth the day after. In that way, it boosts welfare. Short-termism in capital markets could interrupt this transfer. If promised returns the day after tomorrow fail to induce saving today, there will be no investment tomorrow. If so, long-term growth and welfare would be the casualty.

Abrupt rise of new machine ecology beyond human response time


Figure: Empirical transition in the size distribution of UEEs (ultrafast extreme events) with duration above a threshold t, as a function of t. (A) Scale of times; 650 ms is the time it takes a chess grandmaster to discern that the King is in checkmate. Plots show the best-fit power-law exponent (black) and goodness-of-fit (blue) for the size distributions of (B) crashes and (C) spikes, as shown in the inset schematic.

Society’s techno-social systems are becoming ever faster and more computer-orientated. However, far from simply generating faster versions of existing behaviour, we show that this speed-up can generate a new behavioural regime as humans lose the ability to intervene in real time. Analyzing millisecond-scale data for the world’s largest and most powerful techno-social system, the global financial market, we uncover an abrupt transition to a new all-machine phase characterized by large numbers of subsecond extreme events. The proliferation of these subsecond events shows an intriguing correlation with the onset of the system-wide financial collapse in 2008. Findings are consistent with an emerging ecology of competitive machines featuring ‘crowds’ of predatory algorithms, and highlight the need for a new scientific theory of subsecond financial phenomena.

Abrupt rise of new machine ecology beyond human response time
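The figure's best-fit power-law exponents rest on a standard estimation step that is easy to reproduce in spirit. Below is a minimal sketch using the continuous maximum-likelihood estimator of Clauset, Shalizi and Newman on synthetic "crash sizes"; the real millisecond-resolution data behind the figure is not reproduced here, and the sample and x_min are placeholders.

```python
# Minimal sketch: maximum-likelihood estimate of a power-law exponent for a
# sample of "crash sizes", in the spirit of the figure's best-fit exponents.
# The data here is synthetic; real UEE sizes would come from millisecond-level
# market data, which is not reproduced in this post.
import numpy as np

rng = np.random.default_rng(0)

def sample_power_law(alpha, x_min, n):
    """Draw n values from a continuous power law p(x) ~ x^(-alpha), x >= x_min."""
    u = rng.random(n)
    return x_min * (1 - u) ** (-1.0 / (alpha - 1.0))

def fit_alpha(x, x_min):
    """Continuous MLE (Clauset-Shalizi-Newman): alpha = 1 + n / sum(log(x / x_min))."""
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

crash_sizes = sample_power_law(alpha=3.0, x_min=1e-4, n=5000)
print(f"estimated exponent: {fit_alpha(crash_sizes, x_min=1e-4):.2f}")  # close to 3.0
```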

Financial Entanglement and Complexity Theory. An Adumbration on Financial Crisis.


The complex system approach in finance could be described through the concept of entanglement. The concept of entanglement bears the same features as the definition of a complex system given by a group of physicists working in the field of finance (Stanley et al.). As they defined it, in a complex system all depends upon everything. Just as in a complex system, the notion of entanglement is a statement acknowledging the interdependence of all the counterparties in financial markets, including financial and non-financial corporations, the government and the central bank. How to identify entanglement empirically? Stanley et al. formulated the process of scientific study in finance as a search for patterns. Such a search, going on under the auspices of “econophysics”, could exemplify a thorough analysis of a complex and unstructured assemblage of actual data, finalized in the discovery and experimental validation of an appropriate pattern. At the other end of the spectrum, some patterns underlying the actual processes might be discovered by synthesizing a vast amount of historical and anecdotal information through appropriate reasoning and logical deliberation. The Austrian School of economic thought, which in its extreme form rejects the application of any formalized systems, or modeling of any kind, could be viewed as an example. A logical question follows from this comparison: does there exist any intermediate way of searching for regular patterns in finance and economics?

Importantly, patterns could be discovered by developing rather simple models of money and debt interrelationships. Debt cycles were studied extensively by many schools of economic thought (George A. Akerlof and Robert J. Shiller, Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism). The modern financial system worked by spreading risk, promoting economic efficiency and providing cheap capital. It had formed over the years as the bull markets in shares and bonds, originating in the early 1990s, took hold. These markets were propelled by an abundance of money, falling interest rates and new information technology. Financial markets, by combining debt and derivatives, could originate and distribute huge quantities of risky structured products and sell them to different investors. Meanwhile, financial-sector debt, only a tenth of the size of non-financial-sector debt in 1980, became half as big by the beginning of the credit crunch in 2007. As liquidity grew, banks could buy more assets, borrow more against them, and watch their value rise. By 2007 financial services were making 40% of America's corporate profits while employing only 5% of its private-sector workers. Thanks to cheap money, banks could take on more debt and, by designing complex structured products, make their investments more profitable, and riskier. Securitization, by facilitating the emergence of the “shadow banking” system, simultaneously fomented bubbles in different segments of the global financial market.
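A quick back-of-the-envelope check of the debt figures just quoted: moving from one tenth to one half of non-financial-sector debt between 1980 and 2007 implies that financial-sector debt grew roughly 6% a year faster than the rest, assuming smooth compounding over the period.

```python
# Back-of-the-envelope check of the ratio shift quoted above: if financial-sector
# debt was one tenth of non-financial-sector debt in 1980 and half of it by 2007,
# what annual growth-rate differential does that imply (assuming smooth compounding)?
ratio_1980, ratio_2007, years = 0.10, 0.50, 2007 - 1980
differential = (ratio_2007 / ratio_1980) ** (1.0 / years) - 1.0
print(f"implied excess annual growth of financial-sector debt: {differential:.1%}")
# roughly 6% per year faster than non-financial-sector debt
```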

Yet over the past decade this system, or a big part of it, began to lose touch with its ultimate purpose: to reallocate scarce resources in accordance with social priorities. Instead of writing, managing and trading claims on future cashflows for the rest of the economy, finance became increasingly a game for fees and speculation. Due to disastrously lax regulation, investment banks did not lay aside enough capital in case something went wrong, and, as the crisis began in the middle of 2007, credit markets started to freeze up. Qualitatively, after the spectacular Lehman Brothers disaster in September 2008, the laminar flows of financial activity came to an end. Banks began to suffer losses on their holdings of toxic securities and were reluctant to lend to one another, which led to shortages of funding across the system. Funding strains had already surfaced in late 2007, when Northern Rock, a British mortgage lender, experienced a bank run that started in the money markets. All of a sudden, liquidity was in short supply, debt was unwound, and investors were forced to sell and write down assets. For several years, up to now, market counterparties have no longer trusted each other. As Walter Bagehot, an authority on bank runs, once wrote:

Every banker knows that if he has to prove that he is worthy of credit, however good may be his arguments, in fact his credit is gone.

In an entangled financial system, Bagehot's axiom should be stretched out to the whole market. And that means, precisely, financial meltdown, or crisis. The most fascinating feature of the post-crisis era in financial markets has been the continuation of a ubiquitous liquidity expansion. To fight the market squeeze, all the major central banks greatly expanded their balance sheets, which rose, roughly, from about 10 percent to 25-30 percent of GDP for the respective economies. For several years after the credit crunch of 2007-09, central banks bought trillions of dollars of toxic and government debt, thus increasing money issuance without any precedent in modern history. Paradoxically, this enormous credit expansion, though accelerating for several years, has been accompanied by a stagnating and depressed real economy. Yet, until now, central bankers have worried mainly about downside risks and threats of price deflation. Meanwhile, the hectic financial activity that goes on alongside unbounded credit expansion could be transformed by herding into an autocatalytic process that, fed by the accumulation of new debt, might drive the entire system to total collapse. From a financial point of view, this systemic collapse appears to be a natural result of unbounded credit expansion ‘supported’ by zero real resources. Since the wealth of investors, as a whole, becomes nothing but fool's gold, the financial process becomes singular, and the entire system collapses. In particular, three phases of investors' behavior – hedge finance, speculation, and the Ponzi game – can be identified as a sequence of sub-cycles that ultimately unwind in total collapse.
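The three phases named at the end of the paragraph echo Minsky's taxonomy of financing regimes. A minimal sketch of how a borrower's position would be classified, by comparing cashflow with debt service; all figures are hypothetical.

```python
# Minimal sketch of the three financing regimes named above (Minsky's taxonomy),
# classifying a borrower by comparing operating cashflow with debt service.
# All figures are hypothetical.
def classify(cashflow, interest_due, principal_due):
    if cashflow >= interest_due + principal_due:
        return "hedge finance"        # cashflow covers interest and principal
    if cashflow >= interest_due:
        return "speculative finance"  # covers interest only; principal is rolled over
    return "Ponzi finance"            # must borrow even to pay interest

print(classify(cashflow=120, interest_due=40, principal_due=60))  # hedge finance
print(classify(cashflow=70,  interest_due=40, principal_due=60))  # speculative finance
print(classify(cashflow=30,  interest_due=40, principal_due=60))  # Ponzi finance
```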

US Stock Market Interaction Network as Learned by the Boltzmann Machine


Price formation on a financial market is a complex problem: it reflects the opinion of investors about the true value of the asset in question, the policies of the producers, external regulation and many other factors. Given the large number of factors influencing price, many of which are unknown to us, describing price formation essentially requires probabilistic approaches. In the last decades, the synergy of methods from various scientific areas has opened new horizons in understanding the mechanisms that underlie related problems. One popular approach is to consider a financial market as a complex system, where not only does a great number of constituents play a crucial role, but so do the non-trivial interactions between them. For example, interdisciplinary studies of complex financial systems have revealed their enhanced sensitivity to fluctuations and external factors near critical events, with an overall change of internal structure. This can be complemented by research devoted to equilibrium and non-equilibrium phase transitions.

In general, statistical modeling of the state space of a complex system requires writing down the probability distribution over this space using real data. In a simple version of modeling, the probability of an observable configuration (state of a system) described by a vector of variables s can be given in the exponential form

p(s) = Z⁻¹ exp{−βH(s)} —– (1)

where H is the Hamiltonian of the system, β is the inverse temperature (further β ≡ 1 is assumed) and Z is the statistical sum (partition function). The physical meaning of the model's components depends on the context; for instance, in the case of financial systems, s can represent a vector of stock returns and H can be interpreted as the inverse utility function. Generally, H has parameters defined by its series expansion in s. Based on the maximum entropy principle, an expansion up to the quadratic terms is usually used, leading to pairwise interaction models. In the equilibrium case, the Hamiltonian has the form

H(s) = −hᵀs − sᵀJs —– (2)

where h is a vector of size N of external fields and J is a symmetric N × N matrix of couplings (ᵀ denotes transpose). The energy-based models represented by (1) play an essential role not only in statistical physics but also in neuroscience (models of neural networks) and machine learning (generative models, also known as Boltzmann machines). Given the topological similarities between neural and financial networks, these systems can be considered examples of complex adaptive systems, characterized by the ability to adapt to a changing environment while trying to stay in equilibrium with it. From this point of view, market structural properties, e.g. clustering and networks, play an important role in modeling the distribution of stock prices. Adaptation (or learning) in these systems implies a change of the parameters of H as financial and economic systems evolve. Using statistical inference for the model's parameters, the main goal is to have a model capable of reproducing the same statistical observables given a time series for a particular historical period. In the pairwise case, the objective is to have

⟨sᵢ⟩_data = ⟨sᵢ⟩_model —– (3a)

⟨sᵢsⱼ⟩_data = ⟨sᵢsⱼ⟩_model —– (3b)

where angular brackets denote statistical averaging over time. Having specified the general mathematical model, one can also discuss similarities between financial and infinite-range magnetic systems in terms of related phenomena, e.g. extensivity, order parameters, phase transitions, etc. These features can be captured even in the simplified case when sᵢ is a binary variable taking only two discrete values, i.e. under a mapping to a binarized system in which the values sᵢ = +1 and sᵢ = −1 correspond to profit and loss respectively. In this case, the diagonal elements of the coupling matrix, Jᵢᵢ, are zero because the sᵢ² = 1 terms do not contribute to the Hamiltonian….

US stock market interaction network as learned by the Boltzmann Machine
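As a concrete, if simplified, illustration of the moment-matching conditions (3a)-(3b), the sketch below fits a pairwise model to binarized returns using naive mean-field inversion, a cheap stand-in for full Boltzmann-machine learning. The data is synthetic, the convention H(s) = −hᵀs − ½ sᵀJs is used (the missing ½ in (2) only rescales J), and none of this is the paper's actual pipeline.

```python
# Minimal sketch, not the paper's actual pipeline: fit a pairwise (Ising /
# Boltzmann-machine) model to binarized stock returns via naive mean-field
# inversion. Data is synthetic; the convention H(s) = -h.s - 0.5 * s.J.s is
# used (the post's Hamiltonian omits the 1/2, which only rescales J).
import numpy as np

def binarize(returns):
    """Map returns to s_i = +1 (profit) / -1 (loss), as in the text."""
    return np.where(returns >= 0, 1.0, -1.0)

def fit_pairwise_nmf(S):
    """S: (T, N) array of +/-1 spins. Returns (h, J) via naive mean-field inversion."""
    m = S.mean(axis=0)                         # <s_i>, the eq. (3a) targets
    C = np.cov(S, rowvar=False)                # connected correlations, eq. (3b)
    J = -np.linalg.inv(C)                      # nMF: J_ij = -(C^-1)_ij for i != j
    np.fill_diagonal(J, 0.0)                   # s_i^2 = 1, so J_ii is set to zero
    h = np.arctanh(np.clip(m, -0.999, 0.999)) - J @ m
    return h, J

# Synthetic stand-in for daily returns of N stocks over T days, with a common factor.
rng = np.random.default_rng(1)
returns = rng.normal(size=(2500, 20)) + 0.3 * rng.normal(size=(2500, 1))
h, J = fit_pairwise_nmf(binarize(returns))
print(J.shape, float(J.max()))
```

The off-diagonal entries of J can then be thresholded to draw the interaction network between stocks; a proper treatment would use exact or sampled Boltzmann learning rather than the mean-field shortcut.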

Market Liquidity


The notion of market liquidity is nowadays almost ubiquitous. It quantifies the ability of a financial market to match buyers and sellers in an efficient way, without causing a significant movement in the price, thus delivering low transaction costs. It is the lifeblood of financial markets, without which market dislocations can surface, as in the recent, well-documented crises: the 2007 yen carry trade unwind, the 2008 credit crunch, the May 6th 2010 Flash Crash, or the numerous mini flash crashes occurring in US equity markets, as well as many other cases that go unnoticed but are potent candidates to become more important. While omnipresent, liquidity is an elusive concept, and several reasons may account for this ambiguity. First, some markets, such as the foreign exchange (FX) market with its daily turnover of roughly $5.3 trillion, are mistakenly assumed to be extremely liquid, when in fact the generated volume is simply being equated with liquidity. Secondly, the structure of modern markets, with its high degree of decentralization, generates fragmentation and low transparency of transactions, which complicates any attempt to define market liquidity as a whole. Aggregating liquidity from all trading sources can be quite daunting, and fragmentation only deepens as new venues with different market structures continue to be launched. Furthermore, the landscape is continuously changing as new players emerge, such as high-frequency traders, who have taken over the role of liquidity intermediation in many markets, accounting for between 50% and 70% (and ever rising) of all trading. Last, but not least, important participants influencing the markets are the central banks with their myriad market interventions, whether indirect, through the monetization of substantial amounts of sovereign and mortgage debt under various quantitative easing programs, or direct, as with the Swiss National Bank setting a floor on the EUR/CHF exchange rate; there are plenty of arguments that they have overstepped their role of last-resort liquidity providers and at this stage hamper market liquidity, potentially exposing themselves to massive losses in the near future.

Despite the obvious importance of liquidity, there is little agreement on the best way to measure and define it. Liquidity measures can be classified into different categories. Volume-based measures (the liquidity ratio, the Martin index, the Hui and Heubel ratio, the turnover ratio, the market-adjusted liquidity index) compare, over a fixed period of time, the exchanged volume to price changes; this class implies non-trivial assumptions about the relation between volume and price movements. Other classes include price-based measures (the Marsh and Rock ratio, variance ratios, vector autoregressive models), transaction-cost-based measures (the spread, implied spread, absolute spread or relative spread) and time-based measures (the number of transactions or orders per time unit). The aforementioned approaches suffer from many drawbacks. They provide a top-down way of analysing a complex system, in which the impact of variations in liquidity is analysed, rather than a bottom-up approach in which liquidity-lacking periods are identified and quantified. They also rely on a specific choice of physical time that does not reflect the true, multi-scale nature of any financial market. Liquidity can instead be defined as an information-theoretic measure that characterises the unlikeliness of price trajectories; this metric has the ability to detect and predict stress in financial markets, with examples shown within the FX market, and the optimal choice of scales is derived using the Maximum Entropy Principle.
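To make the measure classes above concrete, here is a minimal sketch of two of the simpler ones, a volume-based liquidity ratio and a relative spread, on placeholder trade and quote data. Exact definitions vary across the literature, these are only the generic forms, and the information-theoretic measure mentioned at the end of the paragraph is not attempted here.

```python
# Minimal sketch of two simple liquidity measures from the classes listed above,
# on placeholder data. Definitions vary across the literature; these are generic forms.
import numpy as np

def liquidity_ratio(volumes, prices):
    """Volume-based measure: traded volume per unit of absolute price change."""
    abs_move = np.abs(np.diff(prices)).sum()
    return volumes.sum() / abs_move if abs_move > 0 else np.inf

def relative_spread(bids, asks):
    """Transaction-cost measure: average (ask - bid) / midpoint."""
    bids, asks = np.asarray(bids), np.asarray(asks)
    mid = (bids + asks) / 2.0
    return float(np.mean((asks - bids) / mid))

prices  = np.array([100.0, 100.2, 99.9, 100.1, 100.0])
volumes = np.array([5_000, 7_500, 6_200, 4_800])          # one entry per interval
bids    = [99.98, 100.15, 99.85, 100.05]
asks    = [100.02, 100.25, 99.95, 100.15]

print(f"liquidity ratio : {liquidity_ratio(volumes, prices):,.0f} shares per unit price move")
print(f"relative spread : {relative_spread(bids, asks):.4%}")
```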