Revisiting Catastrophes. Thought of the Day 134.0

The most explicit influence from mathematics in semiotics is probably René Thom’s controversial theory of catastrophes, with philosophical and semiotic support from Jean Petitot. Catastrophe theory is but one of several formalisms in the broad field of qualitative dynamics (comprising also chaos theory, complexity theory, self-organized criticality, etc.). In all these cases, the theories in question are in a certain sense phenomenological, because the focus is on different types of qualitative behavior of dynamical systems, grasped on a purely formal level that brackets their causal determination at a deeper level. A widespread tool in these disciplines is phase space – a space defined by the variables governing the development of the system, so that this development may be mapped as a trajectory through phase space, each point on the trajectory mapping one global state of the system. This space may be inhabited by different types of attractors (attracting trajectories), repellors (repelling them), attractor basins around attractors, and borders between such basins characterized by different types of topological saddles which may have a complicated topology.
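
As a minimal numerical sketch of what such a phase-space picture amounts to, consider a damped pendulum (the choice of system, the parameter values and the step size below are purely illustrative assumptions): its trajectory spirals through the (angle, angular velocity) plane towards a point attractor at the origin, each step of the integration mapping one global state of the system onto the next.

```python
import numpy as np

# Damped pendulum: theta'' = -gamma * theta' - (g / L) * sin(theta).
# Phase space is the (theta, omega) plane; the origin is a point attractor.
gamma, g, L = 0.5, 9.81, 1.0   # illustrative parameter values
dt, steps = 0.01, 5000

theta, omega = 2.0, 0.0        # initial global state of the system
trajectory = []
for _ in range(steps):
    trajectory.append((theta, omega))
    dtheta = omega
    domega = -gamma * omega - (g / L) * np.sin(theta)
    theta += dt * dtheta       # each point of the trajectory is one global state
    omega += dt * domega

print("final state (close to the attractor):", trajectory[-1])
```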

Catastrophe theory has its basis in differential topology, that is, the branch of topology keeping various differential properties of a function invariant under transformation. It is, more specifically, the so-called Whitney topology, whose invariants are the points where derivatives of a function take the value 0, corresponding graphically to minima, maxima, turning tangents and, in higher dimensions, various complicated saddles. Catastrophe theory takes its point of departure in singularity theory, whose object is the shift between types of such functions. It thus erects a distinction between an inner space – where the function varies – and an outer space of control variables charting the variation of that function, including where it changes type – where, for example, it goes from having one minimum to having two minima via a singular case with a turning tangent. A continuous variation of the control parameters thus corresponds to a continuous variation within one subtype of the function, until a singular point is reached where the function discontinuously, ‘catastrophically’, changes subtype. The philosophy-of-science interpretation of this formalism conceives of the stable subtype of the function as representing the stable state of a system, and of the passage through the critical point as the sudden shift to a new stable state. The configuration of control parameters thus provides a sort of map of the shift between continuous development and discontinuous ‘jump’. Thom’s semiotic interpretation of this formalism holds that typical catastrophic trajectories of this kind may be interpreted as stable process types, phenomenologically salient for perception and giving rise to basic verbal categories.
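
A worked instance of this inner-space/outer-space distinction is the simplest catastrophe, the fold; the normal form used below is the standard one from the catastrophe-theory literature and is supplied here only as an illustration of the type-shift just described.

```latex
% Fold catastrophe: inner variable x, one control parameter a.
V_a(x) = x^3 + a x, \qquad V_a'(x) = 3x^2 + a .
% For a < 0 there are two critical points, a minimum and a maximum,
%   x_{\pm} = \pm \sqrt{-a/3};
% at the singular value a = 0 they merge into a degenerate critical point
% (a turning tangent), and for a > 0 no critical point remains.
% Moving the control parameter a continuously through 0 is precisely the
% discontinuous, 'catastrophic' change of subtype described above.
```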

[Figure: the cusp catastrophe, with the diagrams (a), (b) and (c) discussed below]

One of the simpler catastrophes is the so-called cusp (a). It constitutes a meta-diagram, namely a diagram of the possible type-shifts of a simpler diagram (b), that of the equation ax⁴ + bx² + cx = 0. The upper part of (a) shows the so-called fold, charting the manifold of solutions to the equation in the three dimensions a, b and c. By projecting the fold onto the (a, b)-plane, the pointed figure of the cusp (lower part of (a)) is obtained. The cusp now charts the type-shift of the function: inside the cusp, the function has two minima; outside it, only one minimum. Different paths through the cusp thus correspond to different variations of the equation under variation of the external variables a and b. One such typical path is indicated by the left-right arrow on all four diagrams; it crosses the cusp from the inside out, giving rise to a diagram at a further level (c), depending on the interpretation of the two minima as simultaneous states. Here, then, we find diagram transformations on three different, nested levels.
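
A small computational sketch of that type-shift, assuming the standard quartic normal form V(x) = x⁴ + a·x² + b·x (the leading coefficient is fixed to 1 here for simplicity, and the parameter values tested are arbitrary): the code counts the minima of V as the control parameters a and b vary, and the region with two minima is the interior of the cusp, bounded for this normal form by the curve 8a³ + 27b² = 0.

```python
import numpy as np

def count_minima(a, b):
    """Count the local minima of V(x) = x^4 + a*x^2 + b*x."""
    # Critical points solve V'(x) = 4x^3 + 2a*x + b = 0.
    roots = np.roots([4.0, 0.0, 2.0 * a, b])
    real = roots[np.abs(roots.imag) < 1e-9].real
    # A critical point is a minimum where V''(x) = 12x^2 + 2a > 0.
    return int(np.sum(12.0 * real**2 + 2.0 * a > 0))

# Crossing the cusp from inside out: two minima merge into one.
for a, b in [(-3.0, 0.5), (-3.0, 4.0), (1.0, 0.5)]:
    print(f"a = {a:+.1f}, b = {b:+.1f}: {count_minima(a, b)} minimum/minima")
```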

The concept of transformation plays several roles in this formalism. The most spectacular one refers, of course, to the change in external control variables, determining a trajectory through phase space where the function controlled changes type. This transformation thus searches the possibility for a change of the subtypes of the function in question, that is, it plays the role of eidetic variation mapping how the function is ‘unfolded’ (the basic theorem of catastrophe theory refers to such unfolding of simple functions). Another transformation finds stable classes of such local trajectory pieces including such shifts – making possible the recognition of such types of shifts in different empirical phenomena. On the most empirical level, finally, one running of such a trajectory piece provides, in itself, a transformation of one state into another, whereby the two states are rationally interconnected. Generally, it is possible to make a given transformation the object of a higher order transformation which by abstraction may investigate aspects of the lower one’s type and conditions. Thus, the central unfolding of a function germ in Catastrophe Theory constitutes a transformation having the character of an eidetic variation making clear which possibilities lie in the function germ in question. As an abstract formalism, the higher of these transformations may determine the lower one as invariant in a series of empirical cases.

Complexity theory is a broader and more inclusive term covering the general study of the macro-behavior of composite systems, also using phase space representation. The theoretical biologist Stuart Kauffman argues that in a phase space of all possible genotypes, biological evolution must unfold in a rather small and specifically qualified sub-space characterized by many, closely located and stable states (corresponding to the possibility for a species to ‘jump’ to another and better genotype in the face of environmental change) – as opposed to phase space areas with few, very stable states (which will only be optimal in certain, very stable environments and thus fragile when exposed to change), and also opposed, on the other hand, to sub-spaces with a high plurality of only metastable states (here, the species will tend to merge into neighboring species and hence never stabilize). On the basis of this argument, only a small subset of the set of virtual genotypes possesses ‘evolvability’, this special combination of plasticity and stability. The overall argument thus goes that order in biology is not a pure product of evolution; the possibility of order must be present in certain types of organized matter before selection begins – conversely, selection requires already organized material on which to work. The identification of a species with a co-localized group of stable states in genome space thus provides a (local) invariance for the transformation taking a trajectory through that space, and larger groups of neighboring stabilities – lineages – again provide invariants defined by various more or less general transformations. Species, in this view, are in a certain limited sense ‘natural kinds’ and thus naturally signifying entities.

Kauffman’s speculations over genotypical phase space have a crucial bearing on a transformation concept central to biology, namely mutation. On this basis, far from all virtual mutations are really possible – even apart from their degree of environmental relevance. A mutation into a stable but remotely placed species in phase space will be impossible (evolution cannot cross the distance in phase space), just as a mutation in an area with many unstable proto-species will not allow for any stabilization of species at all and will thus fall prey to arbitrarily small environmental variations. Kauffman takes a spontaneous and non-formalized transformation concept (mutation) and attempts a formalization by investigating its condition of possibility as movement between stable genomes in genotype phase space. A series of constraints turns out to determine type formation on a higher level (the three different types of local geography in phase space). If the trajectory of mutations must obey the possibility of walking between stable species, then the space of possible trajectories is highly limited.

Self-organized criticality, as developed by Per Bak (How Nature Works: The Science of Self-Organized Criticality), belongs to the same type of theories. Criticality is here defined as that state of a complicated system in which sudden developments of all sizes spontaneously occur.
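
A minimal computational sketch of the kind of genotype space Kauffman has in mind can be given with an NK-style rugged fitness landscape (the model, the parameter values N and K, and the random interaction structure below are illustrative assumptions, not anything specified in the text): tuning K changes how many local optima the landscape has and therefore how far an adaptive walk of single-locus ‘mutations’ can get before it is trapped.

```python
import random

def nk_landscape(N, K, seed=0):
    """Build an NK-style fitness function over length-N bit-string genotypes.
    Each locus i interacts with K randomly chosen other loci; larger K
    means a more rugged landscape with more local optima."""
    rng = random.Random(seed)
    neighbours = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [{} for _ in range(N)]

    def fitness(genome):
        total = 0.0
        for i in range(N):
            key = (genome[i],) + tuple(genome[j] for j in neighbours[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()   # random fitness contribution
            total += tables[i][key]
        return total / N
    return fitness

def adaptive_walk(fitness, genome):
    """Greedy walk by single-locus mutations until a local optimum is reached."""
    while True:
        best, best_fit = genome, fitness(genome)
        for i in range(len(genome)):
            mutant = genome[:i] + (1 - genome[i],) + genome[i + 1:]
            f = fitness(mutant)
            if f > best_fit:
                best, best_fit = mutant, f
        if best == genome:          # no single mutation improves fitness: trapped
            return genome, best_fit
        genome = best

N = 10
start = tuple(random.Random(42).choices([0, 1], k=N))
for K in (0, 2, 8):                 # smooth, moderately rugged, very rugged
    fit = nk_landscape(N, K, seed=1)
    _, local_opt = adaptive_walk(fit, start)
    print(f"K={K}: adaptive walk ends at a local optimum with fitness {local_opt:.3f}")
```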


Long Term Capital Management. Note Quote.

Long Term Capital Management, or LTCM, was a hedge fund founded in 1994 by John Meriwether, the former head of Salomon Brothers’ domestic fixed-income arbitrage group. Meriwether had grown the arbitrage group into Salomon’s most profitable unit by 1991, when it was revealed that one of the traders under his purview had astonishingly submitted a false bid in a U.S. Treasury bond auction. Although Meriwether reported the trade immediately to CEO John Gutfreund, the outcry from the scandal forced him to resign.

Meriwether revived his career several years later with the founding of LTCM. Amidst the beginning of one of the greatest bull markets the global markets had ever seen, Meriwether assembled a team of some of the world’s most respected economic theorists to join other refugees from the arbitrage group at Salomon. The board of directors included Myron Scholes, a coauthor of the famous Black-Scholes formula used to price option contracts, and MIT Sloan professor Robert Merton, both of whom would later share the 1997 Nobel Prize for Economics. The firm’s impressive brain trust, collectively considered geniuses by most of the financial world, set out to raise a $1 billion fund by explaining to investors that their profoundly complex computer models allowed them to price securities according to risk more accurately than the rest of the market, in effect “vacuuming up nickels that others couldn’t see.”

One typical LTCM trade concerned the divergence in price between otherwise similar long-term U.S. Treasury bonds. Despite offering fundamentally the same (minimal) default risk, those issued more recently – known as “on-the-run” securities – traded more heavily than the “off-the-run” securities issued just months previously. Heavier trading meant greater liquidity, which in turn resulted in ever-so-slightly higher prices. As “on-the-run” securities become “off-the-run” upon the issuance of a new tranche of Treasury bonds, the price discrepancy generally disappears with time. LTCM sought to exploit that price convergence by shorting the more expensive “on-the-run” bond while purchasing the “off-the-run” security.
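
A stylized numerical sketch of the mechanics of such a convergence trade (every price, position size and spread below is hypothetical, chosen only to make the arithmetic visible): short the richer on-the-run bond, buy the cheaper off-the-run bond, and collect the difference if and when the two prices converge.

```python
# Hypothetical prices per $100 of face value (illustrative numbers only).
on_the_run_entry, off_the_run_entry = 100.250, 100.125   # on-the-run trades rich
on_the_run_exit,  off_the_run_exit  = 100.150, 100.140   # spread has narrowed

face = 100_000_000   # $100m of face value on each leg (hypothetical size)

# Short the expensive bond, buy the cheap one; profit comes from the narrowing spread.
short_leg_pnl = (on_the_run_entry - on_the_run_exit) / 100 * face
long_leg_pnl  = (off_the_run_exit - off_the_run_entry) / 100 * face
total_pnl = short_leg_pnl + long_leg_pnl

print(f"P&L on short leg: ${short_leg_pnl:,.0f}")
print(f"P&L on long leg:  ${long_leg_pnl:,.0f}")
print(f"Total P&L:        ${total_pnl:,.0f}")   # tiny relative to $200m of gross positions
```

On these made-up numbers the trade earns about $115,000 on $200 million of gross positions, which is why the strategy only pays at enormous scale and with heavy leverage, the subject of the next paragraph.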

By early 1998 the intellectual firepower of its board members and the aggressive trading practices that had made the arbitrage group at Salomon so successful had allowed LTCM to flourish, growing its initial $1 billion of investor equity to $4.72 billion. However, the minuscule spreads earned on arbitrage trades could not provide the type of returns sought by hedge fund investors. In order to make transactions such as these worth their while, LTCM had to employ massive leverage to magnify its returns. Ultimately, the fund’s equity component sat atop more than $124.5 billion in borrowings, for total assets of more than $129 billion. These borrowings were merely the tip of the iceberg; LTCM also held off-balance-sheet derivative positions with a notional value of more than $1.25 trillion.
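
A back-of-the-envelope illustration of why that leverage was essential (the 1.5% net spread below is a hypothetical figure, chosen only so that the arithmetic lands near the roughly 40% annual returns mentioned later in the text):

```latex
% Leverage implied by the figures above:
\frac{\text{total assets}}{\text{equity}} \approx \frac{\$129\,\text{bn}}{\$4.72\,\text{bn}} \approx 27.
% Ignoring funding costs, a net arbitrage spread s earned on assets A
% translates into a return on equity E of roughly
\text{ROE} \approx s \cdot \frac{A}{E},
% so a hypothetical net spread of 1.5\% becomes
0.015 \times 27 \approx 40\% \ \text{on equity}.
```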


The fund’s success began to pose its own problems. The market lacked sufficient capacity to absorb LTCM’s bloated size, as trades that had been profitable initially became impossible to conduct on a massive scale. Moreover, a flood of arbitrage imitators tightened the spreads on LTCM’s “bread-and-butter” trades even further. The pressure to continue delivering returns forced LTCM to find new arbitrage opportunities, and the fund diversified into areas where it could not pair its theoretical insights with trading experience. Soon LTCM had made large bets in Russia and in other emerging markets, on S&P futures, and in yield curve, junk bond, merger, and dual-listed securities arbitrage.

Combined with its style drift, the fund’s leverage of more than 26 to 1 put LTCM in an increasingly precarious bubble, which was eventually burst by a combination of factors that forced the fund into a liquidity crisis. In contrast to Scholes’s comments about plucking invisible, riskless nickels from the sky, financial theorist Nassim Taleb later compared the fund’s aggressive risk taking to “picking up pennies in front of a steamroller,” a steamroller that finally arrived in the form of 1998’s market panic. The departure of frequent LTCM counterparty Salomon Brothers from the arbitrage market that summer put downward pressure on many of the fund’s positions, and Russia’s default on its government-issued bonds threw international credit markets into a downward spiral. Panicked investors around the globe demonstrated a “flight to quality,” selling the risky securities in which LTCM traded and buying U.S. Treasury securities, driving Treasury prices still higher and preventing the price convergence on which the fund had bet so heavily.

None of LTCM’s sophisticated theoretical models had contemplated such an internationally correlated credit market collapse, and the fund began hemorrhaging money, losing nearly 20% of its equity in May and June alone. Day after day, every market in which LTCM traded turned against it. Its powerless brain trust watched in horror as its equity shrank to $600 million in early September without any reduction in borrowing, resulting in an unfathomable leverage ratio of roughly 200 to 1. Sensing the fund’s liquidity crunch, Bear Stearns refused to continue acting as a clearinghouse for the fund’s trades, throwing LTCM into a panic. Without the short-term credit that enabled its entire trading operation, the fund could not continue, and its longer-term securities grew more illiquid by the day.

Obstinate in their refusal to unwind what they still considered profitable trades hammered by short-term market irrationality, LTCM’s partners refused a buyout offer of $250 million from Goldman Sachs, AIG, and Warren Buffett’s Berkshire Hathaway. However, LTCM’s role as a counterparty in thousands of derivatives trades that touched investment firms around the world threatened to provoke a wider collapse in international securities markets if the fund went under, so the U.S. Federal Reserve stepped in to maintain order. Wishing to avoid the precedent of a government bailout of a hedge fund and the moral hazard it could subsequently encourage, the Fed invited every major investment bank on Wall Street to an emergency meeting in New York and dictated the terms of the $3.625 billion bailout that would preserve market liquidity. The Fed convinced Bankers Trust, Barclays, Chase, Credit Suisse First Boston, Deutsche Bank, Goldman Sachs, Merrill Lynch, J.P. Morgan, Morgan Stanley, Salomon Smith Barney, and UBS – many of whom were investors in the fund – to contribute $300 million apiece, with $125 million coming from Société Générale and $100 million each from Lehman Brothers and Paribas. Eventually the market crisis passed, and each bank managed to liquidate its position at a slight profit. Only one bank contacted by the Fed refused to join the syndicate and share the burden in the name of preserving market integrity.

That bank was Bear Stearns.

Bear’s dominant trading position in bonds and derivatives had won it the profitable business of acting as a settlement house for nearly all of LTCM’s trading in those markets. On September 22, 1998, just days before the Fed-organized bailout, Bear put the final nail in the LTCM coffin by calling in a short-term debt in the amount of $500 million in an attempt to limit its own exposure to the failing hedge fund, rendering it insolvent in the process. Ever the maverick in investment banking circles, Bear stubbornly refused to contribute to the eventual buyout, even in the face of a potentially apocalyptic market crash and despite the millions in profits it had earned as LTCM’s prime broker. In typical Bear fashion, James Cayne ignored the howls from other banks that failure to preserve confidence in the markets through a bailout would bring them all down in flames, famously growling through a chewed cigar as the Fed solicited contributions for the emergency financing, “Don’t go alphabetically if you want this to work.”

Market analysts were nearly unanimous in describing the lessons learned from LTCM’s implosion: in effect, the fund’s extreme leverage had placed it in such a precarious position that it could not wait for its positions to turn profitable. While its trades were sound in principle, LTCM’s predicted price convergence was not realized until long after its equity had been wiped out completely. A less leveraged firm, they explained, might have realized lower profits than the 40% annual return LTCM had offered investors up until the 1998 crisis, but could have weathered the storm once the market turned against it. In the words attributed to economist John Maynard Keynes, the market had remained irrational longer than LTCM could remain solvent. The crisis further illustrated the importance not merely of liquidity but of perception in the less regulated derivatives markets. Once LTCM’s ability to meet its obligations was called into question, its demise became inevitable, as it could no longer find counterparties with whom to trade and from whom it could borrow to continue operating.

The thornier question of the Fed’s role in bailing out an overly aggressive investment fund in the name of market stability remained unresolved, despite the Fed’s insistence on private funding for the actual buyout. Though impossible to foresee at the time, the issue would be revisited anew less than ten years later, and it would haunt Bear Stearns. With LTCM’s collapse behind it, along with the negative publicity from its $38.5 million settlement with the SEC over charges that it had ignored fraudulent behavior by a client for whom it cleared trades, Bear Stearns continued to grow under Cayne’s leadership, its stock price appreciating some 600% from his assumption of control in 1993 until 2008. However, a rapid-fire sequence of negative events began to unfold in the summer of 2007 that would push Bear into a liquidity crunch eerily similar to the one that felled LTCM.

Regulating the Velocities of Dark Pools. Thought of the Day 72.0


On 22 September 2010, SEC chair Mary Schapiro signaled that US authorities were considering the introduction of regulations targeted at high-frequency trading (HFT):

…High frequency trading firms have a tremendous capacity to affect the stability and integrity of the equity markets. Currently, however, high frequency trading firms are subject to very little in the way of obligations either to protect that stability by promoting reasonable price continuity in tough times, or to refrain from exacerbating price volatility.

However, regulating an industry striving to move at the speed of light is no ordinary administrative task: “Modern finance is undergoing a fundamental transformation. Artificial intelligence, mathematical models, and supercomputers have replaced human intelligence, human deliberation, and human execution…. Modern finance is becoming cyborg finance – an industry that is faster, larger, more complex, more global, more interconnected, and less human.” C. W. Lin proposes a number of principles for regulating this cyborg finance industry:

  1. Update the antiquated paradigms of reasonable investors and compartmentalised institutions, confront the emerging institutional realities, and recognise that the old paradigms of market governance may be ill-suited to the new finance industry;
  2. Enhance disclosure which recognises the complexity and technological capacities of the new finance industry;
  3. Adopt regulations to moderate the velocities of finance, realising that as these approach the speed of light they may carry more risks than rewards for the new financial industry (one possible mechanism is sketched after this list);
  4. Introduce smarter coordination harmonising financial regulation beyond traditional spaces of jurisdiction.
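
One concrete mechanism sometimes proposed for moderating those velocities is a minimum resting time for orders, a kind of ‘speed bump’. The sketch below is purely illustrative: the class names, order fields and the 500-millisecond threshold are hypothetical assumptions, not drawn from any actual rulebook or exchange API; it simply rejects cancellations that arrive before an order has rested long enough.

```python
from dataclasses import dataclass, field
import time

MIN_RESTING_TIME = 0.5  # seconds; hypothetical threshold chosen for illustration

@dataclass
class Order:
    order_id: str
    placed_at: float = field(default_factory=time.monotonic)

class RestingTimeRule:
    """Reject cancellations that arrive before an order has rested long enough."""

    def __init__(self, min_resting_time: float = MIN_RESTING_TIME):
        self.min_resting_time = min_resting_time
        self.book: dict[str, Order] = {}

    def place(self, order: Order) -> None:
        self.book[order.order_id] = order

    def cancel(self, order_id: str) -> bool:
        order = self.book.get(order_id)
        if order is None:
            return False
        age = time.monotonic() - order.placed_at
        if age < self.min_resting_time:
            return False          # cancellation arrived too fast: rejected
        del self.book[order_id]
        return True               # order rested long enough: cancel accepted

rule = RestingTimeRule()
rule.place(Order("A1"))
print("immediate cancel accepted?", rule.cancel("A1"))   # False
time.sleep(0.6)
print("later cancel accepted?   ", rule.cancel("A1"))    # True
```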

Electronic markets will require international coordination, surveillance and regulation. The high-frequency trading environment has the potential to generate errors and losses at a speed and magnitude far greater than that in a floor or screen-based trading environment… Moreover, issues related to risk management of these technology-dependent trading systems are numerous and complex and cannot be addressed in isolation within domestic financial markets. For example, placing limits on high-frequency algorithmic trading or restricting un-filtered sponsored access and co-location within one jurisdiction might only drive trading firms to another jurisdiction where controls are less stringent.

In these regulatory endeavours it will be vital to remember that not all innovation is intrinsically good, and some of it may be inherently dangerous; the objective is to make a more efficient and equitable financial system, not simply a faster one: despite its fast computers and credit derivatives, the current financial system does not seem better at transferring funds from savers to borrowers than the financial system of 1910. Furthermore, as Thomas Piketty’s Capital in the Twenty-First Century amply demonstrates, any thought of a democratisation of finance induced by the huge expansion of superannuation funds, together with the increased access to finance afforded by credit cards and ATMs, is something of a fantasy, since levels of structural inequality have endured through these technological transformations. The tragedy is that under the guise of technological advance and sophistication we could be destroying the capacity of financial markets to fulfil their essential purpose, as Haldane eloquently states:

An efficient capital market transfers savings today into investment tomorrow and growth the day after. In that way, it boosts welfare. Short-termism in capital markets could interrupt this transfer. If promised returns the day after tomorrow fail to induce saving today, there will be no investment tomorrow. If so, long-term growth and welfare would be the casualty.