Credit Bubbles. Thought of the Day 90.0


At the macro-economic level of the gross statistics of money and loan supply to the economy, the reserve banking system creates a complex interplay between money, debt, supply and demand for goods, and the general price level. Rather than being constant, as implied by theoretical descriptions, money and loan supplies are constantly changing at a rate dependent on the average loan period, and a complex of details buried in the implementation and regulation of any given banking system.

Since the majority of loans are made for years at a time, the results of these interactions play out over a long enough time scale that gross monetary features of regulatory failure, such as continuous asset price inflation, have come to be regarded as normal, e.g. “House prices always go up”. The price level, however, is not only dependent on purely monetary factors, but also on the supply and demand for goods and services, including financial assets such as shares, which requires that estimates of the real price level versus production be used. As a simplification, if constant demand for goods and services is assumed, then there are two possible causes of price inflation: either the money supply available to purchase the good in question has increased, or the supply of the good has been reduced. Critically, the former is simply a mathematical effect, whilst the latter is a useful signal, providing economic information on relative supply and demand levels that can be used locally by consumers and producers to adapt their behaviour. Purely arbitrary changes in both the money and the loan supply that are induced by the mechanical operation of the banking system fail to provide any economic benefit, and by distorting the actual supply and demand signal can be actively harmful.


Credit bubbles are often explained as a phenomenon of irrational demand and crowd behaviour. However, this explanation ignores the question of why they are not throttled by limits on the loan supply. An alternative explanation is that their root cause lies in periodic failures in the regulation of the loan and money supply within the commercial banking system. The introduction of widespread securitized lending allows a rapid increase in the total amount of lending available from the banking system, and an accompanying, if somewhat smaller, growth in the money supply. Channeled predominantly into property lending, the increased availability of money from lending sources acted to increase house prices, creating rational speculation on their increase and, over time, a sizeable disruption in the market pricing mechanisms for all goods and services purchased through loans. Monetary statistics of this effect, such as the Consumer Price Index (CPI), are however at least partially masked by deflation arising from the sizeable productivity increases of recent decades. Absent any limit on the total amount of credit being supplied, the only practical limit on borrowing is the availability of borrowers and their ability to sustain the capital and interest repayments demanded for their loans.

Owing to the asymmetric nature of long-term debt flows, there is a tendency for money to become concentrated in the lending centres, which then causes liquidity problems for the rest of the economy. Eventually repayment problems surface, especially if the practice of further borrowing to repay existing loans is allowed, since the underlying mathematical process is exponential. As general insolvency, and a consequent debt deflation, take hold, the money and loan supply contracts as the banking system removes loan capacity from the economy, either through loan repayment or as a result of bank failure. This leads to a domino effect as businesses that have become dependent on continuously rolling over debt fail and trigger further defaults. Monetary expansion and further lending are also constrained by the absence of qualified borrowers, and by the general unwillingness to either lend or borrow that results from the ensuing economic collapse. Further complications, as described by Ben Bernanke and Harold James, can occur when interactions between currencies are considered, in particular in conjunction with gold-based capital regulation, because of the difficulty of establishing and maintaining the correct ratio of gold for each individual currency in a system where lending and the associated money supply are continually fluctuating and gold is also being used at a national level for international debt repayments.
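
To make the point about exponential growth concrete, here is a minimal sketch of a debt serviced only by further borrowing; the 10% rate and 10-year horizon are illustrative assumptions, not figures from the text.

```python
# Minimal sketch: debt that is serviced by further borrowing compounds geometrically.
# The 10% rate and 10-year horizon are illustrative assumptions, not figures from the text.

def rolled_over_debt(principal, rate, years):
    """Debt outstanding if interest is never paid down but refinanced each year."""
    balance = principal
    path = [balance]
    for _ in range(years):
        balance *= (1 + rate)   # unpaid interest is capitalised into new borrowing
        path.append(balance)
    return path

if __name__ == "__main__":
    for year, debt in enumerate(rolled_over_debt(100.0, 0.10, 10)):
        print(f"year {year:2d}: debt = {debt:7.2f}")
    # The balance follows principal * (1 + rate) ** year, i.e. exponential growth,
    # which is why borrowing to repay existing loans cannot be sustained indefinitely.
```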

The debt-to-money imbalance created by the widespread, and global, sale of asset-backed securities may be unique to this particular crisis. Although asset-backed security issuance dropped considerably in 2008, as the resale markets temporarily froze, current stated policy in several countries, including the USA and the United Kingdom, is to encourage further securitization to assist the recovery of the banking sector. Unfortunately, this appears to be succeeding.


Monetary Policy Divergence: Cross Currency + FX Swaps — Negative Basis. Thought of the Day 83.0


Cross currency swaps and FX swaps encompass structures which allow investors to raise funds in a particular currency, e.g. the dollar, from other funding currencies such as the euro. For example, an institution which has dollar funding needs can raise euros in euro funding markets and convert the proceeds into dollar funding obligations via an FX swap. The only difference between cross currency swaps and FX swaps is that the former involve the exchange of floating rates during the contract term. Since a cross currency swap involves the exchange of two floating rates, the two legs of the swap should be valued at par and thus the basis should theoretically be zero. But in periods when perceptions about credit risk or supply and demand imbalances in funding markets make the demand for one currency (e.g. the dollar) high relative to another currency (e.g. the euro), the basis can turn negative, as a substantial premium is needed to convince an investor to exchange dollars against a foreign currency: to enter a swap where he receives USD Libor flat, an investor will want to pay Euribor minus a spread (because the basis is negative).
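
As a rough illustration of how the negative basis shows up, the sketch below backs out an FX-swap-implied dollar rate from covered interest parity and compares it with a dollar money-market rate; all the spot, forward and rate levels are assumed for illustration only and are not market data.

```python
# A hedged sketch of how a negative cross-currency basis can be read off an FX swap.
# All rates, spot and forward levels below are illustrative assumptions, not market data.

def fx_swap_implied_usd_rate(spot, forward, eur_rate, tenor_years=1.0):
    """Dollar rate implied by raising EUR, swapping to USD at spot and back at the forward."""
    # Covered interest parity: forward / spot = (1 + r_usd * T) / (1 + r_eur * T)
    return (forward / spot * (1 + eur_rate * tenor_years) - 1) / tenor_years

if __name__ == "__main__":
    spot = 1.10          # USD per EUR (assumed)
    forward = 1.118      # 1y forward, richer than CIP-fair when dollars are in demand (assumed)
    euribor = 0.00       # assumed 1y EUR money-market rate
    usd_libor = 0.012    # assumed 1y USD money-market rate

    implied = fx_swap_implied_usd_rate(spot, forward, euribor)
    basis = usd_libor - implied   # negative when swapping into dollars costs more than Libor
    print(f"FX-swap-implied USD rate: {implied:.4%}")
    print(f"Cross-currency basis (approx.): {basis * 1e4:.1f} bp")
```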

Both cross currency and FX swaps are subject to counterparty and credit risk to a much greater degree than interest rate swaps, due to the exchange of notional amounts. As such, the pricing of these contracts is affected by perceptions about the creditworthiness of the banking system. The Japanese banking crisis of the 1990s caused a structurally negative basis in USD/JPY cross currency swaps. Similarly, the European debt crisis of 2010-2012 was associated with a sustained period of very negative basis in USD/EUR cross currency swaps.

What had caused these dollar funding shortages? Financial globalization meant that Japanese banks had accumulated a large amount of dollar assets during the 1980s and 1990s. Similarly, European banks accumulated a large amount of dollar assets during the 2000s, creating structural US dollar funding needs. The Japanese banking crisis of the 1990s made Japanese banks less creditworthy in dollar funding markets, and they had to pay a premium to convert yen funding into dollar funding. Similarly, the euro debt crisis created a banking crisis that made euro area banks less worthy from a counterparty/credit risk point of view in dollar funding markets. As dollar funding markets, including FX swap markets, dried up, these funding needs took the form of an acute dollar shortage.

What then is causing the negative basis currently? The answer is monetary policy divergence. The ECB's and BoJ's QE, coupled with a chorus of rate cuts across DM and EM central banks, has created an imbalance between supply and demand across funding markets. Funding conditions have become a lot easier outside the US, with QE-driven liquidity injections and rate cuts raising the supply of euro and other currency funding relative to dollar funding. This divergence has manifested itself as one-sided order flow in cross currency swap markets, causing a decline in the basis.

A Monetary Drain due to Excess Liquidity. Why is the RBI Playing Along?


And so we thought demonetization was not a success. Let me begin with the Socratic irony of assuming that it was indeed a success, albeit not in arresting black money for sure. Yes, the tax net has widened, and the cruelty of smashing the informal sector to smithereens so that it could be replaced by a formal economy, more in the manner of sucking the former into the latter, has been achieved. As far as terror funding is concerned, it is anybody's guess, so let them be with their imaginations. What none can deny is the surge in deposits and liquidity in the wake of demonetization. But what one has been consciously, or through an ideologically-driven standpoint, denying is the fact that demonetization, clubbed with the governmental red carpet for foreign direct investment, has been an utter failure in attracting money into the country. And the reason attributed for this has been a dip in the economy as a result of the idiosyncratic decision of November 8, added to the conjuring acts of mathematics and statistics in tweaking base years to let go of the reality behind a depleting GDP and project the country as the fastest growing emerging economy in the world. The irony I started off with is defeated here, for none of the claims that the government propaganda machine churns out on the assembly line are in fact anywhere near the truth. But that's what propaganda is supposed to do, else why even call it that, or even call it successful governance, and so on and on (sorry for the Žižekian interjections here).

Assuming the irony still has traces and isn't vanquished, it is time to move on and look into the effects of what calls for a financial reality-check. Abruptly going vertically through the tiers here, it has recently been talked about in the corridors of financial power that the Reserve Bank of India (RBI) is all set to drain close to Rs 1.5 lakh crore in excess liquidity from the financial system, as surging foreign investments force the central bank to absorb the dollar inflows and sell rupees to cap gains in the local currency. This is really interesting, for the narrative or the discourse is again symptomatic of what the government wants us to believe, and so believe we shall, or shall we? After this brief stopover, chugging off again… Foreign investments into debt and shares have reached a net $31 billion this year, compared with $2.7 billion in sales last year, due to factors including India's low inflation and improving economic growth. This is not merely a leap, but a leap of faith, in this case numerically. Yes, India is suffering from low inflation, but it isn't deflation; rather, it is disinflation. There is a method to this maddening reason, if one needs to counter what gets prime-time economic news in the media or passes as Chinese whispers amongst activists hell-bent on proving the futility of the governmental narrative. There is nothing wrong in the procedure as long as this hell-bent-ness is cooked in proper proportions of reason.

But why call it disinflation and not deflation? A sharp drop in inflation below the Reserve Bank of India's (RBI's) 4% target has been driven by only two items: pulses and vegetables. The consumer price index (CPI), excluding pulses and vegetables, rose at the rate of 3.8% in July, much higher than the official headline figure of 2.4% inflation for the month. The re-calculated CPI is based on adjusted weights after excluding pulses and vegetables from the basket of goods and services. The two farm items have a combined weight of only 8.4% in the CPI basket. However, they have wielded a disproportionate influence over the headline inflation number for more than a year now, owing to the sharp volatility in their prices. So, how does it all add up? Prices of pulses and vegetables have fallen significantly this year owing to increased supply amid a normal monsoon last year, as noted by the Economic Survey. The high prices of pulses in the year before, and the government's promises of more effective procurement, may have encouraged farmers to produce more last year, resulting in a glut. Demonetisation may have added to farmers' woes by turning farm markets into buyers' markets. Thus, there does not seem to be any imminent threat of deflation in India. A more apt characterization of the recent trends in prices may be 'disinflation' (a fall in the inflation rate) rather than deflation (falling prices), given that overall inflation, excluding pulses and vegetables, is close to the RBI target of 4%. On the topicality of improving economic growth in the country, this is the bone of contention, either weakening or otherwise, depending on how the marrow is keyed up.
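
The re-weighting argument above can be checked with back-of-the-envelope arithmetic. Only the 8.4% weight, the 3.8% ex-group rate and the 2.4% headline rate come from the text; the decomposition is the standard weighted-average identity, and the implied figure for pulses and vegetables is a derived illustration, not an official statistic.

```python
# A back-of-the-envelope sketch: with pulses and vegetables (combined CPI weight ~8.4%)
# stripped out, ex-group inflation of 3.8% and headline inflation of 2.4% together imply
# deep deflation in the excluded items. Only those three figures come from the text.

def implied_group_inflation(headline, ex_group, group_weight):
    """Headline = (1 - w) * ex_group + w * group  =>  solve for the group's inflation rate."""
    return (headline - (1 - group_weight) * ex_group) / group_weight

if __name__ == "__main__":
    w = 0.084            # combined CPI weight of pulses and vegetables
    headline = 2.4       # official headline CPI inflation, % y/y
    ex_group = 3.8       # CPI inflation excluding pulses and vegetables, % y/y
    group = implied_group_inflation(headline, ex_group, w)
    print(f"Implied pulses-and-vegetables inflation: {group:.1f}% y/y")
    # Roughly -13% y/y: a small-weight basket in outright deflation is enough to pull the
    # headline below the RBI's 4% target while the rest of the basket runs close to it.
```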

Moving on… The strong inflows have sent the rupee up nearly 7 per cent against the dollar and forced the RBI to buy more than $10 billion in the spot market and $10 billion in forwards this year, which has meant an equivalent infusion of rupees. Those rupee sales have added liquidity to a financial system already flush with cash after the ban on higher-denomination currency in November sparked a surge in bank deposits. Average daily liquidity has risen to around Rs 3 lakh crore, well above the RBI's goal of around Rs 1 lakh crore, according to traders. That will force the RBI to step up debt sales to remove liquidity and avoid any inflationary impact. Traders estimate the RBI will need to drain Rs 1 lakh crore to Rs 1.4 lakh crore ($15.7 billion to $22 billion) after taking into account factors, such as festival-related consumer spending, that naturally reduce cash in the system. How the RBI drains the cash will thus become an impact factor for bond traders, who have benefitted from a rally in debt markets.

The RBI has already drained about Rs 1 lakh crore via one-year bills under a special market stabilisation scheme (MSS), as well as Rs 30,000 crore in longer debt through open market sales. MSS (Market Stabilisation Scheme) securities are issued with the objective of providing the RBI with a stock of securities with which it can intervene in the market to manage liquidity. These securities are not issued to meet the government's expenditure. The MSS scheme was launched in April 2004 to strengthen the RBI's ability to conduct exchange rate and monetary management. The bills/bonds issued under MSS have all the attributes of existing treasury bills and dated securities. They are issued by way of auctions conducted by the RBI, which decides the timing, amount and tenure of such issuance. The securities issued under the MSS scheme are matched by an equivalent cash balance held by the government with the RBI. As a result, their issuance has a negligible impact on the fiscal deficit of the government. It is hoped that the procedure will continue, noting that staggered sales of bills, combined with daily reverse repo operations and some long-end sales, would be easily absorbable by markets. The most disruptive approach would be stepping up open market sales, which tend to focus on longer-dated debt; that could send yields higher and blunt the impact of the central bank's 25 basis point rate cut in August. The RBI does not provide a timetable of its special debt sales for the year, and if the RBI drains the cash largely through MSS bonds then markets won't be too heavily impacted.

This brings us to the close in proving the success story of demonetization a false beacon: with a surge in liquidity, the impact on the market would be negligible if MSS issuance is resorted to, culminating in establishing the fact that demonetization, clubbed with red-carpeted FDI, has had absolutely no nexus with the influx of dollars, and thus any propaganda presenting this as a success story of demonetization is to be seen as purely rhetorical. QED.
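
As a quick sanity check, the rupee and dollar amounts quoted above are consistent at an exchange rate of roughly 64 INR/USD; the exchange rate here is an assumption used only for the arithmetic, not a figure from the text.

```python
# A small arithmetic check of the drain estimates quoted above. The Rs 1 to 1.4 lakh crore
# range comes from the text; the INR/USD rate of ~64 is an assumption used only to show
# that the rupee figures and the $15.7-22 billion figures are consistent.

LAKH_CRORE = 10**12   # 1 lakh crore = 100,000 * 10,000,000 = 1 trillion rupees

def lakh_crore_to_usd_bn(amount_lakh_crore, inr_per_usd):
    return amount_lakh_crore * LAKH_CRORE / inr_per_usd / 1e9

if __name__ == "__main__":
    inr_per_usd = 64.0   # assumed exchange rate
    for amount in (1.0, 1.4):
        print(f"Rs {amount} lakh crore ~ ${lakh_crore_to_usd_bn(amount, inr_per_usd):.1f} bn")
    # Prints roughly $15.6 bn and $21.9 bn, matching the $15.7-22 bn range in the text.
```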

Momentum of Accelerated Capital. Note Quote.


Distinct types of high frequency trading firms include independent proprietary firms, which use private funds and specific strategies that remain secretive, and which may act as market makers generating automatic buy and sell orders continuously throughout the day. Broker-dealer proprietary desks are part of traditional broker-dealer firms but are not related to their client business, and are operated by the largest investment banks. Thirdly, hedge funds focus on complex statistical arbitrage, taking advantage of pricing inefficiencies between asset classes and securities.

Today strategies using algorithmic trading and High Frequency Trading play a central role on financial exchanges, alternative markets, and banks' internalized (over-the-counter) dealings:

High frequency traders typically act in a proprietary capacity, making use of a number of strategies and generating a very large number of trades every single day. They leverage technology and algorithms from end-to-end of the investment chain – from market data analysis and the operation of a specific trading strategy to the generation, routing, and execution of orders and trades. What differentiates HFT from algorithmic trading is the high frequency turnover of positions as well as its implicit reliance on ultra-low latency connection and speed of the system.

The use of algorithms in computerised exchange trading has experienced a long evolution with the increasing digitalisation of exchanges:

Over time, algorithms have continuously evolved: while initial first-generation algorithms – fairly simple in their goals and logic – were pure trade execution algos, second-generation algorithms – strategy implementation algos – have become much more sophisticated and are typically used to produce own trading signals which are then executed by trade execution algos. Third-generation algorithms include intelligent logic that learns from market activity and adjusts the trading strategy of the order based on what the algorithm perceives is happening in the market. HFT is not a strategy per se, but rather a technologically more advanced method of implementing particular trading strategies. The objective of HFT strategies is to seek to benefit from market liquidity imbalances or other short-term pricing inefficiencies.
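
A toy sketch may help fix the distinction between a "strategy implementation" algo and a "trade execution" algo; the moving-average signal, the TWAP-style slicing and all the numbers below are illustrative assumptions, not descriptions of any real production system.

```python
# A toy sketch of the distinction drawn above, under assumed data and parameters:
# a "strategy implementation" algo produces a trading signal, and a plain
# "trade execution" algo then slices the resulting order into equal child orders (TWAP-style).
# Neither is a real production algorithm; both are illustrative only.

from statistics import mean

def signal_algo(prices, fast=3, slow=6):
    """Second-generation style: emit +1 (buy) / -1 (sell) / 0 from a moving-average crossover."""
    if len(prices) < slow:
        return 0
    fast_ma = mean(prices[-fast:])
    slow_ma = mean(prices[-slow:])
    return 1 if fast_ma > slow_ma else -1 if fast_ma < slow_ma else 0

def execution_algo(parent_size, slices):
    """First-generation style: split a parent order into equal child orders."""
    child = parent_size // slices
    orders = [child] * slices
    orders[-1] += parent_size - child * slices   # put any remainder in the last slice
    return orders

if __name__ == "__main__":
    prices = [100, 100.2, 100.1, 100.4, 100.6, 100.9, 101.2]   # assumed tick data
    side = signal_algo(prices)
    if side != 0:
        for i, qty in enumerate(execution_algo(10_000, 5), 1):
            print(f"child order {i}: {'BUY' if side > 0 else 'SELL'} {qty}")
```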

While algorithms are employed by most traders in contemporary markets, the intense focus on speed and momentary holding periods are practices unique to high frequency traders. The defence of high frequency trading is built around the principles that it increases liquidity, narrows spreads, and improves market efficiency: the high number of trades made by HFT traders results in greater liquidity in the market; algorithmic trading has resulted in the prices of securities being updated more quickly, with more competitive bid-ask prices and narrowing spreads; and, finally, HFT enables prices to reflect information more quickly and accurately, ensuring accurate pricing at smaller time intervals. But there are critical differences between high frequency traders and traditional market makers:

  1. HFT do not have an affirmative market making obligation, that is, they are not obliged to provide liquidity by constantly displaying two-sided quotes, which may translate into a lack of liquidity during volatile conditions.
  2. HFT contribute little market depth due to the marginal size of their quotes, which may result in larger orders having to transact with many small orders, and this may impact on overall transaction costs.
  3. HFT quotes are barely accessible due to the extremely short duration for which the liquidity is available when orders are cancelled within milliseconds.

Beyond the shallowness of the HFT contribution to liquidity, there are real fears about how HFT can compound and magnify risk through the rapidity of its actions:

There is evidence that high-frequency algorithmic trading also has some positive benefits for investors by narrowing spreads – the difference between the price at which a buyer is willing to purchase a financial instrument and the price at which a seller is willing to sell it – and by increasing liquidity at each decimal point. However, a major issue for regulators and policymakers is the extent to which high-frequency trading, unfiltered sponsored access, and co-location amplify risks, including systemic risk, by increasing the speed at which trading errors or fraudulent trades can occur.

Although there have always been occasional trading errors and episodic volatility spikes in markets, the speed, automation and interconnectedness of today's markets create a different scale of risk. These risks demand that exchanges and market participants employ effective quality management systems and sophisticated risk mitigation controls adapted to these new dynamics, in order to protect against potential threats to market stability arising from technology malfunctions or episodic illiquidity. However, there are more deliberate aspects of HFT strategies which may present serious problems for market structure and functioning, and where conduct may be illegal. For example, order anticipation seeks to ascertain the existence of large buyers or sellers in the marketplace and then to trade ahead of those buyers and sellers in anticipation that their large orders will move market prices. A momentum strategy involves initiating a series of orders and trades in an attempt to ignite a rapid price move. HFT strategies can resemble traditional forms of market manipulation that violate the Exchange Act:

  1. Spoofing and layering occur when traders create a false appearance of market activity by entering multiple non-bona fide orders on one side of the market, at increasing or decreasing prices, in order to induce others to buy or sell the stock at a price altered by the bogus orders.
  2. Painting the tape involves placing successive small buy orders at increasing prices in order to stimulate increased demand.

  3. Quote stuffing and price fade are additional dubious HFT practices: quote stuffing floods the market with huge numbers of orders and cancellations in rapid succession, which may generate buying or selling interest or compromise the trading position of other market participants; order or price fade involves the rapid cancellation of orders in response to other trades (a crude cancel-to-trade screen for this kind of behaviour is sketched after this list).
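
The cancellation-heavy behaviour described above can be screened, very crudely, from an order event log; the event format, the threshold and the data below are all assumptions made for illustration, not a regulatory definition of quote stuffing.

```python
# A crude screen for the kind of behaviour described above, assuming a log of order events
# of the form (participant, action) with action in {"new", "cancel", "trade"}. The threshold
# is arbitrary; this is an illustrative sketch, not a regulatory definition of quote stuffing.

from collections import Counter, defaultdict

def cancel_to_trade_ratios(events):
    """Return {participant: (cancels, trades, ratio)} from an event log."""
    counts = defaultdict(Counter)
    for participant, action in events:
        counts[participant][action] += 1
    out = {}
    for p, c in counts.items():
        trades = max(c["trade"], 1)            # avoid division by zero
        out[p] = (c["cancel"], c["trade"], c["cancel"] / trades)
    return out

if __name__ == "__main__":
    log = [("A", "new"), ("A", "cancel")] * 500 + [("A", "trade")] * 2 \
        + [("B", "new"), ("B", "trade")] * 50                      # assumed event log
    for p, (cancels, trades, ratio) in cancel_to_trade_ratios(log).items():
        flag = "suspicious" if ratio > 100 else "ok"               # arbitrary threshold
        print(f"{p}: {cancels} cancels / {trades} trades -> ratio {ratio:.0f} ({flag})")
```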

The World Federation of Exchanges insists: "Exchanges are committed to protecting market stability and promoting orderly markets, and understand that a robust and resilient risk control framework adapted to today's high speed markets is a cornerstone of enhancing investor confidence." However, this 'robust and resilient risk control framework' seems lacking, including in the dark pools now established for trading, which were initially proposed as safer than the open market.

Financial Entanglement and Complexity Theory. An Adumbration on Financial Crisis.


The complex system approach in finance could be described through the concept of entanglement. The concept of entanglement bears the same features as the definition of a complex system given by a group of physicists working in the field of finance (Stanley et al.). As they defined it, in a complex system everything depends upon everything else. Just as in a complex system, the notion of entanglement is a statement acknowledging the interdependence of all the counterparties in financial markets, including financial and non-financial corporations, the government and the central bank. How can entanglement be identified empirically? Stanley et al. formulated the process of scientific study in finance as a search for patterns. Such a search, going on under the auspices of "econophysics", could exemplify a thorough analysis of a complex and unstructured assemblage of actual data, finalized in the discovery and experimental validation of an appropriate pattern. On the other side of the spectrum, some patterns underlying the actual processes might be discovered by synthesizing a vast amount of historical and anecdotal information through appropriate reasoning and logical deliberation. The Austrian School of Economic Thought, which in its extreme form rejects the application of any formalized systems, or modeling of any kind, could be viewed as an example. A logical question follows out of this comparison: does there exist any intermediate way of searching for regular patterns in finance and economics?

Importantly, patterns could be discovered by developing rather simple models of money and debt interrelationships. Debt cycles have been studied extensively by many schools of economic thought (Akerlof, George A. and Shiller, Robert J. – Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism). The modern financial system worked by spreading risk, promoting economic efficiency and providing cheap capital. It had been formed over the years as bull markets in shares and bonds originated in the early 1990s. These markets were propelled by an abundance of money, falling interest rates and new information technology. Financial markets, by combining debt and derivatives, could originate and distribute huge quantities of risky structured products and sell them to different investors. Meanwhile, financial sector debt, only a tenth of the size of non-financial-sector debt in 1980, became half as big by the beginning of the credit crunch in 2007. As liquidity grew, banks could buy more assets, borrow more against them, and watch their value rise. By 2007 financial services were making 40% of America's corporate profits while employing only 5% of its private sector workers. Thanks to cheap money, banks could take on more debt and, by designing complex structured products, make their investments more profitable and more risky. Securitization, facilitating the emergence of the "shadow banking" system, simultaneously foments bubbles in different segments of the global financial market.

Yet over the past decade this system, or a big part of it, began to lose touch with its ultimate purpose: to reallocate deficit resources in accordance with social priorities. Instead of writing, managing and trading claims on future cashflows for the rest of the economy, finance became increasingly a game for fees and speculation. Due to disastrously lax regulation, investment banks did not lay aside enough capital in case something went wrong, and, as the crisis began in the middle of 2007, credit markets started to freeze up. Qualitatively, after the spectacular Lehman Brothers disaster in September 2008, laminar flows of financial activity came to an end. Banks began to suffer losses on their holdings of toxic securities and were reluctant to lend to one another, which led to funding shortages across the system. Strains had already surfaced in late 2007, when Northern Rock, a British mortgage lender, experienced a bank run that started in the money markets. All of a sudden, liquidity was in short supply, debt was unwound, and investors were forced to sell and write down assets. For several years, up to now, market counterparties have no longer trusted each other. As Walter Bagehot, an authority on bank runs, once wrote:

Every banker knows that if he has to prove that he is worthy of credit, however good may be his arguments, in fact his credit is gone.

In an entangled financial system, this axiom should be stretched out to the whole market; and that means, precisely, financial meltdown, or crisis. The most fascinating feature of the post-crisis era in financial markets has been the continuation of a ubiquitous liquidity expansion. To fight the market squeeze, all the major central banks greatly expanded their balance sheets. The latter rose, roughly, from about 10 percent to 25-30 percent of GDP for the respective economies. For several years after the credit crunch of 2007-09, central banks bought trillions of dollars of toxic and government debt, thus increasing money issuance without any precedent in modern history. Paradoxically, this enormous credit expansion, though accelerating for several years, has been accompanied by a stagnating and depressed real economy. Yet, until now, central bankers have been worried mainly about downside risks and threats of price deflation. Otherwise, the hectic financial activity going on alongside unbounded credit expansion could be transformed by herding into an autocatalytic process which, if subject to the accumulation of new debt, might drive the entire system to total collapse. From a financial point of view, this systemic collapse appears to be a natural result of unbounded credit expansion 'supported' by zero real resources. Since the wealth of investors, as a whole, becomes nothing but fool's gold, the financial process becomes a singular one, and the entire system collapses. In particular, three phases of investors' behavior – hedge finance, speculation, and the Ponzi game – can be identified as a sequence of sub-cycles that unwind ultimately in total collapse.
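
The three financing postures named above (usually associated with Minsky) can be pinned down with a small classification sketch; the cash-flow and debt-service numbers are assumptions for illustration only.

```python
# A minimal sketch of the three financing postures mentioned above (Minsky's hedge,
# speculative and Ponzi positions), classified by whether operating cash flow covers
# debt service. The numbers in the example are assumptions for illustration only.

def financing_posture(cash_flow, interest_due, principal_due):
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # income services both interest and principal
    if cash_flow >= interest_due:
        return "speculative"  # income covers interest; principal must be rolled over
    return "Ponzi"            # even interest must be met by new borrowing or asset sales

if __name__ == "__main__":
    borrowers = {
        "firm A": (120, 40, 60),   # (cash flow, interest, principal) - assumed
        "firm B": (50, 40, 60),
        "firm C": (30, 40, 60),
    }
    for name, (cf, i, p) in borrowers.items():
        print(f"{name}: {financing_posture(cf, i, p)}")
```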

The Sibyl’s Prophecy/Nordic Creation. Note Quote.



The Prophecy of the Tenth Sibyl, a medieval best-seller, surviving in over 100 manuscripts from the 11th to the 16th century, predicts, among other things, the reign of evil despots, the return of the Antichrist and the sun turning to blood.

The Tenth or Tiburtine Sibyl was a pagan prophetess perhaps of Etruscan origin. To quote Lactantius in his general account of the ten sibyls in the introduction, 'The Tiburtine Sibyl, by name Albunea, is worshiped at Tibur as a goddess, near the banks of the Anio in which stream her image is said to have been found, holding a book in her hand'.

The work interprets the Sibyl’s dream in which she foresees the downfall and apocalyptic end of the world; 9 suns appear in the sky, each one more ugly and bloodstained than the last, representing the 9 generations of mankind and ending with Judgment Day. The original Greek version dates from the end of the 4th century and the earliest surviving manuscript in Latin is dated 1047. The Tiburtine Sibyl is often depicted with Emperor Augustus, who asks her if he should be worshipped as a god.

The foremost lay of the Elder Edda is called Voluspa (The Sibyl’s Prophecy). The volva, or sibyl, represents the indelible imprint of the past, wherein lie the seeds of the future. Odin, the Allfather, consults this record to learn of the beginning, life, and end of the world. In her response, she addresses Odin as a plurality of “holy beings,” indicating the omnipresence of the divine principle in all forms of life. This also hints at the growth of awareness gained by all living, learning entities during their evolutionary pilgrimage through spheres of existence.

Hear me, all ye holy beings, greater as lesser sons of Heimdal! You wish me to tell of Allfather’s works, tales of the origin, the oldest I know. Giants I remember, born in the foretime, they who long ago nurtured me. Nine worlds I remember, nine trees of life, before this world tree grew from the ground.

Paraphrased, this could be rendered as:

Learn, all ye living entities, imbued with the divine essence of Odin, ye more and less evolved sons of the solar divinity (Heimdal) who stands as guardian between the manifest worlds of the solar system and the realm of divine consciousness. You wish to learn of what has gone before. I am the record of long ages past (giants), that imprinted their experience on me. I remember nine periods of manifestation that preceded the present system of worlds.

Time being inextricably a phenomenon of manifestation, the giant ages refer to the matter-side of creation. Giants represent ages of such vast duration that, although their extent in space and time is limited, it is of a scope that can only be illustrated as gigantic. Smaller cycles within the greater are referred to in the Norse myths as daughters of their father-giant. Heimdal is the solar deity in the sign of Aries – of beginnings for our system – whose “sons” inhabit, in fact compose, his domain.

Before a new manifestation of a world, whether a cosmos or a lesser system, all its matter is frozen in a state of immobility, referred to in the Edda as a frost giant. The gods – consciousnesses – are withdrawn into their supernal, unimaginable abstraction of Nonbeing, called in Sanskrit “paranirvana.” Without a divine activating principle, space itself – the great container – is a purely theoretical abstraction where, for lack of any organizing energic impulse of consciousness, matter cannot exist.

This was the origin of ages when Ymer built. No soil was there, no sea, no cool waves. Earth was not, nor heaven above; Gaping Void alone, no growth. Until the sons of Bur raised the tables; they who created beautiful Midgard. The sun shone southerly on the stones of the court; then grew green herbs in fertile soil.

To paraphrase again:

Before time began, the frost giant (Ymer) prevailed. No elements existed for there were ‘no waves,’ no motion, hence no organized form nor any temporal events, until the creative divine forces emanated from Space (Bur — a principle, not a locality) and organized latent protosubstance into the celestial bodies (tables at which the gods feast on the mead of life-experience). Among these tables is Middle Court (Midgard), our own beautiful planet. The life-giving sun sheds its radiant energies to activate into life all the kingdoms of nature which compose it.

The Gaping Void (Ginnungagap) holds “no cool waves” throughout illimitable depths during the age of the frost giant. Substance has yet to be created. Utter wavelessness negates it, for all matter is the effect of organized, undulating motion. As the cosmic hour strikes for a new manifestation, the ice of Home of Nebulosity (Niflhem) is melted by the heat from Home of Fire (Muspellshem), resulting in vapor in the void. This is Ymer, protosubstance as yet unformed, the nebulae from which will evolve the matter components of a new universe, as the vital heat of the gods melts and vivifies the formless immobile “ice.”

When the great age of Ymer has run its course, the cow Audhumla, symbol of fertility, “licking the salt from the ice blocks,” uncovers the head of Buri, first divine principle. From this infinite, primal source emanates Bur, whose “sons” are the creative trinity: Divine Allfather, Will, and Sanctity (Odin, Vile, and Vi). This triune power “kills” the frost giant by transforming it into First Sound (Orgalmer), or keynote, whose overtones vibrate through the planes of sleeping space and organize latent protosubstance into the multifarious forms which will be used by all “holy beings” as vehicles for gaining experience in worlds of matter.

Beautiful Midgard, our physical globe earth, is but one of the “tables” raised by the creative trinity, whereat the gods shall feast. The name Middle Court is suggestive, for the ancient traditions place our globe in a central position in the series of spheres that comprise the terrestrial being’s totality. All living entities, man included, comprise besides the visible body a number of principles and characteristics not cognized by the gross physical senses. In the Lay of Grimner (Grimnismal), wherein Odin in the guise of a tormented prisoner on earth instructs a human disciple, he enumerates twelve spheres or worlds, all but one of which are unseen by our organs of sight. As to the formation of Midgard, he relates:

Of Ymer’s flesh was the earth formed, the billows of his blood, the mountains of his bones, bushes of his hair, and of his brainpan heaven. With his eyebrows beneficent powers enclosed Midgard for man; but of his brain were surely all dark skies created.

The trinity of immanent powers organize Ymer into the forms wherein they dwell, shaping the chaos or frost giant into living globes on many planes of being. The “eyebrows” that gird the earth and protect it suggest the Van Allen belts that shield the planet from inimical radiation. The brain of Ymer – material thinking – is surely all too evident in the thought atmosphere wherein man participates.

The formation of the physical globe is described as the creation of "dwarfs" – elemental forces which shape the body of the earth-being and which include the mineral, vegetable, and animal kingdoms.

The mighty drew to their judgment seats, all holy gods to hold counsel: who should create a host of dwarfs from the blood of Brimer and the limbs of the dead. Modsogne there was, mightiest of all the dwarfs, Durin the next; there were created many humanoid dwarfs from the earth, as Durin said.

Brimer is the slain Ymer, a kenning for the waters of space. Modsogne is the Force-sucker, Durin the Sleeper, and later comes Dvalin the Entranced. They are “dwarf”-consciousnesses, beings that are miðr than human – the Icelandic miðr meaning both “smaller” and “less.” By selecting the former meaning, popular concepts have come to regard them as undersized mannikins, rather than as less evolved natural species that have not yet reached the human condition of intelligence and self-consciousness.

During the life period or manifestation of a universe, the governing giant or age is named Sound of Thor (Trudgalmer), the vital force which sustains activity throughout the cycle of existence. At the end of this age the worlds become Sound of Fruition (Bargalmer). This giant is “placed on a boat-keel and saved,” or “ground on the mill.” Either version suggests the karmic end product as the seed of future manifestation, which remains dormant throughout the ensuing frost giant of universal dissolution, when cosmic matter is ground into a formless condition of wavelessness, dissolved in the waters of space.

There is an inescapable duality of gods-giants in all phases of manifestation: gods seek experience in worlds of substance and feast on the mead at stellar and planetary tables; giants, formed into vehicles inspired with the divine impetus, rise through cycles of this association on the ladder of conscious awareness. All states being relative and bipolar, there is in endless evolution an inescapable link between the subjective and objective progress of beings. Odin as the “Opener” is paired with Orgalmer, the keynote on which a cosmos is constructed; Odin as the “Closer” is equally linked with Bargalmer, the fruitage of a life cycle. During the manifesting universe, Odin-Allfather corresponds to Trudgalmer, the sustainer of life.

A creative trinity plays an analogical part in the appearance of humanity. Odin remains the all-permeant divine essence, while on this level his brother-creators are named Honer and Lodur, divine counterparts of water or liquidity, and fire or vital heat and motion. They “find by the shore, of little power” the Ash and the Elm and infuse into these earth-beings their respective characteristics, making a human image or reflection of themselves. These protohumans, miniatures of the world tree, the cosmic Ash, Yggdrasil, in addition to their earth-born qualities of growth force and substance, receive the divine attributes of the gods. By Odin man is endowed with spirit, from Honer comes his mind, while Lodur gives him will and godlike form. The essentially human qualities are thus potentially divine. Man is capable of blending with the earth, whose substances form his body, yet is able to encompass in his consciousness the vision native to his divine source. He is in fact a minor world tree, part of the universal tree of life, Yggdrasil.

Ygg in conjunction with other words has been variously translated as Eternal, Awesome or Terrible, and Old. Sometimes Odin is named Yggjung, meaning the Ever-Young, or Old-Young. Like the biblical “Ancient of Days” it is a concept that mind can grasp only in the wake of intuition. Yggdrasil is the “steed” or the “gallows” of Ygg, whereon Odin is mounted or crucified during any period of manifested life. The world tree is rooted in Nonbeing and ramifies through the planes of space, its branches adorned with globes wherein the gods imbody. The sibyl spoke of ours as the tenth in a series of such world trees, and Odin confirms this in The Song of the High One (Den Hoges Sang):

I know that I hung in the windtorn tree nine whole nights, spear-pierced, given to Odin, my self to my Self above me in the tree, whose root none knows whence it sprang. None brought me bread, none served me drink; I searched the depths, spied runes of wisdom, raised them with song, and fell once more from the tree. Nine powerful songs I learned from the wise son of Boltorn, Bestla’s father; a draught I drank of precious mead ladled from Odrorer. I began to grow, to grow wise, to grow greater and enjoy; for me words from words led to new words, for me deeds from deeds led to new deeds.

Numerous ancient tales relate the divine sacrifice and crucifixion of the Silent Watcher whose realm or protectorate is a world in manifestation. Each tree of life, of whatever scope, constitutes the cross whereon the compassionate deity inherent in that hierarchy remains transfixed for the duration of the cycle of life in matter. The pattern of repeated imbodiments for the purpose of gaining the precious mead is clear, as also the karmic law of cause and effect as words and deeds bring their results in new words and deeds.

Yggdrasil is said to have three roots. One extends into the land of the frost giants, whence flow twelve rivers of lives or twelve classes of beings; another springs from and is watered by the well of Origin (Urd), where the three Norns, or fates, spin the threads of destiny for all lives. “One is named Origin, the second Becoming. These two fashion the third, named Debt.” They represent the inescapable law of cause and effect. Though they have usually been roughly translated as Past, Present, and Future, the dynamic concept in the Edda is more complete and philosophically exact. The third root of the world tree reaches to the well of the “wise giant Mimer,” owner of the well of wisdom. Mimer represents material existence and supplies the wisdom gained from experience of life. Odin forfeited one eye for the privilege of partaking of these waters of life, hence he is represented in manifestation as one-eyed and named Half-Blind. Mimer, the matter-counterpart, at the same time receives partial access to divine vision.

The lays make it very clear that the purpose of existence is for the consciousness-aspect of all beings to gain wisdom through life, while inspiring the substantial side of itself to growth in inward awareness and spirituality. At the human level, self-consciousness and will are aroused, making it possible for man to progress willingly and purposefully toward his divine potential, aided by the gods who have passed that way before him, rather than to drift by slow degrees and many detours along the road of inevitable evolution. Odin’s instructions to a disciple, Loddfafner, the dwarf-nature in man, conclude with:

Now is sung the High One’s song in the High One’s hall. Useful to sons of men, useless to sons of giants. Hail Him who sang! Hail him who kens! Rejoice they who understand! Happy they who heed!

Extreme Value Theory


Standard estimators of the dependence between assets are the correlation coefficient or Spearman's rank correlation, for instance. However, as stressed by [Embrechts et al.], these kinds of dependence measures suffer from many deficiencies. Moreover, their values are mostly controlled by relatively small moves of the asset prices around their mean. To cure this problem, it has been proposed to use correlation coefficients conditioned on large movements of the assets. But [Boyer et al.] have emphasized that this approach also suffers from a severe systematic bias leading to spurious strategies: the conditional correlation in general evolves with time even when the true non-conditional correlation remains constant. In fact, [Malevergne and Sornette] have shown that any approach based on conditional dependence measures implies a spurious change of the intrinsic value of the dependence, measured for instance by copulas. Recall that the copula of several random variables is the (unique) function which completely embodies the dependence between these variables, irrespective of their marginal behavior (see [Nelsen] for a mathematical description of the notion of copula).

In view of these limitations of the standard statistical tools, it is natural to turn to extreme value theory. In the univariate case, extreme value theory is very useful and provides many tools for investigating the extreme tails of distributions of asset returns. These developments rest on a few fundamental results on extremes, such as the Gnedenko-Pickands-Balkema-de Haan theorem, which gives a general expression for the distribution of exceedances over a large threshold. In this framework, the study of large and extreme co-movements requires multivariate extreme value theory, which unfortunately does not provide strong results: in contrast with the univariate case, the class of limiting extreme-value distributions is too broad and cannot be used to constrain accurately the distribution of large co-movements.
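
The univariate peaks-over-threshold recipe implied by that theorem can be sketched directly; the heavy-tailed "returns" below are simulated Student-t draws rather than real data, and numpy/scipy are assumed to be available.

```python
# A hedged sketch of the univariate peaks-over-threshold idea referred to above: exceedances
# over a high threshold are approximately generalized Pareto (Pickands-Balkema-de Haan).
# The Student-t "returns" below are simulated, not real data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = stats.t.rvs(df=3, size=50_000, random_state=rng)   # heavy-tailed synthetic returns
losses = -returns                                             # work with the loss tail

threshold = np.quantile(losses, 0.95)                         # high threshold
exceedances = losses[losses > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0)
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)
print(f"threshold={threshold:.3f}, GPD shape (xi)={shape:.3f}, scale={scale:.3f}")
# For Student-t with 3 degrees of freedom the tail index is 3, so xi should come out near 1/3.
```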

In the spirit of the mean-variance portfolio or of utility theory which establish an investment decision on a unique risk measure, we use the coefficient of tail dependence, which, to our knowledge, was first introduced in the financial context by [Embrechts et al.]. The coefficient of tail dependence between assets Xi and Xj is a very natural and easy-to-understand measure of extreme co-movements. It is defined as the probability that the asset Xi incurs a large loss (or gain), assuming that the asset Xj also undergoes a large loss (or gain) at the same probability level, in the limit where this probability level explores the extreme tails of the distribution of returns of the two assets. Mathematically speaking, the coefficient of lower tail dependence between the two assets Xi and Xj, denoted by λ−ij, is defined by

λ−ij = lim_{u→0} Pr{Xi < Fi−1(u) | Xj < Fj−1(u)} —– (1)

where Fi−1(u) and Fj−1(u) represent the quantiles of assets Xi and Xj at level u. Similarly, the coefficient of upper tail dependence is

λ+ij = lim_{u→1} Pr{Xi > Fi−1(u) | Xj > Fj−1(u)} —– (2)

λ−ij and λ+ij are of concern to investors with long (respectively short) positions. We refer to [Coles et al.] and references therein for a survey of the properties of the coefficient of tail dependence. Let us stress that the use of quantiles in the definition of λ−ij and λ+ij makes them independent of the marginal distribution of the asset returns: as a consequence, the tail dependence parameters are intrinsic dependence measures. The obvious gain is an “orthogonal” decomposition of the risks into (1) individual risks carried by the marginal distribution of each asset and (2) their collective risk described by their dependence structure or copula.

Being a probability, the coefficient of tail dependence varies between 0 and 1. A large value of λ−ij means that large losses almost surely occur together. In that case, large risks cannot be diversified away and the assets crash together. This investor and portfolio manager nightmare is further amplified in real life by the limited liquidity of markets. When λ−ij vanishes, the assets are said to be asymptotically independent, but this term hides the subtlety that the assets can still present a non-zero dependence in their tails. For instance, two normally distributed assets can be shown to have a vanishing coefficient of tail dependence. Nevertheless, unless their correlation coefficient is identically zero, these assets are never independent. Thus, asymptotic independence must be understood as the weakest dependence which can be quantified by the coefficient of tail dependence.
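
The Gaussian example can be checked numerically: a crude Monte Carlo estimate of the conditional probability in definition (1) shrinks as the quantile level becomes more extreme, even for strongly correlated normal variables. The correlation, sample size and quantile levels below are arbitrary choices for illustration.

```python
# A quick Monte Carlo illustration of the point above: for a bivariate Gaussian, the empirical
# analogue of the conditional probability in definition (1) shrinks as the quantile level u
# becomes more extreme, even with a correlation as high as 0.8. Purely synthetic data.

import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.8, 2_000_000
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

for u in (0.05, 0.01, 0.001):
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    cond = np.mean(x[y < qy] < qx)    # empirical Pr{X < F_X^-1(u) | Y < F_Y^-1(u)}
    print(f"u={u:>6}: empirical lower-tail dependence ~ {cond:.3f}")
# The estimates decay toward 0 as u -> 0: Gaussian pairs are asymptotically independent
# in the tails even though they are far from independent overall.
```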

For practical implementations, a direct application of definitions (1) and (2) fails to provide reasonable estimations due to the double curse of dimensionality and undersampling of extreme values, so that a fully non-parametric approach is not reliable. It turns out to be possible to circumvent this fundamental difficulty by considering the general class of factor models, which are among the most widespread and versatile models in finance. They come in two classes: multiplicative and additive factor models. The multiplicative factor models are generally used to model asset fluctuations due to an underlying stochastic volatility. The additive factor models are made to relate asset fluctuations to market fluctuations, as in the Capital Asset Pricing Model (CAPM) and its generalizations, or to any set of common factors as in Arbitrage Pricing Theory. The coefficient of tail dependence is known in closed form for both classes of factor models, which allows for an efficient empirical estimation.

Causation in Financial Markets. Note Quote.


The algorithmic top-down view of causation in financial markets is essentially a deterministic, dynamical systems view. This can serve as an interpretation of financial markets whereby markets are understood through asset prices, representing information in the market, which can be described by a dynamical system model. This is the ideal encapsulated in the Laplacian vision:

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence.

Here the boundary and initial conditions of the variables uniquely determine the outcome for the effective dynamics at the level in the hierarchy where it is being applied. This implies that higher levels in the hierarchy can drive broad macro-economic behavior; for example, at the highest level there could exist some set of differential equations that describe the behavior of adjustable quantities, such as interest rates, and how they impact measurable quantities such as gross domestic product and aggregate consumption.

The literature on the Lucas critique addresses limitations of this approach. Nevertheless, from a completely ad hoc perspective, a dynamical systems model may offer a best approximation to relationships at a particular level in a complex hierarchy.

Predictors: This system actor views causation in terms of uniquely determined outcomes, based on known boundary and initial conditions. Predictors may be successful when mechanistic dependencies in economic realities become pervasive or dominant. An example of a predictive-based argument since the Global Financial Crisis (2007-2009) is the bipolar Risk-On/Risk-Off description of preferences, whereby investors shift to higher-risk portfolios when the global assessment of riskiness is established to be low and shift to low-risk portfolios when global riskiness is considered to be high. Mathematically, a simple approximation of the dynamics can be described by a Lotka-Volterra (or predator-prey) model, which in economics offers a way to model the dynamics of various industries by introducing trophic functions between sectors, ignoring smaller sectors and considering the interactions of only two industrial sectors. The excess liquidity due to quantitative easing and the prevalence and ease of trading in exchange traded funds and currencies, combined with low interest rates and the increased use of automation, provided a basis for the risk-on/risk-off analogy for analysing large capital flows in the global arena. In an Ising-Potts hierarchy, top-down causation is filtered down to the rest of the market through all the shared risk factors and the top-down information variables, which dominate bottom-up information variables. At higher levels, bottom-up variables are effectively noise terms. Nevertheless, the behaviour of traders at lower levels can still become driven by correlations across assets, based on perceived global riskiness. Thus, risk-on/risk-off transitions can have amplified effects.
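
A minimal sketch of the Lotka-Volterra analogy follows; the coefficients, the Euler discretisation and the labelling of the two populations as "risk-on" and "risk-off" capital are all assumptions made purely for illustration.

```python
# A minimal sketch of the Lotka-Volterra analogy mentioned above, integrating the classic
# predator-prey equations with a forward-Euler step. The coefficients and the mapping of
# x, y to "risk-on" and "risk-off" capital are assumptions purely for illustration.

def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt=0.01, steps=5000):
    """dx/dt = alpha*x - beta*x*y ; dy/dt = delta*x*y - gamma*y"""
    x, y = x0, y0
    path = [(x, y)]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

if __name__ == "__main__":
    path = lotka_volterra(x0=10.0, y0=5.0, alpha=1.0, beta=0.1, delta=0.05, gamma=0.5)
    for i in range(0, len(path), 1000):
        x, y = path[i]
        print(f"t={i * 0.01:5.1f}: risk-on ~ {x:6.2f}, risk-off ~ {y:6.2f}")
    # The two quantities cycle out of phase, a crude analogue of capital rotating between
    # higher-risk and lower-risk portfolios as global risk perceptions shift.
```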

How Permanent Income Hypothesis/Buffer Stock Model of Milton Friedman Got Nailed?

Milton Friedman and his gang at Chicago, including the 'boys' who went back and put their 'free market' wrecking ball through Chile under the butcher Pinochet, have really left a mess of confusion and lies behind in the hallowed halls of the academy, which in the 1970s seeped out, like slime, into the central banks and the treasury departments of the world. The overall intent of the literature they developed was to force governments to abandon so-called fiscal activism (the discretionary use of government spending and taxation policy to fine-tune total spending so as to achieve full employment), and, instead, to empower central banks to disregard mass unemployment and fight inflation first. Wow, Billy, these aren't the usual contretemps and are wittily vitriolic. Several strands of their work were woven together to form an anti-government phalanx: the Monetarist claim that aggregate policy should be reduced to a focus on the central bank controlling the money supply to control inflation (the market would deliver the rest – high employment, economic growth, etc.); the promotion of a 'natural rate of unemployment', such that governments who tried to reduce the unemployment rate would only accelerate inflation; and the so-called Permanent Income Hypothesis (households ignore short-term movements in income when determining consumption spending), among others. Later, absurd notions such as rational expectations and real business cycles were added to the litany of Monetarist myths, which indoctrinated graduate students (who became policy makers) even further in the cause. Over time, this damaging legacy has been eroded by researchers and empirical facts, but like all tight Groupthink communities the inner sanctum remain faithful, and so the research findings haven't permeated into major shifts in the academy. It will come – but these paradigm shifts take time.

Recently, another piece of Milton's legacy bit the dust, thanks to a couple of Harvard economists, Peter Ganong and Pascal Noel, whose paper "How does unemployment affect consumer spending?" smashed to smithereens the idea that households do not exercise discretion over consumption in response to short-term changes in income – the claim the Chicagoan deployed as a pivot against active fiscal policy. Time travel back to John Maynard Keynes, who outlined in his 1936 The General Theory of Employment, Interest and Money the view that household consumption depends on disposable income, and that in times of economic downturn the government can stimulate employment and income growth using fiscal policy, which would boost consumption.

In Chapter 3 The Principle of Effective Demand, Keynes wrote:

When employment increases, aggregate real income is increased. The psychology of the community is such that when aggregate real income is increased aggregate consumption is increased, but not by so much as income …

The relationship between the community’s income and what it can be expected to spend on consumption, designated by D1, will depend on the psychological characteristic of the community, which we shall call its propensity to consume. That is to say, consumption will depend on the level of aggregate income and, therefore, on the level of employment N, except when there is some change in the propensity to consume.

Keynes later (in Chapter 6 The Definition of Income, Saving and Investment) considered factors that might influence the decision to consume and talked about “how much windfall gain or loss he is making on capital account”.

He elaborated further in Chapter 8 The Propensity to Consume … and wrote:

The amount that the community spends on consumption obviously depends (i) partly on the amount of its income, (ii) partly on the other objective attendant circumstances, and (iii) partly on the subjective needs and the psychological propensities and habits of the individuals composing it and the principles on which the income is divided between them (which may suffer modification as output is increased).

And concluded that:

1. A change in the real wage (and hence in real income at a given level of employment) will cause consumption to “change in the same proportion”.

2. A rise in the difference between income and net income will influence consumption spending.

3. “Windfall changes in capital-values not allowed for in calculating net income. These are of much more importance in modifying the propensity to consume, since they will bear no stable or regular relationship to the amount of income.” So, wealth changes will impact positively on consumption (up and down).

Later, as he was reflecting in Chapter 24 on the “Social Philosophy towards which the General Theory might lead” he wrote:

… therefore, the enlargement of the functions of government, involved in the task of adjusting to one another the propensity to consume and the inducement to invest, would seem to a nineteenth-century publicist or to a contemporary American financier to be a terrific encroachment on individualism, I defend it, on the contrary, both as the only practicable means of avoiding the destruction of existing economic forms in their entirety and as the condition of the successful functioning of individual initiative.

For if effective demand is deficient, not only is the public scandal of wasted resources intolerable, but the individual enterpriser who seeks to bring these resources into action is operating with the odds loaded against him …

It was thus clear that active fiscal policy was the “only practicable means of avoiding the destruction” wrought by recessions brought about by shifts in consumption and/or investment. That view dominated macroeconomics for several decades.

Then in 1957, Milton Friedman proposed the Permanent Income Hypothesis. Its central idea is simple: people base consumption on what they consider their “normal” income. In doing this, they attempt to maintain a fairly constant standard of living even though their incomes may vary considerably from month to month or from year to year. As a result, increases and decreases in income that people see as temporary have little effect on their consumption spending. Consumption, on this view, depends on what people expect to earn over a considerable period of time. As in the life-cycle hypothesis, people smooth out fluctuations in income, saving during periods of unusually high income and dissaving during periods of unusually low income. Thus, a pre-med student should have a higher level of consumption than a graduate student in history even if both have the same current income, because the pre-med student looks ahead to a much higher future income and consumes accordingly.

Both the permanent-income and life-cycle hypotheses loosen the relationship between consumption and income, so that an exogenous change in investment may not have a constant multiplier effect. This is most clearly seen in the permanent-income hypothesis, which suggests that people try to decide whether a change in income is temporary. If they judge that it is, it has only a small effect on their spending; only when they become convinced that it is permanent does consumption change by a sizeable amount. As with all economic theory, this describes no particular household, only what happens on average.

The life-cycle hypothesis introduced assets into the consumption function, and thereby gave a role to the stock market: a rise in stock prices increases wealth and should increase consumption, while a fall should reduce it. Hence financial markets matter for consumption as well as for investment. The permanent-income hypothesis, for its part, introduces lags into the consumption function. An increase in income does not immediately raise consumption spending by much, but over time it has a greater and greater effect. Behaviour that introduces a lag into the relationship between income and consumption generates the sort of momentum that business-cycle theories described: a change in spending changes income, but people adjust to it only slowly, and as they do, their extra spending changes income further, so an initial increase in spending has effects that take a long time to unfold completely. The existence of lags also makes government attempts to control the economy more difficult: a change of policy has its full effect only gradually, and by the time it does, the problem it was designed to attack may have disappeared. Finally, although the life-cycle and permanent-income hypotheses have greatly increased our understanding of consumption behaviour, the data do not always fit the theory as well as they should, which means these hypotheses do not provide a complete explanation of consumption behaviour.
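
To make the contrast concrete, here is a minimal numerical sketch (all figures are illustrative assumptions, not estimates from any study): a hand-to-mouth household spends a fixed fraction of its current income each month, while a permanent-income household spends the same fraction of its average expected income, so a dip it regards as temporary barely moves its spending.

```python
# Stylised comparison: consumption out of current income vs. out of "permanent" income.
# All numbers are illustrative assumptions, not estimates from any study.

monthly_income = [3000, 3000, 3000, 1000, 1000, 3000, 3000, 3000]  # temporary two-month drop
mpc = 0.8                                                          # assumed marginal propensity to consume

# Hand-to-mouth household: consumption tracks current income.
hand_to_mouth = [mpc * y for y in monthly_income]

# Permanent-income household: consumption tracks average expected ("normal") income,
# so a dip regarded as temporary barely moves its spending.
permanent_income = sum(monthly_income) / len(monthly_income)
smoothed = [mpc * permanent_income for _ in monthly_income]

for y, c_htm, c_pih in zip(monthly_income, hand_to_mouth, smoothed):
    print(f"income {y:5.0f}   hand-to-mouth spends {c_htm:6.0f}   PIH household spends {c_pih:6.0f}")
```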

The idea of a propensity to consume, formalised in textbooks as the marginal propensity to consume (MPC) – the extra consumption that follows from an extra dollar of disposable income – was thrown out by Friedman.

The MPC concept – that households consume only a proportion of each extra $1 in disposable income received – formed the basis of the expenditure multiplier. Accordingly, if government deficit spending of, say, $100 million was introduced into a recessed economy, firms would respond by increasing output and incomes by that same amount, $100 million. The extra incomes paid out ($100 million) would stimulate ‘induced consumption’ spending equal to the MPC times $100 million. If the MPC was, say, 0.80 (meaning 80 cents of each extra dollar received as disposable income would be spent), then the ‘second-round’ effect of the stimulus would be an additional $80 million in consumption spending (assuming that disposable and total income were the same – that is, assuming away the tax effect for simplicity). In turn, firms would respond and produce an additional $80 million in output and incomes, which would then create further induced consumption effects. Each additional increment would be smaller than the last, because the MPC of 0.80 means some of each extra dollar of disposable income leaks into saving. But the argument was that the higher the MPC, the greater the overall impact of the stimulus would be.
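
The arithmetic of that multiplier process can be sketched directly; the snippet below simply sums the geometric series of induced spending rounds, using the same illustrative figures of a $100 million injection and an MPC of 0.80.

```python
# Expenditure multiplier as a geometric series: each round of induced consumption
# equals the MPC times the previous round's extra income.

initial_injection = 100.0   # $ million of assumed deficit spending
mpc = 0.80                  # assumed marginal propensity to consume

rounds, total, increment = [], 0.0, initial_injection
while increment > 0.01:     # stop once further rounds are negligible
    rounds.append(increment)
    total += increment
    increment *= mpc        # 'second-round' and later induced spending

print("first rounds ($m):", [round(r, 1) for r in rounds[:4]])   # [100.0, 80.0, 64.0, 51.2]
print(f"summed effect ≈ {total:.0f}, closed form 100/(1-0.8) = {initial_injection / (1 - mpc):.0f}")
```

The closed form, injection divided by (1 − MPC), gives the familiar multiplier of 5: a total effect of $500 million from the $100 million injection.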

Instead, Friedman claimed that consumption was not driven by current income (or changes in it) but, rather, by expected permanent income. Permanent income is an unobservable concept driven by expectations. It also leads to the claim that households smooth their consumption over their lifetimes even though current incomes fluctuate. So when individuals face major declines in their current income – perhaps due to unemployment – they can borrow short-term to maintain the smooth pattern of spending and pay the credit back later, when their current income exceeds its expected average.

The idea led to a torrent of articles, mostly mathematical in character, trying to formalise the notion of permanent income. They were all the same – GIGO: garbage in, garbage out. An exercise in mathematical chess, albeit one in search of the wrong solution. But Friedman was not one to embrace interdependence. In the ‘free market’ tradition, all decision makers were rational, independent agents who sought to maximise their lifetime utility. Accordingly, they would borrow when young (to enjoy more consumption than their current income would permit) and save over their working lives to compensate when they were old and without incomes. Consumption was strictly determined by this notion of a lifetime income.

Only some major event that altered that projection would lead to changes in consumption.

The Permanent Income Hypothesis is still a core component of the major DSGE (Dynamic Stochastic General Equilibrium) macro models that central banks and other forecasting agencies deploy to make statements about the effectiveness of fiscal and monetary policy.

So it matters whether it is a valid theory or not. It is not just one of those academic contests that stoke or deflate egos but have very little consequence for the well-being of the people in general. The empirical world hasn’t been kind to Friedman across all his theories. But the Permanent Income Hypothesis, in particular, hasn’t done well in explaining the dynamics of consumption spending.

Getting back to the paper mentioned at the beginning: it deploys a rich dataset and argues to the point where Friedman’s Permanent Income Hypothesis is nailed into its coffin. If the Permanent Income Hypothesis were a good framework for understanding what happens to the consumption patterns of this cohort, we would expect a lot of smoothing and relatively stable consumption.

Individuals, according to Friedman, are meant to engage in “self-insurance” to insure against calamities like unemployment. The evidence is that they do not.

The researchers reject what they call the “buffer stock model” (which is a version of the permanent income hypothesis).

They find:

1. “Spending drops sharply at the onset of unemployment, and this drop is better explained by liquidity constraints than by a drop in permanent income or a drop in work-related expenses.”

2. “We find that spending on nondurable goods and services drops by $160 (6%) over the course of two months.”

3. “Consistent with liquidity constraints, we show that states with lower UI benefits have a larger drop in spending at onset.” In other words, the fiscal stimulus coming from the unemployment benefits attenuates the loss of earned income somewhat.

4. “As UI benefit exhaustion approaches, families who remain unemployed barely cut spending, but then cut spending by 11% in the month after benefits are exhausted.”

5. As it turns out, “When benefits are exhausted, the average family loses about $1,000 of monthly income … In the same month, spending drops by $260 (11%).”

6. They compare the “path of spending during unemployment in the data to three benchmark models and find that the buffer stock model fits better than a permanent income model or a hand-to-mouth model.”

The buffer stock model assumes that families smooth their consumption after an income shock by running down previously accumulated assets – “a key prediction of buffer stock models is that agents accumulate precautionary savings to self-insure against income risk”. (A toy version of this rule is sketched after the findings below.)

The researchers find that:

the buffer stock model has two major failures – it predicts substantially more asset holdings at onset and it predicts that spending should be much smoother at benefit exhaustion.

us_pih_study_2016

7. Finally, the researchers found “that families do relatively little self-insurance when unemployed as spending is quite sensitive to current monthly income.” Families “do not prepare for benefit exhaustion”.
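
To see what the buffer stock benchmark implies, here is a toy version of the rule mentioned above (a generic sketch of precautionary-saving behaviour, not the calibrated model in the paper; the spending level, buffer size and income path are all made-up illustrative numbers): the household keeps spending at its usual level by drawing down savings, and only cuts back once the buffer is exhausted.

```python
# Toy buffer-stock behaviour: the household tries to keep spending at a "normal"
# level, drawing down its precautionary savings when income falls, and only cuts
# spending once the buffer is gone. Purely illustrative, not the paper's model.

normal_spending = 2400.0    # assumed usual monthly spending ($)
initial_buffer = 3000.0     # assumed precautionary savings at the onset of unemployment ($)

def one_month(income: float, assets: float) -> tuple[float, float]:
    """Spend the normal amount if income plus assets allow it, otherwise spend what is there."""
    cash_on_hand = assets + income
    spending = min(normal_spending, cash_on_hand)
    return spending, cash_on_hand - spending

assets = initial_buffer
# Employed, then unemployed on benefits, then benefits exhausted (illustrative monthly incomes).
incomes = [2500, 2500, 1300, 1300, 1300, 300, 300]
for month, y in enumerate(incomes, start=1):
    spending, assets = one_month(y, assets)
    print(f"month {month}: income {y:5.0f}   spending {spending:7.0f}   remaining assets {assets:7.0f}")
```

In the data, by contrast, spending already drops sharply at the onset of unemployment, which is why the authors conclude that families hold far smaller buffers, and do far less self-insurance, than the model predicts.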

Benjamin Noys, Lyotard, Baudrillard and the liquidity grid of capitalism (Notes Quotes)

For Benjamin Noys, if, as Lyotard put it, “desire underlies capitalism too”, then the result is that ‘there are errant forces in the signs of capital. Not in its margins as its marginals, but dissimulated in its most essential exchanges.’ For Deleuze and Guattari, the problem of capitalism is not that it deterritorialises, but that it does not deterritorialise enough. It always runs up against its own immanent limit of deterritorialisation – the deterritorialisation of decoded flows of desire through the machine of the Oedipal grid. It is the figure of the schizophrenic, not to be confused with the empirical psychiatric disorder, which instantiates this radical immersion and the coming of a new porous and collective ‘subject’ of desire. The schizophrenic is the one who seeks out the very limit of capitalism: he is its inherent tendency brought to fulfilment. Contrary to Deleuze and Guattari’s faith in a subject who would incarnate a deterritorialisation in excess of capitalism, Lyotard’s Libidinal Economy denies any form of exteriority, insisting that capital itself is the unbinding of the most insane drives, which releases mutant intensities. The true form of capitalism is incarnated in the a-subjective figure of the libidinal band, a Möbius strip of freely circulating intensities with neither beginning nor end.

moebiusant-4

Baudrillard argues that the compulsion towards liquidity, flow and accelerated circulation is only the replica or mirror of capitalist circulation. His catastrophic strategy comprises a kind of negative acceleration, in which he seeks the point of immanent reversal that inhabits and destabilises capital. In Symbolic Exchange and Death, this is the death function, which cannot be programmed or localised. Against the law of value that determines market exchange, Baudrillard identifies this “death function” with the excessive and superior function of symbolic exchange, which is based on the extermination of value.