Banking Assets Depreciation, Insolvency and Liquidation: Why are Defaults so Contagious?


Interlinkages across the balance sheets of financial institutions may be modeled by a weighted directed graph G = (V, e) on the vertex set V = {1,…, n} = [n], whose elements represent financial institutions. The exposure matrix is given by e ∈ Rn×n, where the ijth entry e(i, j) represents the exposure (in monetary units) of institution i to institution j. The interbank assets of an institution i are given by

A(i) := ∑j e(i, j),

while ∑j e(j, i) represents the interbank liabilities of i. In addition to these interbank assets and liabilities, a bank may hold other assets and liabilities (such as deposits).

The net worth of the bank, given by its capital c(i), represents its capacity for absorbing losses while remaining solvent. The “capital ratio” of institution i (technically the ratio of capital to interbank assets rather than to total assets) is given by

γ(i) := c(i)/A(i)

An institution is insolvent if its net worth is negative or zero, in which case γ(i) is set to 0.

A financial network (e, γ) on the vertex set V = [n] is defined by

• a matrix of exposures {e(i, j)}1≤i,j≤n

• a set of capital ratios {γ(i)}1≤i≤n

In this network, the in-degree of a node i is given by

d−(i) := #{j∈V | e(j, i)>0},

which represents the number of nodes exposed to i, while its out-degree

d+(i) := #{j∈V | e(i, j)>0}

represents the number of institutions i is exposed to. The set of initially insolvent institutions is represented by

D0(e, γ) = {i ∈ V | γ(i) = 0}

In a network (e, γ) of counterparties, the default of one or several nodes may lead to the insolvency of other nodes, generating a cascade of defaults. Starting from the set of initially insolvent institutions D0(e, γ), which represents the fundamental defaults, the contagion process is defined as follows:

Denoting by R(j) the recovery rate on the assets of j at default, the default of j induces a loss equal to (1 − R(j))e(i, j) for its counterparty i. If this loss exceeds the capital of i, then i in turn becomes insolvent. From the formula for the capital ratio, we have c(i) = γ(i)A(i). The set of nodes which become insolvent due to their exposures to initial defaults is

D1(e, γ) = {i ∈ V | γ(i)A(i) < ∑j∈D0 (1 − R(j)) e(i, j)}

This procedure may be iterated to define the default cascade initiated by a set of initial defaults.

So, when would a default cascade happen? Consider a financial network (e, γ) on the vertex set V = [n], and let D0(e, γ) = {i ∈ V | γ(i) = 0} be the set of initially insolvent institutions. The increasing sequence (Dk(e, γ), k ≥ 1) of subsets of V defined by

Dk(e, γ) = {i ∈ V | γ(i)A(i) < ∑j∈Dk-1(e,γ) (1−R(j)) e(i, j)}

is called the default cascade initiated by D0(e, γ).

Thus Dk(e, γ) represents the set of institutions whose capital is insufficient to absorb losses due to defaults of institutions in Dk-1(e, γ).

Since the sequence (Dk(e, γ)) is increasing and contained in V, in a network of size n the cascade ends after at most n − 1 iterations. Hence, Dn-1(e, γ) represents the set of all nodes which become insolvent starting from the initial set of defaults D0(e, γ).

Consider a financial network (e, γ) on the vertex set V = [n]. The fraction of defaults in the network (e, γ) (initiated by D0(e, γ)) is given by

αn(e, γ) := |Dn-1(e, γ)|/n

The recovery rates R(i) may be exogenous or determined endogenously by redistributing the assets of a defaulted entity among its creditors, proportionally to the debt owed to them. The latter scenario is too optimistic, since in practice liquidation takes time and assets may depreciate in value due to fire sales during liquidation. When examining the short-term consequences of default, the most realistic assumption on recovery rates is zero: assets held with a defaulted counterparty are frozen until liquidation takes place, a process which can in practice take a long time to complete.
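The cascade above lends itself to a direct simulation. Below is a minimal sketch in Python/NumPy, not taken from the source: the names exposures, capital_ratios and recovery, as well as the toy three-bank network, are illustrative assumptions. It computes A(i), c(i) = γ(i)A(i), the initial default set D0, the iterated sets Dk and the final fraction of defaults αn.

```python
import numpy as np

def default_cascade(exposures, capital_ratios, recovery):
    """Iterate the default cascade D0 ⊆ D1 ⊆ … ⊆ D(n-1).

    exposures[i, j]   : e(i, j), exposure of institution i to institution j
    capital_ratios[i] : γ(i), capital over interbank assets (0 if insolvent)
    recovery[j]       : R(j), recovery rate on the assets of j at default
    Returns the final default set and the fraction of defaults αn.
    """
    n = exposures.shape[0]
    interbank_assets = exposures.sum(axis=1)        # A(i) = Σ_j e(i, j)
    capital = capital_ratios * interbank_assets     # c(i) = γ(i) A(i)
    defaulted = capital_ratios <= 0                 # D0: initially insolvent nodes
    for _ in range(n - 1):                          # at most n − 1 iterations
        # loss of i from defaulted counterparties: Σ_{j in Dk} (1 − R(j)) e(i, j)
        losses = exposures[:, defaulted] @ (1.0 - recovery[defaulted])
        new_defaulted = defaulted | (capital < losses)
        if np.array_equal(new_defaulted, defaulted):
            break                                   # cascade has stopped early
        defaulted = new_defaulted
    return defaulted, defaulted.sum() / n           # D(n-1) and αn

# Toy usage: three banks, bank 2 initially insolvent, zero recovery (short-term view).
e = np.array([[0.0, 2.0, 5.0],
              [1.0, 0.0, 4.0],
              [0.0, 0.0, 0.0]])
gamma = np.array([0.5, 0.3, 0.0])
R = np.zeros(3)
D, alpha = default_cascade(e, gamma, R)
print(D, alpha)   # bank 2's default wipes out banks 0 and 1 in the next round; αn = 1.0
```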

The Statistical Physics of Stock Markets. Thought of the Day 143.0


The externalist view argues that we can make sense of, and profit from, stock markets’ behavior, or at least a few crucial properties of it, by crunching numbers and looking for patterns and regularities in certain sets of data. The notion of data, hence, is a key element in such an understanding, and the quantitative side of the problem is prominent, even if this does not mean that qualitative analysis is ignored. The point here is that the outside view maintains that it provides a better understanding than the internalist view. To this end, it endorses a functional perspective on finance and stock markets in particular.

The basic idea of the externalist view is that there are general properties and behaviors of stock markets that can be detected and studied through a mathematical lens, and that they do not depend much on contextual or domain-specific factors. The point at stake here is that financial systems can be studied and approached at different scales, and it is virtually impossible to produce all the equations describing, at a micro level, all the objects of the system and their relations. So, in response, this view focuses on those properties that allow us to get an understanding of the behavior of the system at a global level without having to produce a detailed conceptual and mathematical account of its inner ‘machinery’. Hence the two roads: the first is to embrace an emergentist view of stock markets, that is, a specific metaphysical, ontological, and methodological thesis, while the second is to embrace a heuristic view, that is, the idea that the choice to focus on those properties that are tractable by mathematical models is a pure problem-solving option.

A typical example of the externalist approach is the one provided, for instance, by statistical physics. In describing collective behavior, this discipline neglects all the conceptual and mathematical intricacies deriving from a detailed account of the inner, individual, micro-level functioning of a system. Concepts such as stochastic dynamics, self-similarity, correlations (both short- and long-range), and scaling are tools to this end. Econophysics is a stock example in this sense: it employs methods taken from mathematics and mathematical physics in order to detect and forecast the driving forces of stock markets and their critical events, such as bubbles, crashes and their tipping points. In this respect, markets are not ‘dark boxes’: you can see their characteristics from the outside, or better, you can see specific dynamics that shape the trends of stock markets deeply and for a long time. Moreover, these dynamics are complex in the technical sense. This means that this class of behavior is such as to encompass timescales, ontology, types of agents, ecologies, regulations, laws, etc., and can be detected, even if not strictly predictable. We can focus on the stock markets as a whole, or on a few of their critical events, looking at the data of prices (or other indexes) and ignoring all the other details and factors, since they will be absorbed in these global dynamics. So this view offers a look at stock markets such that not only do they not appear as an unintelligible casino where wild gamblers face each other, but it also shows the reasons and the properties of a system that serves mostly as a means of fluid transactions that enable and ease the functioning of free markets.

Moreover, the study of complex systems theory and that of stock markets seem to offer mutual benefits. On one side, complex systems theory seems to offer a key to understanding and breaking through some of the most salient properties of stock markets. On the other side, stock markets seem to provide a ‘stress test’ of complexity theory. Didier Sornette expresses how the analogies between stock markets and phase transitions, statistical mechanics, nonlinear dynamics, and disordered systems mold the view from outside:

Take our personal life. We are not really interested in knowing in advance at what time we will go to a given store or drive to a highway. We are much more interested in forecasting the major bifurcations ahead of us, involving the few important things, like health, love, and work, that count for our happiness. Similarly, predicting the detailed evolution of complex systems has no real value, and the fact that we are taught that it is out of reach from a fundamental point of view does not exclude the more interesting possibility of predicting phases of evolutions of complex systems that really count, like the extreme events. It turns out that most complex systems in natural and social sciences do exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their posterior evolution. Such extreme events express more than anything else the underlying “forces” usually hidden by almost perfect balance and thus provide the potential for a better scientific understanding of complex systems.

Phase transitions, critical points, and extreme events seem to be so pervasive in stock markets that they are the crucial concepts to explain and, where possible, foresee. And complexity theory provides us with a fruitful reading key to understand their dynamics, namely their generation, growth and occurrence. Such a reading key proposes a clear-cut interpretation of them, which can again be explained by means of an analogy with physics, precisely with the unstable position of an object. Complexity theory suggests that critical or extreme events occurring at a large scale are the outcome of interactions occurring at smaller scales. In the case of stock markets, this means that, unlike many approaches that attempt to account for crashes by searching for ‘mechanisms’ that work at very short time scales, complexity theory indicates that crashes have causes that date back months or years before they occur. This reading suggests that it is the increasing, inner interaction between the agents inside the markets that builds up the unstable dynamics (typically the financial bubbles) that eventually end up with a critical event, the crash. But here the specific, final step that triggers the critical event, the collapse of prices, is not the key to its understanding: a crash occurs because the market is in an unstable phase and any small interference or event may trigger it. The bottom line: the trigger can be virtually any event external to the markets. The real cause of the crash is the market's overall unstable position; the proximate ‘cause’ is secondary and accidental. Or, in other words, a crash can be fundamentally endogenous in nature, whilst an exogenous, external shock is simply its occasional triggering factor. The instability is built up by a cooperative behavior among traders, who imitate each other (in this sense it is an endogenous process) and contribute to form and reinforce trends that converge up to a critical point.

The main advantage of this approach is that the system (the market) would anticipate the crash by releasing precursory fingerprints observable in the stock market prices: the market prices contain information on impending crashes and this implies that:

if the traders were to learn how to decipher and use this information, they would act on it and on the knowledge that others act on it; nevertheless, the crashes would still probably happen. Our results suggest a weaker form of the “weak efficient market hypothesis”, according to which the market prices contain, in addition to the information generally available to all, subtle information formed by the global market that most or all individual traders have not yet learned to decipher and use. Instead of the usual interpretation of the efficient market hypothesis in which traders extract and consciously incorporate (by their action) all information contained in the market prices, we propose that the market as a whole can exhibit “emergent” behavior not shared by any of its constituents.

In a nutshell, the critical events emerge in a self-organized and cooperative fashion as the macro result of the internal and micro interactions of the traders, their imitation and mirroring.

 

Being Mediatized: How 3 Realms and 8 Dimensions Explain ‘Being’ by Peter Blank.


Experience of Reflection: ‘Self itself is an empty word’
Leary – The neuroatomic winner: “In the province of the mind, what is believed true is true, or becomes true within limits to be learned by experience and experiment.” (Dr. John Lilly)

Media theory had noted the shoring up or even annihilation of the subject due to technologies that were used to reconfigure oneself and to see oneself as what one was: pictures, screens. Depersonalization was an often observed, reflective state of being that stood for the experience of anxiety due to watching a ‘movie of one’s own life’ or experiencing a malfunction or anomaly in one’s self-awareness.

To look at one’s scaffolded media identity meant in some ways to look at the redactionary product of an extreme introspective process. Questioning what one interpreted oneself to be doing in shaping one’s media identities enhanced endogenous viewpoints and experience, similar to focusing on what made a car move instead of deciding whether it should stay on the paved road or drive across a field. This enabled the individual to see the formation of identity from the ‘engine perspective’.

Experience of the Hyperreal: ‘I am (my own) God’
Leary – The metaprogramming winner: “I make my own coincidences, synchronicities, luck, and Destiny.”

Meta-analysis of distinctions – seeing a bird fly by, then seeing oneself seeing a bird fly by, then thinking the self that thought that – becomes routine in hyperreality. Media represent the opposite: a humongous distraction from Heidegger’s goal of the search for ‘Thinking’: capturing at present the most alarming of what occupies the mind. Hyperreal experiences could not be traced back to a person’s ‘real’ identities behind their aliases. The most questionable therefore related to dismantled privacy: a privacy that only existed because all aliases were constituting a false privacy realm. There was nothing personal about the conversations, no facts that led back to any person, no real change achieved, no political influence asserted.

From there it led to the difference between networked relations and other relations, call these other relations ‘single’ relations, or relations that remained solemnly silent. They were relations that could not be disclosed against their will because they were either too vague, absent, depressing, shifty, or dangerous to make the effort worthwhile to outsiders.

The privacy of hyperreal being became the ability to hide itself from being sensed by others through channels of information (sight, touch, hearing), but also to hide more private other selves, stored away in different, more private networks from others in more open social networks.

Choosing ‘true’ privacy, then, was throwing away distinctions one experienced between several identities. As identities were space, the meaning of time became the capacity for introspection. The hyperreal being’s overall identity to the inside as lived history attained an extra meaning – indeed: as alter- or hyper-ego. With Nietzsche, the physical body within its materiality occasioned a performance that subjected its own subjectivity. Then and only then could it become its own freedom.

With Foucault one could say that the body was not so much subjected but still there, functioning on its own premises. Therefore the sensory systems lived the body’s life in connection with (not separated from) a language based in a mediated faraway from the body. If language and our sensory systems were inseparable, beings and God may as well be.


Gross Domestic Product. Part 1.

The Gross Domestic Product wi of a country i is defined as the “total market value of all final goods and services produced in a country in a given period, equal to total consumer, investment and government spending, plus the value of exports, minus the value of imports”. In other words, there are two main terms contributing to the observed value of the GDP wi of a country i: an endogenous term Ii (also known as internal demand) determined by the internal spending due to the country’s economic process, and an exogenous term Fi determined by the trade flow with other countries. The above definition can then be rephrased as

wi(t) ≡ Ii(t) + Fi(t) —– (1)

where Fi(t) is determined by the trade flows of i with all other countries: the total trade value of imports to and exports from i will be denoted by fini(t) and fouti(t) respectively, and these can be expressed as

fini(t) ≡ ∑j=1N(t) fji(t) —– (2)

fouti(t) ≡ ∑j=1N(t) fij(t) —– (3)

The net amount of incoming money due to the trading activity is therefore given by

Fi(t) ≡  fouti(t) – fini(t) —– (4)
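As a quick illustration of equations (1)–(4), here is a minimal sketch in Python/NumPy, not taken from the source: the trade matrix f, the internal demands and the function name gdp_from_trade are hypothetical. It reads f[i, j] as the trade value flowing from country i to country j and computes fin, fout, F and w.

```python
import numpy as np

def gdp_from_trade(f, internal_demand):
    fin = f.sum(axis=0)      # fin_i  = Σ_j f_ji : total flow into country i   (eq. 2)
    fout = f.sum(axis=1)     # fout_i = Σ_j f_ij : total flow out of country i (eq. 3)
    F = fout - fin           # net incoming money due to trading activity      (eq. 4)
    w = internal_demand + F  # w_i = I_i + F_i                                 (eq. 1)
    return w, F, fin, fout

# Hypothetical three-country example.
f = np.array([[0.0, 3.0, 1.0],
              [2.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])
I = np.array([10.0, 8.0, 5.0])
w, F, fin, fout = gdp_from_trade(f, I)
print(w)  # country 0: I = 10, fout = 4, fin = 3, F = +1, so w = 11
```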

The above definition anticipates that the GDP is strongly affected by the topology of world trade. Looking at some empirical properties of the GDP:

A fundamental macroeconomic question is: how is the GDP distributed across world countries? To address this point we consider the distribution of the rescaled quantity

xi(t) ≡ wi(t)/⟨w(t)⟩ —– (5)

where ⟨w⟩ ≡ wT/N is the average GDP and wT(t) ≡ ∑i=1N(t) wi(t) is the total one. The figure below shows the cumulative distribution

ρ>(x) ≡ ∫x∞ ρ(x′) dx′ —– (6)


Figure: Normalized cumulative distribution of the relative GDP xi(t) ≡ wi(t)/⟨w⟩(t) for all world countries at four different snapshots. Inset: the same data plotted in terms of the rescaled quantity yi(t) ≡ wi(t)/wT(t) = xi(t)/N(t), showing the transition region to the power-law curve.

for four different years in the time interval considered. The right tail of the distribution roughly follows a straight line in log-log axes, corresponding to a power-law curve

ρ>(x) ∝ x^(1−α) —– (7)

with exponent 1 − α = −1, which indicates a tail in the GDP probability distribution ρ(x) ∝ x^(−α) with α = 2. This behaviour is qualitatively similar to the power-law character of the per capita GDP distribution.

Moreover, it can be seen that the cumulative distribution departs from the power-law behaviour in the small-x region, and that the value of x where this happens increases with time. However, if xi(t) is rescaled to

yi(t) ≡ wi(t)/wT(t) = xi(t)/N(t) —– (8)

then the point y = x/N ≈ 0.003 where the power-law tail of the distribution starts is approximately constant in time (see inset of figure). This suggests that the temporal change of x is due to the variation of N(t) affecting ⟨w(t)⟩, and not to other factors. The point is that the temporal variation of N(t) affects average-dependent quantities such as x: while for a system with a fixed number N of units wT would simply be proportional to the average value ⟨w⟩, here ⟨w(t)⟩ = wT(t)/N(t) also changes whenever N(t) does. In particular, the average values of the quantities of interest may display sudden jumps due to the steep increase of N(t) rather than to genuine variations of the quantities themselves…
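A minimal sketch in Python/NumPy of the rescalings in (5)–(8) and of the empirical counterpart of the cumulative distribution (6); the synthetic Pareto sample stands in for the real GDP data, which are not reproduced here, and all names are illustrative assumptions.

```python
import numpy as np

def rescale_and_cumulate(w):
    """Return x = w/⟨w⟩ (eq. 5), y = w/w_T (eq. 8) and the empirical ρ>(x)."""
    w = np.sort(np.asarray(w, dtype=float))
    N = w.size
    w_tot = w.sum()
    x = w / (w_tot / N)                  # x_i = w_i / ⟨w⟩
    y = w / w_tot                        # y_i = x_i / N
    rho_gt = 1.0 - np.arange(N) / N      # empirical P(X ≥ x), counterpart of eq. (6)
    return x, y, rho_gt

# Synthetic "GDPs" with density ∝ x^(−2), i.e. α = 2, so that ρ>(x) ∝ x^(−1)
# for large x, matching the slope −1 tail of eq. (7) on log-log axes.
rng = np.random.default_rng(0)
w = rng.pareto(1.0, size=500) + 1.0
x, y, rho_gt = rescale_and_cumulate(w)
# Plotting (x, rho_gt) on log-log axes should show a roughly straight tail of slope −1.
```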

Austrian Economics. Ruminations. End Part.


Mainstream economics originates from Jevons’ and Menger’s marginal utility and Walras’ and Marshall’s equilibrium approach. While their foundations are similar, their presentation looks quite different, according to the two schools which typically represent these two approaches: the Austrian school initiated by Menger and the general equilibrium theory initiated by Walras. An important, albeit only formal, difference is that the former presents economic theory mainly in a literary form using ordinary logic, while the latter prefers mathematical expressions and logic.

Lachmann, who excludes determinism from economics because it is concerned with acts of the mind, connects determinism with the equilibrium approach. However, equilibrium theory is not necessarily deterministic, also because it does not establish relationships of succession, but only relationships of coexistence. In this respect, equilibrium theory is not more deterministic than the theory of the Austrian school. Even though the Austrian school does not comprehensively analyze equilibrium, all its main results strictly depend on the assumption that the economy is in equilibrium (intended as a state everybody prefers not to unilaterally deviate from, not necessarily a competitive equilibrium). Considering both competition and monopoly, Menger examines the market for only two commodities in a barter economy. His analysis is the best that can be obtained without using mathematics, but it is too limited for determining all the implications of the theory. For instance, it is unclear how the market for a specific commodity is affected by the conditions of the markets for other commodities. However, interdependence is not excluded by the Austrian school. For instance, Böhm-Bawerk examines at length the interdependence between the markets for labor and capital. Despite the incomplete analysis of equilibrium carried out by the Austrian school, many of its results imply that the economy is in equilibrium, as shown by the following examples.

a) The Gossen-Menger loss principle. This principle states that the price of a good can be determined by analyzing the effect of the loss (or the acquisition) of a small quantity of the same good.

b) Wieser’s theory of imputation. Wieser’s theory of imputation attempts to determine the value of the goods used for production in terms of the value (marginal utility) of the consumption goods produced.

c) Böhm-Bawerk’s theory of capital. Böhm-Bawerk proposed a longitudinal theory of capital, where production consists of a time process. A sequence of inputs of labor is employed in order to obtain, at the final stage, a given consumption good. Capital goods, which are the products obtained in the intermediate stages, are seen as a kind of consumption goods in the process of maturing.

A historically specific theory of capital inspired by the Austrian school focuses on the way profit-oriented enterprises organize the allocation of goods and resources in capitalism. One major issue is the relationship between acquisition and production. How does the homogeneity of money figures that entrepreneurs employ in their acquisitive plans connect to the unquestionable heterogeneity of the capital goods in production that these monetary figures depict? The differentiation between acquisition and production distinguishes this theory from the neoclassical approach to capital. The homogeneity of the money figures on the level of acquisition that is important to such a historically specific theory is not due to the assumption of equilibrium, but simply to the existence of money prices. It is real-life homogeneity, so to speak. It does not imply any homogeneity on the level of production, but rather explains the principle according to which the production process is conducted.

In neoclassical economics, in contrast, production and acquisition, the two different levels of analysis, are not separated but are amalgamated by means of the vague term “value”. In equilibrium, assets are valued according to their marginal productivity, and therefore their “value” signifies both their price and their importance to the production process. Capital understood in this way, i.e., as the value of capital goods, can take on the “double meaning of money or goods”. By concentrating on the value of capital goods, the neoclassical approach assumes homogeneity not only on the level of acquisition with its input and output prices, but also on the level of production. The neoclassical approach to capital assumes that the valuation process has already been accomplished. It does not explain how assets come to be valued originally according to their marginal product. In this, an elaborated historically specific theory of capital would provide the necessary tools. In capitalism, inputs and outputs are interrelated by entrepreneurs who are guided by price signals. In their efforts to maximize their monetary profits, they aim to benefit from the spread between input and output prices. Therefore, money tends to be invested where this spread appears to be wide enough to be worth the risk. In other words, business capital flows to those industries and businesses where it yields the largest profit. Competition among entrepreneurs brings about a tendency for price spreads to diminish. The prices of the factors of production are bid up and the prices of the output are bid down until, in the hypothetical state of equilibrium, the factor prices sum up to the price of the product. A historically specific theory of capital is able to describe and analyze the market process that results – or tends to result – in marginal productivity prices, and can therefore also formulate positions concerning endogenous and exogenous misdirections of this process which lead to disequilibrium prices. Consider Mises,

In balance sheets and in profit-and-loss statements, […] it is necessary to enter the estimated money equivalent of all assets and liabilities other than cash. These items should be appraised according to the prices at which they could probably be sold in the future or, as is especially the case with equipment for production processes, in reference to the prices to be expected in the sale of merchandise manufactured with their aid.

According to this, not the monetary costs of the assets, which can be verified unambiguously, but their values are supposed to be the basis of entrepreneurial calculation. As the words indicate, this procedure involves a tremendous amount of uncertainty and can therefore only lead to fair values if equilibrium conditions are assumed.