Banking Assets Depreciation, Insolvency and Liquidation: Why are Defaults so Contagious?


Interlinkages across balance sheets of financial institutions may be modeled by a weighted directed graph G = (V, e) on the vertex set V = {1,…, n} = [n], whose elements represent financial institutions. The exposure matrix is given by e ∈ Rn×n, where the ijth entry e(i, j) represents the exposure (in monetary units) of institution i to institution j. The interbank assets of an institution i are given by

A(i) := ∑j e(i, j), while its interbank liabilities are L(i) := ∑j e(j, i). In addition to these interbank assets and liabilities, a bank may hold other assets and liabilities (such as deposits).

The net worth of the bank, given by its capital c(i), represents its capacity for absorbing losses while remaining solvent. The capital ratio of institution i (technically, the ratio of capital to interbank assets rather than to total assets) is given by

γ(i) := c(i)/A(i)

An institution is insolvent if its net worth is negative or zero, in which case γ(i) is set to 0.

A financial network (e, γ) on the vertex set V = [n] is defined by

• a matrix of exposures {e(i, j)}1≤i,j≤n

• a set of capital ratios {γ(i)}1≤i≤n

In this network, the in-degree of a node i is given by

d−(i) := #{j∈V | e(j, i)>0},

which represents the number of nodes exposed to i, while its out-degree

d+(i) := #{j∈V | e(i, j)>0}

represents the number of institutions i is exposed to. The set of initially insolvent institutions is represented by

D0(e, γ) = {i ∈ V | γ(i) = 0}

In a network (e, γ) of counterparties, the default of one or several nodes may lead to the insolvency of other nodes, generating a cascade of defaults. Starting from the set D0(e, γ) of initially insolvent institutions, which represent fundamental defaults, the contagion process is defined as follows:

Denoting by R(j) the recovery rate on the assets of j at default, the default of j induces a loss equal to (1 − R(j))e(i, j) for its counterparty i. If this loss exceeds the capital of i, then i becomes insolvent in turn. From the formula for the capital ratio, we have c(i) = γ(i)A(i). The set of nodes which become insolvent due to their exposures to initial defaults is

D1(e, γ) = {i ∈ V | γ(i)A(i) < ∑j∈D0 (1 − R(j)) e(i, j)}

This procedure may be iterated to define the default cascade initiated by a set of initial defaults.

So, when does a default cascade happen? Consider a financial network (e, γ) on the vertex set V = [n], and let D0(e, γ) = {i ∈ V | γ(i) = 0} be the set of initially insolvent institutions. The increasing sequence (Dk(e, γ), k ≥ 1) of subsets of V defined by

Dk(e, γ) = {i ∈ V | γ(i)A(i) < ∑j∈Dk-1(e,γ) (1−R(j)) e(i, j)}

is called the default cascade initiated by D0(e, γ).

Thus Dk(e, γ) represents the set of institutions whose capital is insufficient to absorb losses due to defaults of institutions in Dk-1(e, γ).

Since the sequence is increasing and V is finite, in a network of size n the cascade ends after at most n − 1 iterations. Hence Dn-1(e, γ) represents the set of all nodes which become insolvent starting from the initial set of defaults D0(e, γ).

Consider a financial network (e, γ) on the vertex set V = [n]. The fraction of defaults in the network (e, γ) (initiated by D0(e, γ)) is given by

αn(e, γ) := |Dn-1(e, γ)|/n

The recovery rates R(i) may be exogenous or determined endogenously by redistributing the assets of a defaulted entity among its creditors, proportionally to their outstanding claims. The latter scenario is too optimistic, since in practice liquidation takes time and assets may depreciate in value due to fire sales during liquidation. When examining the short-term consequences of default, the most realistic assumption on recovery rates is zero: assets held with a defaulted counterparty are frozen until liquidation takes place, a process which can in practice take a long time to complete.
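The cascade defined above is straightforward to simulate. The following is a minimal NumPy sketch; the 4-bank network and all its numbers are hypothetical, chosen only to produce a full cascade under the zero-recovery assumption:

```python
import numpy as np

def default_cascade(e, gamma, R=None):
    """Iterate the default cascade D_0 <= D_1 <= ... <= D_{n-1}.

    e     : (n, n) exposure matrix, e[i, j] = exposure of i to j
    gamma : (n,) capital ratios; gamma[i] == 0 marks a fundamental default
    R     : (n,) recovery rates (default: zero recovery, the short-term case)
    """
    n = len(gamma)
    R = np.zeros(n) if R is None else np.asarray(R)
    A = e.sum(axis=1)           # interbank assets A(i) = sum_j e(i, j)
    c = gamma * A               # capital c(i) = gamma(i) A(i)
    D = gamma == 0              # D_0: initially insolvent institutions
    for _ in range(n - 1):      # the cascade ends after at most n - 1 rounds
        # loss of i from defaulted counterparties j: (1 - R(j)) e(i, j)
        loss = e @ ((1 - R) * D)
        new_D = D | (c < loss)
        if np.array_equal(new_D, D):
            break               # default set is stable: cascade over
        D = new_D
    return D

# Hypothetical 4-bank example: bank 0 starts insolvent (gamma = 0).
e = np.array([[0, 10, 0, 0],
              [8, 0, 5, 0],
              [0, 6, 0, 2],
              [0, 0, 4, 0]], dtype=float)
gamma = np.array([0.0, 0.5, 0.2, 0.9])
D = default_cascade(e, gamma)
alpha = D.sum() / len(gamma)    # fraction of defaults alpha_n
```

Each round recomputes the losses from the current default set; the loop stops early once the set is stable, and in any case after at most n − 1 rounds. In this toy network the initial default of bank 0 propagates through banks 1, 2, and 3 in successive rounds, so αn = 1.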




If e0 ∈ R1+1 is a future-directed timelike unit vector, and if e1 is the unique spacelike unit vector with e0e1 = 0 that “points to the right,” then coordinates x0 and x1 on R1+1 are defined by x0(q) := qe0 and x1(q) := qe1. The partial differential operator

□ := ∂²x0 − ∂²x1

does not depend on the choice of e0.

The Fourier transform of the Klein-Gordon equation

(□ + m2)u = 0 —– (1)

where m > 0 is a given mass, is

(−p2 + m2)û(p) = 0 —– (2)

As a consequence, the support of û has to be a subset of the hyperbola Hm ⊂ R1+1 specified by the condition p2 = m2. One connected component of Hm consists of positive-energy vectors only; it is called the upper mass shell Hm+. The elements of Hm+ are the momenta of classical relativistic point particles.

Denote by L1 the restricted Lorentz group, i.e., the connected component of the Lorentz group containing its unit element. In 1 + 1 dimensions, L1 coincides with the one-parameter Abelian group B(χ), χ ∈ R, of boosts. Hm+ is an orbit of L1 without fixed points. So if one chooses any point p′ ∈ Hm+, then there is, for each p ∈ Hm+, a unique χ(p) ∈ R with p = B(χ(p))p′. By construction, χ(B(ξ)p) = χ(p) + ξ, so the measure dχ on Hm+ is invariant under boosts and does not depend on the choice of p′.
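The boost parametrization of the mass shell, and the fact that the corresponding plane waves solve the Klein-Gordon equation, can be checked symbolically. A small SymPy sketch (symbol names are ours):

```python
import sympy as sp

x0, x1, chi = sp.symbols('x0 x1 chi', real=True)
m = sp.symbols('m', positive=True)

# Boost parametrization of the upper mass shell: p = B(chi)(m, 0)
p0, p1 = m * sp.cosh(chi), m * sp.sinh(chi)
print(sp.simplify(p0**2 - p1**2))     # m**2: p stays on the shell for every chi

# Plane wave q -> e^{-ipq}, with Minkowski product pq = p0*x0 - p1*x1
u = sp.exp(-sp.I * (p0 * x0 - p1 * x1))
kg = sp.diff(u, x0, 2) - sp.diff(u, x1, 2) + m**2 * u   # (box + m^2)u
print(sp.simplify(kg))                # 0: u solves the Klein-Gordon equation
```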

For each p ∈ Hm+, the plane wave q ↦ e±ipq on R1+1 is a classical solution of the Klein-Gordon equation. The Klein-Gordon equation is linear, so if a+ and a− are, say, integrable functions on Hm+, then

F(q) := ∫Hm+ (a+(p)e−ipq + a−(p)eipq) dχ(p) —– (3)

is a solution of the Klein-Gordon equation as well. If the functions a± are not integrable, the field F may still be well defined as a distribution. As an example, put a± ≡ (2π)−1, then

F(q) = (2π)−1 ∫Hm+ (e−ipq + eipq) dχ(p) = π−1 ∫Hm+ cos(pq) dχ(p) =: Φ(q) —– (4)

and for a± ≡ ±(2πi)−1, F equals

F(q) = (2πi)−1 ∫Hm+ (e−ipq − eipq) dχ(p) = −π−1 ∫Hm+ sin(pq) dχ(p) =: ∆(q) —– (5)

Quantum fields are obtained by “plugging” classical field equations and their solutions into the well-known second quantization procedure. This procedure replaces the complex (or, more generally speaking, finite-dimensional vector) field values by linear operators in an infinite-dimensional Hilbert space, namely, a Fock space. The Hilbert space of the hermitian scalar field is constructed from wave functions that are considered as the wave functions of one or several particles of mass m. The single-particle wave functions are the elements of the Hilbert space H1 := L2(Hm+, dχ). Put the vacuum (zero-particle) space H0 equal to C, define the vacuum vector Ω := 1 ∈ H0, and define the N-particle space HN as the Hilbert space of symmetric wave functions in L2((Hm+)N, dNχ), i.e., all wave functions ψ with

ψ(pπ(1) ···pπ(N)) = ψ(p1 ···pN)

∀ permutations π ∈ SN. The bosonic Fock space H is defined by

H := ⊕N∈N HN.

The subspace

D := ∪M∈N ⊕0≤N≤M HN is called the finite-particle space.

The definition of the N-particle wave functions as symmetric functions endows the field with Bose–Einstein statistics. To each wave function φ ∈ H1, assign a creation operator a+(φ) by

a+(φ)ψ := CNφ ⊗s ψ, ψ ∈ D,

where ⊗s denotes the symmetrized tensor product and where CN is a constant.

(a+(φ)ψ)(p1 ···pN) = (CN/N) ∑ν=1N φ(pν)ψ(p1 ···p̂ν ···pN) —– (6)

where the hat symbol indicates omission of the argument. This defines a+(φ) as a linear operator on the finite-particle space D.

The adjoint operator a(φ) := a+(φ)* is called an annihilation operator; it assigns to each ψ ∈ HN, N ≥ 1, the wave function a(φ)ψ ∈ HN−1 defined by

(a(φ)ψ)(p1 ···pN−1) := CN ∫Hm+ φ̄(p)ψ(p1 ···pN−1, p) dχ(p)

together with a(φ)Ω := 0; this suffices to specify a(φ) on D. Annihilation operators can also be defined for sharp momenta. Namely, one can assign to each p ∈ Hm+ the annihilation operator a(p), which maps each ψ ∈ HN, N ≥ 1, to the wave function a(p)ψ ∈ HN−1 given by

(a(p)ψ)(p1 ···pN−1) := CN ψ(p, p1 ···pN−1), ψ ∈ HN,

and assigning 0 ∈ H to Ω. Like a(φ), a(p) is well defined on the finite-particle space D as an operator, but its hermitian adjoint is ill-defined as an operator, since the symmetric tensor product of a wave function with a delta function is not a wave function.

Given any single-particle wave functions ψ, φ ∈ H1, the commutators [a(ψ), a(φ)] and [a+(ψ), a+(φ)] vanish by construction. It is customary to choose the constants CN in such a fashion that creation and annihilation operators exhibit the commutation relation

[a(φ), a+(ψ)] = ⟨φ, ψ⟩ —– (7)

which requires CN = √N. With this choice, all creation and annihilation operators are unbounded, i.e., they are not continuous.

When defining the hermitian scalar field as an operator valued distribution, it must be taken into account that an annihilation operator a(φ) depends on its argument φ in an antilinear fashion. The dependence is, however, R-linear, and one can define the scalar field as a C-linear distribution in two steps.

For each real-valued test function φ on R1+1, define

Φ(φ) := a(φˆ|Hm+) + a+(φˆ|Hm+)

then one can define for an arbitrary complex-valued φ

Φ(φ) := Φ(Re(φ)) + iΦ(Im(φ))

Referring to (4), Φ is called the hermitian scalar field of mass m.

One then finds

[Φ(q), Φ(q′)] = i∆(q − q′) —– (8)

where ∆ is the distribution defined in (5), and (8) is to be read as an equation of distributions. The distribution ∆ vanishes outside the light cone, i.e., ∆(q) = 0 if q2 < 0: if q is spacelike, the integrand in (5) is odd with respect to reflection about some p′ ∈ Hm+. Note that pq > 0 for all p ∈ Hm+ if q ∈ V+. The consequence is called microcausality: field operators located in spacelike separated regions commute (for the hermitian scalar field).
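The oddness argument can be made concrete with SymPy: for spacelike q one may boost to a frame where q = (0, q1), and in that frame the Minkowski product pq becomes an odd function of the boost parameter χ, so the sine in the integrand of (5) integrates to zero (the frame choice and symbol names are ours):

```python
import sympy as sp

chi = sp.symbols('chi', real=True)
m, q1 = sp.symbols('m q1', positive=True)

# For spacelike q, boost to a frame where q = (0, q1).
# On the mass shell p = (m cosh(chi), m sinh(chi)), so
pq = -m * q1 * sp.sinh(chi)      # Minkowski product pq, odd in chi

integrand = sp.sin(pq)           # the sine appearing in (5)
# Oddness under chi -> -chi: the symmetric integral over the shell vanishes.
print(sp.simplify(integrand + integrand.subs(chi, -chi)))   # 0
```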

The Statistical Physics of Stock Markets. Thought of the Day 143.0


The externalist view argues that we can make sense of, and profit from, stock markets’ behavior, or at least a few crucial properties of it, by crunching numbers and looking for patterns and regularities in certain sets of data. The notion of data, hence, is a key element in such an understanding, and the quantitative side of the problem is prominent, even if this does not mean that qualitative analysis is ignored. The point here is that the outside view maintains that it provides a better understanding than the internalist view. To this end, it endorses a functional perspective on finance and stock markets in particular.

The basic idea of the externalist view is that there are general properties and behaviors of stock markets that can be detected and studied through a mathematical lens, and that they do not depend much on contextual or domain-specific factors. The point at stake here is that financial systems can be studied and approached at different scales, and it is virtually impossible to produce all the equations describing, at a micro level, all the objects of the system and their relations. So, in response, this view focuses on those properties that allow us to get an understanding of the behavior of the system at a global level without having to produce a detailed conceptual and mathematical account of the inner ‘machinery’ of the system. Hence the two roads: the first is to embrace an emergentist view on stock markets, that is, a specific metaphysical, ontological, and methodological thesis, while the second is to embrace a heuristic view, that is, the idea that the choice to focus on those properties that are tractable by mathematical models is a pure problem-solving option.

A typical example of the externalist approach is the one provided, for instance, by statistical physics. In describing collective behavior, this discipline neglects all the conceptual and mathematical intricacies deriving from a detailed account of the inner, individual, micro-level functioning of a system. Concepts such as stochastic dynamics, self-similarity, correlations (both short- and long-range), and scaling are tools to this end. Econophysics is a stock example in this sense: it employs methods taken from mathematics and mathematical physics in order to detect and forecast the driving forces of stock markets and their critical events, such as bubbles, crashes, and their tipping points. In this respect, markets are not ‘dark boxes’: you can see their characteristics from the outside, or better, you can see specific dynamics that shape the trends of stock markets deeply and for a long time. Moreover, these dynamics are complex in the technical sense. This means that this class of behavior is such as to encompass timescales, ontology, types of agents, ecologies, regulations, laws, etc., and can be detected, even if not strictly predicted. We can focus on the stock markets as a whole, or on a few of their critical events, looking at the data of prices (or other indexes) and ignoring all the other details and factors, since they will be absorbed into these global dynamics. So this view provides a look at stock markets on which not only do they not appear as an unintelligible casino where wild gamblers face each other, but which shows the reasons and the properties of a system that serves mostly as a means of fluid transactions that enable and ease the functioning of free markets.

Moreover, the study of complex systems theory and that of stock markets seem to offer mutual benefits. On one side, complex systems theory seems to offer a key to understanding and breaking through some of the most salient properties of stock markets. On the other side, stock markets seem to provide a ‘stress test’ for complexity theory. Didier Sornette expresses how the analogies between stock markets and phase transitions, statistical mechanics, nonlinear dynamics, and disordered systems mold the view from outside:

Take our personal life. We are not really interested in knowing in advance at what time we will go to a given store or drive to a highway. We are much more interested in forecasting the major bifurcations ahead of us, involving the few important things, like health, love, and work, that count for our happiness. Similarly, predicting the detailed evolution of complex systems has no real value, and the fact that we are taught that it is out of reach from a fundamental point of view does not exclude the more interesting possibility of predicting phases of evolutions of complex systems that really count, like the extreme events. It turns out that most complex systems in natural and social sciences do exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their posterior evolution. Such extreme events express more than anything else the underlying “forces” usually hidden by almost perfect balance and thus provide the potential for a better scientific understanding of complex systems.

Phase transitions, critical points, and extreme events seem to be so pervasive in stock markets that they are the crucial concepts to explain and, where possible, foresee. And complexity theory provides us with a fruitful reading key to understand their dynamics, namely their generation, growth, and occurrence. Such a reading key proposes a clear-cut interpretation of them, which can be explained again by means of an analogy with physics, precisely with the unstable position of an object. Complexity theory suggests that critical or extreme events occurring at large scale are the outcome of interactions occurring at smaller scales. In the case of stock markets, this means that, unlike many approaches that attempt to account for crashes by searching for ‘mechanisms’ that work at very short time scales, complexity theory indicates that crashes have causes that date back months or years before them. This reading suggests that it is the increasing inner interaction between the agents inside the markets that builds up the unstable dynamics (typically the financial bubbles) that eventually end in a critical event, the crash. But the specific final step that triggers the critical event, the collapse of prices, is not the key to understanding it: a crash occurs because the market is in an unstable phase and any small interference or event may trigger it. The bottom line: the trigger can be virtually any event external to the market. The real cause of the crash is its overall unstable position; the proximate ‘cause’ is secondary and accidental. In other words, a crash can be fundamentally endogenous in nature, whilst an exogenous, external shock is simply the occasional triggering factor. The instability is built up by cooperative behavior among traders, who imitate each other (in this sense it is an endogenous process) and contribute to form and reinforce trends that converge up to a critical point.

The main advantage of this approach is that the system (the market) would anticipate the crash by releasing precursory fingerprints observable in stock market prices: the market prices contain information on impending crashes, and this implies that:

if the traders were to learn how to decipher and use this information, they would act on it and on the knowledge that others act on it; nevertheless, the crashes would still probably happen. Our results suggest a weaker form of the “weak efficient market hypothesis”, according to which the market prices contain, in addition to the information generally available to all, subtle information formed by the global market that most or all individual traders have not yet learned to decipher and use. Instead of the usual interpretation of the efficient market hypothesis in which traders extract and consciously incorporate (by their action) all information contained in the market prices, we propose that the market as a whole can exhibit “emergent” behavior not shared by any of its constituents.

In a nutshell, the critical events emerge in a self-organized and cooperative fashion as the macro result of the internal and micro interactions of the traders, their imitation and mirroring.


Momentum Space Topology Generates Massive Fermions. Thought of the Day 142.0


Topological quantum phase transitions: The vacua at b0 ≠ 0 and b > M have Fermi surfaces. At b² > b0² + M², these Fermi surfaces have nonzero global topological charges N3 = +1 and N3 = −1. At the quantum phase transition occurring on the line b0 = 0, b > M (thick horizontal line), the Fermi surfaces shrink to Fermi points with nonzero N3. At M² < b² < b0² + M², the global topology of the Fermi surfaces is trivial, N3 = 0. At the quantum phase transition occurring on the line b = M (thick vertical line), the Fermi surfaces shrink to points, and since their global topology is trivial, the zeroes disappear at b < M, where the vacuum is fully gapped. The quantum phase transition between the Fermi surfaces with and without topological charge N3 occurs at b² = b0² + M² (dashed line). At this transition, the Fermi surfaces touch each other, and their topological charges annihilate each other.

What we have assumed here is that the Fermi point in the Standard Model above the electroweak energy scale is marginal, i.e. its total topological charge is N3 = 0. Since the topology does not protect such a point, everything depends on symmetry, which is more subtle. In principle, one may expect that the vacuum is always fully gapped. This is supported by Monte Carlo simulations, which suggest that in the Standard Model there is no second-order phase transition at finite temperature; instead one has either a first-order electroweak transition or a crossover, depending on the ratio of the masses of the Higgs and gauge bosons. This would actually mean that the fermions are always massive.

Such a scenario is consistent with momentum-space topology only if the total topological charge N3 is zero. However, from the point of view of momentum-space topology, there is another scheme for describing the Standard Model. Let us assume that the Standard Model follows from a GUT with SO(10) group. Here, the 16 Standard Model fermions form, at high energy, the 16-plet of the SO(10) group. All the particles of this multiplet are left-handed fermions. These are: four left-handed SU(2) doublets (neutrino-electron and 3 doublets of quarks) + eight left SU(2) singlets of anti-particles (antineutrino, positron and 6 anti-quarks). The total topological charge of the Fermi point at p = 0 is N3 = −16, and thus such a vacuum is topologically stable and is protected against the mass of fermions. This topological protection works even if the SU(2) × U(1) symmetry is violated perturbatively, say, due to the mixing of different species of the 16-plet. Mixing of the left leptonic doublet with the left singlets (antineutrino and positron) violates SU(2) × U(1) symmetry, but this does not lead to annihilation of the Fermi points and mass formation, since the topological charge N3 is conserved.

What this means in a nutshell is that if the total topological charge of the Fermi surfaces is non-zero, the gap cannot appear perturbatively. It can only arise due to the crucial reconstruction of the fermionic spectrum with effective doubling of fermions. In the same manner, in the SO(10) GUT model the mass generation can only occur non-perturbatively. The mixing of the left and right fermions requires the introduction of the right fermions, and thus the effective doubling of the number of fermions. The corresponding Gor’kov’s Green’s function in this case will be the (16 × 2) × (16 × 2) matrix. The nullification of the topological charge N3 = −16 occurs exactly in the same manner, as in superconductors. In the extended (Gor’kov) Green’s function formalism appropriate below the transition, the topological charge of the original Fermi point is annihilated by the opposite charge N3 = +16 of the Fermi point of “holes” (right-handed particles).
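The protection-and-annihilation mechanism can be illustrated in a lower dimension, where the analogue of N3 is the winding N1 of the inverse propagator around a node in the (ω, k) plane. The following sketch is a toy analogue of our own devising, not the actual SO(10) computation:

```python
import numpy as np

def topological_charge(eps, radius=0.1, n=4001):
    """Winding of the inverse propagator G^{-1} = i*omega - eps(k)
    along a small loop around a node in the (omega, k) plane: a
    1+1-dimensional analogue N1 of the charge N3 discussed above."""
    t = np.linspace(0, 2 * np.pi, n)
    omega, k = radius * np.sin(t), radius * np.cos(t)
    z = 1j * omega - eps(k)                  # G^{-1} along the loop
    theta = np.unwrap(np.angle(z))           # accumulated phase
    return round((theta[-1] - theta[0]) / (2 * np.pi))

# Right- and left-moving gapless fermions carry opposite winding
# (the overall sign is a convention of this toy model):
print(topological_charge(lambda k: +k))      # -1
print(topological_charge(lambda k: -k))      # +1
```

A single node with nonzero winding cannot be gapped by a small deformation of G, while a pair of nodes with opposite winding can merge and annihilate, mirroring how the charge N3 = −16 of the original Fermi point is cancelled by the opposite charge N3 = +16 of the "holes" in the extended Gor'kov formalism.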

This demonstrates that the mechanism of generation of mass of fermions essentially depends on the momentum space topology. If the Standard Model originates from the SO(10) group, the vacuum belongs to the universality class with the topologically non-trivial chiral Fermi point (i.e. with N3 ≠ 0), and the smooth crossover to the fully-gapped vacuum is impossible. On the other hand, if the Standard Model originates from the left-right symmetric Pati–Salam group such as SU(2)L × SU(2)R × SU(4), and its vacuum has the topologically trivial (marginal) Fermi point with N3 = 0, the smooth crossover to the fully-gapped vacuum is possible.