Banking Assets Depreciation, Insolvency and Liquidation: Why are Defaults so Contagious?


Interlinkages across balance sheets of financial institutions may be modeled by a weighted directed graph G = (V, e) on the vertex set V = {1,…, n} = [n], whose elements represent financial institutions. The exposure matrix is given by e ∈ Rn×n, where the ijth entry e(i, j) represents the exposure (in monetary units) of institution i to institution j. The interbank assets of an institution i are given by

A(i) := ∑j e(i, j), while ∑j e(j, i) represents its interbank liabilities. In addition to these interbank assets and liabilities, a bank may hold other assets and liabilities (such as deposits).

The net worth of the bank, given by its capital c(i), represents its capacity for absorbing losses while remaining solvent. The “capital ratio” of institution i (technically the ratio of capital to interbank assets, not total assets) is given by

γ(i) := c(i)/A(i)

An institution is insolvent if its net worth is negative or zero, in which case, γ(i) is set to 0.

A financial network (e, γ) on the vertex set V = [n] is defined by

• a matrix of exposures {e(i, j)}1≤i,j≤n

• a set of capital ratios {γ(i)}1≤i≤n

In this network, the in-degree of a node i is given by

d−(i) := #{j∈V | e(j, i)>0},

which represents the number of nodes exposed to i, while its out-degree

d+(i) := #{j∈V | e(i, j)>0}

represents the number of institutions i is exposed to. The set of initially insolvent institutions is represented by

D0(e, γ) = {i ∈ V | γ(i) = 0}

In a network (e, γ) of counterparties, the default of one or several nodes may lead to the insolvency of other nodes, generating a cascade of defaults. Starting from the set of initially insolvent institutions D0(e, γ), which represents the fundamental defaults, the contagion process is defined as follows:

Denoting by R(j) the recovery rate on the assets of j at default, the default of j induces a loss equal to (1 − R(j))e(i, j) for its counterparty i. If this loss exceeds the capital of i, then i in turn becomes insolvent. From the formula for the capital ratio, we have c(i) = γ(i)A(i). The set of nodes which become insolvent due to their exposures to initial defaults is

D1(e, γ) = {i ∈ V | γ(i)A(i) < ∑j∈D0 (1 − R(j)) e(i, j)}

This procedure may be iterated to define the default cascade initiated by a set of initial defaults.

So, when would a default cascade happen? Consider a financial network (e, γ) on the vertex set V = [n], and let D0(e, γ) = {i ∈ V | γ(i) = 0} be the set of initially insolvent institutions. The increasing sequence (Dk(e, γ), k ≥ 1) of subsets of V defined by

Dk(e, γ) = {i ∈ V | γ(i)A(i) < ∑j∈Dk-1(e,γ) (1−R(j)) e(i, j)}

is called the default cascade initiated by D0(e, γ).

Thus Dk(e, γ) represents the set of institutions whose capital is insufficient to absorb losses due to defaults of institutions in Dk-1(e, γ).

Since the sets Dk(e, γ) form an increasing sequence of subsets of V, in a network of size n the cascade ends after at most n − 1 iterations. Hence, Dn-1(e, γ) represents the set of all nodes which become insolvent starting from the initial set of defaults D0(e, γ).

Consider a financial network (e, γ) on the vertex set V = [n]. The fraction of defaults in the network (e, γ) (initiated by D0(e, γ)) is given by

αn(e, γ) := |Dn-1(e, γ)|/n

The recovery rates R(i) may be exogenous or determined endogenously by redistributing assets of a defaulted entity among debtors, proportionally to their outstanding debt. The latter scenario is too optimistic since in practice liquidation takes time and assets may depreciate in value due to fire sales during liquidation. When examining the short term consequences of default, the most realistic assumption on recovery rates is zero: Assets held with a defaulted counterparty are frozen until liquidation takes place, a process which can in practice take a pretty long time to terminate.
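
The cascade above is straightforward to simulate. The following is a minimal sketch (my own illustration, not code from the source): the toy exposure matrix, capital ratios and the constant recovery rate are arbitrary choices, with exposures[i, j] playing the role of e(i, j) and gamma the capital ratios γ(i).

```python
import numpy as np

def default_cascade(exposures, gamma, recovery=0.0):
    """Iterate the sets D_k until they stabilise; return the final default set and alpha_n."""
    e = np.asarray(exposures, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    n = e.shape[0]
    A = e.sum(axis=1)                  # interbank assets A(i) = sum_j e(i, j)
    capital = gamma * A                # c(i) = gamma(i) A(i)
    defaulted = gamma <= 0.0           # D_0: initially insolvent institutions
    for _ in range(n - 1):             # the cascade ends after at most n - 1 rounds
        # loss of bank i from defaulted counterparties: sum_{j in D_k} (1 - R) e(i, j)
        losses = ((1.0 - recovery) * e[:, defaulted]).sum(axis=1)
        new_defaults = (capital < losses) & ~defaulted
        if not new_defaults.any():
            break
        defaulted |= new_defaults
    return defaulted, defaulted.sum() / n

# toy 4-bank network: bank 3 starts out insolvent (gamma = 0)
e = [[0, 2, 1, 5],
     [1, 0, 3, 0],
     [0, 2, 0, 1],
     [1, 1, 1, 0]]
gamma = [0.4, 0.1, 0.25, 0.0]
D_final, alpha_n = default_cascade(e, gamma, recovery=0.0)
print(D_final, alpha_n)
```

In this toy network, with zero recovery, the single fundamental default is enough to drag every other bank under, so αn = 1; raising the recovery rate towards one shrinks the cascade back to the initial default.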


Microcausality


If e0 ∈ R1+1 is a future-directed timelike unit vector, and if e1 is the unique spacelike unit vector with e0e1 = 0 that “points to the right,” then coordinates x0 and x1 on R1+1 are defined by x0(q) := qe0 and x1(q) := qe1. The partial differential operator

□ := ∂²/∂(x0)² − ∂²/∂(x1)²

does not depend on the choice of e0.

The Fourier transform of the Klein-Gordon equation

(□ + m²)u = 0 —– (1)

where m > 0 is a given mass, is

(−p² + m²)û(p) = 0 —– (2)

As a consequence, the support of û has to be a subset of the hyperbola Hm ⊂ R1+1 specified by the condition p² = m². One connected component of Hm consists of positive-energy vectors only; it is called the upper mass shell Hm+. The elements of Hm+ are the 4-momenta of classical relativistic point particles.

Denote by L1 the restricted Lorentz group, i.e., the connected component of the Lorentz group containing its unit element. In 1 + 1 dimensions, L1 coincides with the one-parameter Abelian group B(χ), χ ∈ R, of boosts. Hm+ is an orbit of L1 without fixed points. So if one chooses any point p′ ∈ Hm+, then there is, for each p ∈ Hm+, a unique χ(p) ∈ R with p = B(χ(p))p′. By construction, χ(B(ξ)p) = χ(p) + ξ, so the measure dχ on Hm+ is invariant under boosts and does not depend on the choice of p′.

For each p ∈ Hm+, the plane wave q ↦ e±ipq on R1+1 is a classical solution of the Klein-Gordon equation. The Klein-Gordon equation is linear, so if a₊ and a₋ are, say, integrable functions on Hm+, then

F(q) := ∫Hm+ (a₊(p)e−ipq + a₋(p)eipq) dχ(p) —– (3)

is a solution of the Klein-Gordon equation as well. If the functions a± are not integrable, the field F may still be well defined as a distribution. As an example, put a± ≡ (2π)−1, then

F(q) = (2π)−1 ∫Hm+ (e−ipq + eipq) dχ(p) = π−1 ∫Hm+ cos(pq) dχ(p) =: Φ(q) —– (4)

and for a± ≡ ±(2πi)−1, F equals

F(q) = (2πi)−1 ∫Hm+ (e−ipq − eipq) dχ(p) = π−1 ∫Hm+ sin(pq) dχ(p) =: ∆(q) —– (5)

Quantum fields are obtained by “plugging” classical field equations and their solutions into the well-known second quantization procedure. This procedure replaces the complex (or, more generally speaking, finite-dimensional vector) field values by linear operators in an infinite-dimensional Hilbert space, namely, a Fock space. The Hilbert space of the hermitian scalar field is constructed from wave functions that are considered as the wave functions of one or several particles of mass m. The single-particle wave functions are the elements of the Hilbert space H1 := L2(Hm+, dχ). Put the vacuum (zero-particle) space H0 equal to C, define the vacuum vector Ω := 1 ∈ H0, and define the N-particle space HN as the Hilbert space of symmetric wave functions in L2((Hm+)N, dNχ), i.e., all wave functions ψ with

ψ(pπ(1) ···pπ(N)) = ψ(p1 ···pN)

∀ permutations π ∈ SN. The bosonic Fock space H is defined by

H := ⊕N∈N HN.

The subspace

D := ∪M∈N ⊕0≤N≤M HN is called the finite-particle space.

The definition of the N-particle wave functions as symmetric functions endows the field with Bose–Einstein statistics. To each wave function φ ∈ H1, assign a creation operator a+(φ) by

a+(φ)ψ := CNφ ⊗s ψ, ψ ∈ D,

where ⊗s denotes the symmetrized tensor product and where CN is a constant.

(a+(φ)ψ)(p1 ···pN) = CN/N ∑ν φ(pν)ψ(p1 ···p̂ν ···pN) —– (6)

where the hat symbol indicates omission of the argument. This defines a+(φ) as a linear operator on the finite-particle space D.

The adjoint operator a(φ) := (a+(φ))* is called an annihilation operator; it assigns to each ψ ∈ HN, N ≥ 1, the wave function a(φ)ψ ∈ HN−1 defined by

(a(φ)ψ)(p1 ···pN−1) := CN ∫Hm+ φ̄(p) ψ(p1 ···pN−1, p) dχ(p)

together with a(φ)Ω := 0, this suffices to specify a(φ) on D. Annihilation operators can also be defined for sharp momenta. Namely, one can assign to each p ∈ Hm+ an annihilation operator a(p) which maps each ψ ∈ HN, N ≥ 1, to the wave function a(p)ψ ∈ HN−1 given by

(a(p)ψ)(p1 ···pN−1) := Cψ(p, p1 ···pN−1), ψ ∈ HN,

and assigning 0 ∈ H to Ω. a(p) is, like a(φ), well defined on the finite-particle space D as an operator, but its hermitian adjoint is ill-defined as an operator, since the symmetric tensor product of a wave function with a delta function is not a wave function.

Given any single-particle wave functions ψ, φ ∈ H1, the commutators [a(ψ), a(φ)] and [a+(ψ), a+(φ)] vanish by construction. It is customary to choose the constants CN in such a fashion that creation and annihilation operators exhibit the commutation relation

[a(φ), a+(ψ)] = ⟨φ, ψ⟩ —– (7)

which requires CN = √N. With this choice, all creation and annihilation operators are unbounded, i.e., they are not continuous.
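
As a consistency check of the formulas above, here is a small numerical sketch (my own, not from the source). It discretizes the rapidity variable χ on a finite grid, implements (6) and the annihilation integral with CN = √N, and verifies the commutation relation (7) on a random symmetric two-particle wave function; the grid size and the test functions are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid = 40                                  # rapidity sample points (arbitrary)
chi = np.linspace(-3.0, 3.0, n_grid)
dchi = chi[1] - chi[0]                       # measure d(chi) on the mass shell

def inner(f, g):
    """<f, g> = integral of conj(f) g d(chi), antilinear in the first argument."""
    return np.sum(np.conj(f) * g) * dchi

def create(phi, psi):
    """(a+(phi) psi) of eq. (6) for a symmetric (N-1)-particle array psi, with C_N = sqrt(N)."""
    N = psi.ndim + 1
    out = np.zeros((n_grid,) * N, dtype=complex)
    for nu in range(N):
        shape = [1] * N
        shape[nu] = n_grid                   # phi occupies slot nu, psi the remaining slots
        out += phi.reshape(shape) * np.expand_dims(psi, axis=nu)
    return (np.sqrt(N) / N) * out

def annihilate(phi, psi):
    """(a(phi) psi): C_N * integral of conj(phi)(p) psi(p_1..p_{N-1}, p) d(chi), C_N = sqrt(N)."""
    N = psi.ndim
    return np.sqrt(N) * np.tensordot(psi, np.conj(phi), axes=([N - 1], [0])) * dchi

# arbitrary single-particle test functions and a random symmetric two-particle state
phi = np.exp(-(chi - 0.5) ** 2) + 0.3j * chi
eta = np.exp(-chi ** 2)
raw = rng.normal(size=(n_grid, n_grid)) + 1j * rng.normal(size=(n_grid, n_grid))
psi = raw + raw.T                            # symmetric, as required for two-particle wave functions

# check [a(phi), a+(eta)] psi = <phi, eta> psi, i.e. relation (7)
lhs = annihilate(phi, create(eta, psi)) - create(eta, annihilate(phi, psi))
rhs = inner(phi, eta) * psi
print(np.max(np.abs(lhs - rhs)))             # close to machine precision
```

The residual is at the level of rounding error, and repeating the algebra with any other constant than √N spoils the cancellation of the φ(pν)-terms in the commutator.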

When defining the hermitian scalar field as an operator valued distribution, it must be taken into account that an annihilation operator a(φ) depends on its argument φ in an antilinear fashion. The dependence is, however, R-linear, and one can define the scalar field as a C-linear distribution in two steps.

For each real-valued test function φ on R1+1, define

Φ(φ) := a(φˆ|Hm+) + a+(φˆ|Hm+)

then one can define for an arbitrary complex-valued φ

Φ(φ) := Φ(Re(φ)) + iΦ(Im(φ))

Referring to (4), Φ is called the hermitian scalar field of mass m.

One then finds the commutation relation

[Φ(q), Φ(q′)] = i∆(q − q′) —– (8)

with ∆ as defined in (5); this is to be read as an equation of distributions. The distribution ∆ vanishes outside the light cone, i.e., ∆(q) = 0 if q² < 0: the integrand in (5) is odd with respect to some p′ ∈ Hm+ if q is spacelike. Note that pq > 0 for all p ∈ Hm+ if q ∈ V+. The consequence of this is called microcausality: field operators located in spacelike separated regions commute (for the hermitian scalar field).
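
The parity argument can be made concrete with a little numerics. The sketch below (my own illustration, not from the source) samples the integrand of (5) on the mass shell p = m(cosh χ, sinh χ); for spacelike q one can write q0 = ρ sinh θ, q1 = ρ cosh θ with ρ = √(q1² − q0²), so that pq = −mρ sinh(χ − θ) and sin(pq) is odd about χ = θ, making any rapidity integral symmetric about θ vanish. The cutoff Λ and the sample vectors q are arbitrary, and the full integral only exists as a distribution, so the timelike value is merely generically nonzero here.

```python
import numpy as np

m = 1.0
Lambda = 4.0                                   # finite rapidity cutoff (illustration only)

def delta_cutoff(q0, q1, center, n=200001):
    """(1/pi) * integral of sin(p.q) d(chi) over [center - Lambda, center + Lambda] (Riemann sum)."""
    chi = np.linspace(center - Lambda, center + Lambda, n)
    integrand = np.sin(m * (q0 * np.cosh(chi) - q1 * np.sinh(chi)))
    return np.sum(integrand) * (chi[1] - chi[0]) / np.pi

# spacelike q (q0^2 - q1^2 < 0): integrand odd about chi = theta, so the symmetric sum cancels
q0, q1 = 0.3, 2.0
theta = np.arctanh(q0 / q1)                    # q0 = rho*sinh(theta), q1 = rho*cosh(theta)
print("spacelike:", delta_cutoff(q0, q1, center=theta))    # ~0 up to rounding

# timelike q = (t, 0): integrand sin(m*t*cosh(chi)) is even, no cancellation
print("timelike: ", delta_cutoff(2.0, 0.0, center=0.0))    # generically nonzero (cutoff-dependent)
```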

The Statistical Physics of Stock Markets. Thought of the Day 143.0


The externalist view argues that we can make sense of, and profit from, stock markets’ behavior, or at least a few crucial properties of it, by crunching numbers and looking for patterns and regularities in certain sets of data. The notion of data, hence, is a key element in such an understanding, and the quantitative side of the problem is prominent, even if this does not mean that qualitative analysis is ignored. The point here is that the outside view maintains that it provides a better understanding than the internalist view. To this end, it endorses a functional perspective on finance and stock markets in particular.

The basic idea of the externalist view is that there are general properties and behaviors of stock markets that can be detected and studied through a mathematical lens, and that they do not depend so much on contextual or domain-specific factors. The point at stake here is that financial systems can be studied and approached at different scales, and it is virtually impossible to produce all the equations describing, at a micro level, all the objects of the system and their relations. So, in response, this view focuses on those properties that allow us to get an understanding of the behavior of the system at a global level without having to produce a detailed conceptual and mathematical account of the inner ‘machinery’ of the system. Hence the two roads: the first is to embrace an emergentist view of stock markets, that is a specific metaphysical, ontological, and methodological thesis, while the second is to embrace a heuristic view, that is the idea that the choice to focus on those properties that are tractable by mathematical models is a pure problem-solving option.

A typical view of the externalist approach is the one provided, for instance, by statistical physics. In describing collective behavior, this discipline neglects all the conceptual and mathematical intricacies deriving from a detailed account of the inner, individual, micro-level functioning of a system. Concepts such as stochastic dynamics, self-similarity, correlations (both short- and long-range), and scaling are tools to this end. Econophysics is a stock example in this sense: it employs methods taken from mathematics and mathematical physics in order to detect and forecast the driving forces of stock markets and their critical events, such as bubbles, crashes and their tipping points. In this respect, markets are not ‘dark boxes’: you can see their characteristics from the outside, or better, you can see specific dynamics that shape the trends of stock markets deeply and for a long time. Moreover, these dynamics are complex in the technical sense. This means that this class of behavior is such as to encompass timescales, ontology, types of agents, ecologies, regulations, laws, etc., and can be detected, even if not strictly predictable. We can focus on the stock markets as a whole, or on a few of their critical events, looking at the data of prices (or other indexes) and ignoring all the other details and factors, since they will be absorbed into these global dynamics. So this view provides a look at stock markets such that not only do they not appear as an unintelligible casino where wild gamblers face each other, but it shows the reasons and the properties of a system that serves mostly as a means of fluid transactions, enabling and easing the functioning of free markets.

Moreover, the study of complex systems theory and that of stock markets seem to offer mutual benefits. On one side, complex systems theory seems to offer a key to understanding and breaking through some of the most salient properties of stock markets. On the other side, stock markets seem to provide a ‘stress test’ for complexity theory. Didier Sornette expresses how the analogies between stock markets and phase transitions, statistical mechanics, nonlinear dynamics, and disordered systems mold the view from outside:

Take our personal life. We are not really interested in knowing in advance at what time we will go to a given store or drive to a highway. We are much more interested in forecasting the major bifurcations ahead of us, involving the few important things, like health, love, and work, that count for our happiness. Similarly, predicting the detailed evolution of complex systems has no real value, and the fact that we are taught that it is out of reach from a fundamental point of view does not exclude the more interesting possibility of predicting phases of evolutions of complex systems that really count, like the extreme events. It turns out that most complex systems in natural and social sciences do exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their posterior evolution. Such extreme events express more than anything else the underlying “forces” usually hidden by almost perfect balance and thus provide the potential for a better scientific understanding of complex systems.

Phase transitions, critical points, and extreme events seem to be so pervasive in stock markets that they are the crucial concepts to explain and, where possible, foresee. And complexity theory provides us with a fruitful reading key to understand their dynamics, namely their generation, growth and occurrence. Such a reading key proposes a clear-cut interpretation of them, which can be explained again by means of an analogy with physics, precisely with the unstable position of an object. Complexity theory suggests that critical or extreme events occurring at large scale are the outcome of interactions occurring at smaller scales. In the case of stock markets, this means that, unlike many approaches that attempt to account for crashes by searching for ‘mechanisms’ that work at very short time scales, complexity theory indicates that crashes have causes that date back months or years before they occur. This reading suggests that it is the increasing, inner interaction between the agents inside the markets that builds up the unstable dynamics (typically the financial bubbles) that eventually end in a critical event, the crash. But here the specific, final step that triggers the critical event, the collapse of prices, is not the key to its understanding: a crash occurs because the markets are in an unstable phase and any small interference or event may trigger it. The bottom line: the trigger can be virtually any event external to the markets. The real cause of the crash is the overall unstable position; the proximate ‘cause’ is secondary and accidental. Or, in other words, a crash may be fundamentally endogenous in nature, whilst an exogenous, external shock is simply its occasional triggering factor. The instability is built up by a cooperative behavior among traders, who imitate each other (in this sense it is an endogenous process) and contribute to form and reinforce trends that converge up to a critical point.

The main advantage of this approach is that the system (the market) would anticipate the crash by releasing precursory fingerprints observable in the stock market prices: the market prices contain information on impending crashes and this implies that:

if the traders were to learn how to decipher and use this information, they would act on it and on the knowledge that others act on it; nevertheless, the crashes would still probably happen. Our results suggest a weaker form of the “weak efficient market hypothesis”, according to which the market prices contain, in addition to the information generally available to all, subtle information formed by the global market that most or all individual traders have not yet learned to decipher and use. Instead of the usual interpretation of the efficient market hypothesis in which traders extract and consciously incorporate (by their action) all information contained in the market prices, we propose that the market as a whole can exhibit “emergent” behavior not shared by any of its constituents.

In a nutshell, the critical events emerge in a self-organized and cooperative fashion as the macro result of the internal and micro interactions of the traders, their imitation and mirroring.

 

Momentum Space Topology Generates Massive Fermions. Thought of the Day 142.0


Topological quantum phase transitions: The vacua at b0 ≠ 0 and b > M have Fermi surfaces. At b² > b0² + M², these Fermi surfaces have nonzero global topological charges N3 = +1 and N3 = −1. At the quantum phase transition occurring on the line b0 = 0, b > M (thick horizontal line) the Fermi surfaces shrink to the Fermi points with nonzero N3. At M² < b² < b0² + M² the global topology of the Fermi surfaces is trivial, N3 = 0. At the quantum phase transition occurring on the line b = M (thick vertical line), the Fermi surfaces shrink to points; and since their global topology is trivial, the zeroes disappear at b < M, where the vacuum is fully gapped. The quantum phase transition between the Fermi surfaces with and without topological charge N3 occurs at b² = b0² + M² (dashed line). At this transition, the Fermi surfaces touch each other, and their topological charges annihilate each other.

What we have assumed here is that the Fermi point in the Standard Model above the electroweak energy scale is marginal, i.e. its total topological charge is N3 = 0. Since the topology does not protect such a point, everything depends on symmetry, which is more subtle. In principle, one may expect that the vacuum is always fully gapped. This is supported by the Monte-Carlo simulations which suggest that in the Standard Model there is no second-order phase transition at finite temperature, instead one has either the first-order electroweak transition or crossover depending on the ratio of masses of the Higgs and gauge bosons. This would actually mean that the fermions are always massive.

Such a scenario does not contradict the momentum-space topology, but only if the total topological charge N3 is zero. However, from the point of view of momentum-space topology there is another scheme for the description of the Standard Model. Let us assume that the Standard Model follows from the GUT with SO(10) group. Here, the 16 Standard Model fermions form, at high energy, the 16-plet of the SO(10) group. All the particles of this multiplet are left-handed fermions. These are: four left-handed SU(2) doublets (neutrino-electron and 3 doublets of quarks) + eight left SU(2) singlets of anti-particles (antineutrino, positron and 6 anti-quarks). The total topological charge of the Fermi point at p = 0 is N3 = −16, and thus such a vacuum is topologically stable and is protected against the mass of fermions. This topological protection works even if the SU(2) × U(1) symmetry is violated perturbatively, say, due to the mixing of different species of the 16-plet. Mixing of the left leptonic doublet with the left singlets (antineutrino and positron) violates SU(2) × U(1) symmetry, but this does not lead to annihilation of the Fermi points and mass formation, since the topological charge N3 is conserved.

What this means in a nutshell is that if the total topological charge of the Fermi surfaces is non-zero, the gap cannot appear perturbatively. It can only arise due to the crucial reconstruction of the fermionic spectrum with effective doubling of fermions. In the same manner, in the SO(10) GUT model the mass generation can only occur non-perturbatively. The mixing of the left and right fermions requires the introduction of the right fermions, and thus the effective doubling of the number of fermions. The corresponding Gor’kov’s Green’s function in this case will be the (16 × 2) × (16 × 2) matrix. The nullification of the topological charge N3 = −16 occurs exactly in the same manner, as in superconductors. In the extended (Gor’kov) Green’s function formalism appropriate below the transition, the topological charge of the original Fermi point is annihilated by the opposite charge N3 = +16 of the Fermi point of “holes” (right-handed particles).

This demonstrates that the mechanism of generation of mass of fermions essentially depends on the momentum space topology. If the Standard Model originates from the SO(10) group, the vacuum belongs to the universality class with the topologically non-trivial chiral Fermi point (i.e. with N3 ≠ 0), and the smooth crossover to the fully-gapped vacuum is impossible. On the other hand, if the Standard Model originates from the left-right symmetric Pati–Salam group such as SU(2)L × SU(2)R × SU(4), and its vacuum has the topologically trivial (marginal) Fermi point with N3 = 0, the smooth crossover to the fully-gapped vacuum is possible.

Black Hole Analogue: Extreme Blue Shift Disturbance. Thought of the Day 141.0

One major contribution of the theoretical study of black hole analogues has been to help clarify the derivation of the Hawking effect, which leads to a study of Hawking radiation in a more general context, one that involves, among other features, two horizons. There is an apparent contradiction in Hawking’s semiclassical derivation of black hole evaporation, in that the radiated fields undergo arbitrarily large blue-shifting in the calculation, thus acquiring arbitrarily large masses, which contravenes the underlying assumption that the gravitational effects of the quantum fields may be ignored. This is known as the trans-Planckian problem. A similar issue arises in condensed matter analogues such as the sonic black hole.


Sonic horizons in a moving fluid, in which the speed of sound is 1. The velocity profile of the fluid, v(z), attains the value −1 at two values of z; these are horizons for sound waves that are right-moving with respect to the fluid. At the right-hand horizon right-moving waves are trapped, with waves just to the left of the horizon being swept into the supersonic flow region v < −1; no sound can emerge from this region through the horizon, so it is reminiscent of a black hole. At the left-hand horizon right-moving waves become frozen and cannot enter the supersonic flow region; this is reminiscent of a white hole.

Consider the sonic horizons in a one-dimensional fluid flow with the velocity profile depicted in the figure above: two horizons are formed for sound waves that propagate to the right with respect to the fluid. The horizon on the right of the supersonic flow region v < −1 behaves like a black hole horizon for right-moving waves, while the horizon on the left of the supersonic flow region behaves like a white hole horizon for these waves. In such a system, the equation for a small perturbation φ of the velocity potential is

(∂t + ∂zv)(∂t + v∂z)φ − ∂z²φ = 0 —– (1)

In terms of a new coordinate τ defined by

dτ := dt + v/(1 − v²) dz

(1) is the equation □φ = 0 of a scalar field in the black-hole-type metric

ds² = (1 − v²)dτ² − dz²/(1 − v²)

Each horizon will produce a thermal spectrum of phonons with a temperature determined by the quantity that corresponds to the surface gravity at the horizon, namely the absolute value of the slope of the velocity profile:

kBT = ħα/2π, α := |dv/dz|v=-1 —– (2)
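
To get a feel for the scale set by (2), here is a back-of-the-envelope evaluation (my own sketch; the velocity gradient α below is an arbitrary illustrative value, not one given in the text).

```python
import numpy as np

hbar = 1.054571817e-34      # reduced Planck constant, J s
k_B = 1.380649e-23          # Boltzmann constant, J/K

alpha = 1.0e6               # |dv/dz| at the horizon, in s^-1 (assumed value)
T = hbar * alpha / (2 * np.pi * k_B)
print(f"sonic-horizon temperature: {T:.2e} K")   # ~1.2e-6 K for this gradient
```

For gradients of this order the analogue Hawking temperature sits in the microkelvin range, which hints at why the effect is so hard to observe in the laboratory.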


Hawking phonons in the fluid flow: Real phonons have positive frequency in the fluid-element frame, and for right-moving phonons this frequency (ω − vk) is ω/(1 + v) = k. Thus in the subsonic-flow regions ω (conserved for each ray) is positive, whereas in the supersonic-flow region it is negative; k is positive for all real phonons. The frequency in the fluid-element frame diverges at the horizons – the trans-Planckian problem.

The trajectories of the created phonons are formally deduced from the dispersion relation of the sound equation (1). Geometrical acoustics applied to (1) gives the dispersion relation

ω − vk = ±k —– (3)

and the Hamiltonians

dz/dt = ∂ω/∂k = v ± 1 —– (4)

dk/dt = -∂ω/∂z = − v′k —– (5)

The left-hand side of (3) is the frequency in the frame co-moving with a fluid element, whereas ω is the frequency in the laboratory frame; the latter is constant for a time-independent fluid flow (“time-independent Hamiltonian” dω/dt = ∂ω/∂t = 0). Since the Hawking radiation is right-moving with respect to the fluid, we clearly must choose the positive sign in (3) and hence in (4) also. By approximating v(z) as a linear function near the horizons we obtain from (4) and (5) the ray trajectories. The disturbing feature of the rays is the behavior of the wave vector k: at the horizons the radiation is exponentially blue-shifted, leading to a diverging frequency in the fluid-element frame. These runaway frequencies are unphysical since (1) asserts that sound in a fluid element obeys the ordinary wave equation at all wavelengths, in contradiction with the atomic nature of fluids. Moreover the conclusion that this Hawking radiation is actually present in the fluid also assumes that (1) holds at all wavelengths, as exponential blue-shifting of wave packets at the horizon is a feature of the derivation. Similarly, in the black-hole case the equation does not hold at arbitrarily high frequencies because it ignores the gravity of the fields. For the black hole, a complete resolution of this difficulty will require inputs from the gravitational physics of quantum fields, i.e. quantum gravity, but for the dumb hole the physics is available for a more realistic treatment.

 

Adjacency of the Possible: Teleology of Autocatalysis. Thought of the Day 140.0


Given a network of catalyzed chemical reactions, a (sub)set R of such reactions is called:

  1. Reflexively autocatalytic (RA) if every reaction in R is catalyzed by at least one molecule involved in any of the reactions in R;
  2. F-generated (F) if every reactant in R can be constructed from a small “food set” F by successive applications of reactions from R;
  3. Reflexively autocatalytic and F-generated (RAF) if it is both RA and F.

The food set F contains molecules that are assumed to be freely available in the environment. Thus, an RAF set formally captures the notion of “catalytic closure”, i.e., a self-sustaining set supported by a steady supply of (simple) molecules from some food set….
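
The RAF property is also easy to check algorithmically. Below is a compact sketch (my own toy example; the molecule and reaction names are invented) of the usual reduction procedure: alternately compute the closure of the food set under the current reactions and discard reactions that are not F-generated or not catalysed from within that closure, until nothing changes.

```python
def closure(food, reactions):
    """All molecules constructible from the food set by the given reactions."""
    produced = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= produced and not set(products) <= produced:
                produced |= set(products)
                changed = True
    return produced

def max_raf(food, reactions):
    """Iteratively remove non-RA or non-F-generated reactions; return the surviving set."""
    current = list(reactions)
    while True:
        cl = closure(food, current)
        kept = [r for r in current
                if set(r[0]) <= cl                      # reactants reachable from F
                and any(c in cl for c in r[2])]         # catalysed from within the closure
        if len(kept) == len(current):
            return kept
        current = kept

# toy network: each reaction is (reactants, products, catalysts)
food = {"a", "b"}
reactions = [
    (("a", "b"), ("ab",), ("abb",)),    # catalysed by a molecule the set itself makes
    (("ab", "b"), ("abb",), ("ab",)),
    (("x", "a"), ("xa",), ("ab",)),     # reactant x is not reachable from the food set
]
print(max_raf(food, reactions))
```

In the toy network the third reaction is discarded because its reactant x cannot be generated from the food set, while the first two reactions catalyse each other and survive together as an RAF, which is exactly the mutual “catalytic closure” described above.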

Stuart Kauffman begins with the Darwinian idea of the origin of life in a biological ‘primordial soup’ of organic chemicals and investigates the possibility of one chemical substance catalyzing the reaction of two others, forming new reagents in the soup. Such catalyses may, of course, form chains, so that one reagent catalyzes the formation of another catalyzing another, etc., and self-sustaining loops of reaction chains are an evident possibility in the appropriate chemical environment. A statistical analysis would reveal that such catalytic reactions may form interdependent networks when the rate of catalyzed reactions per molecule approaches one, creating a self-organizing chemical cycle which he calls an ‘autocatalytic set’. When the rate of catalyses per reagent is low, only small local reaction chains form, but as the rate approaches one, the reaction chains in the soup suddenly ‘freeze’ so that what was a group of chains or islands in the soup now connects into one large interdependent network, constituting an ‘autocatalytic set’. Such an interdependent reaction network constitutes the core of the body definition unfolding in Kauffman, and its cyclic character forms the basic precondition for self-sustainment. An ‘autonomous agent’ is an autocatalytic set able to reproduce and to undertake at least one thermodynamic work cycle.

This definition implies two things: reproduction possibility, and the appearance of completely new, interdependent goals in work cycles. The latter idea requires the ability of the autocatalytic set to save energy in order to spend it in its own self-organization, in its search for reagents necessary to uphold the network. These goals evidently introduce a – restricted, to be sure – teleology defined simply by the survival of the autocatalytic set itself: actions supporting this have a local teleological character. Thus, the autocatalytic set may, as it evolves, enlarge its cyclic network by recruiting new subcycles supporting and enhancing it in a developing structure of subcycles and sub-sub-cycles. 

Kauffman proposes that the concept of ‘autonomous agent’ implies a whole new cluster of interdependent concepts. Thus, the autonomy of the agent is defined by ‘catalytic closure’ (any reaction in the network demanding catalysis will get it) which is a genuine Gestalt property in the molecular system as a whole – and thus not in any way derivable from the chemistry of single chemical reactions alone.

Kauffman’s definitions on the basis of speculative chemistry thus entail not only the Kantian cyclic structure, but also the primitive perception and action phases of Uexküll’s functional circle. Thus, Kauffman’s definition of the organism in terms of an ‘autonomous agent’ basically builds on an Uexküllian intuition, namely the idea that the most basic property in a body is metabolism: the constrained, organizing processing of high-energy chemical material and the correlated perception and action performed to localize and utilize it – all of this constituting a metabolic cycle coordinating the organism’s in- and outside, defining teleological action. Perception and action phases are so to speak the extension of the cyclical structure of the closed catalytical set to encompass parts of its surroundings, so that the circle of metabolism may only be completed by means of successful perception and action parts.

The evolution of autonomous agents is taken as the empirical basis for the hypothesis of a general thermodynamic regularity based on non-ergodicity: the Big Bang universe (and, consequently, the biosphere) is not at equilibrium and will not reach equilibrium during the lifetime of the universe. This gives rise to Kauffman’s idea of the ‘adjacent possible’. At a given point in evolution, one can define the set of chemical substances which do not exist in the universe but which are at a distance of only one chemical reaction from a substance already existing in the universe. Biological evolution has, evidently, led to an enormous growth in the types of organic macromolecules, and new such substances come into being every day. Maybe there is a sort of chemical potential leading from the actually realized substances into the adjacent possible which is in some sense driving the evolution? In any case, Kauffman advances the hypothesis that the biosphere as such is supercritical in the sense that there is, in general, more than one reaction catalyzed by each reagent. Cells, in order not to be destroyed by this chemical storm, must be internally subcritical (even if close to the critical boundary). But if the biosphere as such is, in fact, supercritical, then this distinction seemingly a priori necessitates the existence of a boundary of the agent, protecting it against the environment.

BASEL III: The Deflationary Symbiotic Alliance Between Governments and Banking Sector. Thought of the Day 139.0


The Bank for International Settlements (BIS) is steering the banks to deal with government debt, since the governments have been running large deficits to deal with the catastrophe of the BASEL 2-inspired collapse of mortgage-backed securities. The deficits range anywhere between 3 and 7 per cent of GDP, and in some cases even higher. These deficits were being used to create a floor under growth by stimulating the economy and bailing out financial institutions that got carried away by the wholesale funding of real estate. And this is precisely what BASEL 2 promulgated, i.e. encouraging financial institutions to hold mortgage-backed securities as investments.

In come the BASEL 3 rules, which require that banks be in compliance with these regulations. But who gets to decide these regulations? Actually, banks do, since they then come on board for discussions with the governments, and such negotiations are geared towards bailing banks out with government deficits in order to oil the engine of economic growth. The logic here underlines the fact that governments can continue to find a godown of sorts for their deficits, while the banks can buy government debt without any capital commitment and make a good spread without the risk, thus serving the interests of both parties mutually. Moreover, for the government, the process is political, as no government would find it acceptable to stand by and let a bubble deflate, because any process of deleveraging would cause the banks to rein in their lending orgy, which is detrimental to the engineered economic growth. Importantly, without these deficits, the financial system could go down the deflationary spiral, which might turn out to be difficult to recover from if there isn’t any complicity in rhyme and reason accorded to this particular dysfunctional and symbiotic relationship. So, what’s the implication of all this? The more government debt banks hold, the less overall capital they need. And who says so? BASEL 3.

But the mesh just seems to be building up here. In the same way that banks engineered counterfeit AAA-backed securities that were in fact an improbable financial hoax, how can countries with a government debt/GDP ratio to the tune of 90 – 120 per cent get a Standard & Poor’s rating of double-A? They have these ratings because they belong to an apical club that gives its members exclusive rights to a high rating even if they are irresponsible with their issuing of debt. Well, is it that simple? Yes and no. Yes, as above; and the no merely clothes itself in a bit of economic jargon, in that these are the countries whose government debt can be held without any capital against it. In other words, if a debt cannot be held, it cannot be issued, and that is the reason why countries are striving to issue debt that carries a zero risk weighting.

Let us take snippets across gradations of BASEL 1, 2 and 3. In BASEL 1, the unintended consequence was that banks were all buying equity in cross-owned companies. When the unwinding happened, equity just fell apart, since any beginning of a financial crisis is tailored to smash bank equities first. That’s the first wound to rationality. In BASEL 2, banks were told to hold as much AAA-rated paper as they wanted with no capital against it. What happens if these ratings are downgraded? It would trigger a tsunami cutting through pension and insurance schemes to begin with, forcing them to sell their paper and pile up huge losses meant to be absorbed by capital, which doesn’t exist against these papers. So whatever gets sold is politically cushioned and buffered by the governments, for the risks cannot be allowed to get any denser, as that explosion would sound the catastrophic death knell for the economy. BASEL 3 doesn’t really help either, even though it mandates holding a concentrated portfolio of government debt without any capital against it, for the absorption of losses in case a crisis hits would have to be exhumed through government bail-outs in scenarios where government debt is already at a century plus. So, do the banks gain stability, or are they given to more instability via BASEL 3? The incentive to hold ever more government securities increases bank exposure to sovereign bonds, adding to the existing exposure to government securities via repurchase transactions, investments and trading inventories. A ratings downgrade results in a fall in the value of bonds, triggering losses. Banks would then face calls for additional collateral, which would drain liquidity, and which would then require additional capital by way of compensation. Where would this capital come from, if not from the governments? One way out would be recapitalization through government debt. On the other hand, the markets are required to hedge against the large holdings of government securities, and so shorted stocks, currencies and insurance companies are all made to stare volatility in the face as it rips through them, the net result of which is falling liquidity. So, this vicious cycle would continue to cycle its way through any downgrades. And that’s why the deflationary symbiotic alliance between the governments and the banking sector isn’t anything more than high-fatigue tolerance….