Many deliberations of stochasticity start with “let (Ω, F, P) be a probability space”. One can actually follow such discussions without having the slightest idea what Ω is and who lives inside. So, what is “Ω, F, P” and why do we need it? Indeed, for many users of probability and statistics, a random variable X is synonymous with its probability distribution μ_{X} and all computations such as sums, expectations, etc., done on random variables amount to analytical operations such as integrations, Fourier transforms, convolutions, etc., done on their distributions. For defining such operations, you do not need a probability space. Isn’t this all there is to it?

One can in fact compute quite a lot of things without using probability spaces in an essential way. However, the notions of probability space and random variable are central to modern probability theory, so it is important to understand why and when these concepts are relevant.

From a modelling perspective, the starting point is a set of observations taking values in some set E (think for instance of numerical measurements, E = R) for which we would like to build a stochastic model. We would like to represent such observations x_{1}, . . . , x_{n} as samples drawn from a random variable X defined on some probability space (Ω, F, P). It is important to see that the only natural ingredient here is the set E where the random variables will take their values: the set of events Ω is not given *a priori*, and there are many different ways to construct a probability space (Ω, F, P) for modelling the same set of observations.

Sometimes it is natural to identify Ω with E, i.e., to identify the randomness ω with its observed effect. For example, if we consider the outcome of a die-rolling experiment as an integer-valued random variable X, we can define the set of events to be precisely the set of possible outcomes: Ω = {1, 2, 3, 4, 5, 6}. In this case X(ω) = ω: the outcome of the randomness is identified with the randomness itself. This choice of Ω is called the canonical space for the random variable X: X is simply the identity map, and the probability measure P is formally the same as the distribution of X. Note that here X is a one-to-one map: given the outcome of X one knows which scenario has happened, so any other random variable Y is completely determined by the observation of X. Therefore, using the canonical construction for the random variable X, we cannot define, on the same probability space, another random variable which is independent of X: X will be the sole source of randomness for all other variables in the model. This also shows that, although the canonical construction is the simplest way to construct a probability space for representing a given random variable, it forces us to identify this particular random variable with the “source of randomness” in the model. Therefore, when we want to deal with models with a sufficiently rich structure, we need to distinguish Ω – the set of scenarios of randomness – from E, the set of values of our random variables.
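As a concrete illustration of the canonical construction, here is a minimal Python sketch (all names are ours, not from the text) of the die-rolling example: Ω is identified with the set of outcomes, X is the identity map, and P coincides with the distribution μ_X.

```python
from fractions import Fraction

# Canonical space for a die roll: Omega is identified with the set of
# outcomes E = {1,...,6}, X is the identity, and P coincides with mu_X.
Omega = [1, 2, 3, 4, 5, 6]
P = {w: Fraction(1, 6) for w in Omega}   # uniform probability measure

def X(w):
    """Canonical random variable: the identity map on Omega."""
    return w

# The distribution mu_X of X, computed by pushing P forward through X,
# is formally the same as P itself under this construction.
mu_X = {x: sum(p for w, p in P.items() if X(w) == x) for x in set(map(X, Omega))}
assert mu_X == P
assert sum(P.values()) == 1
```

Under this construction there is no room left for a second, independent source of randomness: every function of ω is a function of X(ω).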

Let us give an example where it is natural to distinguish the source of randomness from the random variable itself. For instance, if one is modelling the market value of a stock at some date T in the future as a random variable S_{1}, one may consider that the stock value is affected by many factors such as external news, market supply and demand, economic indicators, etc., summed up in some abstract variable ω, which may not even have a numerical representation: it corresponds to a scenario for the future evolution of the market. S_{1}(ω) is then the stock value if the market scenario which occurs is given by ω. If the only interesting quantity in the model is the stock price then one can always label the scenario ω by the value of the stock price S_{1}(ω), which amounts to identifying all scenarios where the stock S_{1} takes the same value and using the canonical construction. However if one considers a richer model where there are now other stocks S_{2}, S_{3}, . . . involved, it is more natural to distinguish the scenario ω from the random variables S_{1}(ω), S_{2}(ω),… whose values are observed in these scenarios but may not completely pin them down: knowing S_{1}(ω), S_{2}(ω),… one does not necessarily know which scenario has happened. In this way one reserves the possibility of adding more random variables later on without changing the probability space.

These observations have an important consequence: the probabilistic description of a random variable X can be reduced to the knowledge of its distribution μ_{X} only in the case where the random variable X is the only source of randomness. In this case, a stochastic model can be built using a canonical construction for X. In all other cases – as soon as we are concerned with a second random variable which is not a deterministic function of X – the underlying probability measure P contains more information on X than just its distribution. In particular, it contains all the information about the dependence of the random variable X with respect to all other random variables in the model: specifying P means specifying the joint distributions of all random variables constructed on Ω. For instance, knowing the distributions μ_{X}, μ_{Y} of two variables X, Y does not allow one to compute their covariance or joint moments. Only in the case where all random variables involved are mutually independent can one reduce all computations to operations on their distributions. This is the case covered in most introductory texts on probability, which explains why one can go quite far, for example in the study of random walks, without formalizing the notion of probability space.
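To make the point about marginals concrete, the following Python sketch (an assumed illustration) exhibits two joint laws on {0, 1}² with identical marginals μ_X, μ_Y but different covariances, so the marginals alone cannot determine Cov(X, Y):

```python
import itertools

# Joint law 1: X and Y independent, each uniform on {0, 1}.
joint_indep = {(x, y): 0.25 for x, y in itertools.product((0, 1), repeat=2)}
# Joint law 2: X = Y almost surely, with the same uniform marginals.
joint_equal = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

def marginal(joint, axis):
    """Push-forward of the joint law onto one coordinate."""
    m = {}
    for xy, p in joint.items():
        m[xy[axis]] = m.get(xy[axis], 0.0) + p
    return m

def cov(joint):
    """Cov(X, Y) = E[XY] - E[X] E[Y], computed from the joint law."""
    ex = sum(x * p for (x, _), p in joint.items())
    ey = sum(y * p for (_, y), p in joint.items())
    exy = sum(x * y * p for (x, y), p in joint.items())
    return exy - ex * ey

assert marginal(joint_indep, 0) == marginal(joint_equal, 0)  # same mu_X
assert marginal(joint_indep, 1) == marginal(joint_equal, 1)  # same mu_Y
assert cov(joint_indep) == 0.0 and cov(joint_equal) == 0.25  # different Cov(X, Y)
```

The two models are indistinguishable at the level of μ_X and μ_Y; only the measure P on the joint space tells them apart.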

Even though the formalism of homological mirror symmetry is well established, what of other formulations of mirror symmetry which lie closer to classical differential and algebraic geometry? One way to tackle this is the proposal of Strominger, Yau and Zaslow, SYZ mirror symmetry for short.

The central physical ingredient in this proposal is T-duality. To explain this, let us consider a superconformal sigma model with target space (M, g), and denote it (regarded as a geometric functor, or as a set of correlation functions) by

CFT(M, g)

In physics, a duality is an equivalence

CFT(M, g) ≅ CFT(M′, g′)

which holds despite the fact that the underlying geometries (M,g) and (M′, g′) are not classically diffeomorphic.

T-duality is a duality which relates two CFTs with toroidal target space, M ≅ M′ ≅ T^{d}, but different metrics. In rough terms, the duality relates a “small” target space, with noncontractible cycles of length L < l_{s}, to a “large” target space in which all such cycles have length L > l_{s}.

This sort of relation is generic to dualities and follows from the following logic. If all length scales (lengths of cycles, curvature lengths, etc.) are greater than l_{s}, string theory reduces to conventional geometry. Now, in conventional geometry, we know what it means for (M, g) and (M′, g′) to be non-isomorphic. Any modification to this notion must be associated with a breakdown of conventional geometry, which requires some length scale to be “sub-stringy,” with L < l_{s}. To state T-duality precisely, let us first consider M = M′ = S^{1}. We parameterise this with a coordinate X ∈ R, making the identification X ∼ X + 2π. Consider a Euclidean metric g_{R} given by ds^{2} = R^{2}dX^{2}. The real parameter R is usually called the “radius,” from the obvious embedding in R^{2}. This manifold is Ricci-flat, so the sigma model with this target space is a conformal field theory, the “c = 1 boson.” Let us furthermore set the string scale l_{s} = 1. With this convention, we obtain the complete physical equivalence

CFT(S^{1}, g_{R}) ≅ CFT(S^{1}, g_{1/R})

Thus these two target spaces are indistinguishable from the point of view of string theory.

Just to give a physical picture for what this means, suppose for the sake of discussion that superstring theory describes our universe, and thus that in some sense there must be six extra spatial dimensions. Suppose further that we had evidence that the extra dimensions factorized topologically and metrically as K_{5} × S^{1}; then it would make sense to ask: What is the radius R of this S^{1} in our universe? In principle this could be measured by producing sufficiently energetic particles (so-called “Kaluza-Klein modes”), or perhaps by measuring deviations from Newton’s inverse square law of gravity at distances L ∼ R. In string theory, T-duality implies that R ≥ l_{s}, because any theory with R < l_{s} is equivalent to another theory with R > l_{s}. Thus we have a nontrivial relation between two (in principle) observable quantities, R and l_{s}, which one might imagine testing experimentally.

Let us now consider the theory CFT(T^{d}, g), where T^{d} is the d-dimensional torus, with coordinates X^{i} parameterising R^{d}/2πZ^{d}, and a constant metric tensor g_{ij}. Then there is a complete physical equivalence

CFT(T^{d}, g) ≅ CFT(T^{d}, g^{−1})

In fact this is just one element of a discrete group of T-duality symmetries, generated by T-dualities along one-cycles, and large diffeomorphisms (those not continuously connected to the identity). The complete group is isomorphic to SO(d, d; Z).

While very different from conventional geometry, T-duality has a simple intuitive explanation. This starts with the observation that the possible embeddings of a string into X can be classified by the fundamental group π_{1}(X). Strings representing non-trivial homotopy classes are usually referred to as “winding states.” Furthermore, since strings interact by interconnecting at points, the group structure on π_{1} provided by concatenation of based loops is meaningful and is respected by interactions in the string theory. Now π_{1}(T^{d}) ≅ Z^{d}, as an abelian group, referred to as the group of “winding numbers”.

Of course, there is another Z^{d} we could bring into the discussion, the Pontryagin dual of the U(1)^{d} of which T^{d} is an affinization. An element of this group is referred to physically as a “momentum,” as it is the eigenvalue of a translation operator on T^{d}. Again, this group structure is respected by the interactions. These two group structures, momentum and winding, can be summarized in the statement that the full closed string algebra contains the group algebra C[Z^{d}] ⊕ C[Z^{d}].

In essence, the point of T-duality is that if we quantize the string on a sufficiently small target space, the roles of momentum and winding will be interchanged. But the main point can be seen by bringing in some elementary spectral geometry. Besides the algebra structure, another invariant of a conformal field theory is the spectrum of its Hamiltonian H (technically, the Virasoro operator L_{0} + L̄_{0}). This Hamiltonian can be thought of as an analog of the standard Laplacian ∆_{g} on functions on X, and its spectrum on T^{d} with metric g is

Spec ∆_{g} = {∑_{i,j=1}^{d} g^{ij} p_{i} p_{j} ; p ∈ Z^{d}}

On the other hand, the energy of a winding string is (intuitively) a function of its length. On our torus, a geodesic with winding number w ∈ Z^{d} has length squared

L^{2} = ∑_{i,j=1}^{d} g_{ij}w^{i}w^{j}

Now, the only string theory input we need to bring in is that the total Hamiltonian contains both terms,

H = ∆_{g} + L^{2} + · · ·

where the extra terms · · · express the energy of excited (or “oscillator”) modes of the string. Then, the inversion g → g^{−1}, combined with the interchange p ↔ w, leaves the spectrum of H invariant. This is T-duality.
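This spectral statement can be checked numerically. The sketch below (the constant metric on T² is an arbitrary assumed example) compares the multiset of zero-mode energies H(p, w) = g^{ij} p_i p_j + g_{ij} w^i w^j, omitting oscillator modes, over a finite symmetric range of momenta and windings, for g and g^{−1}:

```python
import numpy as np
from itertools import product

def zero_mode_spectrum(g, n=2):
    """Sorted multiset of H(p, w) = p.g^{-1}.p + w.g.w over |p_i|, |w_i| <= n."""
    ginv = np.linalg.inv(g)
    d = g.shape[0]
    rng = range(-n, n + 1)
    vals = []
    for p in product(rng, repeat=d):
        for w in product(rng, repeat=d):
            p_, w_ = np.array(p, dtype=float), np.array(w, dtype=float)
            vals.append(p_ @ ginv @ p_ + w_ @ g @ w_)
    # round to absorb floating-point noise before comparing multisets
    return sorted(round(v, 6) for v in vals)

g = np.array([[2.0, 0.5], [0.5, 1.0]])   # an assumed constant metric on T^2

# g -> g^{-1} together with p <-> w preserves the zero-mode spectrum
assert zero_mode_spectrum(g) == zero_mode_spectrum(np.linalg.inv(g))
```

The interchange p ↔ w gives an explicit bijection between the two enumerations, which is why the multisets agree exactly.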

There is a simple generalization of the above to the case with a non-zero B-field on the torus satisfying dB = 0. In this case, since B is a constant antisymmetric tensor, we can label CFTs by the matrix g + B. Now, the basic T-duality relation becomes

CFT(T^{d}, g + B) ≅ CFT(T^{d}, (g + B)^{−1})

This T-duality can be applied fibrewise to a torus fibration whose base is large compared to l_{s} and has curvature lengths far greater than l_{s}. This is sometimes called the “adiabatic limit” in physics. While this is a very restrictive assumption, there are more heuristic physical arguments that T-duality should hold more generally, with corrections to the relations proportional to curvatures l_{s}^{2}R and derivatives l_{s}∂ of the fiber metric, both in perturbation theory and from world-sheet instantons.

A topological vector space V is both a topological space and a vector space such that the vector space operations are continuous. A topological vector space is locally convex if its topology admits a basis consisting of convex sets (a set A is convex if (1 − t)x + ty ∈ A for all x, y ∈ A and t ∈ [0, 1]).

We say that a locally convex topological vector space is a Fréchet space if its topology is induced by a translation-invariant metric d and the space is complete with respect to d, that is, all the Cauchy sequences are convergent.

A seminorm on a vector space V is a real-valued function p such that ∀ x, y ∈ V and scalars a we have:

(1) p(x + y) ≤ p(x) + p(y),

(2) p(ax) = |a|p(x),

(3) p(x) ≥ 0.

The difference between the norm and the seminorm comes from the last property: we do not ask that if x ≠ 0, then p(x) > 0, as we would do for a norm.

If {p_{i}}_{i∈N} is a countable family of seminorms on a topological vector space V which separates points, i.e. if x ≠ 0 then there is an i with p_{i}(x) ≠ 0, then there exists a translation-invariant metric d inducing the topology, defined in terms of the {p_{i}} by

d(x, y) = ∑_{i=1}^{∞} 1/2^{i} p_{i}(x – y)/(1 + p_{i}(x – y))
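As a small numerical sketch of this metric (the coordinate seminorms on R³ are an assumed example, and the infinite series is finite here because only three seminorms are used), one can compute d and check its translation invariance:

```python
def frechet_metric(x, y, seminorms):
    """d(x, y) = sum_i 2^{-i} p_i(x - y) / (1 + p_i(x - y)), i = 1, 2, ..."""
    diff = tuple(a - b for a, b in zip(x, y))
    return sum(
        (0.5 ** (i + 1)) * q(diff) / (1 + q(diff))
        for i, q in enumerate(seminorms)
    )

# On R^3, take the coordinate seminorms p_i(v) = |v_i|; they separate points.
seminorms = [lambda v, i=i: abs(v[i]) for i in range(3)]

x, y = (1.0, 2.0, 3.0), (0.0, 2.0, 4.0)
# diff = (1, 0, -1): d = (1/2)(1/2) + (1/4)(0) + (1/8)(1/2) = 0.3125
assert abs(frechet_metric(x, y, seminorms) - 0.3125) < 1e-12

# Translation invariance: d(x + t, y + t) = d(x, y).
t = (5.0, -1.0, 0.5)
xt = tuple(a + b for a, b in zip(x, t))
yt = tuple(a + b for a, b in zip(y, t))
assert abs(frechet_metric(xt, yt, seminorms) - frechet_metric(x, y, seminorms)) < 1e-12
```

The weights 2^{-i} make the series converge for any seminorm values, since each term is bounded by 2^{-i}.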

The following characterizes Fréchet spaces, giving an effective method to construct them using seminorms.

A topological vector space V is a Fréchet space iff it satisfies the following three properties:

- it is complete as a topological vector space;
- it is a Hausdorff space;
- its topology is induced by a countable family of seminorms {p_{i}}_{i∈N}, i.e., U ⊂ V is open iff for every u ∈ U there exist K ≥ 0 and ε > 0 such that {v | p_{k}(u – v) < ε ∀ k ≤ K} ⊂ U.

We say that a sequence (x_{n}) in V converges to x in the Fréchet space topology defined by a family of seminorms iff it converges to x with respect to each of the given seminorms. In other words, x_{n} → x iff p_{i}(x_{n} – x) → 0 for each i.

Two families of seminorms defined on the locally convex vector space V are said to be equivalent if they induce the same topology on V.

To construct a Fréchet space, one typically starts with a locally convex topological vector space V and defines a countable family of seminorms p_{k} on V inducing its topology and such that:

- if x ∈ V and p_{k}(x) = 0 ∀ k ≥ 0, then x = 0 (separation property);
- if (x_{n}) is a sequence in V which is Cauchy with respect to each seminorm, then there exists x ∈ V such that (x_{n}) converges to x with respect to each seminorm (completeness property).

The topology induced by these seminorms turns V into a Fréchet space; the separation property ensures that it is Hausdorff, while the completeness property guarantees that it is complete. A translation-invariant complete metric inducing the topology on V can then be defined as above.

The most important example of a Fréchet space is the vector space C^{∞}(U) of smooth functions on an open set U ⊆ R^{n}, or more generally the vector space C^{∞}(M), where M is a differentiable manifold.

For each open set U ⊆ R^{n} (or U ⊆ M), for each compact K ⊂ U and for each multi-index I, we define

||ƒ||_{K,I} := sup_{x∈K} |(∂^{|I|}/∂x^{I} (ƒ)) (x)|, ƒ ∈ C^{∞}(U)

Each ||.||_{K,I} defines a seminorm. The family of seminorms obtained by considering all of the multi-indices I and the (countably many) compact subsets K covering U satisfies the separation and completeness properties detailed above, hence makes C^{∞}(U) into a Fréchet space. The sets of the form

{ƒ ∈ C^{∞}(U) : ||ƒ – g||_{K,I} < ε}

with fixed g ∈ C^{∞}(U), K ⊆ U compact, and multi-index I are open sets and together with their finite intersections form a basis for the topology.
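These seminorms can be approximated numerically. The sketch below (grid-based derivatives, the compact set K = [0, 1], and the test function are all assumptions of the illustration) computes ||f||_{K,I} for f = sin on U = R:

```python
import numpy as np

def seminorm(f, K=(0.0, 1.0), order=0, n=10001):
    """Approximate ||f||_{K,I} = sup_{x in K} |f^{(order)}(x)| on a grid over K."""
    xs = np.linspace(K[0], K[1], n)
    ys = f(xs)
    for _ in range(order):
        ys = np.gradient(ys, xs)     # numerical derivative on the grid
    return float(np.max(np.abs(ys)))

f = np.sin
# sup |sin| on [0, 1] is attained at x = 1
assert abs(seminorm(f, order=0) - np.sin(1.0)) < 1e-6
# sup |cos| on [0, 1] is attained at x = 0, with value 1
assert abs(seminorm(f, order=1) - 1.0) < 1e-3
```

Convergence in all of these seminorms at once is exactly uniform convergence of all derivatives on all compact sets, which is the C^{∞} topology described above.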

All these constructions and results can be generalized to smooth manifolds. Let M be a smooth manifold and let U be an open subset of M. If K is a compact subset of U and D is a differential operator over U, then

p_{K,D}(ƒ) := sup_{x∈K} |D(ƒ)(x)|

is a seminorm. The family of all the seminorms p_{K,D}, with K and D varying among all compact subsets and differential operators respectively, is a separating family of seminorms endowing C_{M}^{∞}(U) with the structure of a complete locally convex vector space. Moreover there exists an equivalent countable family of seminorms, hence C_{M}^{∞}(U) is a Fréchet space. Indeed, let {V_{j}} be a countable open cover of U by open coordinate subsets, and let, for each j, {K_{j,i}} be a countable family of compact subsets of V_{j} such that ∪_{i} K_{j,i} = V_{j}. We have the countable family of seminorms

p_{K,I} := sup_{x∈K} |(∂^{|I|}/∂x^{I} (ƒ)) (x)|, K ∈ {K_{j,i}}

inducing the topology. C_{M}^{∞}(U) is also an algebra: the product of two smooth functions is again a smooth function.

A Fréchet space V is said to be a Fréchet algebra if its topology can be defined by a countable family of submultiplicative seminorms, i.e., a countable family {q_{i}}_{i∈N} of seminorms satisfying

q_{i}(ƒg) ≤ q_{i}(ƒ) q_{i}(g) ∀ i ∈ N
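For the order-zero sup-seminorms of the previous example, submultiplicativity holds on the nose, since |ƒg| ≤ sup|ƒ| · sup|g| pointwise. The quick numerical check below (an assumed illustration with f = sin and g = exp on [0, 1]; seminorms involving derivatives require rescaling because of Leibniz-rule cross terms) verifies this:

```python
import numpy as np

# Order-zero sup-seminorm on the compact set [0, 1], sampled on a grid.
xs = np.linspace(0.0, 1.0, 1001)
q = lambda ys: float(np.max(np.abs(ys)))

f, g = np.sin(xs), np.exp(xs)
# Submultiplicativity q(fg) <= q(f) q(g), up to floating-point tolerance.
assert q(f * g) <= q(f) * q(g) + 1e-12
```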

Let F be a sheaf of real vector spaces over a manifold M. F is a Fréchet sheaf if:

(1) for each open set U ⊆ M, F(U) is a Fréchet space;

(2) for each open set U ⊆ M and for each open cover {U_{i}} of U, the topology of F(U) is the initial topology with respect to the restriction maps F(U) → F(U_{i}), that is, the coarsest topology making the restriction morphisms continuous.

As a consequence, each restriction map F(U) → F(V) (V ⊆ U) is continuous. A morphism of sheaves ψ : F → F′ is said to be continuous if the map F(U) → F′(U) is continuous for each open subset U ⊆ M.

**What are fitted values in statistics?**

Fitted values are the values of an output variable predicted by a model fitted to a set of data. A statistical model is generally an equation, the graph of which includes or approximates a majority of the data points in a given data set. Fitted values are generated by extending the model of past known data points in order to predict unknown values. They are also called predicted values.

**What are outliers in statistics?**

These are observation points that are distant from other observations. They may arise due to variability in the measurement, may indicate experimental errors, or may be due to a heavy-tailed distribution.

**What is LBS (Locational Banking statistics)?**

The locational banking statistics gather quarterly data on international financial claims and liabilities of bank offices in the reporting countries. Total positions are broken down by currency, by sector (bank and non-bank), by country of residence of the counterparty, and by nationality of reporting banks. Both domestically-owned and foreign-owned banking offices in the reporting countries record their positions on a gross (unconsolidated) basis, including those vis-à-vis own affiliates in other countries. This is consistent with the residency principle of national accounts, balance of payments and external debt statistics.

**What is CEIC?**

**Census and Economic Information Centre**

**What are spillover effects?**

These refer to the impact that seemingly unrelated events in one nation can have on the economies of other nations. Since 2009, China has emerged as a major source of spillover effects. This is because Chinese manufacturers have driven much of the global commodity demand growth since 2000. With China now being the second largest economy in the world, the number of countries that experience spillover effects from a Chinese slowdown is significant. China slowing down has a palpable impact on worldwide trade in metals, energy, grains and other commodities.

**How does China deal with its Non-Performing Assets?**

China adopted a four-point strategy to address the problems. The first was to reduce risks by strengthening banks and spearheading reforms of the state-owned enterprises (SOEs) by reducing their level of debt. The Chinese ensured that the nationalized banks were strengthened by raising disclosure standards across the board.

The second important measure was enacting laws that allowed the creation of asset management companies, equity participation and most importantly, asset-based securitization. The “securitization” approach is being taken by the Chinese to handle even their current NPA issue and is reportedly being piloted by a handful of large banks with specific emphasis on domestic investors. According to the International Monetary Fund (IMF), this is a prudent and preferred strategy since it gets assets off the balance sheets quickly and allows banks to receive cash which could be used for lending.

The third key measure that the Chinese took was to ensure that the government had the financial loss of debt “discounted” and debt equity swaps were allowed in case a growth opportunity existed. The term “debt-equity swap” (or “debt-equity conversion”) means the conversion of a heavily indebted or financially distressed company’s debt into equity or the acquisition by a company’s creditors of shares in that company paid for by the value of their loans to the company. Or, to put it more simply, debt-equity swaps transfer bank loans from the liabilities section of company balance sheets to common stock or additional paid-in capital in the shareholders’ equity section.

The first benefit that results from this is the improvement in the company’s finances produced by the reduction in debt. The second benefit (from the change in control) is that the creditors become committed to reorganizing the company, and the scope for moral hazard by the management is limited. Another benefit is one peculiar to equity: a return (i.e., repayment) in the form of an increase in enterprise value in the future. In other words, the fact that the creditors stand to make a return on their original investment if the reorganization is successful and the value of the business rises means that, like the debtor company, they have more to gain from this than from simply writing off their loans. If the reorganization is not successful, the equity may, of course, prove worthless.
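The balance-sheet mechanics of the debt-equity swap described above can be sketched in a few lines of Python (all figures are hypothetical):

```python
# A debt-equity swap moves a creditor's loan from the liabilities section
# of the balance sheet into shareholders' equity; assets are unchanged.

company = {"assets": 1000, "debt": 800, "equity": 200}

def debt_equity_swap(bs, amount):
    """Convert `amount` of debt into equity on balance sheet `bs`."""
    assert amount <= bs["debt"], "cannot swap more debt than exists"
    return {
        "assets": bs["assets"],
        "debt": bs["debt"] - amount,
        "equity": bs["equity"] + amount,
    }

after = debt_equity_swap(company, 300)
assert after["assets"] == after["debt"] + after["equity"]   # balance preserved
assert after == {"assets": 1000, "debt": 500, "equity": 500}
```

The reduction in debt improves the company's finances immediately, while the creditors' new equity stake gives them the upside described in the text if the reorganization succeeds.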

The fourth measure they took was producing incentives like tax breaks, exemption from administrative fees and transparent evaluation norms. These strategic measures ensured the Chinese were on top of the NPA issue in the early 2000s, when it was far larger than it is today. The noteworthy thing is that they were indeed successful in reducing NPAs. How is this relevant to India and how can we address the NPA issue more effectively?

For now, capital controls and the paying down of foreign currency loans imply that there are few channels through which a foreign-induced debt sell-off could trigger a collapse in asset prices. Despite concerns in 2016 over capital outflow, China’s foreign exchange reserves have stabilised.

But there is a long-term cost. China is now more vulnerable to capital outflow. Errors and omissions on its national accounts remain large, suggesting persistent unrecorded capital outflows. This loss of capital should act as a salutary reminder to those who believe that China can take the lead on globalisation or provide the investment or currency business to fuel things like a post-Brexit economy.

The Chinese government’s focus on debt management will mean tighter controls on speculative international investments. It will also provide a stern test of China’s centrally planned financial system for the foreseeable future.

Let H be a fixed, separable Hilbert space of dimension ≥ 1. Let us denote the associated projective space of H by P = P(H). It is compact iff H is finite-dimensional. Let PU = PU(H) = U(H)/U(1) be the projective unitary group of H equipped with the compact-open topology. A projective bundle over X is a locally trivial bundle of projective spaces, i.e., a fibre bundle P → X with fibre P(H) and structure group PU(H). An application of the *Banach–Steinhaus theorem* shows that we may identify projective bundles with principal PU(H)-bundles, where PU(H) carries the pointwise-convergence topology.

If G is a topological group, let G_{X} denote the sheaf of germs of continuous functions from X to G, i.e., the sheaf associated to the constant presheaf given by U → F(U) = G. Given a projective bundle P → X and a sufficiently fine good open cover {U_{i}}_{i∈I} of X, the transition functions between trivializations P|_{U_{i}} can be lifted to bundle isomorphisms g_{ij} on double intersections U_{ij} = U_{i} ∩ U_{j} which are projectively coherent, i.e., over each of the triple intersections U_{ijk} = U_{i} ∩ U_{j} ∩ U_{k} the composition g_{ki} g_{jk} g_{ij} is given as multiplication by a U(1)-valued function f_{ijk} : U_{ijk} → U(1). The collection {(U_{ij}, f_{ijk})} defines a U(1)-valued two-cocycle called a B-field on X, which represents a class B_{P} in the sheaf cohomology group H^{2}(X, U(1)_{X}). On the other hand, the sheaf cohomology H^{1}(X, PU(H)_{X}) consists of isomorphism classes of principal PU(H)-bundles, and we can consider the isomorphism class [P] ∈ H^{1}(X, PU(H)_{X}).

There is an isomorphism

H^{1}(X, PU(H)_{X}) →^{≈} H^{2}(X, U(1)_{X})

provided by the boundary map [P] ↦ B_{P}. There is also an isomorphism

H^{2}(X, U(1)_{X}) →^{≈} H^{3}(X, Z_{X}) ≅ H^{3}(X, Z)

The image δ(P) ∈ H^{3}(X, Z) of B_{P} is called the *Dixmier–Douady* invariant of P. When δ(P) = [H] is a torsion class of order n, the structure group can be reduced to PU(n); here we identify the cyclic group Z_{n} with the group of n-th roots of unity in U(1). Let P be a projective bundle with structure group PU(n), i.e., with fibres P(C^{n}). Then the commutative diagram of long exact sequences of sheaf cohomology groups associated to the corresponding commutative diagram of groups implies that the element B_{P} ∈ H^{2}(X, U(1)_{X}) comes from H^{2}(X, (Z_{n})_{X}), and therefore its order divides n.

One also has δ(P_{1} ⊗ P_{2}) = δ(P_{1}) + δ(P_{2}) and δ(P^{∨}) = −δ(P). This follows from the commutative diagram

and the fact that P^{∨} ⊗ P = P(E), where E is the vector bundle of *Hilbert–Schmidt endomorphisms* of P. Putting everything together, it follows that the cohomology group H^{3}(X, Z) classifies projective Hilbert bundles over X up to isomorphism, with the tensor product of bundles corresponding to addition of classes.

We are now ready to define the twisted K-theory of the manifold X equipped with a projective bundle P → X, such that P_{x} = P(H) ∀ x ∈ X. We will first give a definition in terms of Fredholm operators, and then provide some equivalent, but more geometric definitions. Let H be a Z_{2}-graded Hilbert space. We define Fred^{0}(H) to be the space of self-adjoint degree 1 Fredholm operators T on H such that T^{2} − 1 ∈ K(H), together with the subspace topology induced by the embedding Fred^{0}(H) ↪ B(H) × K(H) given by T ↦ (T, T^{2} − 1), where the algebra of bounded linear operators B(H) is given the compact-open topology and the Banach algebra of compact operators K = K(H) is given the norm topology.

Let P = P_{H} → X be a projective Hilbert bundle. Then we can construct an associated bundle Fred^{0}(P) whose fibres are Fred^{0}(H). We define the twisted K-theory group of the pair (X, P) to be the group of homotopy classes of maps

K^{0}(X, H) = [X, Fred^{0}(P_{H})]

The group K^{0}(X, H) depends functorially on the pair (X, P_{H}), and an isomorphism of projective bundles ρ : P → P′ induces a group isomorphism ρ∗ : K^{0}(X, H) → K^{0}(X, H′). Addition in K^{0}(X, H) is defined by fibre-wise direct sum, so that the sum of two elements lies in K^{0}(X, H_{2}) with [H_{2}] = δ(P ⊗ P(C^{2})) = δ(P) = [H]. Under the isomorphism H ⊗ C^{2} ≅ H, there is a projective bundle isomorphism P → P ⊗ P(C^{2}) for any projective bundle P and so K^{0}(X, H_{2}) is canonically isomorphic to K^{0}(X, H). When [H] is a non-torsion element of H^{3}(X, Z), so that P = P_{H} is an infinite-dimensional bundle of projective spaces, then the index map K^{0}(X, H) → Z is zero, i.e., any section of Fred^{0}(P) takes values in the index zero component of Fred^{0}(H).

Let us now describe some other models for twisted K-theory which will be useful in our physical applications later on. A definition in algebraic K-theory may be given as follows. A bundle of projective spaces P yields a bundle End(P) of algebras. However, if H is an infinite-dimensional Hilbert space, then one has natural isomorphisms H ≅ H ⊕ H and

End(H) ≅ Hom(H ⊕ H, H) ≅ End(H) ⊕ End(H)

as left End(H)-modules, and so the algebraic K-theory of the algebra End(H) is trivial. Instead, we will work with the Banach algebra K(H) of compact operators on H with the norm topology. Given that the unitary group U(H) with the compact-open topology acts continuously on K(H) by conjugation, to a given projective bundle P_{H} we can associate a bundle of compact operators E_{H} → X given by

E_{H} = P_{H} ×_{PU} K

with δ(E_{H}) = [H]. The Banach algebra A_{H} := C_{0}(X, E_{H}) of continuous sections of E_{H} vanishing at infinity is the continuous trace C∗-algebra CT(X, H). Then the twisted K-theory group K^{•}(X, H) of X is canonically isomorphic to the algebraic K-theory group K_{•}(A_{H}).

We will also need a smooth version of this definition. Let A^{∞}_{H} be the smooth subalgebra of A_{H} given by the algebra

CT^{∞}(X, H) = C^{∞}(X, L^{1}_{P_{H}})

where L^{1}_{P_{H}} = P_{H} ×_{PU} L^{1}. Then the inclusion CT^{∞}(X, H) → CT(X, H) induces an isomorphism K_{•}CT^{∞}(X, H) →^{≈} K_{•}CT(X, H) of algebraic K-theory groups. Upon choosing a bundle gerbe connection, one has an isomorphism K_{•}CT^{∞}(X, H) ≅ K^{•}(X, H) with the twisted K-theory defined in terms of projective Hilbert bundles P = P_{H} over X.

Finally, we propose a general definition based on K-theory with coefficients in a sheaf of rings. It parallels the bundle gerbe approach to twisted K-theory. Let B be a Banach algebra over C. Let E(B, X) be the category of continuous B-bundles over X, and let C(X, B) be the sheaf of continuous maps X → B. The ring structure in B equips C(X, B) with the structure of a sheaf of rings over X. We can therefore consider left (or right) C(X, B)-modules, and in particular the category LF(C(X, B)) of locally free C(X, B)-modules. Using the section functor in the usual way, one obtains for X an equivalence of additive categories

E(B, X) ≅ LF (C(X, B))

Since these are both additive categories, we can apply the Grothendieck functor to each of them and obtain the abelian groups K(LF(C(X, B))) and K(E(B, X)). The equivalence of categories ensures that there is a natural isomorphism of groups

K(LF (C(X, B))) ≅ K(E(B, X))

This motivates the following general definition. If A is a sheaf of rings over X, then we define the K-theory of X with coefficients in A to be the abelian group

K(X, A) := K(LF(A))

For example, consider the case B = C. Then C(X, C) is just the sheaf of continuous functions X → C, while E(C, X) is the category of complex vector bundles over X. Using the isomorphism of K-theory groups we then have

K(X, C(X,C)) := K(LF (C(X, C))) ≅ K (E(C, X)) = K^{0}(X)

The definition of twisted K-theory uses another special instance of this general construction. For this, we define an Azumaya algebra over X of rank m to be a locally trivial algebra bundle over X with fibre isomorphic to the algebra of m × m complex matrices over C, M_{m}(C). An example is the algebra End(E) of endomorphisms of a complex vector bundle E → X. We can define an equivalence relation on the set A(X) of *Azumaya algebras* over X in the following way. Two Azumaya algebras A, A′ are called equivalent if there are vector bundles E, E′ over X such that the algebras A ⊗ End(E), A′ ⊗ End(E′) are isomorphic. Then every Azumaya algebra of the form End(E) is equivalent to the algebra of functions C(X) on X. The set of all equivalence classes is a group under the tensor product of algebras, called the Brauer group Br(X). There is an isomorphism

δ : Br(X) →^{≈} tor(H^{3}(X, Z))

where tor(H^{3}(X, Z)) is the torsion subgroup of H^{3}(X, Z).

If A is an Azumaya algebra bundle, then the space of continuous sections C(X, A) of A is a ring, and we can consider the algebraic K-theory group K(A) := K_{0}(C(X, A)) of equivalence classes of projective C(X, A)-modules, which depends only on the equivalence class of A in the Brauer group. Under the equivalence, we can represent the Brauer group Br(X) as the set of isomorphism classes of sheaves of Azumaya algebras. Let A be a sheaf of Azumaya algebras, and LF(A) the category of locally free A-modules. Then as above there is an isomorphism

K(X, C(X, A)) ≅ K Proj (C(X, A))

For a torsion class [H] ∈ tor(H^{3}(X, Z)) and A ∈ Br(X) such that δ(A) = [H], this group can be identified as the twisted K-theory group K^{0}(X, H) of X with twisting A. This definition is equivalent to the description in terms of bundle gerbe modules, and from this construction it follows that K^{0}(X, H) is a subgroup of the ordinary K-theory of X. If δ(A) = 0, then A is equivalent to C(X) and we have K(A) := K_{0}(C(X)) = K^{0}(X). The projective C(X, A)-modules over a rank m Azumaya algebra A are vector bundles E → X with fibre C^{nm} ≅ (C^{m})^{⊕n}, which is naturally an M_{m}(C)-module.

The physics treatment of Dirichlet branes in terms of boundary conditions is very analogous to that of the “bulk” quantum field theory, and the next step is again to study the renormalization group. This leads to equations of motion for the fields which arise from the open string, namely the data (M, E, ∇). In the supergravity limit, these equations are solved by taking the submanifold M to be volume minimizing in the metric on X, and the connection ∇ to satisfy the Yang-Mills equations.

Like the Einstein equations, the equations governing a submanifold of minimal volume are highly nonlinear, and their general theory is difficult. This is one motivation to look for special classes of solutions; the physical arguments favoring supersymmetry are another. Just as supersymmetric compactification manifolds correspond to a special class of Ricci-flat manifolds, those admitting a covariantly constant spinor, supersymmetry for a Dirichlet brane will correspond to embedding it into a special class of minimal volume submanifolds. Since the physical analysis is based on a covariantly constant spinor, this special class should be defined using the spinor, or else the covariantly constant forms which are bilinear in the spinor.

The standard physical arguments leading to this class are based on the *kappa symmetry* of the world-volume action, and result in the condition

φ ≡ Re ε^{t} Γε|_{M} = Vol|_{M} —– (1)

In words, the real part of one of the covariantly constant forms on X must equal the volume form when restricted to the brane.

Clearly dφ = 0, since it is covariantly constant. Thus,

Z(M) ≡ ∫_{M }φ —– (2)

depends only on the homology class of M. Thus, it is what physicists would call a “topological charge”, or a “central charge”.

If in addition the p-form φ is dominated by the volume form Vol upon restriction to any p-dimensional subspace V ⊂ T_{x} X, i.e.,

φ|_{V} ≤ Vol|_{V} —– (3)

then φ will be a calibration in the sense of implying the global statement

∫_{M }φ ≤ ∫_{M }Vol —– (4)

for any submanifold M. Thus, the central charge |Z(M)| is an absolute lower bound for Vol(M).

A calibrated submanifold M is now one satisfying (1), thereby attaining the lower bound and thus of minimal volume. Physically these are usually called “BPS branes,” after a prototypical argument of this type due to Bogomol’nyi, Prasad, and Sommerfield for magnetic monopole solutions in *nonabelian gauge theory*.
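Combining (1)–(4) with the homology invariance of Z, the minimality of a calibrated M within its homology class is the following chain of (in)equalities, where M′ is any submanifold homologous to M:

```latex
\mathrm{Vol}(M)
  = \int_{M} \mathrm{Vol}
  \overset{(1)}{=} \int_{M} \varphi
  = Z(M) = Z(M')
  = \int_{M'} \varphi
  \overset{(3)}{\leq} \int_{M'} \mathrm{Vol}
  = \mathrm{Vol}(M').
```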

For a Calabi-Yau X, all of the forms ω^{p} can be calibrations, and the corresponding calibrated submanifolds are p-dimensional holomorphic submanifolds. Furthermore, the n-form Re e^{iθ}Ω for any choice of real parameter θ is a calibration, and the corresponding calibrated submanifolds are called special Lagrangian.

This generalizes to the presence of a general connection on M, and leads to the following two types of BPS branes for a Calabi-Yau X. Let n = dim_{R} M, and let F be the (End(E)-valued) curvature two-form of ∇.

The first kind of BPS D-brane, based on the ω^{p} calibrations, is (for historical reasons) called a “B-type brane”. Here the BPS constraint is equivalent to the following three requirements:

- M is a p-dimensional complex submanifold of X.
- The 2-form F is of type (1, 1), i.e., (E, ∇) is a holomorphic vector bundle on M.
- In the supergravity limit, F satisfies the Hermitian Yang-Mills equation ω|^{p−1}_{M} ∧ F = c · ω|^{p}_{M} for some real constant c; with stringy corrections included, this is deformed to Im e^{iφ}(ω|_{M} + il_{s}^{2}F)^{p} = 0 for some real constant φ, where l_{s} is the string length.

The second kind of BPS D-brane, based on the Re e^{iθ}Ω calibration, is called an “A-type” brane. The simplest examples of A-branes are the so-called special Lagrangian submanifolds (SLAGs), satisfying

(1) M is a Lagrangian submanifold of X with respect to ω.

(2) F = 0, i.e., the vector bundle E is flat.

(3) Im e^{iα} Ω|_{M} = 0 for some real constant α.

More generally, one also has the “coisotropic branes”. In the case when E is a line bundle, such A-branes satisfy the following four requirements:

(1) M is a coisotropic submanifold of X with respect to ω, i.e., for any x ∈ M the skew-orthogonal complement of T_{x}M ⊂ T_{x}X is contained in T_{x}M. Equivalently, one requires ker ω_{M} to be an integrable distribution on M.

(2) The 2-form F annihilates ker ω_{M}.

(3) Let FM be the vector bundle TM/ker ω_{M}. It follows from the first two conditions that ω_{M} and F descend to a pair of skew-symmetric forms on FM, denoted by σ and f. Clearly, σ is nondegenerate. One requires the endomorphism σ^{−1}f : FM → FM to be a complex structure on FM.

(4) Let r be the complex dimension of FM. It follows that r is even and that r + n = dim_{R} M. Let Ω be the holomorphic trivialization of K_{X}. One requires that Im e^{iα}Ω|_{M} ∧ F^{r/2} = 0 for some real constant α.

A geometric Dirichlet brane is a triple (L, E, ∇_{E}) – a submanifold L ⊂ M, carrying a vector bundle E, with connection ∇_{E}.

The real dimension of L is also often brought into the nomenclature, so that one speaks of a Dirichlet p-brane if p = dim_{R}L.

An open string which stretches from a Dirichlet brane (L, E, ∇_{E}) to a Dirichlet brane (K, F, ∇_{F}) is a map X from an interval I ≅ [0,1] to M, such that X(0) ∈ L and X(1) ∈ K. An “open string history” is a map from R into open strings, or equivalently a map from a two-dimensional surface with boundary, say Σ ≡ I × R, to M, such that the two boundaries embed into L and K.

The quantum theory of these open strings is defined by a functional integral over these histories, with a weight which depends on the connections ∇_{E} and ∇_{F}. It describes the time evolution of an open string state which is a wave function in a Hilbert space H_{B,B′} labelled by the two choices of brane B = (L, E, ∇_{E}) and B′ = (K, F, ∇_{F}).

Distinct Dirichlet branes can embed into the same submanifold L. One way to represent this would be to specify the configurations of Dirichlet branes as a set of submanifolds with multiplicity. However, we can also represent this choice by using the choice of bundle E. Thus, a set of N identical branes will be represented by tensoring the bundle E with C^{N}. The connection is also obtained by tensor product. An N-fold copy of the Dirichlet brane (L, E, ∇_{E}) is thus a triple (L, E ⊗C^{N}, ∇_{E} ⊗ id_{N}).

In physics, one visualizes this choice by labelling each open string boundary with a basis vector of C^{N}, which specifies a choice among the N identical branes. These labels are called *Chan-Paton factors*. One then uses them to constrain the interactions between open strings. If we picture such an interaction as the joining of two open strings to one, the end of the first to the beginning of the second, we require not only the positions of the two ends to agree, but also the Chan-Paton factors. This composition operation gives rise to the intuitive algebra of open strings.

Mathematically, an algebra of open strings can always be tensored with a matrix algebra, in general producing a noncommutative algebra. More generally, if there is more than one possible boundary condition, then, rather than an algebra, it is better to think of this as a groupoid or categorical structure on the boundary conditions and the corresponding open strings. In the language of groupoids, particular open strings are elements of the groupoid, and the composition law is defined only for pairs of open strings with a common boundary. In the categorical language, boundary conditions are objects, and open strings are morphisms. The simplest intuitive argument that a non-trivial choice can be made here is to call upon the general principle that any local deformation of the world-sheet action should be a physically valid choice. In particular, particles in physics can be charged under a gauge field, for example the Maxwell field for an electron, the color *Yang-Mills field* for a quark, and so on. The wave function for a charged particle is then not complex-valued, but takes values in a bundle E.
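The categorical structure just described — boundary conditions as objects, open strings as morphisms, composition defined only when the intermediate boundary conditions agree — can be made concrete in a few lines. The class and function names below are hypothetical illustrations, not part of any string-theory library:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Brane:
    """A boundary condition (an object of the category); a hypothetical
    stand-in for a triple (L, E, nabla_E)."""
    name: str

@dataclass(frozen=True)
class OpenString:
    """A morphism: an open string stretching from `source` to `target`."""
    source: Brane
    target: Brane
    label: str

def compose(first: OpenString, second: OpenString) -> OpenString:
    """Join two open strings end-to-beginning. Defined only when the
    intermediate boundary conditions (Chan-Paton data) match."""
    if first.target != second.source:
        raise ValueError("composition undefined: boundary conditions differ")
    return OpenString(first.source, second.target, first.label + second.label)

L, K, N = Brane("L"), Brane("K"), Brane("N")
s1 = OpenString(L, K, "a")      # a string from brane L to brane K
s2 = OpenString(K, N, "b")      # a string from brane K to brane N
s3 = compose(s1, s2)            # allowed: s1 ends where s2 begins
try:
    compose(s2, s1)             # K -> N then L -> K: undefined
except ValueError:
    pass                        # composition law is only partial
```

The partiality of `compose` is exactly what distinguishes a groupoid-like structure from a single algebra: two strings multiply only along a shared boundary condition.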

Now, the effect of a general connection ∇_{E} is to modify the functional integral by modifying the weight associated to a given history of the particle. Suppose the trajectory of a particle is defined by a map φ : R → M; then a natural functional on trajectories associated with a connection ∇ on a bundle E over M is simply its *holonomy* along the trajectory, a linear map from E|_{φ(t_{0})} to E|_{φ(t_{1})} for any segment of the trajectory between times t_{0} and t_{1}.

The simplest way to generalize this to a string is to consider the l_{s} → 0 limit. Now the constraint of finiteness of energy is satisfied only by a string of vanishingly small length, effectively a particle. In this limit, both ends of the string map to the same point, which must therefore lie on L ∩ K.

The upshot is that, in this limit, the wave function of an open string between Dirichlet branes (L, E, ∇) and (K, F, ∇_{F}) transforms as a section of E^{∨} ⊠ F over L ∩ K, with the natural connection on the direct product. In the special case of (L, E, ∇_{E}) ≅ (K, F, ∇_{F}), this reduces to the statement that an open string state is a section of End E. Open string states are sections of a graded vector bundle End E ⊗ Λ^{•}T^{∗}L, the degree-1 part of which corresponds to infinitesimal deformations of ∇_{E}. In fact, these open string states are the infinitesimal deformations of ∇_{E}, in the standard sense of quantum field theory, i.e., a single open string is a localized excitation of the field obtained by quantizing the connection ∇_{E}. Similarly, other open string states are sections of the normal bundle of L within X, and are related in the same way to infinitesimal deformations of the submanifold. These relations, and their generalizations to open strings stretched between Dirichlet branes, define the physical sense in which the particular set of Dirichlet branes associated to a specified background X can be deduced from string theory.

Interlinkages across balance sheets of financial institutions may be modeled by a weighted directed graph G = (V, e) on the vertex set V = {1,…, n} = [n], whose elements represent financial institutions. The exposure matrix is given by e ∈ R^{n×n}, where the ij^{th} entry e(i, j) represents the exposure (in monetary units) of institution i to institution j. The interbank assets of an institution i are given by

A(i) := ∑_{j} e(i, j), while the sum ∑_{j} e(j, i) represents the interbank liabilities of i. In addition to these interbank assets and liabilities, a bank may hold other assets and liabilities (such as deposits).

The net worth of the bank, given by its capital c(i), represents its capacity for absorbing losses while remaining solvent. The “capital ratio” of institution i (technically the ratio of capital to interbank assets rather than to total assets) is given by

γ(i) := c(i)/A(i)

An institution is insolvent if its net worth is negative or zero, in which case, γ(i) is set to 0.

A financial network (e, γ) on the vertex set V = [n] is defined by

• a matrix of exposures {e(i, j)}_{1≤i,j≤n}

• a set of capital ratios {γ(i)}_{1≤i≤n}

In this network, the in-degree of a node i is given by

d^{−}(i) := #{j∈V | e(j, i)>0},

which represents the number of nodes exposed to i, while its out-degree

d^{+}(i) := #{j∈V | e(i, j)>0}

represents the number of institutions i is exposed to. The set of initially insolvent institutions is represented by

D_{0}(e, γ) = {i ∈ V | γ(i) = 0}

In a network (e, γ) of counterparties, the default of one or several nodes may lead to the insolvency of other nodes, generating a cascade of defaults. Starting from the set of initially insolvent institutions D_{0}(e, γ), which represents the fundamental defaults, the contagion process is defined as follows:

Denoting by R(j) the recovery rate on the assets of j at default, the default of j induces a loss equal to (1 − R(j))e(i, j) for its counterparty i. If this loss exceeds the capital of i, then i becomes insolvent in turn. From the formula for the capital ratio, we have c(i) = γ(i)A(i). The set of nodes which become insolvent due to their exposures to initial defaults is

D_{1}(e, γ) = {i ∈ V | γ(i)A(i) < ∑_{j∈D0} (1 − R(j)) e(i, j)}

This procedure may be iterated to define the default cascade initiated by a set of initial defaults.

So, when would a default cascade happen? Consider a financial network (e, γ) on the vertex set V = [n], and let D_{0}(e, γ) = {i ∈ V | γ(i) = 0} be the set of initially insolvent institutions. The increasing sequence (D_{k}(e, γ), k ≥ 1) of subsets of V defined by

D_{k}(e, γ) = {i ∈ V | γ(i)A(i) < ∑_{j∈Dk-1(e,γ)} (1−R(j)) e(i, j)}

is called the default cascade initiated by D_{0}(e, γ).

Thus D_{k}(e, γ) represents the set of institutions whose capital is insufficient to absorb losses due to defaults of institutions in D_{k-1}(e, γ).

Since the sets D_{k}(e, γ) form an increasing sequence of subsets of the finite set V, the cascade ends after at most n − 1 iterations in a network of size n. Hence, D_{n-1}(e, γ) represents the set of all nodes which become insolvent starting from the initial set of defaults D_{0}(e, γ).

Consider a financial network (e, γ) on the vertex set V = [n]. The fraction of defaults in the network (e, γ) (initiated by D_{0}(e, γ)) is given by

α_{n}(e, γ) := |D_{n-1}(e, γ)|/n
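The cascade (D_{k}) is straightforward to iterate numerically. The sketch below follows the definitions verbatim; the 4-node exposure matrix, capital ratios, and recovery rates are purely illustrative numbers, and the function name `default_cascade` is ours:

```python
import numpy as np

def default_cascade(e, gamma, R):
    """Iterate the default cascade D_0 ⊆ D_1 ⊆ ... for a network (e, gamma).

    e[i, j] : exposure of institution i to institution j
    gamma[i]: capital ratio c(i)/A(i); insolvent nodes have gamma[i] = 0
    R[j]    : recovery rate on the assets of j at default
    Returns the sequence D_0, D_1, ..., D_{n-1} as boolean membership arrays.
    """
    n = len(gamma)
    A = e.sum(axis=1)               # interbank assets A(i) = sum_j e(i, j)
    c = gamma * A                   # capital c(i) = gamma(i) A(i)
    D = gamma == 0                  # D_0: initially insolvent institutions
    cascade = [D.copy()]
    for _ in range(n - 1):          # the cascade ends after at most n - 1 steps
        loss = e @ ((1.0 - R) * D)  # loss(i) = sum_{j in D} (1 - R(j)) e(i, j)
        D = D | (c < loss)          # D_k: capital insufficient to absorb losses
        cascade.append(D.copy())
    return cascade

# Toy 4-node network with illustrative numbers
e = np.array([[0., 4., 0., 0.],
              [0., 0., 5., 0.],
              [0., 0., 0., 2.],
              [1., 0., 0., 0.]])
gamma = np.array([0.5, 0.3, 0.0, 0.5])   # institution 2 starts insolvent
R = np.zeros(4)                          # zero recovery on defaulted assets
d_in = (e > 0).sum(axis=0)               # in-degree d^-(i): nodes exposed to i
d_out = (e > 0).sum(axis=1)              # out-degree d^+(i): exposures of i
cascade = default_cascade(e, gamma, R)
alpha = cascade[-1].sum() / 4            # fraction of defaults alpha_n(e, gamma)
```

In this toy ring, the single fundamental default of node 2 wipes out its creditor, then that creditor's creditor, and so on, until the whole network is insolvent (alpha = 1.0) — one fundamental default, three contagious ones.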

If e_{0} ∈ R^{1+1} is a future-directed timelike unit vector, and if e_{1} is the unique spacelike unit vector with e_{0}e_{1} = 0 that “points to the right,” then coordinates x_{0} and x_{1} on R^{1+1} are defined by x_{0}(q) := qe_{0} and x_{1}(q) := qe_{1}. The partial differential operator

□_{x} : = ∂^{2}_{x0} − ∂^{2}_{x1}

does not depend on the choice of e_{0}.

The Fourier transform of the Klein-Gordon equation

(□ + m^{2})u = 0 —– (1)

where m > 0 is a given mass, is

(−p^{2} + m^{2})û(p) = 0 —– (2)

As a consequence, the support of û has to be a subset of the hyperbola H_{m} ⊂ R^{1+1} specified by the condition p^{2} = m^{2}. One connected component of H_{m} consists of positive-energy vectors only; it is called the upper mass shell H_{m}^{+}. The elements of H_{m}^{+} are the energy-momentum vectors of classical relativistic point particles.
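As a quick consistency check of (1)–(2), one can verify symbolically that a plane wave solves the Klein-Gordon equation exactly when its momentum lies on the mass shell. A sketch using sympy; the variable names are ours:

```python
import sympy as sp

# Verify that q -> exp(-i p·q) solves (box + m^2)u = 0 precisely when
# p lies on the mass shell p^2 = p0^2 - p1^2 = m^2.
x0, x1, p0, p1, m = sp.symbols('x0 x1 p0 p1 m', real=True)
u = sp.exp(-sp.I * (p0 * x0 - p1 * x1))   # e^{-ipq}, with pq = p0 x0 - p1 x1
box_u = sp.diff(u, x0, 2) - sp.diff(u, x1, 2)
residual = sp.simplify((box_u + m**2 * u) / u)
# residual = m^2 - p0^2 + p1^2, which vanishes exactly on the shell
on_shell = sp.simplify(residual.subs(p0**2, p1**2 + m**2))
```

Off the shell the residual is nonzero, which is why the support of û is forced onto H_{m}.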

Denote by L_{1} the restricted Lorentz group, i.e., the connected component of the Lorentz group containing its unit element. In 1 + 1 dimensions, L_{1} coincides with the one-parameter Abelian group B(χ), χ ∈ R, of boosts. H_{m}^{+} is an orbit of L_{1} without fixed points. So if one chooses any point p′ ∈ H_{m}^{+}, then there is, for each p ∈ H_{m}^{+}, a unique χ(p) ∈ R with p = B(χ(p))p′. By construction, χ(B(ξ)p) = χ(p) + ξ, so the measure dχ on H_{m}^{+} is invariant under boosts and does not depend on the choice of p′.

For each p ∈ H_{m}^{+}, the plane wave q ↦ e^{±ipq} on R^{1+1} is a classical solution of the Klein-Gordon equation. The Klein-Gordon equation is linear, so if a_{+} and a_{−} are, say, integrable functions on H_{m}^{+}, then

F(q) := ∫_{Hm+} (a_{+}(p)e^{-ipq} + a_{–}(p)e^{ipq}) dχ(p) —– (3)

is a solution of the Klein-Gordon equation as well. If the functions a_{±} are not integrable, the field F may still be well defined as a distribution. As an example, put a_{±} ≡ (2π)^{−1}, then

F(q) = (2π)^{−1 }∫_{Hm+} (e^{-ipq} + e^{ipq}) dχ(p) = π^{−1} ∫_{Hm+} cos(pq) dχ(p) =: Φ(q) —– (4)

and for a_{±} ≡ ∓(2πi)^{−1}, F equals

F(q) = (2πi)^{−1} ∫_{Hm+} (e^{ipq} – e^{-ipq}) dχ(p) = π^{−1} ∫_{Hm+} sin(pq) dχ(p) =: ∆(q) —– (5)

Quantum fields are obtained by “plugging” classical field equations and their solutions into the well-known second quantization procedure. This procedure replaces the complex (or, more generally speaking, finite-dimensional vector) field values by linear operators in an infinite-dimensional Hilbert space, namely, a Fock space. The Hilbert space of the hermitian scalar field is constructed from wave functions that are considered as the wave functions of one or several particles of mass m. The single-particle wave functions are the elements of the Hilbert space H_{1} := L^{2}(H_{m}^{+}, dχ). Put the vacuum (zero-particle) space H_{0} equal to C, define the vacuum vector Ω := 1 ∈ H_{0}, and define the N-particle space H_{N} as the Hilbert space of symmetric wave functions in L_{2}((H_{m}^{+})^{N}, d^{N}χ), i.e., all wave functions ψ with

ψ(p_{π(1)} ···p_{π(N)}) = ψ(p_{1} ···p_{N})

∀ permutations π ∈ S_{N}. The bosonic Fock space H is defined by

H := ⊕_{N∈N} H_{N}.

The subspace

D := ∪_{M∈N} ⊕_{0≤N≤M} H_{N} is called the finite-particle space.

The definition of the N-particle wave functions as symmetric functions endows the field with Bose–Einstein statistics. To each wave function φ ∈ H_{1}, assign a creation operator a^{+}(φ) by

a^{+}(φ)ψ := C_{N}φ ⊗_{s} ψ, ψ ∈ D,

where ⊗_{s} denotes the symmetrized tensor product and where C_{N} is a constant.

(a^{+}(φ)ψ)(p_{1} ···p_{N}) = C_{N}/N ∑_{ν} φ(p_{ν})ψ(p_{1} ···p̂_{ν} ···p_{N}) —– (6)

where the hat symbol indicates omission of the argument. This defines a^{+}(φ) as a linear operator on the finite-particle space D.

The adjoint operator a(φ) := a^{+}(φ)^{∗} is called an annihilation operator; it assigns to each ψ ∈ H_{N}, N ≥ 1, the wave function a(φ)ψ ∈ H_{N−1} defined by

(a(φ)ψ)(p_{1} ···p_{N−1}) := C_{N} ∫_{Hm+} φ̄(p)ψ(p_{1} ···p_{N−1}, p) dχ(p)

together with a(φ)Ω := 0, this suffices to specify a(φ) on D. Annihilation operators can also be defined for sharp momenta. Namely, one can assign to each p ∈ H_{m}^{+} the annihilation operator a(p), which maps each ψ ∈ H_{N}, N ≥ 1, to the wave function a(p)ψ ∈ H_{N−1} given by

(a(p)ψ)(p_{1} ···p_{N−1}) := C_{N} ψ(p, p_{1} ···p_{N−1}), ψ ∈ H_{N},

and assigning 0 ∈ H to Ω. a(p) is, like a(φ), well defined on the finite-particle space D as an operator, but its hermitian adjoint is ill-defined as an operator, since the symmetric tensor product of a wave function by a delta function is not a wave function.

Given any single-particle wave functions ψ, φ ∈ H_{1}, the commutators [a(ψ), a(φ)] and [a^{+}(ψ), a^{+}(φ)] vanish by construction. It is customary to choose the constants C_{N} in such a fashion that creation and annihilation operators exhibit the commutation relation

[a(φ), a^{+}(ψ)] = ⟨φ, ψ⟩ —– (7)

which requires C_{N} = √N. With this choice, all creation and annihilation operators are unbounded, i.e., they are not continuous.
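The commutation relation (7) can be checked concretely in a single-mode, finite-dimensional truncation, using the standard ladder matrices a|n⟩ = √n |n−1⟩ (a sketch; the truncation dimension is arbitrary, and corresponds to (7) with ⟨φ, ψ⟩ = 1):

```python
import numpy as np

def annihilation(dim):
    """Single-mode annihilation operator truncated to the first `dim`
    number states: a|n> = sqrt(n) |n-1>."""
    a = np.zeros((dim, dim))
    for n in range(1, dim):
        a[n - 1, n] = np.sqrt(n)
    return a

dim = 8
a = annihilation(dim)
adag = a.T                      # creation operator, the adjoint (real matrix)
comm = a @ adag - adag @ a      # [a, a†]
# [a, a†] = 1 holds exactly away from the truncation edge; the last diagonal
# entry is -(dim - 1), a pure truncation artifact of the cutoff Fock space.
```

The matrix elements √n grow without bound as n increases, which is the finite-dimensional shadow of the unboundedness of a and a^{+} noted above.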

When defining the hermitian scalar field as an operator valued distribution, it must be taken into account that an annihilation operator a(φ) depends on its argument φ in an antilinear fashion. The dependence is, however, R-linear, and one can define the scalar field as a C-linear distribution in two steps.

For each real-valued test function φ on R^{1+1}, define

Φ(φ) := a(φ̂|_{Hm+}) + a^{+}(φ̂|_{Hm+})

then one can define for an arbitrary complex-valued φ

Φ(φ) := Φ(Re(φ)) + iΦ(Im(φ))

Referring to (4), Φ is called the hermitian scalar field of mass m.

One can then show that

[Φ(q), Φ(q′)] = i∆(q − q′) —– (8)

The right-hand side of (8) vanishes when (q − q′)^{2} < 0. Namely, the integrand in (5) is odd with respect to some p′ ∈ H_{m}^{+} if q is spacelike. Note that pq > 0 for all p ∈ H_{m}^{+} if q ∈ V_{+}. The consequence of this is called microcausality: field operators located in spacelike separated regions commute (for the hermitian scalar field).
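The vanishing of ∆(q) for spacelike q can also be seen numerically. In the sketch below (illustrative values, truncated rapidity integral), we parametrize H_{m}^{+} by rapidity, p(χ) = m(cosh χ, sinh χ), and use the boost invariance of dχ to put the spacelike q into the frame where q = (0, q_{1}):

```python
import numpy as np

# For spacelike q in the frame q = (0, q1): pq = -m q1 sinh(chi), so the
# integrand sin(pq) of (5) is odd in chi and the symmetric, truncated
# rapidity integral cancels pairwise.
m, q1 = 1.0, 0.7                            # illustrative mass and separation
half = np.linspace(0.0, 8.0, 80001)
chi = np.concatenate([-half[:0:-1], half])  # exactly symmetric grid around 0
pq = -m * q1 * np.sinh(chi)                 # pq on the mass shell
integrand = np.sin(pq)                      # odd function of chi
Delta = integrand.sum() * (half[1] - half[0]) / np.pi
# Delta(q) vanishes for spacelike q: microcausality
```

For timelike q no such odd symmetry exists, which is consistent with the nonvanishing commutator inside the light cone.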


The externalist view argues that we can make sense of, and profit from, stock markets’ behavior, or at least a few crucial properties of it, by crunching numbers and looking for patterns and regularities in certain sets of data. The notion of data, hence, is a key element in such an understanding, and the quantitative side of the problem is prominent even if this does not mean that qualitative analysis is ignored. The point here is that the outside view maintains that it provides a better understanding than the internalist view. To this end, it endorses a functional perspective on finance and stock markets in particular.

The basic idea of the externalist view is that there are general properties and behaviors of stock markets that can be detected and studied through a mathematical lens, and that they do not depend so much on contextual or domain-specific factors. The point at stake here is that financial systems can be studied and approached at different scales, and it is virtually impossible to produce all the equations describing, at a micro level, all the objects of the system and their relations. So, in response, this view focuses on those properties that allow us to get an understanding of the behavior of the system at a global level without having to produce a detailed conceptual and mathematical account of the inner ‘machinery’ of the system. Hence the two roads: the first is to embrace an emergentist view of stock markets, that is, a specific metaphysical, ontological, and methodological thesis, while the second is to embrace a heuristic view, that is, the idea that the choice to focus on those properties that are tractable by mathematical models is a pure problem-solving option.

A typical view of the externalist approach is the one provided, for instance, by statistical physics. In describing collective behavior, this discipline neglects all the conceptual and mathematical intricacies deriving from a detailed account of the inner, individual, micro-level functioning of a system. Concepts such as stochastic dynamics, self-similarity, correlations (both short- and long-range), and scaling are tools to achieve this aim. Econophysics is a stock example in this sense: it employs methods taken from mathematics and mathematical physics in order to detect and forecast the driving forces of stock markets and their critical events, such as bubbles, crashes, and their tipping points. In this respect, markets are not ‘dark boxes’: you can see their characteristics from the outside, or better, you can see specific dynamics that shape the trends of stock markets deeply and for a long time. Moreover, these dynamics are complex in the technical sense. This means that this class of behavior is such as to encompass timescales, ontology, types of agents, ecologies, regulations, laws, etc., and can be detected, even if not strictly predicted. We can focus on the stock markets as a whole, on a few of their critical events, looking at the data of prices (or other indexes) and ignoring all the other details and factors, since they will be absorbed in these global dynamics. So this view provides a look at stock markets on which not only do they not appear as an unintelligible casino where wild gamblers face each other, but which shows the reasons and the properties of a system that serves mostly as a means of fluid transactions enabling and easing the functioning of free markets.

Moreover, the study of complex systems theory and that of stock markets seem to offer mutual benefits. On one side, complex systems theory seems to offer a key to understanding and breaking through some of the most salient properties of stock markets. On the other side, stock markets seem to provide a ‘stress test’ for complexity theory. Didier Sornette expresses how the analogies between stock markets and phase transitions, statistical mechanics, nonlinear dynamics, and disordered systems mold the view from outside:

Take our personal life. We are not really interested in knowing in advance at what time we will go to a given store or drive to a highway. We are much more interested in forecasting the major bifurcations ahead of us, involving the few important things, like health, love, and work, that count for our happiness. Similarly, predicting the detailed evolution of complex systems has no real value, and the fact that we are taught that it is out of reach from a fundamental point of view does not exclude the more interesting possibility of predicting phases of evolutions of complex systems that really count, like the extreme events. It turns out that most complex systems in natural and social sciences do exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their posterior evolution. Such extreme events express more than anything else the underlying “forces” usually hidden by almost perfect balance and thus provide the potential for a better scientific understanding of complex systems.

Phase transitions, critical points, and extreme events seem to be so pervasive in stock markets that they are the crucial concepts to explain and, where possible, foresee. And complexity theory provides us a fruitful reading key to understand their dynamics, namely their generation, growth, and occurrence. Such a reading key proposes a clear-cut interpretation of them, which can be explained again by means of an analogy with physics, precisely with the unstable position of an object. Complexity theory suggests that critical or extreme events occurring at large scale are the outcome of interactions occurring at smaller scales. In the case of stock markets, this means that, unlike many approaches that attempt to account for crashes by searching for ‘mechanisms’ that work at very short time scales, complexity theory indicates that crashes have causes that date back months or years before them. This reading suggests that it is the increasing, inner interaction between the agents inside the markets that builds up the unstable dynamics (typically the financial bubbles) that eventually end up with a critical event, the crash. But the specific, final step that triggers the critical event, the collapse of prices, is not the key to its understanding: a crash occurs because the market is in an unstable phase, and any small interference or event may trigger it. The bottom line: the trigger can be virtually any event external to the markets. The real cause of the crash is the overall unstable position; the proximate ‘cause’ is secondary and accidental. Or, in other words, a crash may be fundamentally endogenous in nature, whilst an exogenous, external shock is simply its occasional triggering factor. The instability is built up by a cooperative behavior among traders, who imitate each other (in this sense it is an endogenous process) and contribute to form and reinforce trends that converge up to a critical point.

The main advantage of this approach is that the system (the market) would anticipate the crash by releasing precursory fingerprints observable in the stock market prices: the market prices contain information on impending crashes and this implies that:

if the traders were to learn how to decipher and use this information, they would act on it and on the knowledge that others act on it; nevertheless, the crashes would still probably happen. Our results suggest a weaker form of the “weak efficient market hypothesis”, according to which the market prices contain, in addition to the information generally available to all, subtle information formed by the global market that most or all individual traders have not yet learned to decipher and use. Instead of the usual interpretation of the efficient market hypothesis in which traders extract and consciously incorporate (by their action) all information contained in the market prices, we propose that the market as a whole can exhibit “emergent” behavior not shared by any of its constituents.
