Morphism of Complexes Induces Corresponding Morphisms on Cohomology Objects – Thought of the Day 146.0

Let A = Mod(R) be an abelian category. A complex in A is a sequence of objects and morphisms in A

… → Mi-1 →di-1 Mi →di Mi+1 → …

such that di ◦ di-1 = 0 ∀ i. We denote such a complex by M.

A morphism of complexes f : M → N is a sequence of morphisms fi : Mi → Ni in A commuting with the differentials, that is, fi+1 ◦ diM = diN ◦ fi for all i, where diM, diN denote the respective differentials:


We let C(A) denote the category whose objects are complexes in A and whose morphisms are morphisms of complexes.

Given a complex M of objects of A, the ith cohomology object is the quotient

Hi(M) = ker(di)/im(di−1)

This operation of taking cohomology at the ith place defines a functor

Hi(−) : C(A) → A,

since a morphism of complexes induces corresponding morphisms on cohomology objects.
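
Over a field this functor is computable by plain linear algebra; the following is a minimal sketch, assuming a complex of finite-dimensional Q-vector spaces with differentials given as matrices (the example complex is hypothetical):

```python
import numpy as np

def rank(A, tol=1e-10):
    """Numerical rank of a matrix (0 for an empty matrix)."""
    if A.size == 0:
        return 0
    return np.linalg.matrix_rank(A, tol=tol)

def cohomology_dims(dims, diffs):
    """dim H^i = dim ker(d^i) - dim im(d^{i-1}) for a complex of vector spaces.

    dims  : dimensions of M^0, ..., M^n
    diffs : diffs[i] is the matrix of d^i : M^i -> M^{i+1}
    """
    out = []
    for i in range(len(dims)):
        ker = dims[i] - (rank(diffs[i]) if i < len(diffs) else 0)
        im = rank(diffs[i - 1]) if i > 0 else 0
        out.append(ker - im)
    return out

# Example: 0 -> Q -> Q^2 -> Q -> 0 with d^0 = (1, 0)^T and d^1 = (0, 1)
d0 = np.array([[1.0], [0.0]])
d1 = np.array([[0.0, 1.0]])
assert (d1 @ d0 == 0).all()                  # d^1 ∘ d^0 = 0
print(cohomology_dims([1, 2, 1], [d0, d1]))  # -> [0, 0, 0]: the complex is exact
```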

Put another way, an object of C(A) is a Z-graded object

M = ⊕i Mi

of A, equipped with a differential, in other words an endomorphism d: M → M satisfying d2 = 0. The occurrence of differential graded objects in physics is well-known. In mathematics they are also extremely common. In topology one associates to a space X a complex of free abelian groups whose cohomology objects are the cohomology groups of X. In algebra it is often convenient to replace a module over a ring by resolutions of various kinds.

A topological space X may have many triangulations, and these lead to different chain complexes; one would like to associate to X a single equivalence class of complexes. Similarly, resolutions of a fixed module of a given type will not usually be unique, and one would like to consider all these resolutions on an equal footing.

A morphism of complexes f: M → N is a quasi-isomorphism if the induced morphisms on cohomology

Hi(f): Hi(M) → Hi(N) are isomorphisms ∀ i.

Two complexes M and N are said to be quasi-isomorphic if they are related by a chain of quasi-isomorphisms. In fact, it is sufficient to consider chains of length one, so that two complexes M and N are quasi-isomorphic iff there are quasi-isomorphisms

M ← P → N

For example, the chain complex of a topological space is well-defined up to quasi-isomorphism because any two triangulations have a common refinement. Similarly, all possible resolutions of a given module are quasi-isomorphic. Indeed, if

0 → S →f M0 →d0 M1 →d1 M2 → …

is a resolution of a module S, then the morphism of complexes induced by f, from the complex with S concentrated in degree zero to the complex

M0 →d0 M1 →d1 M2 → …

is by definition a quasi-isomorphism.

The objects of the derived category D(A) of our abelian category A will just be complexes of objects of A, but morphisms will be such that quasi-isomorphic complexes become isomorphic in D(A). In fact we can formally invert the quasi-isomorphisms in C(A) as follows:

There is a category D(A) and a functor Q: C(A) → D(A)

with the following two properties:

(a) Q inverts quasi-isomorphisms: if s: a → b is a quasi-isomorphism, then Q(s): Q(a) → Q(b) is an isomorphism.

(b) Q is universal with this property: if Q′ : C(A) → D′ is another functor which inverts quasi-isomorphisms, then there is a functor F : D(A) → D′ and an isomorphism of functors Q′ ≅ F ◦ Q.

First, consider the category C(A) as an oriented graph Γ, with the objects lying at the vertices and the morphisms being directed edges. Let Γ∗ be the graph obtained from Γ by adding in one extra edge s−1: b → a for each quasi-isomorphism s: a → b. Thus a finite path in Γ∗ is a sequence of the form f1 · f2 · … · fr−1 · fr where each fi is either a morphism of C(A), or is of the form s−1 for some quasi-isomorphism s of C(A). There is a unique minimal equivalence relation ∼ on the set of finite paths in Γ∗ generated by the following relations:

(a) s · s−1 ∼ idb and s−1 · s ∼ ida for each quasi-isomorphism s: a → b in C(A).

(b) g · f ∼ g ◦ f for composable morphisms f: a → b and g: b → c of C(A).

Define D(A) to be the category whose objects are the vertices of Γ∗ (these are the same as the objects of C(A)) and whose morphisms are given by equivalence classes of finite paths in Γ∗. Define a functor Q: C(A) → D(A) by using the identity morphism on objects, and by sending a morphism f of C(A) to the length one path in Γ∗ defined by f. The resulting functor Q satisfies the conditions of the above lemma.
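
As a toy illustration of this formal inversion, relation (a) alone amounts to free cancellation of words; the hypothetical sketch below represents a path as a list of (label, inverted) pairs and makes no attempt at relation (b), which would need the actual morphisms of C(A):

```python
def reduce_path(path):
    """Cancel adjacent s · s^-1 and s^-1 · s pairs in a formal path.

    A path is a list of (label, inverted) pairs read left to right;
    (s, True) stands for the formally added inverse s^-1 of a
    quasi-isomorphism s.  Only relation (a) of the text is applied.
    """
    stack = []
    for edge in path:
        if stack and stack[-1][0] == edge[0] and stack[-1][1] != edge[1]:
            stack.pop()   # s followed by s^-1 (or vice versa) is an identity
        else:
            stack.append(edge)
    return stack

# f · s^-1 · s · g reduces to f · g
print(reduce_path([("f", False), ("s", True), ("s", False), ("g", False)]))
# -> [('f', False), ('g', False)]
```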

The second property ensures that the category D(A) of the Lemma is unique up to equivalence of categories. We define the derived category of A to be any of these equivalent categories. The functor Q: C(A) → D(A) is called the localisation functor. Observe that there is a fully faithful functor

J: A → C(A)

which sends an object M to the trivial complex with M in the zeroth position and zeros elsewhere, and a morphism F: M → N to the morphism of complexes given by F in the zeroth position and zero elsewhere.


Composing with Q we obtain a functor A → D(A), which we again denote by J. This functor J is fully faithful, and so defines an embedding A → D(A). By definition the functor Hi(−): C(A) → A inverts quasi-isomorphisms and so descends to a functor

Hi(−): D(A) → A

Moreover, the composite functor H0(−) ◦ J is isomorphic to the identity functor on A.


Coarse Philosophies of Coarse Embeddabilities: Metric Space Conjectures Act Algorithmically On Manifolds – Thought of the Day 145.0


A coarse structure on a set X is defined to be a collection of subsets of X × X, called the controlled sets or entourages for the coarse structure, which satisfy some simple axioms. The most important of these states that if E and F are controlled then so is

E ◦ F := {(x, z) : ∃y, (x, y) ∈ E, (y, z) ∈ F}

Consider the metric spaces Zn and Rn. Their small-scale structure (their topology) is entirely different, but on the large scale they resemble each other closely: any geometric configuration in Rn can be approximated by one in Zn, to within a uniformly bounded error. We think of such spaces as “coarsely equivalent”. The other axioms require that the diagonal should be a controlled set, and that subsets, transposes, and (finite) unions of controlled sets should be controlled. It is more accurate to say that a coarse structure is the large-scale counterpart of a uniformity than of a topology.
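
The controlled-set composition above is just composition of relations; a minimal sketch on a finite window of Z, using the bounded coarse structure as a hypothetical example:

```python
def compose(E, F):
    """E ∘ F = {(x, z) : ∃ y with (x, y) ∈ E and (y, z) ∈ F}."""
    return {(x, z) for (x, y1) in E for (y2, z) in F if y1 == y2}

# In the bounded coarse structure on Z, E_r = {(x, y) : |x - y| <= r} is
# controlled, and E_1 ∘ E_2 ⊆ E_3: composing bounded-displacement
# relations keeps the displacement bounded.
pts = range(-5, 6)
E1 = {(x, y) for x in pts for y in pts if abs(x - y) <= 1}
E2 = {(x, y) for x in pts for y in pts if abs(x - y) <= 2}
E3 = {(x, y) for x in pts for y in pts if abs(x - y) <= 3}
assert compose(E1, E2) <= E3
```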

Coarse structures and coarse spaces enjoy a philosophical advantage over coarse metric spaces: all left-invariant bounded geometry metrics on a countable group induce the same metric coarse structure, which is therefore transparently uniquely determined by the group. On the other hand, the absence of a natural gauge complicates the notion of a coarse family: while it is natural to speak of sets of uniform size in different metric spaces, it is not possible to do so in different coarse spaces without imposing additional structure.

Mikhail Leonidovich Gromov introduced the notion of coarse embedding for metric spaces. Let X and Y be metric spaces.

A map f : X → Y is said to be a coarse embedding if ∃ nondecreasing functions ρ1 and ρ2 from R+ = [0, ∞) to R such that

  • ρ1(d(x,y)) ≤ d(f(x),f(y)) ≤ ρ2(d(x,y)) ∀ x, y ∈ X.
  • limr→∞ ρi(r) = +∞ (i=1, 2).
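
The definition can be sanity-checked numerically. As a hypothetical example, the map f(n) = (n, sin n) from Z into R2 is a coarse embedding with ρ1(r) = r and ρ2(r) = r + 2:

```python
import math

def f(n):
    """A hypothetical map Z -> R^2: a bounded perturbation of the inclusion."""
    return (float(n), math.sin(n))

def dist2(p, q):
    """Euclidean distance in R^2."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

rho1 = lambda r: r          # nondecreasing, tends to infinity
rho2 = lambda r: r + 2.0    # nondecreasing, tends to infinity

# Check rho1(d(x,y)) <= d(f(x), f(y)) <= rho2(d(x,y)) on a sample window.
for x in range(-50, 51):
    for y in range(-50, 51):
        r = abs(x - y)
        assert rho1(r) <= dist2(f(x), f(y)) <= rho2(r)
```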

Intuitively, coarse embeddability of a metric space X into Y means that we can draw a picture of X in Y which reflects the large scale geometry of X. In the early 1990s, Gromov suggested that coarse embeddability of a discrete group into Hilbert space or some Banach spaces should be relevant to solving the Novikov conjecture. The connection between large scale geometry and differential topology and differential geometry, such as the Novikov conjecture, is built by index theory. Recall that an elliptic differential operator D on a compact manifold M is Fredholm in the sense that the kernel and cokernel of D are finite dimensional. The Fredholm index of D, which is defined by

index(D) = dim(kerD) − dim(cokerD),

has the following fundamental properties:

(1) it is an obstruction to invertibility of D;

(2) it is invariant under homotopy equivalence.
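
In finite dimensions both properties can be made concrete; a minimal sketch with numpy, where a rectangular matrix stands in for the operator D:

```python
import numpy as np

def index(A, tol=1e-10):
    """Fredholm index of a linear map A : R^n -> R^m:
    dim ker A - dim coker A = (n - rank A) - (m - rank A) = n - m."""
    m, n = A.shape
    r = np.linalg.matrix_rank(A, tol=tol)
    return (n - r) - (m - r)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((3, 5))
# Property (2): the index is a deformation invariant.  Any two maps
# R^5 -> R^3 can be deformed into one another and share the index 5 - 3 = 2,
# even though kernel and cokernel dimensions can jump separately.
assert index(A) == index(B) == 2
# Property (1): a nonzero index obstructs invertibility (no square matrix).
```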

The celebrated Atiyah-Singer index theorem computes the Fredholm index of elliptic differential operators on compact manifolds and has important applications. However, an elliptic differential operator on a noncompact manifold is in general not Fredholm in the usual sense, but Fredholm in a generalized sense. The generalized Fredholm index for such an operator is called the higher index. In particular, on a general noncompact complete Riemannian manifold M, John Roe (Coarse Cohomology and Index Theory on Complete Riemannian Manifolds) introduced a higher index theory for elliptic differential operators on M.

The coarse Baum-Connes conjecture is an algorithm to compute the higher index of an elliptic differential operator on noncompact complete Riemannian manifolds. By the descent principle, the coarse Baum-Connes conjecture implies the Novikov higher signature conjecture. Guoliang Yu has proved the coarse Baum-Connes conjecture for bounded geometry metric spaces which are coarsely embeddable into Hilbert space. The metric spaces which admit coarse embeddings into Hilbert space form a large class, including e.g. all amenable groups and hyperbolic groups. In general, however, there are counterexamples to the coarse Baum-Connes conjecture; a notorious class comes from expander graphs. On the other hand, the coarse Novikov conjecture (i.e. the injectivity part of the coarse Baum-Connes conjecture) is an algorithm for determining the non-vanishing of the higher index. Kasparov-Yu have proved the coarse Novikov conjecture for spaces which admit coarse embeddings into a uniformly convex Banach space.

Game Theory and Finite Strategies: Nash Equilibrium Takes Quantum Computations to Optimality.


Finite games of strategy, within the framework of noncooperative quantum game theory, can be approached from finite chain categories, where by a finite chain category is understood a category C(n;N) generated by n objects and N morphic chains, called primitive chains, linking the objects in a specific order, such that there is a single labelling. C(n;N) is, thus, generated by N primitive chains of the form:

x0 →f1 x1 →f2 x2 → … → xn-1 →fn xn —– (1)

A finite chain category is interpreted as a finite game category as follows: to each morphism in a chain xi-1 →fi xi, there corresponds a strategy played by the player that occupies the position i; in this way, a chain corresponds to a sequence of strategic choices available to the players. A quantum formal theory, for a finite game category C(n;N), is defined as a formal structure such that each morphic fundament fi of the morphic relation xi-1 →fi xi is a tuple of the form:

fi := (Hi, Pi, Pˆfi) —– (2)

where Hi is the i-th player’s Hilbert space, Pi is a complete set of projectors onto a basis that spans the Hilbert space, and Pˆfi ∈ Pi. This structure is interpreted as follows: from the strategic Hilbert space Hi, given the pure strategies’ projectors Pi, the player chooses to play Pˆfi .

From the morphic fundament (2), an assumption has to be made on composition in the finite category; we assume the following tensor product composition operation:

fj ◦ fi = fji —– (3)

fji = (Hji = Hj ⊗ Hi, Pji = Pj ⊗ Pi, Pˆfji = Pˆfj ⊗ Pˆfi) —– (4)

From here, a morphism for a game choice path could be introduced as:

x0 →fn…21 xn —– (5)

fn…21 = (HG = ⊗i=n1 Hi, PG = ⊗i=n1 Pi, Pˆfn…21 = ⊗i=n1 Pˆfi) —– (6)

in this way, the choices along the chain of players are completely encoded in the tensor product projectors Pˆfn…21. There are N = ∏i=1n dim(Hi) such morphisms, a number that coincides with the number of primitive chains in the category C(n;N).

Each projector can be addressed as a strategic marker of a game path and leads to the matrix form of an Arrow-Debreu security; we therefore call it a game Arrow-Debreu projector. While, in traditional financial economics, the Arrow-Debreu securities pay one unit of numeraire per state of nature, in the present game setting they pay one unit of payoff per game path at the beginning of the game. However far this analogy may be taken, it must be addressed with some care, since these are not securities; rather, they represent, projectively, strategic choice chains in the game, so that the price of a projector Pˆfn…21 (the Arrow-Debreu price) is the price of a strategic choice and, therefore, the result of the strategic evaluation of the game by the different players.

Now, let |Ψ⟩ be a ket vector in the game’s Hilbert space HG, such that:

|Ψ⟩ = ∑fn…21 ψ(fn…21)|fn…21⟩ —– (7)

where ψ(fn…21) is the Arrow-Debreu price amplitude, with the condition:

∑fn…21 |ψ(fn…21)|2 = D —– (8)

for D > 0; then, |ψ(fn…21)|2 corresponds to the Arrow-Debreu price for the game path fn…21 and D is the discount factor in riskless borrowing, defining an economic scale for temporal connections between one unit of payoff now and one unit of payoff at the end of the game, such that one unit of payoff now can be capitalized to the end of the game (when the decision takes place) through a multiplication by 1/D, while one unit of payoff at the end of the game can be discounted to the beginning of the game through multiplication by D.

In this case, the unit operator 1ˆ = ∑fn…21 Pˆfn…21 has a profile similar to that of a bond in standard financial economics, with ⟨Ψ|1ˆ|Ψ⟩ = D; on the other hand, the general payoff system, for each player, can be addressed from an operator expansion:

πiˆ = ∑fn…21 πi (fn…21) Pˆfn…21 —– (9)

where each weight πi(fn…21) corresponds to quantities associated with each Arrow-Debreu projector that can be interpreted as similar to the quantities of each Arrow-Debreu security for a general asset. Multiplying each weight by the corresponding Arrow-Debreu price, one obtains the payoff value for each alternative such that the total payoff for the player at the end of the game is given by:

⟨Ψ|πˆi|Ψ⟩/D = ∑fn…21 πi(fn…21) |ψ(fn…21)|2/D —– (10)

We can discount the total payoff to the beginning of the game using the discount factor D, leading to the present value payoff for the player:

PVi = ⟨Ψ|πˆi|Ψ⟩ = D ∑fn…21 πi(fn…21) |ψ(fn…21)|2/D —– (11)

where πi(fn…21) represents quantities, while the ratio |ψ(fn…21)|2/D represents the future value at the decision moment of the quantum Arrow-Debreu prices (capitalized quantum Arrow-Debreu prices). Introducing the ket

|Q⟩ ∈ HG, such that:

|Q⟩ = 1/√D |Ψ⟩ —– (12)

then, |Q⟩ is a normalized ket for which the price amplitudes are expressed in terms of their future value. Replacing in (11), we have:

PVi = D ⟨Q|πˆi|Q⟩ —– (13)

In the quantum game setting, the capitalized Arrow-Debreu price amplitudes ⟨fn…21|Q⟩ become quantum strategic configurations, resulting from the strategic cognition of the players with respect to the game. Given |Q⟩, each player’s strategic valuation of each pure strategy can be obtained by introducing the projector chains:

Cˆfi = ∑fn…i+1, fi-1…1 Pˆfn…i+1 ⊗ Pˆfi ⊗ Pˆfi-1…1 —– (14)

with ∑fi Cˆfi = 1ˆ. For each alternative choice of the player i, the chain sums over all of the other choice paths for the rest of the players; such chains are called coarse-grained chains in the decoherent histories approach to quantum mechanics. Following this approach, one may introduce the pricing functional from the expression for the decoherence functional:

D (fi, gi : |Q⟩) = ⟨Q| Cˆfi Cˆgi|Q⟩ —– (15)

we then have, for each player,

D (fi, gi : |Q⟩) = 0, ∀ fi ≠ gi —– (16)

this is the usual quantum mechanics condition for additivity of measure (also known as the decoherence condition), which means that the capitalized prices for two different alternative choices of player i are additive. Then, we can work with the pricing functional D(fi, fi : |Q⟩) as giving, for each player, an Arrow-Debreu capitalized price associated with the pure strategy expressed by fi. Given that (16) is satisfied, each player’s quantum Arrow-Debreu pricing matrix, defined analogously to the decoherence matrix from the decoherent histories approach, is a diagonal matrix and can be expanded as a linear combination of the projectors for each player’s pure strategies as follows:

Di (|Q⟩) = ∑fi D(fi, fi : |Q⟩) Pˆfi —– (17)

which has the mathematical expression of a mixed strategy. Thus, each player chooses, from all of the possible quantum computations, the one that maximizes the present value payoff function with all the other players’ strategies held fixed, in agreement with the notion of Nash equilibrium.
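
The bookkeeping in (7)-(16) can be checked numerically; a minimal sketch for two players with two pure strategies each, where the discount factor, amplitudes and payoffs below are hypothetical numbers chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two players, two pure strategies each: H_G = C^2 (x) C^2, N = 4 game paths.
basis = np.eye(4)
P = [np.outer(basis[k], basis[k]) for k in range(4)]  # game Arrow-Debreu projectors

D = 0.9                                   # discount factor
psi = rng.standard_normal(4)              # price amplitudes (real, for simplicity)
psi *= np.sqrt(D) / np.linalg.norm(psi)   # impose eq. (8): sum |psi|^2 = D
Psi = psi
assert np.isclose(np.sum(Psi ** 2), D)    # eq. (8)

pi = np.array([3.0, 1.0, 0.0, 2.0])       # hypothetical payoffs per game path
pi_op = sum(pi[k] * P[k] for k in range(4))            # payoff operator, eq. (9)

Q = Psi / np.sqrt(D)                      # eq. (12): normalised ket |Q>
PV = D * Q @ pi_op @ Q                    # present value payoff, eq. (13)
assert np.isclose(PV, np.sum(pi * Psi ** 2))           # = sum_path pi |psi|^2

# Coarse-grained chains for player 1 (eq. (14)): project on player 1's
# strategy, summed over player 2's choices.  Distinct chains are orthogonal,
# so the off-diagonal decoherence functional (15) vanishes, as in eq. (16).
I2 = np.eye(2)
C0 = np.kron(np.diag([1.0, 0.0]), I2)
C1 = np.kron(np.diag([0.0, 1.0]), I2)
assert np.isclose(Q @ C0 @ C1 @ Q, 0.0)
```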

Probability Space Intertwines Random Walks – Thought of the Day 144.0


Many deliberations of stochasticity start with “let (Ω, F, P) be a probability space”. One can actually follow such discussions without having the slightest idea what Ω is and who lives inside. So, what is “Ω, F, P” and why do we need it? Indeed, for many users of probability and statistics, a random variable X is synonymous with its probability distribution μX and all computations such as sums, expectations, etc., done on random variables amount to analytical operations such as integrations, Fourier transforms, convolutions, etc., done on their distributions. For defining such operations, you do not need a probability space. Isn’t this all there is to it?

One can in fact compute quite a lot of things without using probability spaces in an essential way. However the notions of probability space and random variable are central in modern probability theory so it is important to understand why and when these concepts are relevant.

From a modelling perspective, the starting point is a set of observations taking values in some set E (think for instance of numerical measurement, E = R) for which we would like to build a stochastic model. We would like to represent such observations x1, . . . , xn as samples drawn from a random variable X defined on some probability space (Ω, F, P). It is important to see that the only natural ingredient here is the set E where the random variables will take their values: the set of events Ω is not given a priori and there are many different ways to construct a probability space (Ω, F, P) for modelling the same set of observations.

Sometimes it is natural to identify Ω with E, i.e., to identify the randomness ω with its observed effect. For example if we consider the outcome of a dice rolling experiment as an integer-valued random variable X, we can define the set of events to be precisely the set of possible outcomes: Ω = {1, 2, 3, 4, 5, 6}. In this case, X(ω) = ω: the outcome of the randomness is identified with the randomness itself. This choice of Ω is called the canonical space for the random variable X: the random variable X is simply the identity map X(ω) = ω and the probability measure P is formally the same as the distribution of X. Note that here X is a one-to-one map: given the outcome of X one knows which scenario has happened, so any other random variable Y is completely determined by the observation of X. Therefore, using the canonical construction for the random variable X, we cannot define, on the same probability space, another random variable which is independent of X: X will be the sole source of randomness for all other variables in the model. This also shows that, although the canonical construction is the simplest way to construct a probability space for representing a given random variable, it forces us to identify this particular random variable with the “source of randomness” in the model. Therefore when we want to deal with models with a sufficiently rich structure, we need to distinguish Ω – the set of scenarios of randomness – from E, the set of values of our random variables.

Let us give an example where it is natural to distinguish the source of randomness from the random variable itself. For instance, if one is modelling the market value of a stock at some date T in the future as a random variable S1, one may consider that the stock value is affected by many factors such as external news, market supply and demand, economic indicators, etc., summed up in some abstract variable ω, which may not even have a numerical representation: it corresponds to a scenario for the future evolution of the market. S1(ω) is then the stock value if the market scenario which occurs is given by ω. If the only interesting quantity in the model is the stock price then one can always label the scenario ω by the value of the stock price S1(ω), which amounts to identifying all scenarios where the stock S1 takes the same value and using the canonical construction. However if one considers a richer model where there are now other stocks S2, S3, . . . involved, it is more natural to distinguish the scenario ω from the random variables S1(ω), S2(ω),… whose values are observed in these scenarios but may not completely pin them down: knowing S1(ω), S2(ω),… one does not necessarily know which scenario has happened. In this way one reserves the possibility of adding more random variables later on without changing the probability space.

These observations have the following important consequence: the probabilistic description of a random variable X can be reduced to the knowledge of its distribution μX only in the case where the random variable X is the only source of randomness. In this case, a stochastic model can be built using a canonical construction for X. In all other cases – as soon as we are concerned with a second random variable which is not a deterministic function of X – the underlying probability measure P contains more information on X than just its distribution. In particular, it contains all the information about the dependence of the random variable X with respect to all other random variables in the model: specifying P means specifying the joint distributions of all random variables constructed on Ω. For instance, knowing the distributions μX, μY of two variables X, Y does not allow one to compute their covariance or joint moments. Only in the case where all random variables involved are mutually independent can one reduce all computations to operations on their distributions. This is the case covered in most introductory texts on probability, which explains why one can go quite far, for example in the study of random walks, without formalizing the notion of probability space.
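
The last point is easy to make concrete: here is a minimal sketch of two joint laws on the same Ω with identical marginals μX, μY but different covariances, so the marginals alone cannot determine the dependence:

```python
# Two joint laws on Omega = {0,1} x {0,1} with identical marginals for
# X(w) = w[0] and Y(w) = w[1], but different dependence structure.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
comonotone  = {(0, 0): 0.50, (1, 1): 0.50}   # X = Y almost surely

def cov(joint):
    """Covariance of X and Y under the given joint law on Omega."""
    E = lambda f: sum(p * f(w) for w, p in joint.items())
    return E(lambda w: w[0] * w[1]) - E(lambda w: w[0]) * E(lambda w: w[1])

# Same distributions mu_X, mu_Y (Bernoulli(1/2)) in both models ...
for joint in (independent, comonotone):
    assert sum(p for (x, _), p in joint.items() if x == 1) == 0.5
    assert sum(p for (_, y), p in joint.items() if y == 1) == 0.5

# ... yet the covariances differ: P, not the marginals, carries dependence.
print(cov(independent), cov(comonotone))   # -> 0.0 0.25
```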

The Affinity of Mirror Symmetry to Algebraic Geometry: Going Beyond Formalism



Even though the formalism of homological mirror symmetry is well established, what of other explanations of mirror symmetry which lie closer to classical differential and algebraic geometry? One way to tackle this is the so-called Strominger-Yau-Zaslow mirror symmetry, or SYZ for short.

The central physical ingredient in this proposal is T-duality. To explain this, let us consider a superconformal sigma model with target space (M, g), and denote it (defined as a geometric functor, or as a set of correlation functions), as

CFT(M, g)

In physics, a duality is an equivalence

CFT(M, g) ≅ CFT(M′, g′)

which holds despite the fact that the underlying geometries (M,g) and (M′, g′) are not classically diffeomorphic.

T-duality is a duality which relates two CFT’s with toroidal target space, M ≅ M′ ≅ Td, but different metrics. In rough terms, the duality relates a “small” target space, with noncontractible cycles of length L < ls, with a “large” target space in which all such cycles have length L > ls.

This sort of relation is generic to dualities and follows from the following logic. If all length scales (lengths of cycles, curvature lengths, etc.) are greater than ls, string theory reduces to conventional geometry. Now, in conventional geometry, we know what it means for (M, g) and (M′, g′) to be non-isomorphic. Any modification to this notion must be associated with a breakdown of conventional geometry, which requires some length scale to be “sub-stringy,” with L < ls. To state T-duality precisely, let us first consider M = M′ = S1. We parameterise this with a coordinate X ∈ R making the identification X ∼ X + 2π. Consider a Euclidean metric gR given by ds2 = R2dX2. The real parameter R is usually called the “radius” from the obvious embedding in R2. This manifold is Ricci-flat and thus the sigma model with this target space is a conformal field theory, the “c = 1 boson.” Let us furthermore set the string scale ls = 1. With this, we attain a complete physical equivalence.

CFT(S1, gR) ≅ CFT(S1, g1/R)

Thus these two target spaces are indistinguishable from the point of view of string theory.

Just to give a physical picture for what this means, suppose for sake of discussion that superstring theory describes our universe, and thus that in some sense there must be six extra spatial dimensions. Suppose further that we had evidence that the extra dimensions factorized topologically and metrically as K5 × S1; then it would make sense to ask: What is the radius R of this S1 in our universe? In principle this could be measured by producing sufficiently energetic particles (so-called “Kaluza-Klein modes”), or perhaps measuring deviations from Newton’s inverse square law of gravity at distances L ∼ R. In string theory, T-duality implies that R ≥ ls, because any theory with R < ls is equivalent to another theory with R > ls. Thus we have a nontrivial relation between two (in principle) observable quantities, R and ls, which one might imagine testing experimentally. Let us now consider the theory CFT(Td, g), where Td is the d-dimensional torus, with coordinates Xi parameterising Rd/2πZd, and a constant metric tensor gij. Then there is a complete physical equivalence

CFT(Td, g) ≅ CFT(Td, g−1)

In fact this is just one element of a discrete group of T-duality symmetries, generated by T-dualities along one-cycles, and large diffeomorphisms (those not continuously connected to the identity). The complete group is isomorphic to SO(d, d; Z).

While very different from conventional geometry, T-duality has a simple intuitive explanation. This starts with the observation that the possible embeddings of a string into X can be classified by the fundamental group π1(X). Strings representing non-trivial homotopy classes are usually referred to as “winding states.” Furthermore, since strings interact by interconnecting at points, the group structure on π1 provided by concatenation of based loops is meaningful and is respected by interactions in the string theory. Now π1(Td) ≅ Zd, as an abelian group, referred to as the group of “winding numbers”.

Of course, there is another Zd we could bring into the discussion, the Pontryagin dual of the U(1)d of which Td is an affinization. An element of this group is referred to physically as a “momentum,” as it is the eigenvalue of a translation operator on Td. Again, this group structure is respected by the interactions. These two group structures, momentum and winding, can be summarized in the statement that the full closed string algebra contains the group algebra C[Zd] ⊕ C[Zd].

In essence, the point of T-duality is that if we quantize the string on a sufficiently small target space, the roles of momentum and winding will be interchanged. But the main point can be seen by bringing in some elementary spectral geometry. Besides the algebra structure, another invariant of a conformal field theory is the spectrum of its Hamiltonian H (technically, the Virasoro operator L0 + L ̄0). This Hamiltonian can be thought of as an analog of the standard Laplacian ∆g on functions on X, and its spectrum on Td with metric g is

Spec ∆g = {∑i,j=1d gij pi pj : p ∈ Zd}

On the other hand, the energy of a winding string is (intuitively) a function of its length. On our torus, a geodesic with winding number w ∈ Zd has length squared

L2 = ∑i,j=1d gijwiwj

Now, the only string theory input we need to bring in is that the total Hamiltonian contains both terms,

H = ∆g + L2 + · · ·

where the extra terms … express the energy of excited (or “oscillator”) modes of the string. Then, the inversion g → g−1, combined with the interchange p ↔ w, leaves the spectrum of H invariant. This is T-duality.
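
This invariance is easy to verify numerically on a truncated lattice of momenta and windings; a sketch with a hypothetical constant metric on T2 (the inverse metric appears in the momentum term, the metric itself in the winding term):

```python
import itertools
import numpy as np
from collections import Counter

def spectrum(g, cutoff=2):
    """Multiset {p.g^{-1}.p + w.g.w} over momenta p and windings w in Z^d
    with entries bounded by `cutoff`: the momentum-plus-winding part of the
    string Hamiltonian H on T^d with metric g."""
    g_inv = np.linalg.inv(g)
    d = g.shape[0]
    vecs = [np.array(v, dtype=float)
            for v in itertools.product(range(-cutoff, cutoff + 1), repeat=d)]
    return Counter(round(float(p @ g_inv @ p + w @ g @ w), 6)
                   for p in vecs for w in vecs)

g = np.array([[2.0, 0.5], [0.5, 1.0]])    # a hypothetical constant metric on T^2
# T-duality: g -> g^{-1} combined with p <-> w leaves Spec H invariant.
assert spectrum(g) == spectrum(np.linalg.inv(g))
```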

There is a simple generalization of the above to the case with a non-zero B-field on the torus satisfying dB = 0. In this case, since B is a constant antisymmetric tensor, we can label CFT’s by the matrix g + B. Now, the basic T-duality relation becomes

CFT(Td, g + B) ≅ CFT(Td, (g + B)−1)

Another generalization, which is considerably more subtle, is to do T-duality in families, or fiberwise T-duality. The same arguments can be made, and would become precise in the limit that the metric on the fibers varies on length scales far greater than ls, and has curvature lengths far greater than ls. This is sometimes called the “adiabatic limit” in physics. While this is a very restrictive assumption, there are more heuristic physical arguments that T-duality should hold more generally, with corrections to the relations proportional to curvatures ls2R and derivatives ls∂ of the fiber metric, both in perturbation theory and from world-sheet instantons.

Fréchet Spaces and Presheave Morphisms.



A topological vector space V is both a topological space and a vector space such that the vector space operations are continuous. A topological vector space is locally convex if its topology admits a basis consisting of convex sets (a set A is convex if (1 − t)x + ty ∈ A ∀ x, y ∈ A and t ∈ [0, 1]).

We say that a locally convex topological vector space is a Fréchet space if its topology is induced by a translation-invariant metric d and the space is complete with respect to d, that is, all the Cauchy sequences are convergent.

A seminorm on a vector space V is a real-valued function p such that ∀ x, y ∈ V and scalars a we have:

(1) p(x + y) ≤ p(x) + p(y),

(2) p(ax) = |a|p(x),

(3) p(x) ≥ 0.

The difference between the norm and the seminorm comes from the last property: we do not ask that if x ≠ 0, then p(x) > 0, as we would do for a norm.

If {pi}i∈N is a countable family of seminorms on a topological vector space V separating points, i.e. if x ≠ 0, there is an i with pi(x) ≠ 0, then ∃ a translation-invariant metric d inducing the topology, defined in terms of the {pi}:

d(x, y) = ∑i=1∞ 1/2i pi(x – y)/(1 + pi(x – y))

The following characterizes Fréchet spaces, giving an effective method to construct them using seminorms.

A topological vector space V is a Fréchet space iff it satisfies the following three properties:

  • it is complete as a topological vector space;
  • it is a Hausdorff space;
  • its topology is induced by a countable family of seminorms {pi}i∈N, i.e., U ⊂ V is open iff for every u ∈ U ∃ K ≥ 0 and ε > 0 such that {v : pk(u – v) < ε ∀ k ≤ K} ⊂ U.

We say that a sequence (xn) in V converges to x in the Fréchet space topology defined by a family of seminorms iff it converges to x with respect to each of the given seminorms. In other words, xn → x, iff pi(xn – x) → 0 for each i.
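
A minimal sketch of this metric and of convergence seminorm-by-seminorm, taking V = R10 with the coordinate seminorms pi(x) = |xi| as a hypothetical (truncated) example:

```python
# V = R^10, with separating seminorms p_i(x) = |x_i|.
seminorms = [(lambda i: (lambda x: abs(x[i])))(i) for i in range(10)]

def d(x, y):
    """d(x, y) = sum_i 2^{-(i+1)} p_i(x - y) / (1 + p_i(x - y))."""
    total = 0.0
    for i, p in enumerate(seminorms):
        v = p([a - b for a, b in zip(x, y)])
        total += 2.0 ** (-(i + 1)) * v / (1.0 + v)
    return total

zero = [0.0] * 10
xs = [[1.0 / n] * 10 for n in range(1, 200)]

# x_n = (1/n, ..., 1/n) -> 0 in every seminorm, hence d(x_n, 0) -> 0 ...
assert d(xs[-1], zero) < d(xs[0], zero)
# ... and d is translation-invariant: d(x + t, y + t) = d(x, y).
t = [3.0] * 10
shift = lambda u: [a + b for a, b in zip(u, t)]
x, y = xs[0], xs[5]
assert abs(d(shift(x), shift(y)) - d(x, y)) < 1e-12
```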

Two families of seminorms defined on the locally convex vector space V are said to be equivalent if they induce the same topology on V.

To construct a Fréchet space, one typically starts with a locally convex topological vector space V and defines a countable family of seminorms pk on V inducing its topology and such that:

  1. if x ∈ V and pk(x) = 0 ∀ k ≥ 0, then x = 0 (separation property);
  2. if (xn) is a sequence in V which is Cauchy with respect to each seminorm, then ∃ x ∈ V such that (xn) converges to x with respect to each seminorm (completeness property).

The topology induced by these seminorms turns V into a Fréchet space; property (1) ensures that it is Hausdorff, while the property (2) guarantees that it is complete. A translation-invariant complete metric inducing the topology on V can then be defined as above.

The most important example of a Fréchet space is the vector space C∞(U) of smooth functions on an open set U ⊆ Rn or, more generally, the vector space C∞(M), where M is a differentiable manifold.

For each open set U ⊆ Rn (or U ⊂ M), for each compact K ⊂ U and for each multi-index I, we define

||ƒ||K,I := supx∈K |(∂|I|/∂xI (ƒ))(x)|, ƒ ∈ C∞(U)

Each ||.||K,I defines a seminorm. The family of seminorms obtained by considering all of the multi-indices I and the (countable number of) compact subsets K covering U satisfies the properties (1) and (2) detailed above, hence makes C∞(U) into a Fréchet space. The sets of the form

{ƒ ∈ C∞(U) | ||ƒ − g||K,I < ε}

with fixed g ∈ C∞(U), compact K ⊂ U and multi-index I are open sets and, together with their finite intersections, form a basis for the topology.
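In one variable, the seminorms ||ƒ||K,I can be approximated by sampling the relevant derivative on a fine grid over the compact set. The function and interval below are hypothetical illustrations:

```python
import math

def seminorm(deriv, K, n=10001):
    # Approximate ||f||_{K,I} = sup_{x in K} |(d^{|I|} f / dx^I)(x)| on a fine
    # grid over the compact interval K = [a, b], using the analytically known
    # derivative `deriv` of f.
    a, b = K
    return max(abs(deriv(a + (b - a) * i / (n - 1))) for i in range(n))

# f(x) = sin(x) on K = [0, 1]: the order-0 and order-1 seminorms are
# sup |sin| = sin(1) and sup |cos| = cos(0) = 1 on this interval.
p0 = seminorm(math.sin, (0.0, 1.0))
p1 = seminorm(math.cos, (0.0, 1.0))
print(p0, p1)
```

Note that every such seminorm vanishes on functions supported away from K, which is why a single ||.||K,I is not a norm and the whole countable family is needed to separate points.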

All these constructions and results can be generalized to smooth manifolds. Let M be a smooth manifold and let U be an open subset of M. If K is a compact subset of U and D is a differential operator over U, then

pK,D(ƒ) := supx∈K |D(ƒ)(x)|

is a seminorm. The family of all the seminorms pK,D, with K and D varying among all compact subsets and differential operators respectively, is a separating family of seminorms endowing C∞M(U) with the structure of a complete locally convex vector space. Moreover, there exists an equivalent countable family of seminorms, hence C∞M(U) is a Fréchet space. Indeed, let {Vj} be a countable open cover of U by coordinate open subsets, and, for each j, let {Kj,i} be a countable family of compact subsets of Vj such that ∪i Kj,i = Vj. We then have the countable family of seminorms

pK,I := supx∈K |(∂|I|ƒ/∂xI)(x)|, K ∈ {Kj,i}, I a multi-index

inducing the topology. C∞M(U) is also an algebra, since the product of two smooth functions is again a smooth function.

A Fréchet space V is said to be a Fréchet algebra if its topology can be defined by a countable family of submultiplicative seminorms, i.e., a countable family {qi}i∈N of seminorms satisfying

qi(ƒg) ≤ qi(ƒ) qi(g) ∀ i ∈ N
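For plain sup-seminorms qK(ƒ) = supx∈K |ƒ(x)| (no derivatives), submultiplicativity follows pointwise from |ƒg| = |ƒ||g|. A quick numerical check, with hypothetical choices of functions and compact set:

```python
import math

def q(f, K, n=2001):
    # q_K(f) = sup_{x in K} |f(x)|, approximated on a grid over K = [a, b].
    a, b = K
    return max(abs(f(a + (b - a) * i / (n - 1))) for i in range(n))

K = (0.0, 2.0)
f, g = math.sin, math.exp
lhs = q(lambda x: f(x) * g(x), K)   # q_K(fg)
rhs = q(f, K) * q(g, K)             # q_K(f) * q_K(g)
print(lhs <= rhs)                   # sup|fg| <= sup|f| sup|g|
```

The derivative seminorms ||.||K,I are not submultiplicative as written, since the Leibniz rule introduces binomial factors, but they can be rescaled to an equivalent submultiplicative family.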

Let F be a sheaf of real vector spaces over a manifold M. F is a Fréchet sheaf if:

(1)  for each open set U ⊆ M, F(U) is a Fréchet space;

(2)  for each open set U ⊆ M and for each open cover {Ui} of U, the topology of F(U) is the initial topology with respect to the restriction maps F(U) → F(Ui), that is, the coarsest topology making the restriction morphisms continuous.

In particular, the restriction map F(U) → F(V) is continuous for every open V ⊆ U. A morphism of sheaves ψ: F → F′ is said to be continuous if the map F(U) → F′(U) is continuous for each open subset U ⊆ M.

Global Significance of Chinese Investments. My Deliberations in Mumbai (04/03/2018)


What are fitted values in statistics?

These are the values of an output variable that have been predicted by a model fitted to a set of data. A statistical model is generally an equation whose graph includes or approximates the majority of the data points in a given data set. Fitted values are generated by extending the model of past known data points in order to predict unknown values. They are also called predicted values.
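As a minimal illustration (the data set is hypothetical), the fitted values of an ordinary least-squares line are the model's predictions at the observed inputs:

```python
# Toy data set (hypothetical), roughly y = 2x plus noise.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

# Ordinary least-squares fit y = a + b*x.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

fitted = [a + b * x for x in xs]                  # the fitted (predicted) values
residuals = [y - f for y, f in zip(ys, fitted)]   # observed minus fitted
print(fitted)
```

With an intercept in the model, the residuals always sum to zero, so the fitted values carry the same mean as the observations.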

What are outliers in statistics?

These are observation points that are distant from other observations; they may arise due to variability in the measurement or may indicate experimental error. They may also arise from a heavy-tailed distribution.
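A common rule of thumb (Tukey's fences) flags as outliers the points lying more than 1.5 interquartile ranges outside the quartiles. A sketch on a hypothetical data set:

```python
def iqr_outliers(data):
    # Flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (Tukey's fences).
    s = sorted(data)
    n = len(s)

    def quantile(qv):
        # Linear interpolation between order statistics.
        pos = qv * (n - 1)
        lo = int(pos)
        frac = pos - lo
        return s[lo] + frac * (s[min(lo + 1, n - 1)] - s[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < lo_fence or x > hi_fence]

print(iqr_outliers([10, 12, 11, 13, 12, 11, 14, 95]))  # -> [95]
```

Because the fences are built from quartiles rather than the mean, the rule is itself robust to the very outliers it is trying to detect.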

What is LBS (Locational Banking statistics)?

The locational banking statistics gather quarterly data on international financial claims and liabilities of bank offices in the reporting countries. Total positions are broken down by currency, by sector (bank and non-bank), by country of residence of the counterparty, and by nationality of reporting banks. Both domestically-owned and foreign-owned banking offices in the reporting countries record their positions on a gross (unconsolidated) basis, including those vis-à-vis own affiliates in other countries. This is consistent with the residency principle of national accounts, balance of payments and external debt statistics.

What is CEIC?

Census and Economic Information Centre

What are spillover effects?

These refer to the impact that seemingly unrelated events in one nation can have on the economies of other nations. Since 2009, China has emerged as a major source of spillover effects. This is because Chinese manufacturers have driven much of the global commodity demand growth since 2000. With China now being the second-largest economy in the world, the number of countries that experience spillover effects from a Chinese slowdown is significant. China slowing down has a palpable impact on worldwide trade in metals, energy, grains and other commodities.

How does China deal with its Non-Performing Assets?


China adopted a four-point strategy to address the problems. The first was to reduce risks by strengthening banks and spearheading reforms of the state-owned enterprises (SOEs) by reducing their level of debt. The Chinese ensured that the nationalized banks were strengthened by raising disclosure standards across the board.

The second important measure was enacting laws that allowed the creation of asset management companies, equity participation and most importantly, asset-based securitization. The “securitization” approach is being taken by the Chinese to handle even their current NPA issue and is reportedly being piloted by a handful of large banks with specific emphasis on domestic investors. According to the International Monetary Fund (IMF), this is a prudent and preferred strategy since it gets assets off the balance sheets quickly and allows banks to receive cash which could be used for lending.

The third key measure that the Chinese took was to ensure that the government had the financial loss of debt “discounted” and debt equity swaps were allowed in case a growth opportunity existed. The term “debt-equity swap” (or “debt-equity conversion”) means the conversion of a heavily indebted or financially distressed company’s debt into equity or the acquisition by a company’s creditors of shares in that company paid for by the value of their loans to the company. Or, to put it more simply, debt-equity swaps transfer bank loans from the liabilities section of company balance sheets to common stock or additional paid-in capital in the shareholders’ equity section.

Let us imagine a company, as on the left-hand side of the below figure, with assets of 500, bank loans of 300, miscellaneous debt of 200, common stock of 50 and a carry-forward loss of 50. By converting 100 of its debt into equity (transferring 50 to common stock and 50 to additional paid-in capital), thereby improving the balance sheet position and depleting additional paid-in capital (or using the net income from the following year), as on the right-hand side of the figure, the company escapes insolvency. The former creditors become shareholders, suddenly acquiring 50% of the voting shares and control of the company.

[Figure: company balance sheets before and after converting 100 of debt into equity]
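The balance-sheet arithmetic of the example can be sketched directly; the figures come from the text, while the field names and the helper function are ours:

```python
# Balance sheet from the example: assets 500, bank loans 300, misc debt 200,
# common stock 50 and a carry-forward loss of 50 (shown here as -50 of equity).
before = {"assets": 500, "bank_loans": 300, "misc_debt": 200,
          "common_stock": 50, "paid_in_capital": 0, "carry_forward_loss": -50}

def debt_equity_swap(bs, amount, to_stock, to_paid_in):
    # Convert `amount` of bank loans into equity: `to_stock` goes to common
    # stock, `to_paid_in` to additional paid-in capital.
    assert to_stock + to_paid_in == amount
    after = dict(bs)
    after["bank_loans"] -= amount
    after["common_stock"] += to_stock
    after["paid_in_capital"] += to_paid_in
    return after

after = debt_equity_swap(before, 100, 50, 50)
# The example then depletes the new paid-in capital to absorb the carried loss:
after["paid_in_capital"] -= 50
after["carry_forward_loss"] += 50

equity = (after["common_stock"] + after["paid_in_capital"]
          + after["carry_forward_loss"])
creditor_share = 50 / after["common_stock"]   # creditors' stake in voting shares
print(after["bank_loans"], equity, creditor_share)
```

The balance sheet still balances afterwards (200 loans + 200 misc debt + 100 equity = 500 assets), and the former creditors hold 50 of the 100 shares of common stock, i.e. 50% of the votes, as the text describes.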

The first benefit that results from this is the improvement in the company’s finances produced by the reduction in debt. The second benefit (from the change in control) is that the creditors become committed to reorganizing the company, and the scope for moral hazard by the management is limited. Another benefit is one peculiar to equity: a return (i.e., repayment) in the form of an increase in enterprise value in the future. In other words, the fact that the creditors stand to make a return on their original investment if the reorganization is successful and the value of the business rises means that, like the debtor company, they have more to gain from this than from simply writing off their loans. If the reorganization is not successful, the equity may, of course, prove worthless.

The fourth measure they took was producing incentives such as tax breaks, exemption from administrative fees and transparent evaluation norms. These strategic measures ensured the Chinese were on top of the NPA issue in the early 2000s, when it was far larger than it is today. The noteworthy thing is that they were indeed successful in reducing NPAs. How is this relevant to India, and how can we address the NPA issue more effectively?

For now, capital controls and the paying down of foreign currency loans imply that there are few channels through which a foreign-induced debt sell-off could trigger a collapse in asset prices. Despite concerns in 2016 over capital outflow, China’s foreign exchange reserves have stabilised.

But there is a long-term cost. China is now more vulnerable to capital outflow. Errors and omissions on its national accounts remain large, suggesting persistent unrecorded capital outflows. This loss of capital should act as a salutary reminder to those who believe that China can take the lead on globalisation or provide the investment or currency business to fuel things like a post-Brexit economy.

The Chinese government’s focus on debt management will mean tighter controls on speculative international investments. It will also provide a stern test of China’s centrally planned financial system for the foreseeable future.
