Conformal Field Theory and Virasoro Algebra. Note Quote.

Realization of the Virasoro algebra

There are a few reasons why Conformal Field Theories (CFTs) are very interesting to study. The first is that at fixed points of Renormalization Group flows, or at second-order phase transitions, a quantum field theory is scale invariant. Scale invariance is a weaker form of conformal invariance, and it turns out that in all cases we know of, scale invariance of a quantum field theory actually implies the larger symmetry of conformal invariance. The second reason is that the requirement that a theory is conformally invariant is so restrictive that many things can be solved for that would otherwise be intractable. As an example, conformal invariance fixes 2- and 3-point functions entirely. In an ordinary quantum field theory, especially one at strong coupling, these would be hard or impossible to calculate. A third reason is string theory. In string theory, the worldsheet theory describing the string’s excitations is a CFT, so if string theory is correct, then in some sense conformal invariance is really one of the most fundamental features of the elemental constituents of reality. And through string theory we have the most precise and best-understood gauge/gravity dualities (the AdS/CFT dualities), which also involve CFTs.

A Conformal Field Theory (CFT) is a Quantum Field Theory (QFT) in which conformal rescaling of the metric acts by conjugation. For the family of morphisms Dg,

D[eʰg] = e^{c·α[h]} L⁻¹[h|B₁] Dg L[h|B₂] —– (1)

The analogous statement (conjugating the state on each boundary) is true for any Σ.

Here L is a linear operator depending only on the restriction of h to one of the boundaries of the annulus. All the dependence on the conformal rescaling away from the boundary is contained in a universal (independent of the particular Conformal Field Theory) functional α[h] ∈ R, which appears in the overall multiplicative factor e^{c·α[h]}. The quantity c is called the “Virasoro central charge”.

The corresponding operators L[h] form a semigroup, with a self-adjoint generator H. Then, since according to the axioms of QFT the spectrum of H is bounded below, we can promote this to a group action. This can be used to map any of the Hilbert spaces Hd to a single Hl for a fixed value of l, say l = 1. We will now do this and use the simpler notation H ≅ H1.

How do we determine the L[h]? First, we uniformize Σ – in other words, we find a complex diffeomorphism φ from our surface with boundary Σ to a constant curvature surface. We then consider the restriction of φ to each of the boundary components Bi, to get an element φi of Diff S1 × R+, where the R+ factor acts by an overall rescaling. We then express each φi as the exponential of an element li in the Lie algebra of Diff S1, to find an appropriate projective representation of this Lie algebra on H.

Certain subtleties are in order here: The Lie algebra of Diff S1 which appears is actually a subalgebra of a direct sum of two commuting algebras, which act independently on “left moving” and “right moving” factors in H. Thus, we can write H as a direct sum of irreps of this direct sum algebra,

H = ⊕iHL,i ⊗ HR,i —– (2)

Each of these two commuting algebras is a central extension of the Lie algebra of Diff S1, usually called the Virasoro algebra or Vir.

Now, consider the natural action of Diff S1 on functions on an S1 parameterized by θ ∈ [0, 2π). After complexification, we can take the following set of generators,

ln = i e^{inθ} ∂/∂θ, n ∈ Z —– (3)

which satisfy the relations

[lm, ln] = (m − n)lm+n —– (4)
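The Witt relations (4) can be verified symbolically for the differential operators of (3). A minimal sketch using sympy; sign conventions for ln vary between references, and the choice ln = i e^{inθ} ∂/∂θ used here is the one that reproduces (4) as written:

```python
import sympy as sp

# Symbolic check of the Witt relations [l_m, l_n] = (m - n) l_{m+n}.
theta = sp.symbols('theta', real=True)
f = sp.Function('f')(theta)

def l(n, g):
    """Apply l_n = i e^{i n θ} d/dθ to an expression g(θ)."""
    return sp.I * sp.exp(sp.I * n * theta) * sp.diff(g, theta)

def commutator(m, n, g):
    """[l_m, l_n] acting on g."""
    return sp.expand(l(m, l(n, g)) - l(n, l(m, g)))

# [l_m, l_n] f  should equal  (m - n) l_{m+n} f  for a generic f(θ)
for m, n in [(2, -1), (3, 1), (0, 5)]:
    lhs = commutator(m, n, f)
    rhs = sp.expand((m - n) * l(m + n, f))
    assert sp.simplify(lhs - rhs) == 0
print("Witt relations verified")
```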

The Virasoro algebra is the universal central extension of this, with generators Ln with n ∈ Z, c ∈ R, and the relations

[Lm, Ln] = (m − n)Lm+n + (c/12) m(m² − 1) δm+n,0 —– (5)
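The consistency of this bracket — antisymmetry plus the Jacobi identity, which is what forces the m(m² − 1) form of the central term — can be checked mechanically. A minimal sketch (the helper names are ours, not standard notation):

```python
from fractions import Fraction
from collections import defaultdict

# Elements of the Virasoro algebra as dicts mapping generators
# ('L', n) or the central element ('c',) to rational coefficients.
def L(n):
    return {('L', n): Fraction(1)}

def add(*xs):
    out = defaultdict(Fraction)
    for x in xs:
        for g, a in x.items():
            out[g] += a
    return {g: a for g, a in out.items() if a != 0}

def bracket(x, y):
    """Bilinear extension of the Virasoro bracket; c is central."""
    out = defaultdict(Fraction)
    for gx, ax in x.items():
        for gy, ay in y.items():
            if gx[0] == 'L' and gy[0] == 'L':
                m, n = gx[1], gy[1]
                out[('L', m + n)] += ax * ay * (m - n)
                if m + n == 0:  # central term (c/12) m(m^2 - 1)
                    out[('c',)] += ax * ay * Fraction(m**3 - m, 12)
    return {g: a for g, a in out.items() if a != 0}

# e.g. [L_2, L_-2] = 4 L_0 + c/2
assert bracket(L(2), L(-2)) == {('L', 0): 4, ('c',): Fraction(1, 2)}

# antisymmetry and the Jacobi identity on sample triples
for m, n, k in [(2, -2, 0), (3, -1, -2), (5, -3, -2)]:
    assert add(bracket(L(m), L(n)), bracket(L(n), L(m))) == {}
    assert add(bracket(L(m), bracket(L(n), L(k))),
               bracket(L(n), bracket(L(k), L(m))),
               bracket(L(k), bracket(L(m), L(n)))) == {}
```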

The parameter c is again the Virasoro central charge. It is to be noted that the central extension is required in any non-trivial unitary CFT. Unitarity and the other QFT axioms require the Virasoro representation to act on a Hilbert space, so that L−n = Ln†. In particular, L0 is self-adjoint and can be diagonalized. Take a “highest weight representation,” in which the spectrum of L0 is bounded below. The L0 eigenvector with the minimal eigenvalue h is by definition the “highest weight state” |h⟩, so that

L0|h⟩ = h|h⟩ —– (6)

and normalize it so that ⟨h|h⟩ = 1. Since L1|h⟩ = 0, we have 2h = ⟨h|[L1, L−1]|h⟩ = ∥L−1|h⟩∥², and a norm in a Hilbert space is non-negative, so we conclude that h ≥ 0, with equality only if L−1|h⟩ = 0. In fact, L−1|0⟩ = 0 can be related to the translation invariance of the vacuum. Rephrasing this in terms of local operators, instead of in terms of states, take Σ to be the infinite cylinder R × S1, or equivalently the punctured complex plane C with the complex coordinate z. In a CFT the component Tzz of the stress tensor can be expressed in terms of the Virasoro generators:

Tzz ≡ T(z) = ∑n∈Z Ln z^{−n−2} —– (7)

The component Tz̄z̄ is antiholomorphic and can be similarly expressed in terms of the generators L̄n of the second copy of the Virasoro algebra:

Tz̄z̄ ≡ T̄(z̄) = ∑n∈Z L̄n z̄^{−n−2} —– (8)

The mixed component Tzz̄ = Tz̄z is a c-number which vanishes for a flat metric. The state corresponding to T(z) is L−2|0⟩.

Holonomies: Philosophies of Conjugacy. Part 1.


Suppose that N is an irreducible 2n-dimensional Riemannian symmetric space. We may realise N as a coset space N = G/K with (Gτ)0 ⊂ K ⊂ Gτ for some involution τ of G. Now K is (a covering of) the holonomy group of N and similarly the coset fibration G → G/K covers the holonomy bundle P → N. In this setting, J(N) is associated to G:

J(N) ≅ G ×K J (R2n)

and if K/H is a K-orbit in J(R2n) then the corresponding subbundle is G ×K K/H = G/H and the projection is just the coset fibration. Thus, the subbundles of J(N) are just the orbits of G in J(N).

Let j ∈ J (N). Then G · j is an almost complex submanifold of J (N) on which J is integrable iff j lies in the zero-set of the Nijenhuis tensor NJ.

This focusses our attention on the zero-set of NJ which we denote by Z. In favourable circumstances, the structure of this set can be completely described. We begin by assuming that N is of compact type so that G is compact and semi-simple. We also assume that N is inner, i.e. that τ is an inner involution of G or, equivalently, that rank G = rank K. The class of inner symmetric spaces includes the even-dimensional spheres, the Hermitian symmetric spaces, the quaternionic Kähler symmetric spaces and indeed all symmetric G-spaces for G = SO(2n+1), Sp(n), E7, E8, F4 and G2. Moreover, all inner symmetric spaces are necessarily even-dimensional and so fit into our framework.

Let N = G/K be a simply-connected inner Riemannian symmetric space of compact type. Then Z consists of a finite number of connected components on each of which G acts transitively. Moreover, any G-flag manifold is realised as such an orbit for some N.

The proof for the above requires a detour into the geometry of flag manifolds and reveals an interesting interaction between the complex geometry of flag manifolds and the real geometry of inner symmetric spaces. For this, we begin by noting that a coset space of the form G/C(T) in general admits several invariant Kählerian complex structures. There is a complex realisation of G/C(T) as follows: having fixed a complex structure, the complexified group GC acts transitively on G/C(T) by biholomorphisms, with parabolic subgroups as stabilisers. Conversely, if P ⊂ GC is a parabolic subgroup then the action of G on GC/P is transitive and G ∩ P is the centraliser of a torus in G. At the infinitesimal level: let F = G/C(T) be a flag manifold and let o ∈ F. We have a splitting of the Lie algebra of G

g = h ⊕ m

with m ≅ ToF and h the Lie algebra of the stabiliser of o in G. An invariant complex structure on F induces an ad h-invariant splitting of mC into (1, 0) and (0, 1) spaces mC = m+ ⊕ m− with [m+, m+] ⊂ m+ by integrability. One can show that m+ and m− are nilpotent subalgebras of gC and in fact hC ⊕ m− is a parabolic subalgebra of gC with nilradical m−. If P is the corresponding parabolic subgroup of GC then P is the stabiliser of o and we obtain a biholomorphism between the complex coset space GC/P and the flag manifold F.

Conversely, let P ⊂ GC be a parabolic subgroup with Lie algebra p and let n be the conjugate of the nilradical of p (with respect to the real form g). Then H = G ∩ P is the centraliser of a torus and we have orthogonal decompositions (with respect to the Killing inner product)

p = hC ⊕ n̄, gC = hC ⊕ n ⊕ n̄

which define an invariant complex structure on G/H realising the biholomorphism with GC/P.

The relationship between a flag manifold F = GC/P and an inner symmetric space comes from an examination of the descending central series of n. This is a filtration 0 = nk+1 ⊂ nk ⊂…⊂ n1 = n of n defined by ni = [n, ni−1].

We orthogonalise this filtration using the Killing inner product by setting

gi = (ni+1)⊥ ∩ ni

for i ≥ 1 and extend this to a decomposition of gC by setting g0 = hC = (g ∩ p)C and g−i = ḡi for i ≥ 1. Then

gC = ∑i gi

is an orthogonal decomposition with

p = ∑i≤0 gi, n = ∑i>0 gi

The crucial property of this decomposition is that

[gi, gj] ⊂ gi+j

which can be proved by demonstrating the existence of an element ξ ∈ h with the property that, for each i, ad ξ has eigenvalue √−1·i on gi. This element ξ (necessarily unique since g is semi-simple) is the canonical element of p. Since ad ξ has eigenvalues in √−1Z, Ad exp πξ is an involution of g; conjugation by exp πξ is then an inner involution τξ of G, and we obtain an inner symmetric space G/K where K = (Gτξ)0. Clearly, K has Lie algebra given by

k = g ∩ ∑i g2i
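A concrete instance of this machinery (our own illustrative example, not taken from the text) can be checked numerically for gC = sl(3, C) with p the Borel subalgebra of upper-triangular matrices: the canonical element is ξ = √−1·diag(1, 0, −1), ad ξ has eigenvalue √−1·i on the i-th off-diagonal gi, the bracket respects the grading, and conjugation by exp πξ is an inner involution whose fixed set in su(3) is s(u(2) ⊕ u(1)), recovering the inner symmetric space CP² = SU(3)/S(U(2) × U(1)):

```python
import numpy as np

rng = np.random.default_rng(0)
d = np.array([1, 0, -1])
xi = 1j * np.diag(d)   # canonical element ξ = √-1 · diag(1, 0, -1)

def grade_component(X, i):
    """Project a 3x3 matrix onto g_i = span{E_jk : d_j - d_k = i}."""
    mask = (d[:, None] - d[None, :]) == i
    return np.where(mask, X, 0)

A = grade_component(rng.standard_normal((3, 3)), 1)   # element of g_1
B = grade_component(rng.standard_normal((3, 3)), 1)   # another one
C = grade_component(rng.standard_normal((3, 3)), -1)  # element of g_-1

# ad ξ acts on g_i with eigenvalue √-1 · i
assert np.allclose(xi @ A - A @ xi, 1j * A)
assert np.allclose(xi @ C - C @ xi, -1j * C)

# the grading property [g_i, g_j] ⊂ g_{i+j}
AB = A @ B - B @ A
AC = A @ C - C @ A
assert np.allclose(AB, grade_component(AB, 2))
assert np.allclose(AC, grade_component(AC, 0))

# Ad(exp πξ) is conjugation by diag(-1, 1, -1), an involution
s = np.diag(np.exp(1j * np.pi * d))
assert np.allclose(s @ s, np.eye(3))
```

The fixed points of conjugation by diag(−1, 1, −1) are the matrices preserving the splitting C² ⊕ C, which is exactly the s(u(2) ⊕ u(1)) predicted by k = g ∩ ∑i g2i here.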

Priest’s Razor: Metaphysics. Note Quote.


The very idea that some mathematical piece employed to develop an empirical theory may furnish us information about unobservable reality requires some care and philosophical reflection. The greatest difficulty for the scientifically minded metaphysician consists in furnishing the means for a “reading off” of ontology from science. What can come in, and what can be left out? Different strategies may provide for different results, and, as we know, science does not wear its metaphysics on its sleeve. The first worry is to make the metaphysical piece compatible with the evidence furnished by the theory.

The strategy adopted by da Costa and de Ronde may be called top-down: investigating higher science and, judging from the features of the objects described by the theory, looking for the appropriate logic to endow it with just those features. In this case (quantum mechanics), there is the theory, apparently attributing contradictory properties to entities, so that a logic that can cope with such features of objects is called for. Now, even though we believe that this is in great measure the right methodology for pursuing metaphysics within scientific theories, there are some further methodological principles that also play an important role in this kind of investigation, principles that seem to lessen the preferability of the paraconsistent approach over its alternatives.

To begin with, let us focus on the paraconsistent property attribution principle. According to this principle, the properties corresponding to the vectors in a superposition are all attributable to the system; they are all real. The first problem with this rendering of properties (whether they are taken to be actual or just potential) is that such a superabundance of properties may not be justified: not every bit of the mathematical formulation of a theory needs to be reified. Some parts of the theory are just that: mathematics required to make things work; others may correspond to genuine features of reality. The greatest difficulty is to distinguish them, but we should not assume that every bit of it corresponds to an entity in reality. So, in the absence of any justified reason to count superpositions as further entities in the realm of properties of quantum systems, we may keep them as not representing actual properties (even if merely possible or potential ones).

That is, when one takes into account other virtues of a metaphysical theory, such as economy and simplicity, the paraconsistent approach seems to inflate the population of our world too much. In the presence of more economical candidates doing the same job, and in the absence of other grounds on which to choose between the competing proposals, the more economical approaches have the advantage. Furthermore, considering economy and the existence of theories not postulating contradictions in quantum mechanics, it seems reasonable to employ Priest’s razor – the principle according to which one should not assume contradictions beyond necessity – and stick with the consistent approaches. Once again, a useful methodological principle seems to deem the interpretation of superposition as contradiction unnecessary.

The paraconsistent approach could still gain the advantage over its competitors, despite its disadvantage in accommodating such theoretical virtues, if it could endow quantum mechanics with a better understanding of quantum phenomena, or even if it could add some explanatory power to the theory. Given some such gain, we could allow for some ontological extravagances: in most cases explanatory power trumps matters of economy. However, it does not seem that the approach achieves any such result.

Besides the lack of additional explanatory power or enlightenment about the theory, there are some additional difficulties here. There is a complete lack of symmetry with the standard case of property attribution in quantum mechanics. As it is usually understood, by adopting the minimal property attribution principle, it is not contentious that when a system is in an eigenstate of an observable, we may reasonably infer that the system has the property represented by the associated observable, so that the probability of obtaining the associated eigenvalue is 1. In the case of superpositions, if they represented properties of their own, there would be a complete disanalogy with that situation: probabilities play a different role; a system has the contradictory property attributed by a superposition irrespective of the probability attribution and of the role probabilities play in determining measurement outcomes. In a superposition, according to the proposal we are analyzing, probabilities play no role: the system simply has a given contradictory property by the simple fact of being in a (certain) superposition.

For another disanalogy with the usual case, one does not expect to observe a system in such a contradictory state: every measurement gives us a system in a particular state, never in a superposition. If that is a property on the same footing as any other, why can’t we measure it? Obviously, this does not mean that we take measurability as a sign of the real, but when doubt strikes, it may be good advice not to assume too much on the unobservable side. As we have observed before, a new problem is created by this interpretation: besides explaining what it is that makes a measurement give a specific result when the system measured is in a superposition (a problem usually addressed by the collapse postulate, which seems to be out of fashion now), one must also explain why and how the contradictory properties that do not get actualized vanish. That is, besides explaining how one particular property becomes actual, one must explain how the properties possessed by the system that did not become actual vanish.

Furthermore, even if states like 1/√2 (| ↑x ⟩ + | ↓x ⟩) may provide an example of a candidate contradictory property, because the system seems to have both spin up and spin down in a given direction, there are some doubts when the distribution of probabilities is different, in cases such as 2/√7 | ↑x ⟩ + √(3/7) | ↓x ⟩. What are we to think about that? Perhaps there is still a contradiction, but one a little more inclined to | ↓x⟩ than to | ↑x⟩? That is, it is difficult to see how a contradiction arises in such cases. Or should we just ignore the probabilities and take the states composing the superposition as somehow opposed, forming a contradiction anyway? That would put metaphysics far too much ahead of science, leaving the role of probabilities in quantum mechanics unexplained in order to let a metaphysical view of properties in.
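For concreteness, the Born weights implicit in the two superpositions just mentioned can be computed directly (a trivial numerical check):

```python
import numpy as np

# Amplitudes of the two superpositions in the {|up_x>, |down_x>} basis;
# Born probabilities are the squared moduli of the amplitudes.
equal = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)])
tilted = np.array([2 / np.sqrt(7), np.sqrt(3 / 7)])

for state in (equal, tilted):
    assert np.isclose(np.sum(np.abs(state) ** 2), 1.0)  # both normalized

print(np.abs(equal) ** 2)   # equal weights: 1/2 and 1/2
print(np.abs(tilted) ** 2)  # unequal weights: 4/7 and 3/7
```

The second state is a perfectly normalized quantum state, yet its two branches carry weights 4/7 and 3/7 — the asymmetry the contradiction reading has to account for.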

Yield Curve Dynamics or Fluctuating Multi-Factor Rate Curves


The actual dynamics (as opposed to the risk-neutral dynamics) of the forward rate curve cannot be reduced to that of the short rate: the statistical evidence points to the necessity of taking into account more degrees of freedom in order to represent adequately the complicated deformations of the term structure. In particular, the imperfect correlation between maturities and the rich variety of term structure deformations show that a one-factor model is too rigid to describe yield curve dynamics.

Furthermore, in practice the value of the short rate is either fixed or at least strongly influenced by an authority exterior to the market (the central banks), through a mechanism different in nature from the one that determines the rates of longer maturities, which are negotiated on the market. The short rate can therefore be viewed as an exogenous stochastic input which then gives rise to a deformation of the term structure as the market adjusts to its variations.

Traditional term structure models define – implicitly or explicitly – the random motion of an infinite number of forward rates as diffusions driven by a finite number of independent Brownian motions. This choice may appear surprising, since it introduces many constraints on the type of evolution one can ascribe to each point of the forward rate curve and greatly reduces the dimensionality, i.e. the number of degrees of freedom, of the model, such that the resulting model can no longer reproduce the complex dynamics of the term structure. Multifactor models are usually justified by referring to the results of principal component analysis of term structure fluctuations. However, one should note that the quantities of interest when dealing with the term structure of interest rates are not the first two moments of the forward rates but typically involve expectations of non-linear functions of the forward rate curve: caps and floors are typical examples from this point of view. Hence, although a multifactor model might explain the variance of the forward rate itself, the same model may not be able to explain correctly the variability of portfolio positions involving non-linear combinations of the same forward rates. In other words, a principal component whose associated eigenvalue is small may have a non-negligible effect on the fluctuations of a non-linear function of forward rates. This question is especially relevant when calculating quantiles and Value-at-Risk measures.
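The point about small principal components can be illustrated on synthetic data (an assumed toy model, not market data): forward-rate moves with exponentially decaying correlation across maturities, where a handful of components capture most of the variance yet the residual eigenvalues remain non-zero:

```python
import numpy as np

# Toy PCA of term-structure fluctuations: simulated daily forward-rate
# moves with correlation exp(-|T_i - T_j| / lam) across maturities.
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(42)
maturities = np.linspace(0.5, 10.0, 20)
lam = 8.0
corr = np.exp(-np.abs(maturities[:, None] - maturities[None, :]) / lam)

chol = np.linalg.cholesky(corr)
moves = rng.standard_normal((2500, 20)) @ chol.T   # 2500 simulated days
eigvals = np.linalg.eigvalsh(np.cov(moves.T))[::-1]
explained = np.cumsum(eigvals) / np.sum(eigvals)

print(f"top 3 components explain {explained[2]:.1%} of the variance")
assert explained[2] > 0.8       # a few factors look "adequate"...
assert np.all(eigvals[3:] > 0)  # ...but the residual factors are not zero
```

A non-linear payoff (a cap, say) can load precisely on those residual directions, which is why variance-explained alone does not justify truncating the factor set.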

In a multifactor model with k sources of randomness, one can use any k + 1 instruments to hedge a given risky payoff. However, this is not what traders do in real markets: a given interest-rate contingent payoff is hedged with bonds of the same maturity. These practices reflect the existence of a risk specific to instruments of a given maturity. The representation of a maturity-specific risk means that, in a continuous-maturity limit, one must also allow the number of sources of randomness to grow with the number of maturities; otherwise one loses the localization in maturity of the source of randomness in the model.

An important ingredient for the tractability of a model is its Markovian character. Non-Markov processes are difficult to simulate and even harder to manipulate analytically. Of course, any process can be transformed into a Markov process if it is embedded into a space of sufficiently high dimension; this amounts to injecting a sufficient number of “state variables” into the model. These state variables may or may not be observable quantities; for example one such state variable may be the short rate itself but another one could be an economic variable whose value is not deducible from knowledge of the forward rate curve. If the state variables are not directly observed, they are obtainable in principle from the observed interest rates by a filtering process. Nevertheless the presence of unobserved state variables makes the model more difficult to handle both in terms of interpretation and statistical estimation. This drawback has motivated the development of so-called affine curve models, where one imposes that the state variables be affine functions of the observed yield curve. While the affine hypothesis is not necessarily realistic from an empirical point of view, it has the property of directly relating state variables to the observed term structure.
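As a sketch of the affine idea, take the Vasicek model (a standard one-factor example; the parameter values below are our own illustrative assumptions). The zero-coupon yield is an affine function of the short rate, so the state variable can be read off the observed curve directly:

```python
import numpy as np

# Vasicek zero-coupon yields: y(τ) = a(τ) + b(τ)·r, affine in the
# short rate r.  kappa, theta, sigma are hypothetical parameters.
kappa, theta, sigma = 0.5, 0.04, 0.01

def vasicek_yield(r, tau):
    """Zero-coupon yield for maturity tau given short rate r."""
    B = (1 - np.exp(-kappa * tau)) / kappa
    lnA = (theta - sigma**2 / (2 * kappa**2)) * (B - tau) \
          - sigma**2 * B**2 / (4 * kappa)
    return (-lnA + B * r) / tau

taus = np.array([1.0, 2.0, 5.0, 10.0])
y1 = vasicek_yield(0.02, taus)
y2 = vasicek_yield(0.03, taus)
y3 = vasicek_yield(0.04, taus)

# affineness in r: equal increments in r give equal increments in y(τ)
assert np.allclose(y2 - y1, y3 - y2)
```

Equal shifts in r move each yield by B(τ)/τ times the shift, independently of the level — which is exactly the directly-invertible relation between state variable and observed curve that the affine hypothesis buys.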

Another feature of term structure movements is that, as a curve, the forward rate curve displays a continuous deformation: configurations of the forward rate curve at dates not too far from each other tend to be similar. Most applications require the yield curve to have some degree of smoothness e.g. differentiability with respect to the maturity. This is not only a purely mathematical requirement but is reflected in market practices of hedging and arbitrage on fixed income instruments. Market practitioners tend to hedge an interest rate risk of a given maturity with instruments of the same maturity or close to it. This important observation means that the maturity is not simply a way of indexing the family of forward rates: market operators expect forward rates whose maturities are close to behave similarly. Moreover, the model should account for the observation that the volatility term structure displays a hump but that multiple humps are never observed.

Comment on Purely Random Correlations of the Matrix, or Studying Noise in Neural Networks


In the presence of two-body interactions the many-body Hamiltonian matrix elements v^J_{αα′} of good total angular momentum J in the shell-model basis |α⟩ generated by the mean field can be expressed as follows:

v^J_{αα′} = ∑_{J′ii′} c^{JJ′}_{αα′ii′} g^{J′}_{ii′} —– (4)

The summation runs over all combinations of the two-particle states |i⟩ coupled to the angular momentum J′ and connected by the two-body interaction g. The analogy of this structure to the one schematically captured by eq. (2) is evident. The g^{J′}_{ii′} denote here the radial parts of the corresponding two-body matrix elements, while the c^{JJ′}_{αα′ii′} globally represent elements of the angular momentum recoupling geometry. The g^{J′}_{ii′} are drawn from a Gaussian distribution, while the geometry expressed by the c^{JJ′}_{αα′ii′} enters explicitly. This originates from the fact that a quasi-random coupling of individual spins results in the so-called geometric chaoticity, and thus the c coefficients are also Gaussian distributed. In this case, these two essentially random ingredients (g and c) nevertheless lead to an order of magnitude larger separation of the ground state from the remaining states as compared to the pure Random Matrix Theory (RMT) limit. Due to more severe selection rules the effect of geometric chaoticity does not apply for J = 0. Consistently, the ground state energy gaps, measured relative to the average level spacing characteristic for a given J, are larger for J > 0 than for J = 0, and J > 0 ground states are also more orderly than those for J = 0, as can be quantified in terms of the information entropy.

Interestingly, such reductions of the dimensionality of the Hamiltonian matrix can also be seen locally in explicit calculations with realistic (non-random) nuclear interactions. A collective state, one which turns out to be coherent with some operator representing a physical external field, is always surrounded by a reduced density of states, i.e., it repels the other states. In all those cases, however, the global fluctuation characteristics remain largely consistent with the corresponding version of the random matrix ensemble.

Recently, a broad arena of applicability of random matrix theory has opened in connection with the most complex systems known to exist in the universe. Without doubt, the most complex is the human brain, together with the phenomena that result from its activity. From the physics point of view the financial world, which reflects such activity, is of particular interest because its characteristics are quantified directly in terms of numbers and a huge amount of electronically stored financial data is readily available. Access to the activity of a single brain is also possible by detecting the electric or magnetic fields generated by the neuronal currents. With present-day techniques of electro- or magnetoencephalography it is possible in this way to generate time series which resolve neuronal activity down to the scale of 1 ms.

One may debate over what is more complex, the human brain or the financial world, and there is no unique answer. It seems to us, however, that it is the financial world that is even more complex. After all, it involves the activity of many human brains and it seems even less predictable due to more frequent changes between different modes of action. Noise is of course overwhelming in either of these systems, as can be inferred from the structure of the eigenspectra of the correlation matrices taken across different space areas at the same time, or across different time intervals. There always exist, however, several well-identifiable deviations, which, with the help of the universal characteristics of random matrix theory and the methodology briefly reviewed above, can be classified as real correlations or collectivity. An easily identifiable gap between the corresponding eigenvalues of the correlation matrix and the bulk of its eigenspectrum plays the central role in this connection. The brain, when responding to sensory stimulation, develops larger gaps than the brain at rest. The correlation matrix formalism in its most general, asymmetric form also allows one to study time-delayed correlations, like those between the opposite hemispheres. The time-delay at which the correlation is maximal (the time needed for information to be transmitted between the different sensory areas of the brain) is likewise associated with the appearance of one significantly larger eigenvalue. Similar effects appear to govern the formation of heteropolymeric biomolecules. The ones that nature makes use of are separated by an energy gap from the purely random sequences.
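The eigenvalue-gap diagnostic described above can be sketched on synthetic data (assumed toy time series, not EEG or financial data): N series sharing one collective mode develop a single correlation-matrix eigenvalue well above the Marchenko–Pastur bulk expected for pure noise:

```python
import numpy as np

# N time series of length T: one common collective mode plus noise.
# The coupling strength and dimensions are illustrative assumptions.
rng = np.random.default_rng(7)
N, T = 100, 1000
common = rng.standard_normal(T)
data = 0.3 * common[None, :] + rng.standard_normal((N, T))

# correlation matrix of the standardized series
data = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
C = data @ data.T / T
eigs = np.sort(np.linalg.eigvalsh(C))[::-1]

q = N / T
mp_upper = (1 + np.sqrt(q))**2   # Marchenko-Pastur upper edge for pure noise
print(eigs[0], eigs[1], mp_upper)
assert eigs[0] > mp_upper        # one collective eigenvalue above the bulk
assert eigs[1] < mp_upper * 1.1  # the rest stay near the random bulk
```

The gap between the leading eigenvalue and the bulk edge plays the role of the "well-identifiable deviation" discussed above; removing the common mode collapses the spectrum back into the Marchenko–Pastur band.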