Conjuncted: Noise Traders, Chartists and Fundamentalists


Let us leave traders’ decision-making processes and turn to the adjustment of the stock-market price. We assume the existence of a market maker, such as a specialist on the New York Stock Exchange. The role of the market maker is to give an execution price to incoming orders and to execute transactions. The market maker announces a price at the beginning of each trading period. Traders then determine their excess demand, based on the announced price and on their expected prices. When the market maker observes either excess demand or excess supply, he applies the so-called short-side rule to the demands and supplies, taking aggregate transactions for the stock to be equal to the minimum of total supply and demand. Thus traders on the short side of the market will realize their desired transactions. At the beginning of the next trading period, he announces a new price. If the excess demand in period t is positive (negative), the market maker raises (reduces) the price for the following period t + 1. The process is then repeated. Let κ and ξ be the fractions of chartists and of noise traders in the total number of traders, respectively. Then the process of price adjustment can be written as

p_{t+1} − p_t = θn[(1 − κ − ξ)x_t^f + κx_t^c + ξx_t^n]

where θ denotes the speed of price adjustment and n the total number of traders.
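As a minimal sketch of how this update rule might be coded (not part of the original text; the excess demands x_t^f, x_t^c, x_t^n are taken as given here, with their functional forms derived in the sections below, and the parameter values are illustrative assumptions):

```python
def next_price(p_t: float, x_f: float, x_c: float, x_n: float,
               theta: float = 0.01, n: int = 1000,
               kappa: float = 0.3, xi: float = 0.2) -> float:
    """One step of the market maker's rule:
    p_{t+1} = p_t + theta * n * [(1 - kappa - xi) * x_f + kappa * x_c + xi * x_n]."""
    aggregate_excess_demand = (1.0 - kappa - xi) * x_f + kappa * x_c + xi * x_n
    return p_t + theta * n * aggregate_excess_demand
```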


Noise Traders


A noise trader is an investor who makes buy and sell decisions without the use of fundamental data. Such investors generally have poor timing, follow trends, and over-react to good and bad news. Let us consider the noise traders’ decision making. They are assumed to base decisions on noise in the sense of a large number of small events. The behavior of a noise trader can be formalized as maximizing the quadratic utility function

W(x_t^n, y_t^n) = g(y_t^n + (p_t + ε_t)x_t^n) − k(x_t^n)^2 —– (1)

subject to the budget constraint

y_t^n + p_t x_t^n = 0 —– (2)

where x_t^n and y_t^n represent the noise trader’s excess demand for stock and for money at time t, respectively. The noise ε_t is assumed to be an IID random variable, i.e. each ε_t has the same probability distribution as the others and all are mutually independent. The excess demand function for stock is given as

x_t^n = γε_t, γ = g/2k > 0 —– (3)

where γ denotes the strength of the reaction to noisy information. In short, noise traders try to buy stock if they believe the noise to be good news (ε_t > 0). Conversely, if they believe the noise to be bad news (ε_t < 0), they try to sell it.
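To make the optimization explicit, here is a minimal symbolic check (not from the original text) that maximizing (1) under the budget constraint (2) indeed yields the excess demand (3); the variable names are illustrative:

```python
import sympy as sp

x, p, eps = sp.symbols('x p epsilon', real=True)
g, k = sp.symbols('g k', positive=True)

# Substitute the budget constraint y = -p*x into the quadratic utility (1):
W = g * (-p * x + (p + eps) * x) - k * x**2   # simplifies to g*eps*x - k*x**2

# First-order condition dW/dx = 0 gives the excess demand (3): x = (g / 2k) * eps
x_star = sp.solve(sp.diff(W, x), x)[0]
print(sp.simplify(x_star - (g / (2 * k)) * eps))   # prints 0
```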

Chartists


Chartists are assumed to have the same utility function as the fundamentalists. Their behavior is formalized as maximizing the utility function

v = α(y_t^c + p_{t+1}^c x_t^c) + βx_t^c − (1 + βx_t^c) log(1 + βx_t^c) —– (1)

subject to the budget constraint

y_t^c + p_t x_t^c = 0 —– (2)

where x_t^c and y_t^c represent the chartist’s excess demand for stock and for money at period t, and p_{t+1}^c denotes the price expected by him. The chartist’s excess demand function for the stock is given by

x_t^c = (1/β)(exp(α(p_{t+1}^c − p_t)/β) − 1) —– (3)

His expectation formation is as follows: he is assumed to forecast the future price p_{t+1}^c using adaptive expectations,

p_{t+1}^c = p_t^c + μ(p_t − p_t^c) —– (4)

where the parameter μ (0 < μ < 1) is a so-called error-correction coefficient. Chartists’ decisions are based on observation of past price data. This type of trader, who simply extrapolates patterns of past prices, is a common stylized example, currently in popular use in heterogeneous-agent models. It follows that chartists try to buy stock when they anticipate a rising price for the next period and, in contrast, try to sell stock when they expect a falling price.
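A minimal sketch of the chartist’s two rules, the adaptive expectation (4) followed by the exponential excess-demand function (3); this is not from the original text, and the parameter values are illustrative assumptions:

```python
import math

def chartist_expectation(p_expected_prev: float, p_current: float, mu: float = 0.5) -> float:
    """Adaptive expectations (4): correct the previous forecast by a fraction mu of the observed error."""
    return p_expected_prev + mu * (p_current - p_expected_prev)

def chartist_excess_demand(p_expected_next: float, p_current: float,
                           alpha: float = 1.0, beta: float = 1.0) -> float:
    """Excess demand (3): positive when a price rise is anticipated, and bounded below by -1/beta."""
    return (math.exp(alpha * (p_expected_next - p_current) / beta) - 1.0) / beta
```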

Stocks and Fundamentalists’ Behavior


Let us consider a simple stock market with the following characteristics. A large amount of stock is traded. In the market, there are three typical groups of traders with different strategies: fundamentalists, chartists, and noise traders. Traders can invest either in money or in stock. Since the model is designed to describe stock price movements over short periods, such as one day, the dividend from stock and the interest rate for the risk-free asset will be omitted for simplicity. Traders are myopic and bent on maximizing utility. Their utility depends on the price change they expect, and on their excess demand for stock rather than simply their demand. Their excess demand is derived from utility maximization.

Let Y_t^f be the amount of money that a fundamentalist holds at time t and X_t^f the number of shares he holds at time t. Let p_t be the price per share at time t. The fundamentalist’s budget constraint is given by

Y_t^f + p_t X_t^f = Y_{t-1}^f + p_t X_{t-1}^f —– (1)

or equivalently

y_t^f + p_t x_t^f = 0 —– (2)

where

y_t^f = Y_t^f − Y_{t-1}^f

denotes the fundamentalist’s excess demand for money, and

x_t^f = X_t^f − X_{t-1}^f

his excess demand for stock. Suppose that the fundamentalist’s preferences are represented by the utility function,

u = α(y_t^f + p_{t+1}^f x_t^f) + βx_t^f − (1 + βx_t^f) log(1 + βx_t^f) —– (3)

where p_{t+1}^f denotes the fundamentalist’s expectation in period t about the price in the following period t + 1. The parameters α and β are assumed to be positive. Inserting (2) into (3), the fundamentalist’s utility maximization problem becomes:

max_{x_t^f} u = α(p_{t+1}^f − p_t)x_t^f + βx_t^f − (1 + βx_t^f) log(1 + βx_t^f) —– (4)

The utility function u satisfies the standard properties: u′(|x_t^f|) > 0 and u″(|x_t^f|) < 0 ∀ |x_t^f| ≤ |x^{f*}|, where |x^{f*}| denotes the absolute value of x^f producing the maximum utility. Thus, the utility function is strictly concave. It depends on the price change expected by fundamentalists (p_{t+1}^f − p_t) as well as on the fundamentalist’s excess demand for stock x_t^f. The first part, α(p_{t+1}^f − p_t)x_t^f, implies that a rise in the expected price change increases his utility. The remaining part expresses his attitude toward risk: even if the expected price change is positive, he does not want to invest his total wealth in the stock, and vice versa. In this sense, fundamentalists are risk averse. β is the parameter that sets the lower bound on excess demand: any excess demand for stock derived from the utility maximization is bounded below by −1/β. When the expected price change (p_{t+1}^f − p_t) is positive, the maximum value of the utility function is also positive, which means that fundamentalists try to buy stock. By analogy, when the expected price change (p_{t+1}^f − p_t) is negative, the maximum value of the utility function is negative, which means that they try to sell. The utility maximization problem (4) is solved for the fundamentalist’s excess demand,

x_t^f = (1/β)(exp(α(p_{t+1}^f − p_t)/β) − 1) —– (5)

Excess demand increases as the expected price change (p_{t+1}^f − p_t) increases. It should be noticed that the optimal value of excess supply is limited to −1/β, while the optimal value of excess demand is not restricted. Since there is little loss of generality in fixing the parameter β at unity, below we will assume β to be constant and equal to 1. Let us then turn to the fundamentalist’s expectation formation. We assume that he forms his price expectation according to a simple adaptive scheme:

p_{t+1}^f = p_t + ν(p* − p_t) —– (6)

We see from Equation (6) that fundamentalists believe that the price moves towards the fundamental price p* by a factor ν. To sum up fundamentalists’ behavior: if the price p_t is below their expected price, they will try to buy stock, because they consider the stock to be undervalued. Conversely, if the price is above the expected value, they will try to sell, because they consider the stock to be overvalued.
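A minimal sketch of the fundamentalist’s rules (5)-(6), not from the original text; β is fixed at 1 as assumed above, and p_star, α, ν are illustrative parameters:

```python
import math

def fundamentalist_expectation(p_current: float, p_star: float, nu: float = 0.5) -> float:
    """Adaptive scheme (6): the price is expected to revert toward the fundamental price p_star."""
    return p_current + nu * (p_star - p_current)

def fundamentalist_excess_demand(p_expected_next: float, p_current: float,
                                 alpha: float = 1.0, beta: float = 1.0) -> float:
    """Excess demand (5): positive (buy) when the stock looks undervalued, negative (sell) otherwise."""
    return (math.exp(alpha * (p_expected_next - p_current) / beta) - 1.0) / beta
```

The demand function has the same form as the chartist’s, since the two groups share the same utility function; only the expectation rule differs, reverting toward p* rather than adaptively tracking the observed price.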

Typicality. Cosmological Constant and Boltzmann Brains. Note Quote.


In a multiverse we would expect there to be relatively many universe domains with large values of the cosmological constant, but none of these allow gravitationally bound structures (such as our galaxy) to occur, so the likelihood of observing ourselves to be in one is essentially zero.

The cosmological constant has negative pressure, but positive energy. The negative pressure ensures that as the volume expands, matter loses energy (photons get red-shifted, particles slow down); this loss of energy by matter causes the expansion to slow down – but the increase in energy of the increased volume is more important. The increase of energy associated with the extra space the cosmological constant fills has to be balanced by a decrease in the gravitational energy of the expansion – and this expansion energy is negative, allowing the universe to carry on expanding. If you put all the terms on one side of the Friedmann equation – which is just an energy-balancing equation – with the other side equal to zero, you will see that the expansion energy is negative, whereas the cosmological constant and matter (including dark matter) all have positive energy.
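For concreteness, here is one way to write the rearrangement described above (a standard form of the Friedmann equation with every term moved to one side; this is an illustrative sketch, not taken from the original text, and sign conventions as well as the curvature term k vary across presentations):

```latex
\[
\underbrace{\frac{8\pi G}{3}\,\rho}_{\text{matter: }>\,0}
\;+\;
\underbrace{\frac{\Lambda c^{2}}{3}}_{\text{cosmological constant: }>\,0}
\;-\;
\underbrace{\left(\frac{\dot a}{a}\right)^{\!2}}_{\text{expansion term: }<\,0}
\;-\;
\frac{k c^{2}}{a^{2}}
\;=\;0
\]
```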


However, as the cosmological constant is decreased, we eventually reach a transition point where it becomes just small enough for gravitational structures to occur. Reduce it a bit further still, and you now get universes resembling ours. Given the increased likelihood of observing such a universe, the chances of our universe being one of these will be near its peak. Theoretical physicist Steven Weinberg used this reasoning to correctly predict the order of magnitude of the cosmological constant well before the acceleration of our universe was even measured.

Unfortunately this argument runs into conceptually murky water. The multiverse is infinite and it is not clear whether we can calculate the odds for anything to happen in an infinite volume of space-time. All we have is the single case of our apparently small but positive value of the cosmological constant, so it’s hard to see how we could ever test whether or not Weinberg’s prediction was a lucky coincidence. Such questions concerning infinity, and what one can reasonably infer from a single data point, are just the tip of the philosophical iceberg that cosmologists face.

Another conundrum is where the laws of physics come from. Even if these laws vary across the multiverse, there must be, so it seems, meta-laws that dictate the manner in which they are distributed. How can we, inhabitants on a planet in a solar system in a galaxy, meaningfully debate the origin of the laws of physics as well as the origins of something, the very universe, that we are part of? What about the parts of space-time we can never see? These regions could infinitely outnumber our visible patch. The laws of physics could differ there, for all we know.

We cannot settle any of these questions by experiment, and this is where philosophers enter the debate. Central to this is the so-called observational-selection effect, whereby an observation is influenced by the observer’s “telescope”, whatever form that may take. But what exactly is it to be an observer, or more specifically a “typical” observer, in a system where every possible sort of observer will come about infinitely many times? The same basic question, centred on the role of observers, is as fundamental to the science of the indefinitely large (cosmology) as it is to that of the infinitesimally small (quantum theory).

This key issue of typicality also confronted Austrian physicist and philosopher Ludwig Boltzmann. In 1897 he posited an infinite space-time as a means to explain how extraordinarily well-ordered the universe is compared with the state of high entropy (or disorder) predicted by thermodynamics. Given such an arena, where every conceivable combination of particle position and momenta would exist somewhere, he suggested that the orderliness around us might be that of an incredibly rare fluctuation within an infinite space-time.

But Boltzmann’s reasoning was undermined by another, more absurd, conclusion. Rare fluctuations could also give rise to single momentary brains – self-aware entities that spontaneously arise through random collisions of particles. Such “Boltzmann brains”, the argument goes, are far more likely to arise than the entire visible universe or even the solar system. Ludwig Boltzmann reasoned that brains and other complex, orderly objects on Earth were the result of random fluctuations. But why, then, do we see billions of other complex, orderly objects all around us? Why aren’t we like the lone being in the sea of nonsense? Boltzmann theorized that if random fluctuations create brains like ours, there should be Boltzmann brains floating around in space or sitting alone on uninhabited planets untold light years away. And in fact, those Boltzmann brains should be incredibly more common than the herds of complex, orderly objects we see here on Earth.

So we have another paradox. If the only requirement of consciousness is a brain like the one in your head, why aren’t you a Boltzmann brain? If you were assigned to experience a random consciousness, you should almost certainly find yourself alone in the depths of space rather than surrounded by similar consciousnesses. The easy answers all seem to require a touch of magic. Perhaps consciousness doesn’t arise naturally from a brain like yours but requires some metaphysical endowment. Or maybe we’re not random fluctuations in a thermodynamic soup, and we were put here by an intelligent being. An infinity of space would therefore contain an infinitude of such disembodied brains, which would then be the “typical observer”, not us.

Or, starting at the very beginning: entropy must always stay the same or increase over time, according to the second law of thermodynamics. However, Boltzmann (the Ludwig one, not the brain one) formulated a version of the law of entropy that was statistical. What this means is that while entropy almost always increases or stays the same, over billions of billions of billions of billions of billions… you get the idea… years, entropy might go down a bit. This is called a fluctuation. Backing up a tad: if entropy always increases or stays the same, what is surprising for cosmologists is that the universe started in such a low-entropy state. To try to explain this, Boltzmann suggested that there might be a bigger universe that our universe sits in, one that is in a state of the highest possible entropy, or thermal equilibrium. Let it exist for a long, long time – those billions of years we talked about earlier – and there will be statistical fluctuations, and those fluctuations might be represented by the birth of universes. Our universe is one of them. So now we get to the brains. Our universe must be a huge statistical fluctuation compared with other fluctuations. Think about it: if it is so improbable for entropy to decrease by just a tiny bit, how improbable would it be for it to decrease enough for the birth of a universe to happen? So the question is, why aren’t we just brains? That is, why aren’t we a statistical fluctuation just big enough for intelligent life to develop, look around, see that it exists, and melt back into goop? It is this goopy, short-lived intelligent life that is a Boltzmann brain. This is a huge challenge to Boltzmann’s (Ludwig’s) theory.

Can this bizarre vision possibly be real, or does it indicate something fundamentally wrong with our notion of “typicality”? Or is our notion of “the observer” flawed – can thermodynamic fluctuations that give rise to Boltzmann’s brains really suffice? Or could a futuristic supercomputer even play the Matrix-like role of a multitude of observers?

Time-Evolution in Quantum Mechanics is a “Flow” in the (Abstract) Space of Automorphisms of the Algebra of Observables


In quantum mechanics, time is not a geometrical flow. Time-evolution is characterized as a transformation that preserves the algebraic relations between physical observables. If at a time t = 0 an observable – say the angular momentum L(0) – is defined as a certain combination (product and sum) of some other observables – for instance positions X(0), Y(0) and momenta P_X(0), P_Y(0) – that is to say

L(0) = X(0)P_Y(0) − Y(0)P_X(0) —– (1)

then one asks that the same relation be satisfied at any other instant t (preceding or following t = 0),

L(t) = X(t)P_Y(t) − Y(t)P_X(t) —– (2)

The quantum time-evolution is thus a map from an observable at time 0 to an observable at time t that preserves the algebraic form of the relation between observables. Technically speaking, one talks of an automorphism of the algebra of observables.

At first sight, this time-evolution has nothing to do with a flow. However, there is still “something flowing”, although in an abstract mathematical space. Indeed, to any value of t (here time is an absolute parameter, as in Newtonian mechanics) is associated an automorphism α_t that allows one to deduce the observables at time t from the knowledge of the observables at time 0. Mathematically, one writes

L(t) = α_t(L(0)), X(t) = α_t(X(0)) —– (3)

and so on for the other observables. The family of automorphisms {α_t} forms a one-parameter group, and the term “group” is important, for it precisely explains why it still makes sense to talk about a flow. Group refers to the property of additivity of the evolution: going from t to t′ is equivalent to going from t to t_1, then from t_1 to t′. Considering small variations of time (t′ − t)/n where n is an integer, in the limit of large n one finds that going from t to t′ consists in flowing through n small variations, exactly as the geometric flow consists in going from a point x to a point y through a great number of infinitesimal variations (y − x)/n. That is why the time-evolution in quantum mechanics can be seen as a “flow” in the (abstract) space of automorphisms of the algebra of observables. To summarize, in quantum mechanics time is still “something that flows”, although in a less intuitive manner than in relativity. The idea of “flow of time” makes sense, as a flow in an abstract space rather than a geometrical flow.
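A minimal numerical illustration of this point (not from the original text): in the Heisenberg picture, α_t(A) = U†AU with U = exp(−iHt), and such a conjugation preserves products and sums, hence the relation between L, X, Y, P_X, P_Y. The matrices below are random Hermitian matrices on a toy finite-dimensional space, chosen purely to exhibit the algebraic property:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim = 6

def random_hermitian(d: int) -> np.ndarray:
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

# Toy "observables" and a toy Hamiltonian
X, Y, PX, PY, H = (random_hermitian(dim) for _ in range(5))
L0 = X @ PY - Y @ PX                       # relation (1) at t = 0

t = 0.7
U = expm(-1j * H * t)                      # unitary time evolution
alpha_t = lambda A: U.conj().T @ A @ U     # the automorphism alpha_t

# The evolved observables satisfy the same algebraic relation, as in (2)-(3):
lhs = alpha_t(L0)
rhs = alpha_t(X) @ alpha_t(PY) - alpha_t(Y) @ alpha_t(PX)
print(np.allclose(lhs, rhs))               # True
```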

The “approximandum” will not be the General Theory of Relativity, but only its vacuum sector of spacetimes of topology Σ × R, or quantum gravity as a fecund ground for the metaphysician. Note Quote.


In string theory as well as in Loop Quantum Gravity, and in other approaches to quantum gravity, indications are coalescing that not only time, but also space is no longer a fundamental entity, but merely an “emergent” phenomenon that arises from the basic physics. In the language of physics, spacetime theories such as GTR are “effective” theories and spacetime itself is “emergent”. However, unlike the notion that temperature is emergent, the idea that the universe is not in space and time arguably shocks our very idea of physical existence as profoundly as any scientific revolution ever did. It is not even clear whether we can coherently formulate a physical theory in the absence of space and time. Space disappears in LQG insofar as the physical structures it describes bear little, if any, resemblance to the spatial geometries found in GTR. These structures are discrete and not continuous as classical spacetimes are. They represent the fundamental constitution of our universe, correspond, somehow, to chunks of physical space, and thus give rise – in a way yet to be elucidated – to the spatial geometries we find in GTR. The conceptual problem of coming to grasp how to do physics in the absence of an underlying spatio-temporal stage on which the physics can play out is closely tied to the technical difficulty of mathematically relating LQG back to GTR. Physicists have yet to fully understand how classical spacetimes emerge from the fundamental non-spatio-temporal structure of LQG, and philosophers are only just starting to study its conceptual foundations and the implications of quantum gravity in general and of the disappearance of space-time in particular. Even though the mathematical heavy-lifting will fall to the physicists, there is a role for philosophers here in exploring and mapping the landscape of conceptual possibilities, bringing to bear the immense philosophical literature on emergence and reduction, which offers a variegated conceptual toolbox.

To understand how classical spacetime re-emerges from the fundamental quantum structure involves what the physicists call “taking the classical limit.” In a sense, relating the spin network states of LQG back to the spacetimes of GTR is a reversal of the quantization procedure employed to formulate the quantum theory in the first place. Thus, while the quantization can be thought of as the “context of discovery,” finding the classical limit that relates the quantum theory of gravity to GTR should be considered the “context of (partial) justification.” It should be emphasized that understanding how (classical) spacetime re-emerges by retrieving GTR as a low-energy limit of a more fundamental theory is not only important to “save the appearances” and to accommodate common sense – although it matters in these respects as well, but must also be considered a methodologically central part of the enterprise of quantum gravity. If it cannot be shown that GTR is indeed related to LQG in some mathematically well-understood way as the approximately correct theory when energies are sufficiently low or, equivalently, when scales are sufficiently large, then LQG cannot explain why GTR has been empirically as successful as it has been. But a successful theory can only be legitimately supplanted if the successor theory not only makes novel predictions or offers deeper explanations, but is also able to replicate the empirical success of the theory it seeks to replace.

Ultimately, of course, the full analysis will depend on the full articulation of the theory. But focusing on the kinematical level, and thus avoiding having to fully deal with the problem of time, let us apply the concepts to the problem of the emergence of full spacetime, rather than just time. Chris Isham and Butterfield identify three types of reductive relations between theories: definitional extension, supervenience, and emergence, of which only the last has any chance of working in the case at hand. For Butterfield and Isham, a theory T1 emerges from another theory T2 just in case there exists either a limiting or an approximating procedure to relate the two theories (or a combination of the two). A limiting procedure takes the mathematical limit of some physically relevant parameters of the underlying theory, in general in a particular order, in order to arrive at the emergent theory. A limiting procedure won’t work, at least not by itself, due to technical problems concerning the maximal loop density as well as to what essentially amounts to the measurement problem familiar from non-relativistic quantum physics.

An approximating procedure designates the process of either neglecting some physical magnitudes, and justifying such neglect, or selecting a proper subset of states in the state space of the approximating theory, and justifying such selection, or both, in order to arrive at a theory whose values of physical quantities remain sufficiently close to those of the theory to be approximated. Note that the “approximandum,” the theory to be approximated, in our case will not be GTR, but only its vacuum sector of spacetimes of topology Σ × R. One of the central questions will be how the selection of states will be justified. Such a justification would be had if we could identify a mechanism that “drives the system” to the right kind of states. Any attempt at finding such a mechanism will foist a host of issues known from the traditional problem of relating quantum to classical mechanics upon us. A candidate mechanism, here as elsewhere, is some form of “decoherence,” even though that standardly involves an “environment” with which the system at stake can interact. But the system of interest in our case is, of course, the universe, which makes it hard to see how there could be any outside environment with which the system could interact. The challenge then is to conceptualize decoherence in a way that circumvents this problem.

Once it is understood how classical space and time disappear in canonical quantum gravity and how they might be seen to re-emerge from the fundamental, non-spatiotemporal structure, the way in which classicality emerges from the quantum theory of gravity does not radically differ from the way it is believed to arise in ordinary quantum mechanics. The project of pursuing such an understanding is of relevance and interest for at least two reasons. First, important foundational questions concerning the interpretation of, and the relation between, theories are addressed, which can lead to conceptual clarification of the foundations of physics. Such conceptual progress may well prove to be the decisive stepping stone to a full quantum theory of gravity. Second, quantum gravity is a fertile ground for any metaphysician as it will inevitably yield implications for specifically philosophical, and particularly metaphysical, issues concerning the nature of space and time.

Mapping Fields. Quantum Field Gravity. Note Quote.


Introducing a helpful taxonomic scheme, Chris Isham proposed to divide the many approaches to formulating a full, i.e. not semi-classical, quantum theory of gravity into four broad types: first, those quantizing GR; second, those “general-relativizing” quantum physics; third, those constructing a conventional quantum theory that includes gravity and regards GR as its low-energy limit; and fourth, those considering both GR and conventional quantum theories of matter as low-energy limits of a radically novel fundamental theory.

The first family of strategies starts out from classical GR and seeks to apply, in a mathematically rigorous and physically principled way, a “quantization” procedure, i.e. a recipe for cooking up a quantum theory from a classical theory such as GR. Of course, quantization proceeds, metaphysically speaking, backwards in that it starts out from the dubious classical theory – which is found to be deficient and hence in need of replacement – and tries to erect the sound building of a quantum theory of gravity on its ruins. But it should be understood, just like Wittgenstein’s ladder, as a methodologically promising means to an end. Quantization procedures have successfully been applied elsewhere in physics and produced, among others, important theories such as quantum electrodynamics.

The first family consists of two genera, the now mostly defunct covariant ansatz (Defunct because covariant quantizations of GR are not perturbatively renormalizable, a flaw usually considered fatal. This is not to say, however, that covariant techniques don’t play a role in contemporary quantum gravity.) and the vigorous canonical quantization approach. A canonical quantization requires that the theory to be quantized is expressed in a particular formalism, the so-called constrained Hamiltonian formalism. Loop quantum gravity (LQG) is the most prominent representative of this camp, but there are other approaches.

Secondly, there is to date no promising avenue to gaining a full quantum theory of gravity by “general-relativizing” quantum (field) theories, i.e. by employing techniques that permit the full incorporation of the lessons of GR into a quantum theory. The only existing representative of this approach consists of attempts to formulate a quantum field theory on a curved rather than the usual flat background spacetime. The general idea of this approach is to incorporate, in some local sense, GR’s principle of general covariance. It is important to note, however, that the background spacetime, curved though it may be, is in no way dynamic. In other words, it cannot be interpreted, as it can in GR, as interacting with the matter fields.

The third group also takes quantum physics as its vantage point, but instead of directly incorporating the lessons of GR, attempts to extend quantum physics with means as conventional as possible in order to include gravity. GR, it is hoped, will then drop out of the resulting theory in its low-energy limit. By far the most promising member of this family is string theory, which, however, goes well beyond conventional quantum field theory, both methodologically and in terms of ambition. Despite its extending the assumed boundaries of the family, string theory still takes conventional quantum field theory as its vantage point, both historically and systematically, and does not attempt to build a novel theory of quantum gravity dissociated from “old” physics. Again, there are other approaches in this family, such as topological quantum field theory, but none of them musters substantial support among physicists.

The fourth and final group of the Ishamian taxonomy is most aptly characterized by its iconoclastic attitude. For the heterodox approaches of this type, no known physics serves as starting point; rather, radically novel perspectives are considered in an attempt to formulate a quantum theory of gravity ab initio.

All these approaches have their attractions and hence their following. But all of them also have their deficiencies. To list them comprehensively would go well beyond the present endeavour. Apart from the two major challenges for LQG, a major problem common to all of them is their complete lack of a real connection to observations or experiments. Either the theory is too flexible so as to be able to accommodate almost any empirical data, such as string theory’s predictions of supersymmetric particles, which have been constantly revised in light of particle detectors’ failures to find them at the predicted energies, or as string theory’s embarras de richesses, the now notorious “landscape problem” of choosing among 10^500 different models. Or the connection between the mostly understood data and the theories is highly tenuous and controversial, such as the issue of how and whether data narrowly confining possible violations of Lorentz symmetry relate to theories of quantum gravity predicting or assuming a discrete spacetime structure that is believed to violate, or at least modify, the Lorentz symmetry so well confirmed at larger scales. Or the predictions made by the theories are only testable in experimental regimes so far removed from present technological capacities, such as the predictions of LQG that spacetime is discrete at the Planck level, at a quintillion (10^18) times the energy scales probed by the Large Hadron Collider at CERN. Or simply no one remotely has a clue as to how the theory might connect to the empirical, such as is the case for the inchoate approaches of the fourth group like causal set theory.

BRICS Bank, New Development Bank: Peoples’ Perspectives. One-Day convention on 30th March, 2017 at Indian Social Institute, New Delhi.


The Peoples’ Forum on BRICS is conducting a one-day convention on the New Development Bank, Peoples’ Perspectives, to look at the various trends in development finance, mechanisms to monitor trade and finance in BRICS, and the various stakes involved with the emergence of the New Development Bank. The convention takes place a day before the official 2nd Annual NDB Meeting, to be held in New Delhi from the 31st of March to the 2nd of April. The underlying philosophy of the official meeting is ‘Building a Sustainable Future’, where the focal points would be the role of governments in development finance, and in particular sustainable infrastructure, some of the challenges faced by the banking sector in some of NDB’s member countries as they seek to finance sustainable infrastructure, and the creativity and innovation that banks could bring to the table. Moreover, under the theme of ‘Urban Planning and Sustainable Infrastructure Development’, an allied material point of the deliberations would be an intense look into how urban development could improve the lives of the people, taking into account the ever-growing influence of long-term urban planning and investment in sustainable infrastructure in the mega-cities of BRICS countries.

The Peoples’ Forum on BRICS outright rejects these themes on certain considerations, and positions itself by looking at the NDB as complicit in an alliance with other multilateral banks that have hitherto been more anti-people in practice than they would otherwise claim. I am chairing a session.


Extreme Value Theory


Standard estimators of the dependence between assets are, for instance, the correlation coefficient or Spearman’s rank correlation. However, as stressed by [Embrechts et al.], these kinds of dependence measures suffer from many deficiencies. Moreover, their values are mostly controlled by relatively small moves of the asset prices around their mean. To cure this problem, it has been proposed to use correlation coefficients conditioned on large movements of the assets. But [Boyer et al.] have emphasized that this approach also suffers from a severe systematic bias leading to spurious strategies: the conditional correlation in general evolves with time even when the true non-conditional correlation remains constant. In fact, [Malevergne and Sornette] have shown that any approach based on conditional dependence measures implies a spurious change of the intrinsic value of the dependence, measured for instance by copulas. Recall that the copula of several random variables is the (unique) function which completely embodies the dependence between these variables, irrespective of their marginal behavior (see [Nelsen] for a mathematical description of the notion of copula).

In view of these limitations of the standard statistical tools, it is natural to turn to extreme value theory. In the univariate case, extreme value theory is very useful and provides many tools for investigating the extreme tails of distributions of asset returns. These developments rest on a few fundamental results on extremes, such as the Gnedenko-Pickands-Balkema-de Haan theorem, which gives a general expression for the distribution of exceedances over a large threshold. In this framework, the study of large and extreme co-movements requires the multivariate extreme value theory, which unfortunately does not provide strong results. Indeed, in contrast with the univariate case, the class of limiting extreme-value distributions is too broad and cannot be used to constrain accurately the distribution of large co-movements.

In the spirit of the mean-variance portfolio or of utility theory, which establish an investment decision on a unique risk measure, we use the coefficient of tail dependence, which, to our knowledge, was first introduced in the financial context by [Embrechts et al.]. The coefficient of tail dependence between assets X_i and X_j is a very natural and easy-to-understand measure of extreme co-movements. It is defined as the probability that the asset X_i incurs a large loss (or gain) assuming that the asset X_j also undergoes a large loss (or gain) at the same probability level, in the limit where this probability level explores the extreme tails of the distribution of returns of the two assets. Mathematically speaking, the coefficient of lower tail dependence between the two assets X_i and X_j, denoted by λ_{ij}^−, is defined by

λ_{ij}^− = lim_{u→0} Pr{X_i < F_i^{−1}(u) | X_j < F_j^{−1}(u)} —– (1)

where F_i^{−1}(u) and F_j^{−1}(u) represent the quantiles of assets X_i and X_j at level u. Similarly, the coefficient of upper tail dependence is

λ_{ij}^+ = lim_{u→1} Pr{X_i > F_i^{−1}(u) | X_j > F_j^{−1}(u)} —– (2)

λ_{ij}^− and λ_{ij}^+ are of concern to investors with long (respectively short) positions. We refer to [Coles et al.] and references therein for a survey of the properties of the coefficient of tail dependence. Let us stress that the use of quantiles in the definition of λ_{ij}^− and λ_{ij}^+ makes them independent of the marginal distributions of the asset returns: as a consequence, the tail dependence parameters are intrinsic dependence measures. The obvious gain is an “orthogonal” decomposition of the risks into (1) individual risks carried by the marginal distribution of each asset and (2) their collective risk described by their dependence structure or copula.

Being a probability, the coefficient of tail dependence varies between 0 and 1. A large value of λ_{ij}^− means that large losses almost surely occur together. Then, large risks cannot be diversified away and the assets crash together. This investor and portfolio manager nightmare is further amplified in real-life situations by the limited liquidity of markets. When λ_{ij}^− vanishes, the assets are said to be asymptotically independent, but this term hides the subtlety that the assets can still present a non-zero dependence in their tails. For instance, two normally distributed assets can be shown to have a vanishing coefficient of tail dependence. Nevertheless, unless their correlation coefficient is identically zero, these assets are never independent. Thus, asymptotic independence must be understood as the weakest dependence which can be quantified by the coefficient of tail dependence.
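As an illustration of definition (1), here is a naive empirical estimator evaluated at a fixed small quantile level u rather than in the limit u → 0; this sketch is not from the original text, the data are simulated, and (as the next paragraph stresses) such a direct estimator becomes unreliable deep in the tails:

```python
import numpy as np

def empirical_lower_tail_dependence(x_i: np.ndarray, x_j: np.ndarray, u: float = 0.05) -> float:
    """Fraction of observations with x_i below its u-quantile, among those with x_j below its u-quantile."""
    q_i = np.quantile(x_i, u)
    q_j = np.quantile(x_j, u)
    cond = x_j < q_j
    return float(np.mean(x_i[cond] < q_i)) if cond.any() else float('nan')

# Toy usage: correlated Gaussian returns, which are asymptotically independent
# (the estimate at finite u is positive, but shrinks toward 0 as u -> 0).
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=100_000)
print(empirical_lower_tail_dependence(z[:, 0], z[:, 1], u=0.05))
```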

For practical implementations, a direct application of the definitions (1) and (2) fails to provide reasonable estimations due to the double curse of dimensionality and undersampling of extreme values, so that a fully non-parametric approach is not reliable. It turns out to be possible to circumvent this fundamental difficulty by considering the general class of factor models, which are among the most widespread and versatile models in finance. They come in two classes: multiplicative and additive factor models, respectively. The multiplicative factor models are generally used to model asset fluctuations due to an underlying stochastic volatility (see the cited literature for a survey of the properties of these models). The additive factor models are made to relate asset fluctuations to market fluctuations, as in the Capital Asset Pricing Model (CAPM) and its generalizations, or to any set of common factors as in Arbitrage Pricing Theory. The coefficient of tail dependence is known in closed form for both classes of factor models, which allows for an efficient empirical estimation.