The “Approximandum” will not be the General Theory of Relativity, but only its Vacuum Sector of Spacetimes of Topology Σ × R, or Quantum Gravity as a Fecund Ground for the Metaphysician. Note Quote.


In string theory as well as in Loop Quantum Gravity, and in other approaches to quantum gravity, indications are coalescing that not only time, but also space is no longer a fundamental entity, but merely an “emergent” phenomenon that arises from the basic physics. In the language of physics, spacetime theories such as GTR are “effective” theories and spacetime itself is “emergent”. However, unlike the notion that temperature is emergent, the idea that the universe is not in space and time arguably shocks our very idea of physical existence as profoundly as any scientific revolution ever did. It is not even clear whether we can coherently formulate a physical theory in the absence of space and time. Space disappears in LQG insofar as the physical structures it describes bear little, if any, resemblance to the spatial geometries found in GTR. These structures are discrete, not continuous as classical spacetimes are. They represent the fundamental constituents of our universe, which correspond, somehow, to chunks of physical space and thus give rise – in a way yet to be elucidated – to the spatial geometries we find in GTR. The conceptual problem of coming to grasp how to do physics in the absence of an underlying spatio-temporal stage on which the physics can play out is closely tied to the technical difficulty of mathematically relating LQG back to GTR. Physicists have yet to fully understand how classical spacetimes emerge from the fundamental non-spatio-temporal structure of LQG, and philosophers are only just starting to study its conceptual foundations and the implications of quantum gravity in general and of the disappearance of space-time in particular. Even though the mathematical heavy lifting will fall to the physicists, there is a role for philosophers here in exploring and mapping the landscape of conceptual possibilities, bringing to bear the immense philosophical literature on emergence and reduction, which offers a variegated conceptual toolbox.

Understanding how classical spacetime re-emerges from the fundamental quantum structure involves what physicists call “taking the classical limit.” In a sense, relating the spin network states of LQG back to the spacetimes of GTR is a reversal of the quantization procedure employed to formulate the quantum theory in the first place. Thus, while the quantization can be thought of as the “context of discovery,” finding the classical limit that relates the quantum theory of gravity to GTR should be considered the “context of (partial) justification.” It should be emphasized that understanding how (classical) spacetime re-emerges by retrieving GTR as a low-energy limit of a more fundamental theory is not only important to “save the appearances” and to accommodate common sense – although it matters in these respects as well – but must also be considered a methodologically central part of the enterprise of quantum gravity. If it cannot be shown that GTR is indeed related to LQG in some mathematically well-understood way as the approximately correct theory when energies are sufficiently low or, equivalently, when scales are sufficiently large, then LQG cannot explain why GTR has been as empirically successful as it has been. For a successful theory can only be legitimately supplanted if the successor theory not only makes novel predictions or offers deeper explanations, but is also able to replicate the empirical success of the theory it seeks to replace.

Ultimately, of course, the full analysis will depend on the full articulation of the theory. But focusing on the kinematical level, and thus avoiding having to deal fully with the problem of time, lets us apply the concepts to the problem of the emergence of full spacetime, rather than just time. Jeremy Butterfield and Chris Isham identify three types of reductive relations between theories: definitional extension, supervenience, and emergence, of which only the last has any chance of working in the case at hand. For Butterfield and Isham, a theory T1 emerges from another theory T2 just in case there exists either a limiting or an approximating procedure relating the two theories (or a combination of the two). A limiting procedure consists in taking the mathematical limit of some physically relevant parameters of the underlying theory, in general in a particular order, in order to arrive at the emergent theory. A limiting procedure won’t work, at least not by itself, due to technical problems concerning the maximal loop density as well as to what essentially amounts to the measurement problem familiar from non-relativistic quantum physics.

An approximating procedure designates the process of either neglecting some physical magnitudes, and justifying such neglect, or selecting a proper subset of states in the state space of the approximating theory, and justifying such selection, or both, in order to arrive at a theory whose values of physical quantities remain sufficiently close to those of the theory to be approximated. Note that the “approximandum,” the theory to be approximated, in our case will not be GTR, but only its vacuum sector of spacetimes of topology Σ × R. One of the central questions will be how the selection of states is to be justified. Such a justification would be had if we could identify a mechanism that “drives the system” to the right kind of states. Any attempt to find such a mechanism will foist upon us a host of issues known from the traditional problem of relating quantum to classical mechanics. A candidate mechanism, here as elsewhere, is some form of “decoherence,” even though that standardly involves an “environment” with which the system at stake can interact. But the system of interest in our case is, of course, the universe, which makes it hard to see how there could be any outside environment with which the system could interact. The challenge then is to conceptualize decoherence in a way that circumvents this problem.

Once it is understood how classical space and time disappear in canonical quantum gravity and how they might be seen to re-emerge from the fundamental, non-spatiotemporal structure, the way in which classicality emerges from the quantum theory of gravity does not radically differ from the way it is believed to arise in ordinary quantum mechanics. The project of pursuing such an understanding is of relevance and interest for at least two reasons. First, important foundational questions concerning the interpretation of, and the relation between, theories are addressed, which can lead to conceptual clarification of the foundations of physics. Such conceptual progress may well prove to be the decisive stepping stone to a full quantum theory of gravity. Second, quantum gravity is a fertile ground for any metaphysician as it will inevitably yield implications for specifically philosophical, and particularly metaphysical, issues concerning the nature of space and time.


Mapping Fields. Quantum Field Gravity. Note Quote.


Introducing a helpful taxonomic scheme, Chris Isham proposed to divide the many approaches to formulating a full, i.e. not semi-classical, quantum theory of gravity into four broad types: first, those that quantize GR; second, those that “general-relativize” quantum physics; third, those that construct a conventional quantum theory including gravity and regard GR as its low-energy limit; and fourth, those that consider both GR and conventional quantum theories of matter as low-energy limits of a radically novel fundamental theory.

The first family of strategies starts out from classical GR and seeks to apply, in a mathematically rigorous and physically principled way, a “quantization” procedure, i.e. a recipe for cooking up a quantum theory from a classical theory such as GR. Of course, quantization proceeds, metaphysically speaking, backwards in that it starts out from the dubious classical theory – which is found to be deficient and hence in need of replacement – and tries to erect the sound building of a quantum theory of gravity on its ruins. But it should be understood, just like Wittgenstein’s ladder, as a methodologically promising means to an end. Quantization procedures have successfully been applied elsewhere in physics and have produced, among other things, important theories such as quantum electrodynamics.

The first family consists of two genera: the now mostly defunct covariant ansatz (defunct because covariant quantizations of GR are not perturbatively renormalizable, a flaw usually considered fatal; this is not to say, however, that covariant techniques don’t play a role in contemporary quantum gravity) and the vigorous canonical quantization approach. A canonical quantization requires that the theory to be quantized be expressed in a particular formalism, the so-called constrained Hamiltonian formalism. Loop quantum gravity (LQG) is the most prominent representative of this camp, but there are other approaches.

Second, there is to date no promising avenue to gaining a full quantum theory of gravity by “general-relativizing” quantum (field) theories, i.e. by employing techniques that permit the full incorporation of the lessons of GR into a quantum theory. The only existing representative of this approach consists of attempts to formulate a quantum field theory on a curved rather than the usual flat background spacetime. The general idea of this approach is to incorporate, in some local sense, GR’s principle of general covariance. It is important to note, however, that the background spacetime, curved though it may be, is in no way dynamical. In other words, it cannot be interpreted, as it can in GR, as interacting with the matter fields.

The third group also takes quantum physics as its vantage point, but instead of directly incorporating the lessons of GR, attempts to extend quantum physics with means as conventional as possible in order to include gravity. GR, it is hoped, will then drop out of the resulting theory in its low-energy limit. By far the most promising member of this family is string theory, which, however, goes well beyond conventional quantum field theory, both methodologically and in terms of ambition. Despite its extending the assumed boundaries of the family, string theory still takes conventional quantum field theory as its vantage point, both historically and systematically, and does not attempt to build a novel theory of quantum gravity dissociated from “old” physics. Again, there are other approaches in this family, such as topological quantum field theory, but none of them musters substantial support among physicists.

The fourth and final group of the Ishamian taxonomy is most aptly characterized by its iconoclastic attitude. For the heterodox approaches of this type, no known physics serves as starting point; rather, radically novel perspectives are considered in an attempt to formulate a quantum theory of gravity ab initio.

All these approaches have their attractions and hence their following. But all of them also have their deficiencies. To list them comprehensively would go well beyond the present endeavour. Apart from the two major challenges for LQG, a major problem common to all of them is their complete lack of a real connection to observations or experiments. Either the theory is so flexible as to be able to accommodate almost any empirical data, such as string theory’s predictions of supersymmetric particles, which have been constantly revised in light of particle detectors’ failures to find them at the predicted energies, or string theory’s embarras de richesses, the now notorious “landscape problem” of choosing among 10^500 different models. Or the connection between the mostly understood data and the theories is highly tenuous and controversial, such as the issue of how and whether data narrowly confining possible violations of Lorentz symmetry relate to theories of quantum gravity predicting or assuming a discrete spacetime structure that is believed to violate, or at least modify, the Lorentz symmetry so well confirmed at larger scales. Or the predictions made by the theories are only testable in experimental regimes far removed from present technological capacities, such as LQG’s prediction that spacetime is discrete at the Planck scale, a quintillion (10^18) times the energy scales probed by the Large Hadron Collider at CERN. Or simply no one has so much as a clue as to how the theory might connect to the empirical, as is the case for the inchoate approaches of the fourth group, like causal set theory.

BRICS Bank, New Development Bank: Peoples’ Perspectives. One-Day convention on 30th March, 2017 at Indian Social Institute, New Delhi.


The Peoples’ Forum on BRICS is conducting a one-day convention, “New Development Bank: Peoples’ Perspectives”, to look at various trends in development finance, mechanisms to monitor trade and finance in BRICS, and the various stakes involved in the emergence of the New Development Bank. The convention takes place a day before the official 2nd Annual NDB Meeting, to be held in New Delhi from 31 March to 2 April. The underlying philosophy of the official meeting is ‘Building a Sustainable Future’, whose focal points are the role of governments in development finance, and in particular in sustainable infrastructure; the challenges the banking sectors of some NDB member countries face in financing sustainable infrastructure; and the creativity and innovation the banks could bring to the table. Moreover, under the theme ‘Urban Planning and Sustainable Infrastructure Development’, an allied point of deliberation will be how urban development could improve people’s lives, taking into account the ever-growing influence of long-term urban planning and investment in sustainable infrastructure in the mega-cities of BRICS countries.

The Peoples’ Forum on BRICS outright rejects these themes on several considerations, and positions itself by looking at the NDB as complicit in an alliance with other multilateral banks that have hitherto been more anti-people in practice than they would otherwise claim. I am chairing a session.


 

Extreme Value Theory


Standard estimators of the dependence between assets are, for instance, the correlation coefficient or Spearman’s rank correlation. However, as stressed by [Embrechts et al.], these kinds of dependence measures suffer from many deficiencies. Moreover, their values are mostly controlled by relatively small moves of the asset prices around their mean. To cure this problem, it has been proposed to use correlation coefficients conditioned on large movements of the assets. But [Boyer et al.] have emphasized that this approach also suffers from a severe systematic bias leading to spurious strategies: the conditional correlation in general evolves with time even when the true non-conditional correlation remains constant. In fact, [Malevergne and Sornette] have shown that any approach based on conditional dependence measures implies a spurious change of the intrinsic value of the dependence, measured for instance by copulas. Recall that the copula of several random variables is the (unique) function which completely embodies the dependence between these variables, irrespective of their marginal behavior (see [Nelsen] for a mathematical description of the notion of copula).
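To see the bias that [Boyer et al.] point to, a minimal simulation sketch is enough (the bivariate Gaussian returns, the constant true correlation of 0.5, and the conditioning thresholds are all assumptions made for illustration): the correlation computed on exceedances of a quantile differs systematically from the unconditional one, and drifts as the threshold is moved, even though the underlying dependence never changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bivariate Gaussian returns with a fixed, constant true correlation.
rho = 0.5
n = 200_000
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# The unconditional correlation recovers rho (up to sampling noise).
print("unconditional:", corr(x, y))

# Correlation conditioned on "large" moves of x: systematically biased away
# from rho, and changing with the threshold, although nothing intrinsic changed.
for q in (0.90, 0.95, 0.99):
    mask = x > np.quantile(x, q)
    print(f"conditional on x above its {q:.0%} quantile:", corr(x[mask], y[mask]))
```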

In view of these limitations of the standard statistical tools, it is natural to turn to extreme value theory. In the univariate case, extreme value theory is very useful and provides many tools for investigating the extreme tails of distributions of asset returns. These developments rest on a few fundamental results on extremes, such as the Gnedenko-Pickands-Balkema-de Haan theorem, which gives a general expression for the distribution of exceedances over a large threshold. In this framework, the study of large and extreme co-movements requires multivariate extreme value theory, which unfortunately does not provide equally strong results. Indeed, in contrast with the univariate case, the class of limiting extreme-value distributions is too broad and cannot be used to accurately constrain the distribution of large co-movements.
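On the univariate side, the Gnedenko-Pickands-Balkema-de Haan machinery can be exercised in a few lines. The sketch below is only illustrative: the Student-t “losses” and the 95% threshold are stand-in assumptions, and scipy’s generalized Pareto fit is just one convenient way to estimate the tail.

```python
import numpy as np
from scipy.stats import genpareto, t

rng = np.random.default_rng(1)

# Synthetic heavy-tailed daily losses (Student-t with 3 degrees of freedom,
# a stand-in for real return data).
losses = -t.rvs(df=3, size=50_000, random_state=rng)

# Pickands-Balkema-de Haan: exceedances over a high threshold are approximately
# generalized-Pareto distributed.  Use a hypothetical 95% threshold.
u = np.quantile(losses, 0.95)
exceedances = losses[losses > u] - u

# Fit the GPD; a positive shape parameter signals a power-law-like tail.
shape, _, scale = genpareto.fit(exceedances, floc=0.0)
print(f"threshold = {u:.3f}, GPD shape = {shape:.3f}, GPD scale = {scale:.3f}")
```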

In the spirit of mean-variance portfolio theory or of utility theory, which base an investment decision on a single risk measure, we use the coefficient of tail dependence, which, to our knowledge, was first introduced in the financial context by [Embrechts et al.]. The coefficient of tail dependence between assets $X_i$ and $X_j$ is a very natural and easy-to-understand measure of extreme co-movements. It is defined as the probability that asset $X_i$ incurs a large loss (or gain) assuming that asset $X_j$ also undergoes a large loss (or gain) at the same probability level, in the limit where this probability level explores the extreme tails of the return distributions of the two assets. Mathematically speaking, the coefficient of lower tail dependence between the two assets $X_i$ and $X_j$, denoted by $\lambda^-_{ij}$, is defined by

$\lambda^-_{ij} = \lim_{u \to 0} \Pr\{X_i < F_i^{-1}(u) \mid X_j < F_j^{-1}(u)\}$ —– (1)

where $F_i^{-1}(u)$ and $F_j^{-1}(u)$ represent the quantiles of assets $X_i$ and $X_j$ at level $u$. Similarly, the coefficient of upper tail dependence is

$\lambda^+_{ij} = \lim_{u \to 1} \Pr\{X_i > F_i^{-1}(u) \mid X_j > F_j^{-1}(u)\}$ —– (2)

$\lambda^-_{ij}$ and $\lambda^+_{ij}$ are of concern to investors with long (respectively short) positions. We refer to [Coles et al.] and references therein for a survey of the properties of the coefficient of tail dependence. Let us stress that the use of quantiles in the definition of $\lambda^-_{ij}$ and $\lambda^+_{ij}$ makes them independent of the marginal distribution of the asset returns: as a consequence, the tail dependence parameters are intrinsic dependence measures. The obvious gain is an “orthogonal” decomposition of the risks into (1) individual risks carried by the marginal distribution of each asset and (2) their collective risk described by their dependence structure or copula.
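Since (1) and (2) are stated as limits, any practical estimate has to stop at a finite quantile level $u$. The following non-parametric sketch (the return series x and y are hypothetical inputs) simply computes the conditional frequency behind definition (1) at such a finite level; pushing $u$ toward 0 naively runs into the undersampling of extreme values discussed at the end of this section.

```python
import numpy as np

def lower_tail_dependence(x, y, u):
    """Empirical estimate of Pr{X < F_X^{-1}(u) | Y < F_Y^{-1}(u)} at a finite level u."""
    x, y = np.asarray(x), np.asarray(y)
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    below_y = y < qy
    if not below_y.any():
        return np.nan  # too few observations below the quantile of y
    # Fraction of y's lower-tail days on which x is also in its lower tail.
    return np.mean(x[below_y] < qx)

# Hypothetical usage on two equally long return series x and y:
# for u in (0.05, 0.01, 0.005):
#     print(u, lower_tail_dependence(x, y, u))
```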

Being a probability, the coefficient of tail dependence varies between 0 and 1. A large value of $\lambda^-_{ij}$ means that large losses almost surely occur together. Then, large risks cannot be diversified away and the assets crash together. This investor and portfolio manager nightmare is further amplified in real-life situations by the limited liquidity of markets. When $\lambda^-_{ij}$ vanishes, the assets are said to be asymptotically independent, but this term hides the subtlety that the assets can still exhibit a non-zero dependence in their tails. For instance, two normally distributed assets can be shown to have a vanishing coefficient of tail dependence. Nevertheless, unless their correlation coefficient is identically zero, these assets are never independent. Thus, asymptotic independence must be understood as the weakest dependence which can be quantified by the coefficient of tail dependence.
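A quick Monte Carlo check of the Gaussian claim, under arbitrary choices of correlation and sample size: the finite-$u$ estimate of $\lambda^-$ for two correlated normal variables decays toward zero as $u$ shrinks, even though the correlation coefficient is plainly non-zero.

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 0.7
x, y = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=2_000_000).T

# Finite-u proxy for lambda^-: it shrinks with u for Gaussian dependence,
# illustrating asymptotic independence despite a correlation of 0.7.
for u in (0.10, 0.05, 0.01, 0.001):
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    print(f"u = {u:<6} lambda^-(u) ≈ {np.mean(x[y < qy] < qx):.3f}")
```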

For practical implementations, a direct application of the definitions (1) and (2) fails to provide reasonable estimations due to the double curse of dimensionality and undersampling of extreme values, so that a fully non-parametric approach is not reliable. It turns out to be possible to circumvent this fundamental difficulty by considering the general class of factor models, which are among the most widespread and versatile models in finance. They come in two classes: multiplicative and additive factor models. The multiplicative factor models are generally used to model asset fluctuations due to an underlying stochastic volatility. The additive factor models are designed to relate asset fluctuations to market fluctuations, as in the Capital Asset Pricing Model (CAPM) and its generalizations, or to any set of common factors as in Arbitrage Pricing Theory. The coefficient of tail dependence is known in closed form for both classes of factor models, which allows for an efficient empirical estimation.
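The closed-form expressions for factor models are not reproduced here; instead, a Monte Carlo sketch of a one-factor additive model (hypothetical betas, a heavy-tailed Student-t common factor, Gaussian idiosyncratic noise) shows how a shared heavy-tailed factor produces a tail dependence that is non-vanishing and easy to estimate.

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(3)
n = 1_000_000

# Additive one-factor model: X_i = beta_i * Y + eps_i, with a heavy-tailed
# common "market" factor Y and Gaussian idiosyncratic noise (betas are hypothetical).
factor = t.rvs(df=3, size=n, random_state=rng)
x1 = 0.8 * factor + rng.standard_normal(n)
x2 = 1.2 * factor + rng.standard_normal(n)

# Finite-u proxy for the lower tail dependence between the two assets: it stays
# well away from zero because extreme losses are driven by the common factor.
for u in (0.01, 0.005, 0.001):
    q1, q2 = np.quantile(x1, u), np.quantile(x2, u)
    print(f"u = {u}: lambda^-(u) ≈ {np.mean(x1[x2 < q2] < q1):.3f}")
```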