Collateralized Debt Obligations. Thought of the Day 111.0

A CDO is a general term that describes securities backed by a pool of fixed-income assets. These assets can be bank loans (CLOs), bonds (CBOs), residential mortgages (residential-mortgage-backed securities, or RMBSs), and many others. A CDO is a subset of asset-backed securities (ABS), which is a general term for a security backed by assets such as mortgages, credit card receivables, auto loans, or other debt.

To create a CDO, a bank or other entity transfers the underlying assets (“the collateral”) to a special-purpose vehicle (SPV) that is a separate legal entity from the issuer. The SPV then issues securities backed with cash flows generated by assets in the collateral pool. This general process is called securitization. The securities are separated into tranches, which differ primarily in the priority of their rights to the cash flows coming from the asset pool. The senior tranche has first priority, the mezzanine second, and the equity third. Allocation of cash flows to specific securities is called a “waterfall”. A waterfall is specified in the CDO’s indenture and governs both principal and interest payments.
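
As a rough illustration of how an interest waterfall allocates cash sequentially by seniority, here is a minimal sketch; the tranche sizes, coupons, and the purely sequential rule are hypothetical, and a real indenture also layers in coverage tests and a separate principal waterfall (see the note below).

```python
# Minimal, hypothetical interest waterfall: proceeds are paid to the tranches in
# order of seniority, and whatever remains after the rated tranches goes to equity.

def interest_waterfall(proceeds, tranches):
    """tranches: list of (name, notional, coupon), ordered senior -> junior."""
    payments = {}
    for name, notional, coupon in tranches:
        due = notional * coupon          # interest due this period
        paid = min(due, proceeds)        # pay what the remaining proceeds allow
        payments[name] = paid
        proceeds -= paid
    payments["equity (residual)"] = proceeds  # equity receives whatever is left
    return payments

# Hypothetical capital structure: 80 senior at 5%, 15 mezzanine at 8%, 5 equity.
tranches = [("senior", 80.0, 0.05), ("mezzanine", 15.0, 0.08)]
print(interest_waterfall(6.0, tranches))   # ample proceeds: everyone is paid
print(interest_waterfall(4.5, tranches))   # shortfall: part of the mezzanine interest is deferred
```

The second call shows the situation described in the note below, where mezzanine interest may be deferred and compounded when cash flow is insufficient.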


1: If coverage tests are not met, and to the extent not corrected with principal proceeds, the remaining interest proceeds will be used to redeem the most senior notes to bring the structure back into compliance with the coverage tests. Interest on the mezzanine securities may be deferred and compounded if cash flow is not available to pay current interest due.

One may observe that the creation of a CDO is a complex and costly process. Professionals such as bankers, lawyers, rating agencies, accountants, trustees, fund managers, and insurers all charge considerable fees to create and manage a CDO. In other words, the cash coming from the collateral is greater than the sum of the cash paid to all security holders. Professional fees to create and manage the CDO make up the difference.

CDOs are designed to offer asset exposure precisely tailored to the risk that investors desire, and they provide liquidity because they trade daily on the secondary market. This liquidity enables, for example, a finance minister from the Chinese government to gain exposure to the U.S. mortgage market and to buy or sell that exposure at will. However, because CDOs are more complex securities than corporate bonds, they are designed to pay slightly higher interest rates than correspondingly rated corporate bonds.

CDOs enable a bank that specializes in making loans to homeowners to make more loans than its capital would otherwise allow, because the bank can sell its loans to a third party. The bank can therefore originate more loans and take in more origination fees. As a result, consumers have more access to capital, banks can make more loans, and investors a world away can not only access the consumer loan market but also invest with precisely the level of risk they desire.


1: To the extent not paid by interest proceeds.

2: To the extent senior note coverage tests are met and to the extent not already paid by interest proceeds. If coverage tests are not met, the remaining principal proceeds will be used to redeem the most senior notes to bring the structure back into compliance with the coverage tests. Interest on the mezzanine securities may be deferred and compounded if cash flow is not available to pay current interest due.

The Structured Credit Handbook provides an explanation of investors’ nearly insatiable appetite for CDOs:

Demand for [fixed income] assets is heavily bifurcated, with the demand concentrated at the two ends of the safety spectrum . . . Prior to the securitization boom, the universe of fixed-income instruments issued tended to cluster around the BBB rating, offering neither complete safety nor sizzling returns. For example, the number of AA and AAA-rated companies is quite small, as is debt issuance of companies rated B or lower. Structured credit technology has evolved essentially in order to match investors’ demands with the available profile of fixed-income assets. By issuing CDOs from portfolios of bonds or loans rated A, BBB, or BB, financial intermediaries can create a larger pool of AAA-rated securities and a small unrated or low-rated bucket where almost all the risk is concentrated.

CDOs have been around for more than twenty years, but their popularity skyrocketed during the late 1990s. CDO issuance nearly doubled in 2005 and then again in 2006, when it topped $500 billion for the first time. “Structured finance” groups at large investment banks (the division responsible for issuing and managing CDOs) became one of the fastest-growing areas on Wall Street. These divisions, along with the investment banking trading desks that made markets in CDOs, contributed to highly successful results for the banking sector during the 2003–2007 boom. Many CDOs became quite liquid because of their size, investor breadth, and rating agency coverage.

Rating agencies helped bring liquidity to the CDO market. They analyzed each tranche of a CDO and assigned ratings accordingly. Equity tranches were often unrated. The rating agencies had limited manpower and needed to gauge the risk on literally thousands of new CDO securities. The agencies also specialized in using historical models to predict risk. Although CDOs had been around for a long time, they did not exist in a significant number until recently. Historical models therefore couldn’t possibly capture the full picture. Still, the underlying collateral could be assessed with a strong degree of confidence. After all, banks have been making home loans for hundreds of years. The rating agencies simply had to allocate risk to the appropriate tranche and understand how the loans in the collateral base were correlated with each other – an easy task in theory perhaps, but not in practice.

The most difficult part of valuing a CDO tranche is determining correlation. If loans are uncorrelated, defaults will occur evenly over time and asset diversification can solve most problems. With low correlation, an AAA-rated senior tranche should be safe and the interest rate attached to this tranche should be close to the rate for AAA-rated corporate bonds. High correlation, however, creates nondiversifiable risk, in which case the senior tranche has a reasonable likelihood of becoming impaired. Correlation does not affect the price of the CDO in total because the expected value of each individual loan remains the same. Correlation does, however, affect the relative price of each tranche: Any increase in the yield of a senior tranche (to compensate for additional correlation) will be offset by a decrease in the yield of the junior tranches.
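
To make the correlation effect concrete, the sketch below simulates pool losses under a one-factor Gaussian copula and tracks how expected losses are distributed across tranches. The copula itself, the 100-loan pool, the 5% default probability, the 40% recovery, and the 3%/7% attachment points are all illustrative assumptions, not the model any particular rating agency used.

```python
# Monte Carlo sketch of how default correlation moves losses between tranches,
# using a one-factor Gaussian copula. All parameters are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_loans, p_default, recovery, n_sims = 100, 0.05, 0.40, 50_000
threshold = norm.ppf(p_default)          # default if the latent variable falls below this

def tranche_losses(rho):
    m = rng.standard_normal((n_sims, 1))          # common (systematic) factor
    e = rng.standard_normal((n_sims, n_loans))    # idiosyncratic factors
    latent = np.sqrt(rho) * m + np.sqrt(1 - rho) * e
    pool_loss = (latent < threshold).mean(axis=1) * (1 - recovery)   # fraction of pool lost
    equity = np.clip(pool_loss, 0, 0.03) / 0.03            # 0-3% tranche, loss as share of tranche
    mezz   = np.clip(pool_loss - 0.03, 0, 0.04) / 0.04     # 3-7% tranche
    senior = np.clip(pool_loss - 0.07, 0, 0.93) / 0.93     # 7-100% tranche
    return pool_loss.mean(), equity.mean(), mezz.mean(), senior.mean()

for rho in (0.0, 0.2, 0.6):
    pool, eq, mz, sr = tranche_losses(rho)
    print(f"rho={rho:.1f}  pool={pool:.4f}  equity={eq:.3f}  mezz={mz:.3f}  senior={sr:.5f}")
```

The expected pool loss is essentially flat across the three correlation levels, while expected losses migrate out of the equity tranche and into the senior tranche as correlation rises, which is exactly the relative repricing of tranches described above.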

Credit Risk Portfolio. Note Quote.


The recent development in credit markets is characterized by a flood of innovative credit risky structures. State-of-the-art portfolios contain derivative instruments ranging from simple, nearly commoditized contracts such as credit default swaps (CDS), to first-generation portfolio derivatives such as first-to-default (FTD) baskets and collateralized debt obligation (CDO) tranches, up to complex structures involving spread options and different asset classes (hybrids). These new structures allow portfolio managers to implement multidimensional investment strategies that seamlessly conform to their market view. Moreover, the exploding liquidity in credit markets makes tactical (short-term) overlay management very cost efficient. While the outperformance potential of active portfolio management will put old-school investment strategies (such as buy-and-hold) under enormous pressure, managing a highly complex credit portfolio requires the introduction of new optimization technologies.

New derivatives allow the decoupling of business processes in the risk management industry (in banking, as well as in asset management), since credit treasury units are now able to manage specific parts of credit risk actively and independently. The traditional feedback loop between risk management and sales, which was needed to structure the desired portfolio characteristics only by selective business acquisition, is now outdated. Strategic cross asset management will gain in importance, as a cost-efficient overlay management can now be implemented by combining liquid instruments from the credit universe.

In any case, all these developments force portfolio managers to adopt an integrated approach. All involved risk factors (spread term structures including curve effects, spread correlations, implied default correlations, and implied spread volatilities) have to be captured and integrated into appropriate risk figures. We take a look at constant proportion debt obligations (CPDOs) as a leveraged exposure on credit indices, constant proportion portfolio insurance (CPPI) as a capital-guaranteed instrument, CDO tranches to tap the correlation market, and equity futures to include exposure to stock markets in the portfolio.

For an integrated credit portfolio management approach, it is of central importance to aggregate risks over various instruments with different payoff characteristics. In this chapter, we will see that a state-of-the-art credit portfolio contains not only linear risks (CDS and CDS index contracts) but also nonlinear risks (such as FTD baskets, CDO tranches, or credit default swaptions). From a practitioner's point of view there is a simple solution for this risk aggregation problem, namely delta-gamma management. In such a framework, one approximates the risks of all instruments in a portfolio by their first- and second-order sensitivities and aggregates these sensitivities to the portfolio level. Evidently, for a proper aggregation of risk factors, one has to take the correlation of these risk factors into account. However, for credit risky portfolios, a simplistic sensitivity approach will be inappropriate, as can be seen from the following characteristics of credit portfolio risks:

  • Credit risky portfolios usually involve a larger number of reference entities. Hence, one has to take a large number of sensitivities into account. However, this is a phenomenon that is already well known from the management of stock portfolios. The solution is to split the risk for each constituent into a systematic risk (e.g., a beta with a portfolio hedging tool) and an alpha component which reflects the idiosyncratic part of the risk.

  • However, in contrast to equities, credit risk is not one dimensional (i.e., one risky security per issuer) but at least two dimensional (i.e., a set of instruments with different maturities). This is reflected in the fact that there is a whole term structure of credit spreads. Moreover, taking also different subordination levels (with different average recovery rates) into account, credit risk becomes a multidimensional object for each reference entity.
  • While most market risks can be satisfactorily approximated by diffusion processes, for credit risk the consideration of events (i.e., jumps) is imperative. The most apparent reason for this is that the dominating element of credit risk is event risk. However, in a market perspective, there are more events than the ultimate default event that have to be captured. Since one of the main drivers of credit spreads is the structure of the underlying balance sheet, a change (or the risk of a change) in this structure usually triggers a large movement in credit spreads. The best-known example for such an event is a leveraged buyout (LBO).
  • For credit market players, correlation is a very special topic, as a central pricing parameter is named implied correlation. However, there are two kinds of correlation parameters that impact a credit portfolio: price correlation and event correlation. While the former simply deals with the dependency between two price (i.e., spread) time series under normal market conditions, the latter aims at describing the dependency between two price time series in case of an event. In its simplest form, event correlation can be seen as default correlation: what is the risk that company B defaults given that company A has defaulted? While it is already very difficult to model this default correlation, for practitioners event correlation is even more complex, since there are other events than just the default event, as already mentioned above. Hence, we can modify the question above: what is the risk that spreads of company B blow out given that spreads of company A have blown out? In addition, the notion of event correlation can also be used to capture the risk in capital structure arbitrage trades (i.e., trading stock versus bonds of one company). In this example, the question might be: what is the risk that the stock price of company A jumps given that its bond spreads have blown out? The complicated task in this respect is that we do not only have to model the joint event probability but also the direction of the jumps. A brief example highlights why this is important. In case of a default event, spreads will blow out accompanied by a significant drop in the stock price. This means that there is a negative correlation between spreads and stock prices. However, in case of an LBO event, spreads will blow out (reflecting the deteriorated credit quality because of the higher leverage), while stock prices rally (because of the fact that the acquirer usually pays a premium to buy a majority of outstanding shares).

These characteristics show that a simple sensitivity approach – e.g., calculating and tabulating all deltas and gammas and letting a portfolio manager play with them – is not appropriate. Further risk aggregation (e.g., beta management) and risk factors that capture the event risk are needed. For the latter, a quick solution is the so-called instantaneous default loss (IDL). The IDL expresses the loss incurred in a credit risk instrument in case of a credit event. For a single-name CDS, this is simply the loss given default (LGD). However, for a portfolio derivative such as a mezzanine tranche, this figure does not directly refer to the LGD of the defaulted item, but to the changed subordination of the tranche because of the default. Hence, this figure allows one to aggregate various instruments with respect to credit events.
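
A minimal sketch of what such an aggregation might look like: each position carries a delta and gamma to its reference entity's spread plus an IDL column for the event risk, and portfolio P&L under a scenario is the sum of the diffusion approximation and the jump losses. The instruments, sensitivities, and scenario numbers are entirely hypothetical.

```python
# Sketch of delta-gamma aggregation with an instantaneous-default-loss (IDL) column.
# Instruments, sensitivities, and the spread scenario are purely hypothetical.

# Per-instrument sensitivities to the reference entity's spread (per bp of move),
# and the loss incurred if that entity defaults right now (the IDL).
instruments = {
    #             delta,  gamma,        IDL
    "CDS_A":    (-4500.0,    2.0, -600_000.0),
    "CDS_B":    ( 3800.0,   -1.5,  550_000.0),
    "mezz_CDO": (-9000.0,   25.0, -150_000.0),  # IDL reflects lost subordination, not an LGD
}

def portfolio_pnl(spread_moves_bp, defaults=()):
    """Second-order (delta-gamma) P&L approximation plus event (jump) losses."""
    total = 0.0
    for name, (delta, gamma, idl) in instruments.items():
        ds = spread_moves_bp.get(name, 0.0)
        total += delta * ds + 0.5 * gamma * ds ** 2   # diffusion part
        if name in defaults:
            total += idl                               # event part
    return total

print(portfolio_pnl({"CDS_A": 10.0, "CDS_B": -5.0, "mezz_CDO": 25.0}))
print(portfolio_pnl({"CDS_A": 10.0}, defaults={"mezz_CDO"}))
```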

Meillassoux’s Principle of Unreason Towards an Intuition of the Absolute In-itself. Note Quote.


The principle of reason such as it appears in philosophy is a principle of contingent reason: not only how philosophical reason concerns difference instead of identity, but also why the Principle of Sufficient Reason can no longer be understood in terms of absolute necessity. In other words, Deleuze disconnects the Principle of Sufficient Reason from the ontotheological tradition no less than from its Heideggerian deconstruction. What remains then of Meillassoux's criticism in After Finitude: An Essay on the Necessity of Contingency that Deleuze no less than Hegel hypostatizes or absolutizes the correlation between thinking and being and thus brings back a vitalist version of speculative idealism through the back door?

At stake in Meillassoux's criticism of the Principle of Sufficient Reason is a double problem: the conditions of possibility of thinking and knowing an absolute and subsequently the conditions of possibility of rational ideology critique. The first problem is primarily epistemological: how can philosophy justify scientific knowledge claims about a reality that is anterior to our relation to it and that is hence not given in the transcendental object of possible experience (the arche-fossil)? This is a problem for all post-Kantian epistemologies that hold that we can only ever know the correlate of being and thought. Instead of confronting this weak correlationist position head on, however, Meillassoux seeks a solution in the even stronger correlationist position that denies not only the knowability of the in itself, but also its very thinkability or imaginability. Simplified: if strong correlationists such as Heidegger or Wittgenstein insist on the historicity or facticity (non-necessity) of the correlation of reason and ground in order to demonstrate the impossibility of thought's self-absolutization, then the very force of their argument, if it is not to contradict itself, implies more than they are willing to accept: the necessity of the contingency of the transcendental structure of the for itself. As a consequence, correlationism is incapable of demonstrating itself to be necessary. This is what Meillassoux calls the principle of factiality or the principle of unreason. It says that it is possible to think two things that exist independently of thought's relation to them: contingency as such and the principle of non-contradiction. The principle of unreason thus enables the intellectual intuition of something that is absolutely in itself, namely the absolute impossibility of a necessary being. And this in turn implies the real possibility of the completely random and unpredictable transformation of all things from one moment to the next. Logically speaking, the absolute is thus a hyperchaos or something akin to Time in which nothing is impossible, except that there be necessary beings or necessary temporal experiences such as the laws of physics.

There is, moreover, nothing mysterious about this chaos. Contingency, and Meillassoux consistently refers to this as Hume’s discovery, is a purely logical and rational necessity, since without the principle of non-contradiction not even the principle of factiality would be absolute. It is thus a rational necessity that puts the Principle of Sufficient Reason out of action, since it would be irrational to claim that it is a real necessity as everything that is is devoid of any reason to be as it is. This leads Meillassoux to the surprising conclusion that [t]he Principle of Sufficient Reason is thus another name for the irrational… The refusal of the Principle of Sufficient Reason is not the refusal of reason, but the discovery of the power of chaos harboured by its fundamental principle (non-contradiction). (Meillassoux 2007: 61) The principle of factiality thus legitimates or founds the rationalist requirement that reality be perfectly amenable to conceptual comprehension at the same time that it opens up [a] world emancipated from the Principle of Sufficient Reason (Meillassoux) but founded only on that of non-contradiction.

This emancipation brings us to the practical problem Meillassoux tries to solve, namely the possibility of ideology critique. Correlationism is essentially a discourse on the limits of thought for which the deabsolutization of the Principle of Sufficient Reason marks reason's discovery of its own essential inability to uncover an absolute. Thus if the Galilean-Copernican revolution of modern science meant the paradoxical unveiling of thought's capacity to think what there is regardless of whether thought exists or not, then Kant's correlationist version of the Copernican revolution was in fact a Ptolemaic counterrevolution. Since Kant and even more since Heidegger, philosophy has been averse precisely to the speculative import of modern science as a formal, mathematical knowledge of nature. Its unintended consequence is therefore that questions of ultimate reasons have been dislocated from the domain of metaphysics into that of non-rational, fideist discourse. Philosophy has thus made the contemporary end of metaphysics complicit with the religious belief in the Principle of Sufficient Reason beyond its very thinkability. Whence Meillassoux's counter-intuitive conclusion that the refusal of the Principle of Sufficient Reason furnishes the minimal condition for every critique of ideology, insofar as ideology cannot be identified with just any variety of deceptive representation, but is rather any form of pseudo-rationality whose aim is to establish that what exists as a matter of fact exists necessarily. In this way a speculative critique pushes skeptical rationalism's relinquishment of the Principle of Sufficient Reason to the point where it affirms that there is nothing beneath or beyond the manifest gratuitousness of the given: nothing but the limitless and lawless power of its destruction, emergence, or persistence. Such an absolutizing, even though no longer absolutist, approach would be the minimal condition for every critique of ideology: to reject dogmatic metaphysics means to reject all real necessity, and a fortiori to reject the Principle of Sufficient Reason, as well as the ontological argument.

On the one hand, Deleuze’s criticism of Heidegger bears many similarities to that of Meillassoux when he redefines the Principle of Sufficient Reason in terms of contingent reason or with Nietzsche and Mallarmé: nothing rather than something such that whatever exists is a fiat in itself. His Principle of Sufficient Reason is the plastic, anarchic and nomadic principle of a superior or transcendental empiricism that teaches us a strange reason, that of the multiple, chaos and difference. On the other hand, however, the fact that Deleuze still speaks of reason should make us wary. For whereas Deleuze seeks to reunite chaotic being with systematic thought, Meillassoux revives the classical opposition between empiricism and rationalism precisely in order to attack the pre-Kantian, absolute validity of the Principle of Sufficient Reason. His argument implies a return to a non-correlationist version of Kantianism insofar as it relies on the gap between being and thought and thus upon a logic of representation that renders Deleuze’s Principle of Sufficient Reason unrecognizable, either through a concept of time, or through materialism.

Deleuzian Grounds. Thought of the Day 42.0


With difference or intensity instead of identity as the ultimate philosophical principle, one arrives at the crux of Deleuze's use of the Principle of Sufficient Reason in Difference and Repetition. At the beginning of the first chapter, he defines the quadruple yoke of conceptual representation (identity, analogy, opposition, resemblance) in correspondence with the four principal aspects of the Principle of Sufficient Reason: the form of the undetermined concept, the relation between ultimate determinable concepts, the relation between determinations within concepts, and the determined object of the concept itself. In other words, sufficient reason according to Deleuze is the very medium of representation, the element in which identity is conceptually determined. In itself, however, this medium or element remains different or unformed (albeit not formless): difference is the state in which one can speak of determination as such, i.e. determination in its occurrent quality of a difference being made, or rather making itself in the sense of a unilateral distinction. It is with the event of difference that what appears to be a breakdown of representational reason is also a breakthrough of the rumbling ground as differential element of determination (or individuation). Deleuze illustrates this with an example borrowed from Nietzsche:

Instead of something distinguished from something else, imagine something which distinguishes itself and yet that from which it distinguishes itself does not distinguish itself from it. Lightning, for example, distinguishes itself from the black sky but must also trail behind it. It is as if the ground rose to the surface without ceasing to be the ground.

Between the abyss of the indeterminate and the superficiality of the determined, there thus appears an intermediate element, a field potential or intensive depth, which perhaps in a way exceeds sufficient reason itself. This is a depth which Deleuze finds prefigured in Schelling’s and Schopenhauer’s differend conceptualization of the ground (Grund) as both ground (fond) and grounding (fondement). The ground attains an autonomous power that exceeds classical sufficient reason by including the grounding moment of sufficient reason for itself. Because this self-grounding ground remains groundless (sans-fond) in itself, however, Hegel famously ridiculed Schelling’s ground as the indeterminate night in which all cows are black. He opposed it to the surface of determined identities that are only negatively correlated to each other. By contrast, Deleuze interprets the self-grounding ground through Nietzsche’s eternal return of the same. Whereas the passive syntheses of habit (connective series) and memory (conjunctions of connective series) are the processes by which representational reason grounds itself in time, the eternal return (disjunctive synthesis of series) ungrounds (effonde) this ground by introducing the necessity of future becomings, i.e. of difference as ongoing differentiation. Far from being a denial of the Principle of Sufficient Reason, this threefold process of self-(un)grounding constitutes the positive, relational system that brings difference out of the night of the Identical, and with finer, more varied and more terrifying flashes of lightning than those of contradiction: progressivity.

The breakthrough of the ground in the process of ungrounding itself in sheer distinction-production of the multiple against the indistinguishable is what Deleuze calls violence or cruelty, as it determines being or nature in a necessary system of asymmetric relations of intensity by the acausal action of chance, like an ontological game in which the throw of the dice is the only rule or principle. But it is also the vigil, the insomnia of thought, since it is here that reason or thought achieves its highest power of determination. It becomes a pure creativity or virtuality in which no well-founded identity (God, World, Self) remains: [T]hought is that moment in which determination makes itself one, by virtue of maintaining a unilateral and precise relation to the indeterminate. Since it produces differential events without subjective or objective remainder, however, Deleuze argues that thought belongs to the pure and empty form of time, a time that is no longer subordinate to (cosmological, psychological, eternal) movement in space. Time qua form of transcendental synthesis is the ultimate ground of everything that is, reasons and acts. It is the formal element of multiple becoming, no longer in the sense of finite a priori conditioning, but in the sense of a transfinite a posteriori synthesizer: an empty interiority in ongoing formation and materialization. As Deleuze and Guattari define synthesizer in A Thousand Plateaus: The synthesizer, with its operation of consistency, has taken the place of the ground in a priori synthetic judgment: its synthesis is of the molecular and the cosmic, material and force, not form and matter, Grund and territory.


Meillassoux, Deleuze, and the Ordinal Relation Un-Grounding Hyper-Chaos. Thought of the Day 41.0


As Heidegger demonstrates in Kant and the Problem of Metaphysics, Kant limits the metaphysical hypostatization of the logical possibility of the absolute by subordinating the latter to a domain of real possibility circumscribed by reason's relation to sensibility. In this way he turns the necessary temporal becoming of sensible intuition into the sufficient reason of the possible. Instead, the anti-Heideggerian thrust of Meillassoux's intellectual intuition is that it absolutizes the a priori realm of pure logical possibility and disconnects the domain of mathematical intelligibility from sensibility. (Ray Brassier, The Enigma of Realism, in Robin Mackay (ed.), Collapse: Philosophical Research and Development – Speculative Realism.) Hence the chaotic structure of his absolute time: Anything is possible. Whereas real possibility is bound to correlation and temporal becoming, logical possibility is bound only by non-contradiction. It is a pure or absolute possibility that points to a radical diachronicity of thinking and being: we can think of being without thought, but not of thought without being.

Deleuze clearly situates himself in the Kantian-Heideggerian camp when he argues with Kant and Heidegger that time as pure auto-affection (folding) is the transcendental structure of thought. Whatever exists, in all its contingency, is grounded by the first two syntheses of time and ungrounded by the third, disjunctive synthesis in the implacable difference between past and future. For Deleuze, it is precisely the eternal return of the ordinal relation between what exists and what may exist that destroys necessity and guarantees contingency. As a transcendental empiricist, he thus agrees with the limitation of logical possibility to real possibility. On the one hand, he thus also agrees with Hume and Meillassoux that [r]eality is not the result of the laws which govern it. The law of entropy or degradation in thermodynamics, for example, is unveiled as nihilistic by Nietzsche's eternal return, since it is based on a transcendental illusion in which difference [of temperature] is the sufficient reason of change only to the extent that the change tends to negate difference. On the other hand, Meillassoux's absolute capacity-to-be-other relative to the given (Quentin Meillassoux, Ray Brassier, Alain Badiou – After finitude: an essay on the necessity of contingency) falls away in the face of what is actual here and now. This is because although Meillassoux's hyper-chaos may be like time, it also contains a tendency to undermine or even reject the significance of time. Thus one may wonder with Jon Roffe (Time_and_Ground_A_Critique_of_Meillassou) how time, as the sheer possibility of any future or different state of affairs, can provide the (non-)ground for the realization of this state of affairs in actuality. The problem is less that Meillassoux's contingency is highly improbable than that his ontology includes no account of actual processes of transformation or development. As Peter Hallward (Levi Bryant, Nick Srnicek and Graham Harman (editors) – The Speculative Turn: Continental Materialism and Realism) has noted, the abstract logical possibility of change is an empty and indeterminate postulate, completely abstracted from all experience and worldly or material affairs. For this reason, the difference between Deleuze and Meillassoux seems to come down to what is more important (rather than what is more originary): the ordinal sequences of sensible intuition or the logical lack of reason.

But for Deleuze time as the creatio ex nihilo of pure possibility is not just irrelevant in relation to real processes of chaosmosis, which are both chaotic and probabilistic, molecular and molar. Rather, because it puts the Principle of Sufficient Reason as principle of difference out of real action, it is either meaningless with respect to the real or it can only have a negative or limitative function. This is why Deleuze replaces the possible/real opposition with that of virtual/actual. Whereas conditions of possibility always relate asymmetrically and hierarchically to any real situation, the virtual as sufficient reason is no less real than the actual since it is first of all its unconditioned or unformed potential of becoming-other.

The Locus of Renormalization. Note Quote.


Since symmetries and the related conservation properties have a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken or new ones are obtained after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is the passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to the sudden common orientation of spins, up to the completely ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is, the iteration of a statistically invariant motif. Similarly for the formation of a snowflake . . . . In all these circumstances, a “new physical object of observation” is formed. Most of the current analyses deal with transitions at equilibrium; the less studied and more challenging case of far from equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions, that is, mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. relevant infinity or divergences) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interactions of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.

In this situation, the intelligibility still has an “upward” flavor since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories, the parts are not really separable from the whole (this would mean to separate an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot be summed over (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained by the scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking at molecules as the only pertinent observable of life is worse than reductionist; it is against the history of physics and its audacious unifications and invention of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system, a cell's genetic regulatory networks, the brain and brain slices … are “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done for the profound mathematical frame for physics. This does not forbid a scientific analysis of life. This may just be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as in physics. The point is that these are better understood and correlated, since the work of Noether and Weyl in the last century, as symmetries in the intended equations, where they express the underlying invariants and invariant preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expressions, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections . . . . And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable, also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions in an organism, in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure, that is it changes the internal symmetries: the fish with the new bladder will swim differently, its heart-vascular system will relevantly change ….

Organisms transform the ecosystem while transforming themselves, and they can withstand this because they have a preserved internal universe. Its stability is maintained also by slightly, yet constantly, changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into the permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human. It has some 150 different microbial species in the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not understand yet, the “community” in the intestines co-creates their worlds together, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained even if it may change by immigration of new microbial species into the community and extinction of old species in the community. With such community membership turnover, or community assembly, the phase space of the system is undergoing continual and open ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new unprestatable functions, where functions and their names make sense only in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One identifies a feature of a niche by observing that, if it goes away, the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing these spaces and their internal relations over time. Mathematically, no symmetries (groups) nor (partial) orders (semigroups) organize the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical nor logical problems, in contemporary logic (this is so also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own border, presents technical difficulties for a careful equational description of its dynamics, but no mathematical nor logical impossibility: one has to optimize a highly non linear and large action/reaction system, yielding a dynamically constructed geodetic, the river path, in perfectly known phase spaces (momentum and space or energy and time, say, as pertinent observables and variables).

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not just within prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but in the ever changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.

Stationarity or Homogeneity of Random Fields


Let (Ω, F, P) be a probability space on which all random objects will be defined. A filtration {F_t : t ≥ 0} of σ-algebras is fixed and defines the information available at each time t.

Random field: A real-valued random field is a family of random variables Z(x) indexed by x ∈ R^d together with a collection of distribution functions of the form F_{x_1,…,x_n} which satisfy

F_{x_1,…,x_n}(b_1,…,b_n) = P[Z(x_1) ≤ b_1, …, Z(x_n) ≤ b_n],   b_1,…,b_n ∈ R

The mean function of Z is m(x) = E[Z(x)] whereas the covariance function and the correlation function are respectively defined as

R(x, y) = E[Z(x)Z(y)] − m(x)m(y)

c(x, y) = R(x, y)/√(R(x, x)R(y, y))

Notice that the covariance function of a random field Z is a non-negative definite function on R^d × R^d, that is, if x_1, …, x_k is any collection of points in R^d, and ξ_1, …, ξ_k are arbitrary real constants, then

∑_{l=1}^{k} ∑_{j=1}^{k} ξ_l ξ_j R(x_l, x_j) = ∑_{l=1}^{k} ∑_{j=1}^{k} ξ_l ξ_j E[Z(x_l) Z(x_j)] = E[(∑_{j=1}^{k} ξ_j Z(x_j))²] ≥ 0

Without loss of generality, we assumed m = 0. The property of non-negative definiteness characterizes covariance functions. Hence, given any function m : Rd → R and a non-negative definite function R : Rd × Rd → R, it is always possible to construct a random field for which m and R are the mean and covariance function, respectively.
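
As a minimal illustration of this construction on a finite set of points, the sketch below builds the covariance matrix of a chosen non-negative definite function (a squared-exponential kernel, picked purely for convenience), checks non-negative definiteness numerically, and samples a zero-mean Gaussian field with that covariance via a Cholesky factor.

```python
# Construct a zero-mean Gaussian random field on a finite grid from a chosen
# non-negative definite covariance R(x, y); the squared-exponential form is
# just one convenient example of such a function.
import numpy as np

def R(x, y, scale=0.5):
    return np.exp(-np.sum((x - y) ** 2) / (2 * scale ** 2))

x = np.linspace(0.0, 1.0, 50).reshape(-1, 1)          # grid points in R^1 (d = 1)
K = np.array([[R(xi, xj) for xj in x] for xi in x])   # covariance matrix R(x_i, x_j)

# Non-negative definiteness on the grid: all eigenvalues >= 0 (up to round-off).
assert np.linalg.eigvalsh(K).min() > -1e-10

# Sample Z ~ N(0, K) via a Cholesky factor (with a tiny jitter for numerical stability).
L = np.linalg.cholesky(K + 1e-10 * np.eye(len(x)))
rng = np.random.default_rng(1)
Z = L @ rng.standard_normal(len(x))                   # one realization of the field
print(Z[:5])
```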

Bochner's Theorem: A continuous function R from R^d to the complex plane is non-negative definite if and only if it is the Fourier-Stieltjes transform of a measure F on R^d, that is, the representation

R(x) = ∫_{R^d} e^{i x·λ} dF(λ)

holds for x ∈ R^d. Here, x·λ denotes the scalar product ∑_{k=1}^{d} x_k λ_k and F is a bounded, real-valued function satisfying ∫_A dF(λ) ≥ 0 for every measurable A ⊂ R^d.
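
The representation can be checked numerically in a simple case. In d = 1 the Gaussian covariance R(x) = e^{−x²/2} is the Fourier transform of the standard normal spectral density f(λ) = e^{−λ²/2}/√(2π); this particular pair is an assumption chosen only because it is known in closed form.

```python
# Numerical check of Bochner's representation in d = 1 for R(x) = exp(-x^2/2),
# whose spectral density is the standard normal density exp(-lam^2/2)/sqrt(2*pi).
import numpy as np

lam = np.linspace(-10.0, 10.0, 20_001)                 # integration grid for lambda
dlam = lam[1] - lam[0]
f = np.exp(-lam ** 2 / 2) / np.sqrt(2 * np.pi)         # spectral density dF/d(lambda)

for x in (0.0, 0.5, 1.0, 2.0):
    # real part of the Fourier-Stieltjes integral; the imaginary part vanishes by symmetry
    fourier = np.sum(np.cos(x * lam) * f) * dlam
    print(f"x={x:3.1f}  R(x)={np.exp(-x**2 / 2):.6f}  integral={fourier:.6f}")
```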

The cross covariance function is defined as R_{12}(x, y) = E[Z_1(x)Z_2(y)] − m_1(x)m_2(y), where m_1 and m_2 are the respective mean functions. Obviously, R_{12}(x, y) = R_{21}(y, x). A family of processes Z_ι with ι belonging to some index set I can be considered as a process in the product space (R^d, I).

A central concept in the study of random fields is that of homogeneity or stationarity. A random field is homogeneous or (second-order) stationary if E[Z(x)²] is finite for all x and

• m(x) ≡ m is independent of x ∈ Rd

• R(x, y) solely depends on the difference x − y

Thus we may consider R(h) = Cov(Z(x), Z(x + h)) = E[Z(x) Z(x + h)] − m²,  h ∈ R^d,

and denote R the covariance function of Z. In this case, the following correspondence exists between the covariance and correlation function, respectively:

c(h) = R(h)/R(0)

i.e. c(h) ∝ R(h). For this reason, the attention is confined to either c or R. Two stationary random fields Z1, Z2 are stationarily correlated if their cross covariance function R12(x, y) depends on the difference x − y only. The two random fields are uncorrelated if R12 vanishes identically.
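
A small sketch of estimating R(h) and c(h) = R(h)/R(0) from a single realization of a stationary field on a one-dimensional grid; the Gaussian AR(1)-style simulation is an assumption used only because its stationary covariance φ^|h| is known, so the empirical estimates can be compared against it.

```python
# Estimate the covariance function R(h) and correlation c(h) = R(h)/R(0) of a
# stationary field on a 1-d grid. The field is simulated as a Gaussian AR(1)
# process, chosen only because its stationary covariance phi**|h| is known.
import numpy as np

rng = np.random.default_rng(2)
n, phi = 200_000, 0.8
z = np.zeros(n)
eps = rng.standard_normal(n) * np.sqrt(1 - phi ** 2)   # scaled so that Var[Z] = 1
for t in range(1, n):
    z[t] = phi * z[t - 1] + eps[t]

def R_hat(h):
    zc = z - z.mean()
    return np.mean(zc[: n - h] * zc[h:])                # empirical covariance at lag h

for h in (0, 1, 2, 5):
    print(f"h={h}  R_hat={R_hat(h):.4f}  c_hat={R_hat(h) / R_hat(0):.4f}  theory={phi ** h:.4f}")
```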

An interesting special class of homogeneous random fields that often arise in practice is the class of isotropic fields. These are characterized by the property that the covariance function R depends only on the length ∥h∥ of the vector h:

R(h) = R(∥h∥) .

In many applications, random fields are considered as functions of “time” and “space”. In this case, the parameter set is most conveniently written as (t,x) with t ∈ R+ and x ∈ Rd. Such processes are often homogeneous in (t, x) and isotropic in x in the sense that

E[Z(t, x)Z(t + h, x + y)] = R(h, ∥y∥) ,

where R is a function from R2 into R. In such a situation, the covariance function can be written as

R(t, ∥x∥) = ∫_R ∫_{λ=0}^{∞} e^{itu} H_d(λ∥x∥) dG(u, λ),

where

H_d(r) = (2/r)^{(d−2)/2} Γ(d/2) J_{(d−2)/2}(r)

and J_m is the Bessel function of the first kind of order m, and G is a multiple of a distribution function on the half plane {(λ, u) | λ ≥ 0, u ∈ R}.
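
H_d can be evaluated directly from the Bessel function of the first kind. Two classical special cases, H_1(r) = cos(r) and H_3(r) = sin(r)/r, give a quick sanity check of the formula; the sketch below is only that check, not part of any particular model.

```python
# Evaluate H_d(r) = (2/r)^((d-2)/2) * Gamma(d/2) * J_{(d-2)/2}(r) and check two
# known special cases: H_1(r) = cos(r) and H_3(r) = sin(r)/r.
import numpy as np
from scipy.special import gamma, jv

def H(d, r):
    nu = (d - 2) / 2.0
    return (2.0 / r) ** nu * gamma(d / 2.0) * jv(nu, r)

r = np.array([0.1, 1.0, 3.0])
print(H(1, r), np.cos(r))          # should agree
print(H(3, r), np.sin(r) / r)      # should agree
```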

Anthropocosmism. Thought of the Day 20.0


Russian cosmism appeared as a sort of antithesis to the classical physicalist paradigm of thinking that was based on a strict differentiation of man and nature. It made an attempt to revive the ontology of an integral vision that organically unites man and cosmos. These problems were discussed both in the scientific and the religious form of cosmism. In the religious form N. Fedorov's conception was the most significant one. Like other cosmists, he was not satisfied with the split of the Universe into man and nature as opposed entities. Such an opposition, in his opinion, condemned nature to thoughtlessness and destructiveness, and man to submission to the existing “evil world”. Fedorov maintained the ideas of a unity of man and nature, a connection between “soul” and cosmos in terms of regulation and resurrection. He offered a project of resurrection that was not understood only as a resurrection of ancestors, but contained at least two aspects: raising from the dead in a narrow, direct sense, and in a wider, metaphoric sense that includes nature's ability of self-reconstruction. Fedorov's resurrection project was connected with the idea of the human mind's going to outer space. For him, “the Earth is not bound”, and “human activity cannot be restricted by the limits of the terrestrial planet”, which is only the starting point of this activity. One should look critically at the Utopian and fantastic elements of N. Fedorov's views, which contain a considerable grain of mysticism, but nevertheless there are important rational moments in his conception: the quite clearly expressed idea of interconnection, the unity of man and cosmos, the idea of the correlation of the rational and moral elements of man, and the ideal of the unity of humanity as a planetary community of people.

But while religious cosmism was more notable for the fantastic and speculative character of its discourses, the natural scientific trend, solving the problem of interconnection between man and cosmos, paid special attention to the comprehension of scientific achievements that confirmed that interconnection. N. G. Kholodny developed these ideas in terms of anthropocosmism, opposing it to anthropocentrism. He wrote: “Having put himself in the place of God, man destroyed his natural connections with nature and condemned himself to a long solitary existence”. In Kholodny's opinion, anthropocentrism passed through several stages in its development: at the first stage man did not oppose himself to nature; rather, he “humanized” the natural forces. At the second stage, extracting himself from nature, man looks at it as an object for research, the basis of his well-being. At the next stage man uplifts himself over nature, basing himself in this activity on spiritual forces, and studies the Universe. And, lastly, the next stage is characterized by a crisis of the anthropocentric worldview, which starts to collapse under the influence of the achievements of science and philosophy. N. G. Kholodny was right in noting that in the past anthropocentrism had played a positive role; it freed man from his fear of nature by means of uplifting him over the latter. But gradually, beside anthropocentrism there appeared sprouts of the new vision – anthropocosmism. Kholodny regarded anthropocosmism as a certain line of development of the human intellect, will and feelings, which led people to their aims. An essential element in anthropocosmism was the attempt to reconsider the question of man's place in nature and of his interrelations with cosmos on the foundation of natural scientific knowledge.

Hedging. Part 1.


Hedging a zero coupon bond denoted P(t,T) using other zero coupon bonds is accomplished by minimizing the residual variance of the hedged portfolio. The hedged portfolio Π(t) is represented as

Π(t) = P(t, T) + ∑_{i=1}^{N} Δ_i P(t, T_i)

where Δ_i denotes the amount of the ith bond P(t, T_i) included in the hedged portfolio. Note that the bonds P(t, T) and P(t, T_i) are determined by observing their market values at time t. It is the instantaneous change in the portfolio value that is stochastic. Therefore, the volatility of this change is computed to ascertain the efficacy of the hedge portfolio.

For starters, consider the variance of an individual bond in the field theory model. The definition P(t, T) = exp(−∫_t^T dx f(t, x)) for zero coupon bond prices implies that

dP(t, T)/P(t, T) = f(t, t)dt − ∫_t^T dx df(t, x) = (r(t) − ∫_t^T dx α(t, x) − ∫_t^T dx σ(t, x)A(t, x))dt

and E[dP(t, T)/P(t, T)] = (r(t) − ∫_t^T dx α(t, x))dt since E[A(t, x)] = 0. Therefore

dP(t, T)/P(t, T) − E[dP(t, T)/P(t, T)] = −(∫_t^T dx σ(t, x)A(t, x))dt —– (1)

Squaring this expression and invoking the result that E[A(t, x)A(t, x′)] = δ(0)D(x, x′; t, T_FR) = D(x, x′; t, T_FR)/dt results in the instantaneous bond price variance

Var[dP(t, T)] = dt P²(t, T) ∫_t^T dx ∫_t^T dx′ σ(t, x) D(x, x′; t, T_FR) σ(t, x′) —– (2)

As an intermediate step, the instantaneous variance of a bond portfolio is considered. For a portfolio of bonds, Π(t) = ∑_{i=1}^{N} Δ_i P(t, T_i), the following results follow directly

dΠ(t) − E[dΠ(t)] = −dt ∑_{i=1}^{N} Δ_i P(t, T_i) ∫_t^{T_i} dx σ(t, x)A(t, x) —– (3)

and

Var[dΠ(t)] = dt ∑_{i=1}^{N} ∑_{j=1}^{N} Δ_i Δ_j P(t, T_i)P(t, T_j) ∫_t^{T_i} dx ∫_t^{T_j} dx′ σ(t, x) D(x, x′; t, T_FR) σ(t, x′) —– (4)

The (residual) variance of the hedged portfolio

Π(t) = P(t, T) + ∑_{i=1}^{N} Δ_i P(t, T_i)

may now be computed in a straightforward manner. For notational simplicity, the bonds P(t, T_i) (being used to hedge the original bond) and P(t, T) are denoted P_i and P respectively. Equation (4) implies the hedged portfolio's variance equals the final result shown below

dt {P² ∫_t^T dx ∫_t^T dx′ σ(t, x) D(x, x′; t, T_FR) σ(t, x′) + 2P ∑_{i=1}^{N} Δ_i P_i ∫_t^T dx ∫_t^{T_i} dx′ σ(t, x) D(x, x′; t, T_FR) σ(t, x′) + ∑_{i=1}^{N} ∑_{j=1}^{N} Δ_i Δ_j P_i P_j ∫_t^{T_i} dx ∫_t^{T_j} dx′ σ(t, x) D(x, x′; t, T_FR) σ(t, x′)} —– (5)

Observe that the residual variance depends on the correlation between forward rates described by the propagator. Ultimately, the effectiveness of the hedge portfolio is an empirical question since perfect hedging is not possible without shorting the original bond. Minimizing the residual variance in equation (5) with respect to the hedge parameters Δi is an application of standard calculus.
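
Since equation (5) is a quadratic form in the hedge parameters, setting its gradient to zero gives a linear system for the Δ_i. The sketch below does this on a discretized maturity grid under an assumed exponential propagator D(x, x′) = e^{−μ|x−x′|} and a flat volatility σ; both choices, and all the numbers, are illustrative stand-ins rather than calibrated quantities.

```python
# Sketch of choosing hedge amounts Delta_i by minimizing the residual variance in
# equation (5). The propagator D(x, x') = exp(-mu*|x - x'|) and the flat volatility
# sigma are illustrative assumptions, not calibrated quantities.
import numpy as np

t0, mu, sigma, r = 0.0, 0.4, 0.01, 0.05
T = 5.0                                  # maturity of the bond being hedged
T_hedge = [2.0, 10.0]                    # maturities of the hedging bonds
P = np.exp(-r * (T - t0))                # flat-curve prices, purely for illustration
P_i = np.exp(-r * (np.array(T_hedge) - t0))

def M(Ta, Tb, n=400):
    """Double integral of sigma(x) D(x, x') sigma(x') over [t0, Ta] x [t0, Tb]."""
    x = np.linspace(t0, Ta, n)
    xp = np.linspace(t0, Tb, n)
    D = np.exp(-mu * np.abs(x[:, None] - xp[None, :]))
    return sigma ** 2 * D.sum() * (x[1] - x[0]) * (xp[1] - xp[0])

# Normal equations from d(residual variance)/d(Delta_i) = 0
N = len(T_hedge)
A = np.array([[P_i[i] * P_i[j] * M(T_hedge[i], T_hedge[j]) for j in range(N)] for i in range(N)])
b = -np.array([P * P_i[i] * M(T, T_hedge[i]) for i in range(N)])
Delta = np.linalg.solve(A, b)
print("hedge amounts Delta:", Delta)
```

The signs of the resulting Δ_i indicate whether each hedging bond is held long or short; the quality of the hedge then depends entirely on how well the assumed propagator matches the empirically observed correlation of forward rates.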

Quantum Field Theory and Evolution of Forward Rates in Quantitative Finance. Note Quote.


Applications of physics to finance are well known, as is the application of quantum mechanics to the theory of option pricing. Hence it is natural to utilize the formalism of quantum field theory to study the evolution of forward rates. Quantum field theory models of the term structure originated with Baaquie. The intuition behind quantum field theory models of the term structure stems from allowing each forward rate maturity to both evolve randomly and be imperfectly correlated with every other maturity. This may also be accomplished by increasing the number of random factors in the original HJM model towards infinity. However, the infinite number of factors in a field theory model are linked via a single function that governs the correlation between forward rate maturities. Thus, instead of estimating additional volatility functions in a multifactor HJM framework, one additional parameter is sufficient for a field theory model to instill imperfect correlation between every forward rate maturity. As the correlation between forward rate maturities approaches unity, field theory models reduce to the standard one-factor HJM model. Therefore, the fundamental difference between finite factor HJM and field theory models is the minimal structure the latter requires to instill imperfect correlation between forward rates. The Heath-Jarrow-Morton framework refers to a class of models that are derived by directly modeling the dynamics of instantaneous forward rates. The central insight of this framework is to recognize that there is an explicit relationship between the drift and volatility parameters of the forward-rate dynamics in a no-arbitrage world. The familiar short-rate models can be derived in the HJM framework; in general, however, HJM models are non-Markovian. As a result, it is not possible to use the PDE-based computational approach for pricing derivatives. Instead, discrete-time HJM models and Monte Carlo methods are often used in practice. Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. Their essential idea is using randomness to solve problems that might be deterministic in principle.

A Lagrangian is introduced to describe the field. The Lagrangian has the advantage over Brownian motion of being able to control fluctuations in the field, hence forward rates, with respect to maturity through the addition of a maturity dependent gradient as detailed in the definition below. The action of the field integrates the Lagrangian over time and when exponentiated and normalized serves as the probability distribution for forward rate curves. The propagator measures the correlation in the field and captures the effect the field at time t and maturity x has on maturity x′ at time t′. In the one factor HJM model, the propagator equals one which allows the quick recovery of one factor HJM results. Previous research has begun with the propagator or “correlation” function for the field instead of deriving this quantity from the Lagrangian. More importantly, the Lagrangian and its associated action generate a path integral that facilitates the solution of contingent claims and hedge parameters. However, previous term structure models have not defined the Lagrangian and are therefore unable to utilize the path integral in their applications. The Feynman path integral, path integral in short, is a fundamental quantity that provides a generating function for forward rate curves. Although crucial for pricing and hedging, the path integral has not appeared in previous term structure models with generalized continuous random processes.

Notation

Let t_0 denote the current time and T the set of forward rate maturities with t_0 ≤ T. The upper bound on the forward rate maturities is the constant T_FR, which constrains the forward rate maturities T to lie within the interval [t_0, t_0 + T_FR].

To illustrate the field theory approach, the original finite factor HJM model is derived using field theory principles in appendix A. In the case of a one-factor model, the derivation does not involve the propagator, as the propagator is identically one when forward rates are perfectly correlated. However, the propagator is non-trivial for field theory models as it governs the imperfect correlation between forward rate maturities. Let A(t, x) be a two-dimensional field driving the evolution of forward rates f(t, x) through time. Following Baaquie, the Lagrangian of the field is defined as

Definition:

The Lagrangian of the field equals

L[A] = −(1/(2T_FR)) {A²(t, x) + (1/μ²)(∂A(t, x)/∂x)²} —– (1)

The definition is not unique; other Lagrangians exist and would imply different propagators. However, the Lagrangian in the definition is sufficient to explain the contribution of the field theory term ∂A(t, x)/∂x that controls field fluctuations in the direction of the forward rate maturity. The constant μ measures the strength of the fluctuations in the maturity direction. The Lagrangian in the definition implies the field is continuous, Gaussian, and Markovian. Forward rates involving the field are expressed below, where the drift and volatility functions satisfy the usual regularity conditions.

∂f(t, x)/∂t = α(t, x) + σ(t, x)A(t, x) —– (2)

The forward rate process in equation (2) incorporates existing term structure research on Brownian sheets, stochastic strings, etc. that have been used in previous continuous term structure models. Note that equation (2) is easily generalized to the K factor case by introducing K independent and identical fields A_i(t, x). Forward rates could then be defined as

∂f(t, x)/∂t = α(t, x) + ∑_{i=1}^{K} σ_i(t, x)A_i(t, x) —– (3)

However, a multifactor HJM model can be reproduced without introducing multiple fields. In fact, under specific correlation functions, the field theory model reduces to a multifactor HJM model without any additional fields to proxy for additional Brownian motions.
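
A minimal sketch of simulating equation (2) on a discrete grid: at each time step the field A(t, ·) is drawn as Gaussian noise whose correlation across maturities is taken to be e^{−μ|x−x′|}. That correlation form, the flat volatility, and the zero drift are purely illustrative assumptions (in particular, no-arbitrage would tie the drift α to the volatility, which is ignored here).

```python
# Euler discretization of equation (2), f(t+dt, x) = f(t, x) + sigma*A(t, x)*dt,
# where A(t, .) is drawn each step as Gaussian noise correlated across maturities
# by exp(-mu*|x - x'|). Correlation form, flat sigma, and zero drift are assumptions.
import numpy as np

rng = np.random.default_rng(3)
dt, n_steps, mu, sigma = 1.0 / 252, 252, 0.5, 0.01
x = np.linspace(0.0, 10.0, 41)                           # maturity grid
C = np.exp(-mu * np.abs(x[:, None] - x[None, :]))        # maturity correlation of the field
L = np.linalg.cholesky(C + 1e-12 * np.eye(len(x)))

f = np.full(len(x), 0.05)                                # flat initial forward curve
for _ in range(n_steps):
    A = (L @ rng.standard_normal(len(x))) / np.sqrt(dt)  # white in time: variance ~ 1/dt
    f = f + sigma * A * dt                               # alpha set to 0 for illustration
print(f[:5])                                             # forward rates one year later
```

With μ → 0 the correlation matrix approaches unity and all maturities move in lockstep, recovering the one-factor HJM limit noted above; larger μ decorrelates distant maturities more quickly.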

Proposition:

Lagrangian of Multifactor HJM

The Lagrangian describing the random process of a K-factor HJM model is given by

L[A] = −(1/2) A(t, x)G⁻¹(t, x, x′)A(t, x′) —– (4)

where

∂f(t, x)/∂t = α(t, x) + A(t, x)

and G⁻¹(t, x, x′) denotes the inverse of the function

G(t, x, x′) = ∑_{i=1}^{K} σ_i(t, x) σ_i(t, x′) —– (5)

The above proposition is an interesting academic exercise to illustrate the parallel between field theory and traditional multifactor HJM models. However, multifactor HJM models have the disadvantages associated with a finite dimensional basis. Therefore, this approach is not pursued in later empirical work. In addition, it is possible for forward rates to be perfectly correlated within a segment of the forward rate curve but imperfectly correlated with forward rates in other segments. For example, one could designate short, medium, and long maturities of the forward rate curve. This situation is not identical to the multifactor HJM model but justifies certain market practices that distinguish between short, medium, and long term durations when hedging. However, more complicated correlation functions would be required; compromising model parsimony and reintroducing the same conceptual problems of finite factor models. Furthermore, there is little economic intuition to justify why the correlation between forward rates should be discontinuous.
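
As a quick numerical companion to equation (5), the sketch below builds G on a maturity grid from a hypothetical two-factor volatility specification and verifies that the resulting matrix is non-negative definite with rank K = 2, which is what lets the Lagrangian in equation (4) reproduce a finite-factor HJM model.

```python
# Build G(t, x, x') = sum_i sigma_i(t, x) * sigma_i(t, x') from equation (5) for a
# hypothetical two-factor (K = 2) volatility specification and verify that, on a
# maturity grid, G is non-negative definite with rank K.
import numpy as np

x = np.linspace(0.25, 10.0, 40)                 # maturity grid
sigmas = [
    lambda x: 0.010 * np.ones_like(x),          # level factor (illustrative)
    lambda x: 0.008 * np.exp(-0.5 * x),         # short-end factor (illustrative)
]

G = sum(np.outer(s(x), s(x)) for s in sigmas)   # K-factor propagator on the grid
eig = np.linalg.eigvalsh(G)
print("min eigenvalue:", eig.min())             # >= 0 up to round-off
print("rank:", np.linalg.matrix_rank(G))        # equals K = 2
```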