Homotopically Truncated Spaces.

The Eckmann–Hilton dual of the Postnikov decomposition of a space is the homology decomposition (or Moore space decomposition) of a space.

A Postnikov decomposition for a simply connected CW-complex X is a commutative diagram

[diagram: the Postnikov tower of X, with maps pn : X → Pn(X) and qn : Pn(X) → Pn−1(X) satisfying qn ◦ pn = pn−1]

such that pn∗ : πr(X) → πr(Pn(X)) is an isomorphism for r ≤ n and πr(Pn(X)) = 0 for r > n. Let Fn be the homotopy fiber of qn. Then the exact sequence

πr+1(PnX) →qn∗ πr+1(Pn−1X) → πr(Fn) → πr(PnX) →qn∗ πr(Pn−1X)

shows that Fn is an Eilenberg–MacLane space K(πnX, n). Constructing Pn+1(X) inductively from Pn(X) requires knowing the nth k-invariant, which is a map of the form kn : Pn(X) → Yn. The space Pn+1(X) is then the homotopy fiber of kn. Thus there is a homotopy fibration sequence

K(πn+1X, n+1) → Pn+1(X) → Pn(X) → Yn

This means that K(πn+1X, n+1) is homotopy equivalent to the loop space ΩYn. Consequently,

πr(Yn) ≅ πr−1(ΩYn) ≅ πr−1(K(πn+1X, n+1)) = πn+1X if r = n+2, and 0 otherwise,

and we see that Yn is a K(πn+1X, n+2). Thus the nth k-invariant is a map kn : Pn(X) → K(πn+1X, n+2).

Note that it induces the zero map on all homotopy groups, but is not necessarily homotopic to the constant map. The original space X is weakly homotopy equivalent to the inverse limit of the Pn(X).
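A standard example, added here for orientation (it is not part of the discussion above): for X = S2 the first interesting k-invariant can be written down explicitly.

    % k-invariant of S^2 (standard computation, stated without proof):
    P_2(S^2) \simeq K(\mathbb{Z},2) \simeq \mathbb{CP}^\infty, \qquad \pi_3(S^2) \cong \mathbb{Z},
    k_2 \in \big[P_2(S^2),\, K(\pi_3 S^2,\, 4)\big] \cong H^4(\mathbb{CP}^\infty;\mathbb{Z}) \cong \mathbb{Z}, \qquad
    k_2 = \iota_2 \smile \iota_2 \ \ (\text{a generator}).

Although k2 induces zero on all homotopy groups, it is essential: if it were null-homotopic, P3(S2) would split as K(Z, 2) × K(Z, 3), and the composite S2 → P3(S2) → K(Z, 3) would then have to induce an isomorphism on π3, which is impossible since every map S2 → K(Z, 3) is null-homotopic (H3(S2) = 0).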

Applying the paradigm of Eckmann–Hilton duality, we arrive at the homology decomposition principle from the Postnikov decomposition principle by changing:

    • the direction of all arrows
    • π to H
    • loops Ω to suspensions S
    • fibrations to cofibrations and fibers to cofibers
    • Eilenberg–MacLane spaces K(G, n) to Moore spaces M(G, n)
    • inverse limits to direct limits
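Applied to the basic exact sequences, the dictionary lines up the fibration sequence above with the cofibration sequence obtained below (this display only juxtaposes formulas that appear elsewhere in the text):

    K(\pi_{n+1}X,\, n+1) \longrightarrow P_{n+1}(X) \longrightarrow P_n(X) \xrightarrow{\ k_n\ } K(\pi_{n+1}X,\, n+2)
    M(H_{n+1}X,\, n) \xrightarrow{\ k_n\ } X_{\le n} \longrightarrow X_{\le n+1} \longrightarrow M(H_{n+1}X,\, n+1)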

A homology decomposition (or Moore space decomposition) for a simply connected CW-complex X is a commutative diagram

[diagram: the homology decomposition of X, with maps jn : X≤n → X and in : X≤n−1 → X≤n satisfying jn ◦ in = jn−1]

such that jn∗ : Hr(X≤n) → Hr(X) is an isomorphism for r ≤ n and Hr(X≤n) = 0 for r > n. Let Cn be the homotopy cofiber of in. Then the exact sequence

Hr(X≤n−1) →in∗ Hr(X≤n) → Hr(Cn) →∂ Hr−1(X≤n−1) →in∗ Hr−1(X≤n)

shows that Cn is a Moore space M(HnX, n). Constructing X≤n+1 inductively from X≤n requires knowing the nth k-invariant, which is a map of the form kn : Yn → X≤n.

The space X≤n+1 is then the homotopy cofiber of kn. Thus there is a homotopy cofibration sequence

Yn →kn X≤n →in+1 X≤n+1 → M(Hn+1X, n+1)

This means that M(Hn+1X, n+1) is homotopy equivalent to the suspension SYn. Consequently,

H̃r(Yn) ≅ H̃r+1(SYn) ≅ H̃r+1(M(Hn+1X, n+1)) = Hn+1X if r = n, and 0 otherwise,

and we see that Yn is an M(Hn+1X, n). Thus the nth k-invariant is a map kn : M(Hn+1X, n) → X≤n.

It induces the zero map on all reduced homology groups, which is a nontrivial statement to make in degree n:

kn∗ : Hn(M(Hn+1X, n)) ≅ Hn+1(X) → Hn(X) ≅ Hn(X≤n).
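A standard example, inserted here for illustration: the homology decomposition of the complex projective plane makes this concrete.

    % X = \mathbb{CP}^2 = S^2 \cup_\eta e^4, with H_0 \cong H_2 \cong H_4 \cong \mathbb{Z}:
    X_{\le 2} = X_{\le 3} = S^2, \qquad
    k_3 : M(H_4 X,\, 3) = S^3 \xrightarrow{\ \eta\ } S^2 = X_{\le 3}, \qquad
    \operatorname{cofiber}(k_3) = X_{\le 4} = \mathbb{CP}^2 .

The Hopf map η induces the zero map on all reduced homology groups (the homology of S3 and of S2 sits in different degrees), yet it is not null-homotopic: otherwise CP2 would be homotopy equivalent to S2 ∨ S4, contradicting the nontrivial cup product in H∗(CP2).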

The original space X is homotopy equivalent to the direct limit of the X≤n.

The Eckmann–Hilton duality paradigm, while a very valuable organizational principle, does have natural limitations. Postnikov approximations possess rather good functorial properties: let pn(X) : X → Pn(X) be a stage-n Postnikov approximation for X, that is, pn(X)∗ : πr(X) → πr(Pn(X)) is an isomorphism for r ≤ n and πr(Pn(X)) = 0 for r > n. If Z is a space with πr(Z) = 0 for r > n, then any map g : X → Z factors up to homotopy uniquely through Pn(X). In particular, if f : X → Y is any map and pn(Y) : Y → Pn(Y) is a stage-n Postnikov approximation for Y, then, taking Z = Pn(Y) and g = pn(Y) ◦ f, there exists, uniquely up to homotopy, a map pn(f) : Pn(X) → Pn(Y) such that

[diagram: the square with top map f : X → Y, vertical maps pn(X) and pn(Y), and bottom map pn(f) : Pn(X) → Pn(Y)]

homotopy commutes. Let X = S2 ∪2 e3 be a Moore space M(Z/2, 2) and let Y = X ∨ S3. If X≤2 and Y≤2 denote stage-2 Moore approximations for X and Y, respectively, then X≤2 = X and Y≤2 = X. We claim that whatever maps i : X≤2 → X and j : Y≤2 → Y one takes, such that i∗ : Hr(X≤2) → Hr(X) and j∗ : Hr(Y≤2) → Hr(Y) are isomorphisms for r ≤ 2, there is always a map f : X → Y that cannot be compressed into the stage-2 Moore approximations, i.e., there is no map f≤2 : X≤2 → Y≤2 such that

[diagram: the square with top map f≤2 : X≤2 → Y≤2, vertical maps i and j, and bottom map f : X → Y]

commutes up to homotopy. We shall employ the universal coefficient exact sequence for homotopy groups with coefficients. If G is an abelian group and M(G, n) a Moore space, then there is a short exact sequence

0 → Ext(G, πn+1Y) →ι [M(G, n), Y] →η Hom(G, πnY) → 0,

where Y is any space and [−,−] denotes pointed homotopy classes of maps. The map η is given by taking the induced homomorphism on πn and using the Hurewicz isomorphism. This universal coefficient sequence is natural in both variables. Hence, the following diagram commutes:

[diagram: the universal coefficient sequences for [X≤2, Y≤2], [X≤2, Y], and [X, Y], connected by the maps induced by i and j]

Here we will briefly write E2(−) = Ext(Z/2, −), so that E2(G) = G/2G, and EY(−) = Ext(−, π3Y). By the Hurewicz theorem, π2(X) ≅ H2(X) ≅ Z/2, π2(Y) ≅ H2(Y) ≅ Z/2, and π2(i) : π2(X≤2) → π2(X), as well as π2(j) : π2(Y≤2) → π2(Y), are isomorphisms, hence the identity. If a homomorphism φ : A → B of abelian groups is onto, then E2(φ) : E2(A) = A/2A → B/2B = E2(B) remains onto. By the Hurewicz theorem, Hur : π3(Y) → H3(Y) = Z is onto. Consequently, the induced map E2(Hur) : E2(π3Y) → E2(H3Y) = E2(Z) = Z/2 is onto. Let ξ ∈ E2(H3Y) be the generator. Choose a preimage x ∈ E2(π3Y) with E2(Hur)(x) = ξ and set [f] = ι(x) ∈ [X, Y]. Suppose there existed a homotopy class [f≤2] ∈ [X≤2, Y≤2] such that

j[f≤2] = i[f].

Then

η≤2[f≤2] = π2(j)η≤2[f≤2] = ηj[f≤2] = ηi[f] = π2(i)η[f] = π2(i)ηι(x) = 0.

Thus there is an element ε ∈ E2(π3Y≤2) such that ι≤2(ε) = [f≤2]. From

ι E2(π3(j))(ε) = j ι≤2(ε) = j[f≤2] = i[f] = i ι(x) = ι EY(π2(i))(x)

we conclude that E2(π3(j))(ε) = x, since ι is injective. By naturality of the Hurewicz map, the square

[diagram: the naturality square of the Hurewicz map for j : Y≤2 → Y, relating π3 and H3]

commutes and induces a commutative diagram upon application of E2(−):

[diagram: the same square after applying E2(−)]

It follows that

ξ = E2(Hur)(x) = E2(Hur) E2(π3(j))(ε) = E2(H3(j)) E2(Hur)(ε) = 0 (note that E2(Hur)(ε) ∈ E2(H3Y≤2) = E2(H3X) = 0),

a contradiction. Therefore, no compression [f≤2] of [f] exists.

Given a cellular map, it is not always possible to adjust the extra structure on the source and on the target of the map so that the map preserves the structures. Thus the category theoretic setup automatically, and in a natural way, singles out those continuous maps that can be compressed into homologically truncated spaces.


Credit Risk Portfolio. Note Quote.


The recent development in credit markets is characterized by a flood of innovative credit risky structures. State-of-the-art portfolios contain derivative instruments ranging from simple, nearly commoditized contracts such as credit default swaps (CDS), to first-generation portfolio derivatives such as first-to-default (FTD) baskets and collateralized debt obligation (CDO) tranches, up to complex structures involving spread options and different asset classes (hybrids). These new structures allow portfolio managers to implement multidimensional investment strategies that seamlessly conform to their market view. Moreover, the exploding liquidity in credit markets makes tactical (short-term) overlay management very cost efficient. While the outperformance potential of active portfolio management will put old-school investment strategies (such as buy-and-hold) under enormous pressure, managing a highly complex credit portfolio requires the introduction of new optimization technologies.

New derivatives allow the decoupling of business processes in the risk management industry (in banking, as well as in asset management), since credit treasury units are now able to manage specific parts of credit risk actively and independently. The traditional feedback loop between risk management and sales, which was needed to structure the desired portfolio characteristics only by selective business acquisition, is now outdated. Strategic cross asset management will gain in importance, as a cost-efficient overlay management can now be implemented by combining liquid instruments from the credit universe.

In any case, all these developments force portfolio managers to adopt an integrated approach. All involved risk factors (spread term structures including curve effects, spread correlations, implied default correlations, and implied spread volatilities) have to be captured and integrated into appropriate risk figures. We take a look at constant proportion debt obligations (CPDOs) as a leveraged exposure to credit indices, constant proportion portfolio insurance (CPPI) as a capital-guaranteed instrument, CDO tranches to tap the correlation market, and equity futures to include exposure to stock markets in the portfolio.

For an integrated credit portfolio management approach, it is of central importance to aggregate risks over various instruments with different payoff characteristics. In this chapter, we will see that a state-of-the-art credit portfolio contains not only linear risks (CDS and CDS index contracts) but also nonlinear risks (such as FTD baskets, CDO tranches, or credit default swaptions). From a practitioner’s point of view there is a simple solution for this risk aggregation problem, namely delta-gamma management. In such a framework, one approximates the risks of all instruments in a portfolio by their first- and second-order sensitivities and aggregates these sensitivities to the portfolio level. Clearly, for a proper aggregation of risk factors, one has to take the correlation of these risk factors into account. However, for credit risky portfolios, a simplistic sensitivity approach will be inappropriate, as the following characteristics of credit portfolio risks show:

  • Credit risky portfolios usually involve a large number of reference entities. Hence, one has to take a large number of sensitivities into account. However, this is a phenomenon that is already well known from the management of stock portfolios. The solution is to split the risk of each constituent into a systematic risk (e.g., a beta with respect to a portfolio hedging tool) and an alpha component which reflects the idiosyncratic part of the risk (see the sketch after this list).

  • However, in contrast to equities, credit risk is not one dimensional (i.e., one risky security per issuer) but at least two dimensional (i.e., a set of instruments with different maturities). This is reflected in the fact that there is a whole term structure of credit spreads. Moreover, taking also different subordination levels (with different average recovery rates) into account, credit risk becomes a multidimensional object for each reference entity.
  • While most market risks can be satisfactorily approximated by diffusion processes, for credit risk the consideration of events (i.e., jumps) is imperative. The most apparent reason for this is that the dominating element of credit risk is event risk. However, in a market perspective, there are more events than the ultimate default event that have to be captured. Since one of the main drivers of credit spreads is the structure of the underlying balance sheet, a change (or the risk of a change) in this structure usually triggers a large movement in credit spreads. The best-known example for such an event is a leveraged buyout (LBO).
  • For credit market players, correlation is a very special topic, as a central pricing parameter is named implied correlation. However, there are two kinds of correlation parameters that impact a credit portfolio: price correlation and event correlation. While the former simply deals with the dependency between two price (i.e., spread) time series under normal market conditions, the latter aims at describing the dependency between two price time series in case of an event. In its simplest form, event correlation can be seen as default correlation: what is the risk that company B defaults given that company A has defaulted? While it is already very difficult to model this default correlation, for practitioners event correlation is even more complex, since there are other events than just the default event, as already mentioned above. Hence, we can modify the question above: what is the risk that spreads of company B blow out given that spreads of company A have blown out? In addition, the notion of event correlation can also be used to capture the risk in capital structure arbitrage trades (i.e., trading stock versus bonds of one company). In this example, the question might be: what is the risk that the stock price of company A jumps given that its bond spreads have blown out? The complicated task in this respect is that we do not only have to model the joint event probability but also the direction of the jumps. A brief example highlights why this is important. In case of a default event, spreads will blow out accompanied by a significant drop in the stock price. This means that there is a negative correlation between spreads and stock prices. However, in case of an LBO event, spreads will blow out (reflecting the deteriorated credit quality because of the higher leverage), while stock prices rally (because of the fact that the acquirer usually pays a premium to buy a majority of outstanding shares).
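As referenced in the first bullet above, here is a minimal sketch, in Python, of the beta/alpha split of single-name spread moves against an index hedge; the data series, parameters, and the covariance-based estimator are illustrative assumptions, not part of the quoted text.

    import numpy as np

    # Illustrative sketch (synthetic data): split daily single-name spread changes
    # into a systematic component (beta versus a hedging index) and an
    # idiosyncratic alpha residual.
    rng = np.random.default_rng(42)
    n_days = 250
    d_index = rng.normal(0.0, 3.0, n_days)                 # index spread changes (bp/day)
    d_name = 1.4 * d_index + rng.normal(0.0, 1.5, n_days)  # synthetic single-name changes

    cov = np.cov(d_name, d_index)            # 2x2 sample covariance matrix
    beta = cov[0, 1] / cov[1, 1]             # beta = Cov(name, index) / Var(index)
    residual = d_name - beta * d_index       # idiosyncratic (alpha) part

    print(f"estimated beta: {beta:.2f}")
    print(f"idiosyncratic vol: {residual.std(ddof=1):.2f} bp/day")

In practice the same regression would be run per reference entity against the chosen hedging instrument (e.g., a CDS index), and only the residual risk would require idiosyncratic treatment.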

These characteristics show that a simple sensitivity approach – e.g., calculating and tabulating all deltas and gammas and letting a portfolio manager play with them – is not appropriate. Further risk aggregation (e.g., beta management) and risk factors that capture the event risk are needed. For the latter, a quick solution is the so-called instantaneous default loss (IDL). The IDL expresses the loss incurred in a credit risk instrument in case of a credit event. For a single-name CDS, this is simply the loss given default (LGD). However, for a portfolio derivative such as a mezzanine tranche, this figure does not directly refer to the LGD of the defaulted item, but to the changed subordination of the tranche because of the default. Hence, this figure allows one to aggregate various instruments with respect to credit events.
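To make the aggregation and the IDL figure concrete, here is a hedged numerical sketch in Python; the sensitivities, notionals, and recovery assumptions are invented for illustration, and tranche subordination effects are deliberately not modelled.

    import numpy as np

    # Delta-gamma approximation of portfolio P&L for a joint spread move ds (in bp):
    #   P&L ≈ delta·ds + 0.5 · ds·Gamma·ds
    def delta_gamma_pnl(delta, gamma, ds):
        return delta @ ds + 0.5 * ds @ gamma @ ds

    # Three reference entities with assumed sensitivities (currency units per bp).
    delta = np.array([-1500.0, -900.0, -2200.0])
    gamma = np.array([[2.0, 0.3, 0.1],
                      [0.3, 1.5, 0.2],
                      [0.1, 0.2, 3.0]])

    # One correlated spread-widening scenario, e.g. drawn from a fitted covariance.
    ds = np.array([25.0, 10.0, 40.0])
    print("delta-gamma P&L:", delta_gamma_pnl(delta, gamma, ds))

    # Instantaneous default loss (IDL): loss if a given entity defaults right now.
    # For a single-name CDS this is notional * (1 - recovery); for a mezzanine
    # tranche it would instead reflect the lost subordination (not modelled here).
    notional = np.array([10e6, 5e6, 10e6])
    recovery = np.array([0.40, 0.40, 0.25])
    idl = notional * (1.0 - recovery)
    print("IDL per name:", idl)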

Energy Trading: Asian Options.


Consider a risky asset (stock, commodity, a unit of energy) with the price S(t), where t ∈ [0, T], for a given T > 0. Consider an option with the payoff

Fu = Φ(u(·), S(·)) —– (1)

This payoff depends on a control process u(·) that is selected by an option holder from a certain class of admissible controls U. The mapping Φ : U × S → R is given; S is the set of paths of S(t). All processes from U have to be adapted to the current information flow, i.e., adapted to some filtration Ft that describes this information flow. We call the corresponding options controlled options.

For simplicity, we assume that all options give the right to the corresponding payoff of the amount Fu in cash rather than the right to buy or sell stock or commodities.

Consider a risky asset with the price S(t). Let T > 0 be given, and let g : R → R and f : R × [0, T] → R be some functions. Consider an option with the payoff at time T

Fu = g(∫0T u(t) f(S(t), t)dt) —– (2)

Here u(t) is the control process that is selected by the option holder. The process u(t) has to be adapted to the filtration Ft describing the information flow. In addition, it has to be selected such that

∫0T u(t)dt = 1

A possible modification is the option with the payoff

Fu = ∫0T u(t) f(S(t), t)dt + (1 – ∫0T u(t)dt) f(S(T), T)

In this case, any unused part of the control budget is accumulated and applied at the terminal time. Let us consider some examples of possible selections of f and g. We denote x+ = max(0, x).

Important special cases are the options with g(x) = x, g(x) = (x − K)+, g(x) = (K − x)+,

g(x) = min(M, x), where M > 0 is the cap for benefits, and with

f(x, t) = x, f(x, t) = (x − K)+, f(x, t) = (K − x)+ —– (3)

or

f(x, t) = er(T−t)(x − K)+, f(x, t) = er(T−t)(K − x)+ —– (4)

where K > 0 is given and where r > 0 is the risk-free rate. Options (3) correspond to the case when the payments are made at current time t ∈ [0, T], and options (4) correspond to the case when the payment is made at terminal time T. This takes into account accumulation of interest up to time T on any payoff.
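As a minimal sketch, assuming geometric Brownian motion for S(t) and the simplest admissible control u(t) ≡ 1/T (which satisfies the constraint ∫0T u(t)dt = 1), the payoff (2) with f as in (4) and g(x) = x can be estimated by Monte Carlo as follows; all parameter values are illustrative.

    import numpy as np

    # Sketch: Monte Carlo value of the controlled payoff (2) with g(x) = x,
    # f(x, t) = e^{r(T-t)} (x - K)^+ as in (4), and the constant control u(t) = 1/T.
    # S(t) is assumed to follow geometric Brownian motion under the pricing measure.
    S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.25, 1.0
    n_steps, n_paths = 250, 20000
    dt = T / n_steps

    rng = np.random.default_rng(0)
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), np.cumsum(log_increments, axis=1)]))

    t = np.linspace(0.0, T, n_steps + 1)
    u = np.full(n_steps + 1, 1.0 / T)                 # constant weight, integrates to 1 over [0, T]
    f = np.exp(r * (T - t)) * np.maximum(S - K, 0.0)  # payments accrued to time T, as in (4)

    integrand = u * f                                 # u(t_i) f(S(t_i), t_i) on the time grid
    payoff = integrand[:, :-1].sum(axis=1) * dt       # left Riemann sum for the time integral
    price = np.exp(-r * T) * payoff.mean()            # discount the time-T payoff back to 0
    print(f"Monte Carlo price with u(t) = 1/T: {price:.4f}")

A non-constant u(t), still integrating to one, lets the holder overweight times when (S(t) − K)+ is expected to be large, which is exactly the optimization freedom described in the text.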

The option with payoff (2) with f(x, t) ≡ x represents a generalization of the Asian option in which the weight u(t) is selected by the holder. It should be noted that an Asian option, also called an average option, is an option whose payoff depends on the average price of the underlying asset over a certain period of time, as opposed to its price at maturity. The option with payoff (2) with g(x) ≡ x represents a limit version of multi-exercise options, when the distribution of exercise times approaches a continuous distribution. An additional restriction |u(t)| ≤ const would represent the continuous analog of the requirement for multi-exercise options that exercise times must be at some distance from each other. In the analog of the model without this condition, strategies may approach delta functions.

These options can be used, for instance, for energy trading, with u(t) representing the quantity of energy purchased at time t for the fixed price K when the market price is above K. In this case, the option represents a modification of the multi-exercise call option with continuously distributed payoff time. For this model, the total amount of energy that can be purchased is limited per option. Therefore, the option holder may prefer to postpone the purchase if she expects better opportunities in the future.