Whitehead and Peirce’s Synchronicity with Hegel’s Capital Error. Thought of the Day 97.0


The focus on experience ensures that Whitehead’s metaphysics is grounded. Otherwise the narrowness of approach would only culminate in sterile measurement. This becomes especially evident with regard to the science of history. Whitehead gives a lucid example of such ‘sterile measurement’ lacking the immediacy of experience.

Consider, for example, the scientific notion of measurement. Can we elucidate the turmoil of Europe by weighing its dictators, its prime ministers, and its editors of newspapers? The idea is absurd, although some relevant information might be obtained. (Alfred North Whitehead – Modes of Thought)

The wealth of experience leaves us with the problem of how to cope with it. Selection of data is required. This selection is done by a value judgment – the judgment of importance. Although Whitehead opposes the dichotomy of the two notions ‘importance’ and ‘matter of fact’, it is still necessary to distinguish grades and types of importance, which enables us to structure our experience, to focus it. This is very similar to hermeneutical theories in Schleiermacher, Gadamer and Habermas: the horizon of understanding structures the data. Therefore, not only do we need judgment; the process of concrescence also implicitly requires an aim. Whitehead explains that

By this term ‘aim’ is meant the exclusion of the boundless wealth of alternative potentiality and the inclusion of that definite factor of novelty which constitutes the selected way of entertaining those data in that process of unification.

The other idea that underlies experience is “matter of fact.”

There are two contrasted ideas which seem inevitably to underlie all width of experience, one of them is the notion of importance, the sense of importance, the presupposition of importance. The other is the notion of matter of fact. There is no escape from sheer matter of fact. It is the basis of importance; and importance is important because of the inescapable character of matter of fact.

By stressing the “alien character” of feeling that enters into the privately felt feeling of an occasion, Whitehead is able to distinguish the responsive and the supplemental stages of concrescence. The responsive stage is a purely receptive phase; the supplemental stage integrates the former ‘alien elements’ into a unity of feeling. The alien factor in the experiencing subjects saves Whitehead’s concept from being pure Spirit (Geist) in a Hegelian sense. There are more similarities between Hegelian thinking and Whitehead’s thought than his own comments on Hegel may suggest. But his major criticism could probably be stated with Peirce, who wrote that

The capital error of Hegel which permeates his whole system in every part of it is that he almost altogether ignores the Outward clash. (The Essential Peirce 1)

Whitehead refers to that clash as matter of fact, though even there one has to keep in mind that matter of fact is itself an abstraction.

Matter of fact is an abstraction, arrived at by confining thought to purely formal relations which then masquerade as the final reality. This is why science, in its perfection, relapses into the study of differential equations. The concrete world has slipped through the meshes of the scientific net.

Whitehead clearly keeps the notion of prehension in his late writings as developed in Process and Reality. Just to give one example, 

I have, in my recent writings, used the word ‘prehension’ to express this process of appropriation. Also I have termed each individual act of immediate self-enjoyment an ‘occasion of experience’. I hold that these unities of existence, these occasions of experience, are the really real things which in their collective unity compose the evolving universe, ever plunging into the creative advance. 

Process needs an aim in Process and Reality as much as in Modes of Thought:

We must add yet another character to our description of life. This missing characteristic is ‘aim’. By this term ‘aim’ is meant the exclusion of the boundless wealth of alternative potentiality, and the inclusion of that definite factor of novelty which constitutes the selected way of entertaining those data in that process of unification. The aim is at that complex of feeling which is the enjoyment of those data in that way. ‘That way of enjoyment’ is selected from the boundless wealth of alternatives. It has been aimed at for actualization in that process.


Credit Default Swaps.


Credit default swaps are the most liquid instruments in the credit derivatives markets, accounting for nearly half of the total outstanding notional worldwide, and up to 85% of total outstanding notional of contracts with reference to emerging market issuers. In a CDS, the protection buyer pays a premium to the protection seller in exchange for a contingent payment in case a credit event involving a reference security occurs during the contract period.


The premium (default swap spread) reflects the credit risk of the bond issuer, and is usually quoted as a spread over a reference rate such as LIBOR or the swap rate, to be paid either up front, quarterly, or semiannually. The contingent payment can be settled either by physical delivery of the reference security or an equivalent asset, or in cash. With physical settlement, the protection buyer delivers the reference security (or equivalent one) to the protection seller and receives the par amount. With cash settlement, the protection buyer receives a payment equal to the difference between par and the recovery value of the reference security, the latter determined from a dealer poll or from price quote services. Contracts are typically subject to physical settlement. This allows protection sellers to benefit from any rebound in prices caused by the rush to purchase deliverable bonds by protection buyers after the realization of the credit event.
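
To make the two settlement conventions concrete, here is a minimal Python sketch of the protection buyer’s receipt under cash and physical settlement; the notional, recovery rate, and post-default bond price are hypothetical figures chosen only for illustration.

```python
# Minimal sketch of the CDS settlement mechanics described above.
# All inputs are hypothetical figures, not market data.

def cash_settlement(notional: float, recovery_rate: float) -> float:
    """Protection buyer receives par minus the recovery value of the reference security."""
    return notional * (1.0 - recovery_rate)

def physical_settlement(notional: float, deliverable_cost: float) -> float:
    """Buyer delivers the defaulted bond (bought at its market price) and receives par;
    the economic gain is par minus the cost of the deliverable."""
    return notional - deliverable_cost

if __name__ == "__main__":
    notional = 10_000_000          # hypothetical contract notional
    recovery_rate = 0.40           # recovery value as a fraction of par (illustrative)
    deliverable_cost = 3_800_000   # cost of buying deliverable bonds after the credit event (illustrative)

    print("cash settlement payment: ", cash_settlement(notional, recovery_rate))
    print("physical settlement gain:", physical_settlement(notional, deliverable_cost))
```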

In mature markets, trading is highly concentrated on 5-year contracts, and to a certain extent market participants consider these contracts a ‘‘commodity.’’ Usual contract maturities are 1, 2, 5, and 10 years. The coexistence of markets for default swaps and bonds raises the issue of whether prices in the former merely mirror market expectations already reflected in bond prices. If credit risk were the only factor affecting the CDS spread, with credit risk characterized by the probability of default and the expected loss given default, the CDS spread and the bond spread should be approximately equal, as a portfolio of a default swap contract and a defaultable bond is essentially a risk-free asset.

However, market frictions and some embedded options in the CDS contract, such as the cheapest-to-deliver option, cause CDS spreads and bond spreads to diverge. The difference between these two spreads is referred to as the default swap basis. The default swap basis is positive when the CDS spread trades at a premium relative to the bond spread, and negative when the CDS spread trades at a discount.
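
A minimal sketch of the basis as just defined (CDS spread minus bond spread); the spread levels are invented for illustration.

```python
# Default swap basis as defined above: CDS spread minus bond spread (in basis points).
# Spread levels are invented for illustration.

def default_swap_basis(cds_spread_bp: float, bond_spread_bp: float) -> float:
    return cds_spread_bp - bond_spread_bp

for cds_bp, bond_bp in [(250.0, 230.0), (180.0, 205.0)]:
    basis = default_swap_basis(cds_bp, bond_bp)
    label = "positive (CDS at a premium)" if basis > 0 else "negative (CDS at a discount)"
    print(f"CDS {cds_bp:.0f} bp vs bond {bond_bp:.0f} bp -> basis {basis:+.0f} bp, {label}")
```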

Several factors contribute to the widening of the basis, either by widening the CDS spread or tightening the bond spread. Factors that tend to widen the CDS spread include: (1) the cheapest-to-deliver option, since protection sellers must charge a higher premium to account for the possibility of being delivered a less valuable asset in physically settled contracts; (2) the issuance of new bonds and/or loans, as increased hedging by market makers in the bond market pushes up the price of protection, and the number of potential cheapest-to-deliver assets increases; (3) the ability to short default swaps rather than bonds when the bond issuer’s credit quality deteriorates, leading to increased protection buying in the market; and (4) bond prices trading less than par, since the protection seller is guaranteeing the recovery of the par amount rather than the lower current bond price.

Factors that tend to tighten bond spreads include: (1) bond clauses allowing the coupon to step up if the issue is downgraded, as they provide additional benefits to the bondholder not enjoyed by the protection buyer; and (2) the zero lower bound for default swap premiums, which keeps the basis positive when bond issuers can trade below the LIBOR curve, as is often the case for higher rated issues.

Similarly, factors that contribute to the tightening of the basis include: (1) existence of greater counterparty risk to the protection buyer than to the protection seller, so buyers are compensated by paying less than the bond spread; (2) the removal of funding risk for the protection seller, as selling protection is equivalent to funding the asset at LIBOR. Less risk demands less compensation and hence, a tightening in the basis; and (3) the increased supply of structured products such as CDS-backed collateralized debt obligations (CDOs), as they increase the supply of protection in the market.

Movements in the basis depend also on whether the market is mainly dominated by high cost investors or low cost investors. A long credit position, i.e., holding the credit risk, can be obtained either by selling protection or by financing the purchase of the risky asset. The CDS remains a viable alternative if its premium does not exceed the difference between the asset yield and the funding cost. The higher the funding cost, the lower the premium and hence, the tighter the basis. Thus, when the market share of low cost investors is relatively high and the average funding costs are below LIBOR, the basis tends to widen. Finally, relative liquidity also plays a role in determining whether the basis narrows or widens, as investors need to be compensated by wider spreads in the less liquid market. Hence, if the CDS market is more liquid than the corresponding underlying bond market (cash market), the basis will narrow and vice versa.
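
The funding-cost argument in the preceding paragraph amounts to a one-line comparison: the CDS remains viable only while its premium does not exceed the asset yield minus the investor’s funding cost. A hedged sketch with made-up numbers:

```python
# Viability condition as stated in the text above: the CDS remains a viable
# alternative if its premium does not exceed (asset yield - funding cost).
# All numbers are illustrative.

def cds_viable(cds_premium_bp: float, asset_yield_bp: float, funding_cost_bp: float) -> bool:
    return cds_premium_bp <= asset_yield_bp - funding_cost_bp

asset_yield_bp = 220.0   # spread earned on the risky asset (illustrative)
cds_premium_bp = 190.0   # premium earned for selling protection (illustrative)

for investor, funding_cost_bp in [("low-cost investor (sub-LIBOR funding)", -10.0),
                                  ("high-cost investor", 60.0)]:
    print(investor, "-> CDS viable:", cds_viable(cds_premium_bp, asset_yield_bp, funding_cost_bp))
```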

Accelerating the Synthetic Credit. Thought of the Day 96.0


The structural change in the structured credit universe continues to accelerate. While the market for synthetic structures is already pretty well established, many real money accounts remain outsiders owing to regulatory hurdles and technical limitations, e.g., to participate in the correlation market. Therefore, banks are continuously establishing new products to provide real money accounts with access to the structured market, with constant proportion debt obligations (CPDOs) having recently been popular. Against this background, three vehicles which offer easy access to structured products for these investors have gained in importance: CDPCs (credit derivatives product companies), PCVs (permanent capital vehicles), and SIVs (structured investment vehicles).

A CDPC is a rated company which buys credit risk via all types of credit derivative instruments, primarily super senior tranches, and sells this risk to investors via preferred shares (equity) or subordinated notes (debt). Hence, the vehicle uses super senior risk to create equity risk. The investment strategy is a buy-and-hold approach, while the aim is to offer high returns to investors and keep default risk limited. Investors are primarily exposed to rating migration risk, to mark-to-market risk, and, finally, to the capability of the external manager. The rating agencies assign, in general, an AAA-rating to the business model of the CDPC, which is a bankruptcy-remote vehicle (special purpose vehicle [SPV]). The business models of specific CDPCs differ from each other in terms of investments and thresholds given to the manager. The preferred asset classes CDPCs invest in are predominantly single-name CDS (credit default swaps), bespoke synthetic tranches, ABS (asset-backed securities), and all kinds of CDOs (collateralized debt obligations). So far, CDPCs’ main investments are allocated to corporate credits, but CDPCs are extending their universe to ABS and CDO products, which provide further opportunities in an overall tight spread environment. The implemented leverage is given through the vehicle and can be in the range of 15–60x. On average, the return target was typically around a 15% return on equity, paid in the form of dividends to the shareholders.
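
As a back-of-the-envelope illustration of how the quoted leverage range can generate a return target of that order, the sketch below multiplies a hypothetical super senior premium by the vehicle’s leverage; it ignores costs, defaults, and mark-to-market effects, and the 25 bp premium is an assumption, not a market quote.

```python
# Back-of-the-envelope CDPC economics as described above: equity investors earn
# roughly (leverage x super senior premium) on their capital, gross of costs,
# defaults and mark-to-market effects. The 25 bp premium is an assumption.

def gross_return_on_equity(leverage: float, premium_bp: float) -> float:
    return leverage * premium_bp / 10_000.0

for leverage in (15, 30, 60):   # leverage range quoted in the text
    roe = gross_return_on_equity(leverage, premium_bp=25.0)
    print(f"leverage {leverage:>2}x at 25 bp premium -> gross ROE {roe:.1%}")
```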

In contrast to CDPCs, PCVs do not invest in the top of the capital structure, but in equity pieces (mostly CDO equity pieces). The leverage is not implemented in the vehicle itself as it is directly related to the underlying instruments. PCVs are also set up as SPVs (special purpose vehicles) and listed on a stock exchange. They use the equity they receive from investors to purchase the assets, while the return on their investment is allocated to the shareholders via dividends. The target return amounts, in general, to around 10%. The portfolio is managed by an external manager and is marked-to-market. The share price of the company depends on the NAV (net asset value) of the portfolio and on the expected dividend payments.

In general, an SIV invests in the top of the capital structure of structured credits and ABS, in line with CDPCs. In addition, SIVs also buy subordinated debt of financial institutions, and the portfolio is marked-to-market. SIVs are leveraged credit investment companies and bankruptcy remote. The vehicle typically issues investment-grade rated commercial paper, MTNs (medium term notes), and capital notes to its investors. The leverage depends on the character of the issued note and the underlying assets, ranging from 3 to 5 (bank loans) up to 14 (structured credits).

Homotopically Truncated Spaces.

The Eckmann–Hilton dual of the Postnikov decomposition of a space is the homology decomposition (or Moore space decomposition) of a space.

A Postnikov decomposition for a simply connected CW-complex X is a commutative diagram

[diagram: the Postnikov tower of X, with approximations pn : X → Pn(X) and connecting maps qn : Pn(X) → Pn−1(X)]

such that pn∗ : πr(X) → πr(Pn(X)) is an isomorphism for r ≤ n and πr(Pn(X)) = 0 for r > n. Let Fn be the homotopy fiber of qn. Then the exact sequence

πr+1(PnX) →qn∗ πr+1(Pn−1X) → πr(Fn) → πr(PnX) →qn∗ πr(Pn−1X)

shows that Fn is an Eilenberg–MacLane space K(πnX, n). Constructing Pn+1(X) inductively from Pn(X) requires knowing the nth k-invariant, which is a map of the form kn : Pn(X) → Yn. The space Pn+1(X) is then the homotopy fiber of kn. Thus there is a homotopy fibration sequence

K(πn+1X, n+1) → Pn+1(X) → Pn(X) → Yn

This means that K(πn+1X, n+1) is homotopy equivalent to the loop space ΩYn. Consequently,

πr(Yn) ≅ πr−1(ΩYn) ≅ πr−1(K(πn+1X, n+1)) = πn+1X for r = n+2, and 0 otherwise,

and we see that Yn is a K(πn+1X, n+2). Thus the nth k-invariant is a map kn : Pn(X) → K(πn+1X, n+2).

Note that it induces the zero map on all homotopy groups, but is not necessarily homotopic to the constant map. The original space X is weakly homotopy equivalent to the inverse limit of the Pn(X).
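
Since the commutative diagram itself is not reproduced above, the following minimal LaTeX sketch records the standard arrangement of the Postnikov tower, using only the maps pn and qn already named in the text; it is offered as a reading aid, not as the original figure.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Minimal sketch (reading aid) of the Postnikov tower described above:
% p_n : X -> P_n(X) are the stage-n approximations, q_n : P_n(X) -> P_{n-1}(X)
% the connecting maps, and q_n \circ p_n \simeq p_{n-1} for every n.
\[
\cdots \longrightarrow P_{n+1}(X) \xrightarrow{\;q_{n+1}\;} P_{n}(X)
       \xrightarrow{\;q_{n}\;} P_{n-1}(X) \longrightarrow \cdots
       \longrightarrow P_{1}(X),
\qquad
p_{n}\colon X \to P_{n}(X), \quad q_{n}\circ p_{n} \simeq p_{n-1}.
\]
\end{document}
```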

Applying the paradigm of Eckmann–Hilton duality, we arrive at the homology decomposition principle from the Postnikov decomposition principle by changing:

    • the direction of all arrows
    • π to H
    • loops Ω to suspensions S
    • fibrations to cofibrations and fibers to cofibers
    • Eilenberg–MacLane spaces K(G, n) to Moore spaces M(G, n)
    • inverse limits to direct limits

A homology decomposition (or Moore space decomposition) for a simply connected CW-complex X is a commutative diagram

[diagram: the homology decomposition tower of X, with maps jn : X≤n → X and in : X≤n−1 → X≤n]

such that jn∗ : Hr(X≤n) → Hr(X) is an isomorphism for r ≤ n and Hr(X≤n) = 0 for r > n. Let Cn be the homotopy cofiber of in. Then the exact sequence

Hr(X≤n−1) →in∗ Hr(X≤n) → Hr(Cn) → Hr−1(X≤n−1) →in∗ Hr−1(X≤n)

shows that Cn is a Moore space M(HnX, n). Constructing X≤n+1 inductively from X≤n requires knowing the nth k-invariant, which is a map of the form kn : Yn → X≤n.

The space X≤n+1 is then the homotopy cofiber of kn. Thus there is a homotopy cofibration sequence

Yn →kn X≤n →in+1 X≤n+1 → M(Hn+1X, n+1)

This means that M(Hn+1X, n+1) is homotopy equivalent to the suspension SYn. Consequently,

H̃r(Yn) ≅ H̃r+1(SYn) ≅ H̃r+1(M(Hn+1X, n+1)) = Hn+1X for r = n, and 0 otherwise,

and we see that Yn is an M(Hn+1X, n). Thus the nth k-invariant is a map kn : M(Hn+1X, n) → X≤n.

It induces the zero map on all reduced homology groups, which is a nontrivial statement to make in degree n:

kn∗ : Hn(M(Hn+1X, n)) ≅ Hn+1(X) → Hn(X) ≅ Hn(X≤n)

The original space X is homotopy equivalent to the direct limit of the X≤n. The Eckmann–Hilton duality paradigm, while being a very valuable organizational principle, does have its natural limitations. Postnikov approximations possess rather good functorial properties: Let pn(X) : X → Pn(X) be a stage-n Postnikov approximation for X, that is, pn(X) : πr(X) → πr(Pn(X)) is an isomorphism for r ≤ n and πr(Pn(X)) = 0 for r > n. If Z is a space with πr(Z) = 0 for r > n, then any map g : X → Z factors up to homotopy uniquely through Pn(X). In particular, if f : X → Y is any map and pn(Y) : Y → Pn(Y) is a stage-n Postnikov approximation for Y, then, taking Z = Pn(Y) and g = pn(Y) ◦ f, there exists, uniquely up to homotopy, a map pn(f) : Pn(X) → Pn(Y) such that

[diagram: the square with top map f : X → Y, vertical maps pn(X) : X → Pn(X) and pn(Y) : Y → Pn(Y), and bottom map pn(f) : Pn(X) → Pn(Y)]

homotopy commutes. Let X = S2 ∪2 e3 (the 2-sphere with a 3-cell attached by a map of degree 2) be a Moore space M(Z/2, 2) and let Y = X ∨ S3. If X≤2 and Y≤2 denote stage-2 Moore approximations for X and Y, respectively, then X≤2 = X and Y≤2 = X. We claim that, no matter which maps i : X≤2 → X and j : Y≤2 → Y one takes such that i : Hr(X≤2) → Hr(X) and j : Hr(Y≤2) → Hr(Y) are isomorphisms for r ≤ 2, there is always a map f : X → Y that cannot be compressed into the stage-2 Moore approximations, i.e. there is no map f≤2 : X≤2 → Y≤2 such that

[diagram: the square with top map f≤2 : X≤2 → Y≤2, vertical maps i : X≤2 → X and j : Y≤2 → Y, and bottom map f : X → Y]

commutes up to homotopy. We shall employ the universal coefficient exact sequence for homotopy groups with coefficients. If G is an abelian group and M(G, n) a Moore space, then there is a short exact sequence

0 → Ext(G, πn+1Y) →ι [M(G, n), Y] →η Hom(G, πnY) → 0,

where Y is any space and [−,−] denotes pointed homotopy classes of maps. The map η is given by taking the induced homomorphism on πn and using the Hurewicz isomorphism. This universal coefficient sequence is natural in both variables. Hence, the following diagram commutes:

[diagram: naturality of the universal coefficient sequence applied to the maps induced by i and j]

Here we will briefly write E2(−) = Ext(Z/2, −) so that E2(G) = G/2G, and EY(−) = Ext(−, π3Y). By the Hurewicz theorem, π2(X) ≅ H2(X) ≅ Z/2, π2(Y) ≅ H2(Y) ≅ Z/2, and π2(i) : π2(X≤2) → π2(X), as well as π2(j) : π2(Y≤2) → π2(Y), are isomorphisms, hence the identity. If a homomorphism φ : A → B of abelian groups is onto, then E2(φ) : E2(A) = A/2A → B/2B = E2(B) remains onto. By the Hurewicz theorem, Hur : π3(Y) → H3(Y) = Z is onto. Consequently, the induced map E2(Hur) : E2(π3Y) → E2(H3Y) = E2(Z) = Z/2 is onto. Let ξ ∈ E2(H3Y) be the generator. Choose a preimage x ∈ E2(π3Y), E2(Hur)(x) = ξ, and set [f] = ι(x) ∈ [X,Y]. Suppose there existed a homotopy class [f≤2] ∈ [X≤2, Y≤2] such that

j[f≤2] = i[f].

Then

η≤2[f≤2] = π2(j)η≤2[f≤2] = ηj[f≤2] = ηi[f] = π2(i)η[f] = π2(i)ηι(x) = 0.

Thus there is an element ε ∈ E2(π3Y≤2) such that ι≤2(ε) = [f≤2]. From ιE2(π3(j))(ε) = jι≤2(ε) = j[f≤2] = i[f] = iι(x) = ιEY(π2(i))(x)

we conclude that E2π3(j)(ε) = x since ι is injective. By naturality of the Hurewicz map, the square

[diagram: the square with π3(j) : π3(Y≤2) → π3(Y) on top, Hurewicz maps as vertical arrows, and H3(j) : H3(Y≤2) → H3(Y) on the bottom]

commutes and induces a commutative diagram upon application of E2(−):

[diagram: the same square after applying E2(−) to every group and map]

It follows that

ξ = E2(Hur)(x) = E2(Hur)E2(π3(j))(ε) = E2(H3(j))E2(Hur)(ε) = 0 (since H3(Y≤2) = H3(X) = 0),

a contradiction. Therefore, no compression [f≤2] of [f] exists.

Given a cellular map, it is not always possible to adjust the extra structure on the source and on the target of the map so that the map preserves the structures. Thus the category theoretic setup automatically, and in a natural way, singles out those continuous maps that can be compressed into homologically truncated spaces.

Credit Risk Portfolio. Note Quote.


The recent development in credit markets is characterized by a flood of innovative credit risky structures. State-of-the-art portfolios contain derivative instruments ranging from simple, nearly commoditized contracts such as credit default swap (CDS), to first- generation portfolio derivatives such as first-to-default (FTD) baskets and collateralized debt obligation (CDO) tranches, up to complex structures involving spread options and different asset classes (hybrids). These new structures allow portfolio managers to implement multidimensional investment strategies, which seamlessly conform to their market view. Moreover, the exploding liquidity in credit markets makes tactical (short-term) overlay management very cost efficient. While the outperformance potential of an active portfolio management will put old-school investment strategies (such as buy-and-hold) under enormous pressure, managing a highly complex credit portfolio requires the introduction of new optimization technologies.

New derivatives allow the decoupling of business processes in the risk management industry (in banking, as well as in asset management), since credit treasury units are now able to manage specific parts of credit risk actively and independently. The traditional feedback loop between risk management and sales, which was needed to structure the desired portfolio characteristics only by selective business acquisition, is now outdated. Strategic cross asset management will gain in importance, as a cost-efficient overlay management can now be implemented by combining liquid instruments from the credit universe.

In any case, all these developments force portfolio managers to adopt an integrated approach. All involved risk factors (spread term structures including curve effects, spread correlations, implied default correlations, and implied spread volatilities) have to be captured and integrated into appropriate risk figures. We take a look at constant proportion debt obligations (CPDOs) as a leveraged exposure on credit indices, constant proportion portfolio insurance (CPPI) as a capital guaranteed instrument, CDO tranches to tap the correlation market, and equity futures to include exposure to stock markets in the portfolio.

For an integrated credit portfolio management approach, it is of central importance to aggregate risks over various instruments with different payoff characteristics. In this chapter, we will see that a state-of-the-art credit portfolio contains not only linear risks (CDS and CDS index contracts) but also nonlinear risks (such as FTD baskets, CDO tranches, or credit default swaptions). From a practitioner’s point of view there is a simple solution for this risk aggregation problem, namely delta-gamma management. In such a framework, one approximates the risks of all instruments in a portfolio by their first- and second-order sensitivities and aggregates these sensitivities to the portfolio level. Apparently, for a proper aggregation of risk factors, one has to take the correlation of these risk factors into account. However, for credit risky portfolios, a simplistic sensitivity approach will be inappropriate, as the following characteristics of credit portfolio risks show:

  • Credit risky portfolios usually involve a larger number of reference entities. Hence, one has to take a large number of sensitivities into account. However, this is a phenomenon that is already well known from the management of stock portfolios. The solution is to split the risk for each constituent into a systematic risk (e.g., a beta with a portfolio hedging tool) and an alpha component which reflects the idiosyncratic part of the risk.

  • However, in contrast to equities, credit risk is not one dimensional (i.e., one risky security per issuer) but at least two dimensional (i.e., a set of instruments with different maturities). This is reflected in the fact that there is a whole term structure of credit spreads. Moreover, taking also different subordination levels (with different average recovery rates) into account, credit risk becomes a multidimensional object for each reference entity.
  • While most market risks can be satisfactorily approximated by diffusion processes, for credit risk the consideration of events (i.e., jumps) is imperative. The most apparent reason for this is that the dominating element of credit risk is event risk. However, in a market perspective, there are more events than the ultimate default event that have to be captured. Since one of the main drivers of credit spreads is the structure of the underlying balance sheet, a change (or the risk of a change) in this structure usually triggers a large movement in credit spreads. The best-known example for such an event is a leveraged buyout (LBO).
  • For credit market players, correlation is a very special topic, as a central pricing parameter is named implied correlation. However, there are two kinds of correlation parameters that impact a credit portfolio: price correlation and event correlation. While the former simply deals with the dependency between two price (i.e., spread) time series under normal market conditions, the latter aims at describing the dependency between two price time series in case of an event. In its simplest form, event correlation can be seen as default correlation: what is the risk that company B defaults given that company A has defaulted? While it is already very difficult to model this default correlation, for practitioners event correlation is even more complex, since there are other events than just the default event, as already mentioned above. Hence, we can modify the question above: what is the risk that spreads of company B blow out given that spreads of company A have blown out? In addition, the notion of event correlation can also be used to capture the risk in capital structure arbitrage trades (i.e., trading stock versus bonds of one company). In this example, the question might be: what is the risk that the stock price of company A jumps given that its bond spreads have blown out? The complicated task in this respect is that we do not only have to model the joint event probability but also the direction of the jumps. A brief example highlights why this is important. In case of a default event, spreads will blow out accompanied by a significant drop in the stock price. This means that there is a negative correlation between spreads and stock prices. However, in case of an LBO event, spreads will blow out (reflecting the deteriorated credit quality because of the higher leverage), while stock prices rally (because of the fact that the acquirer usually pays a premium to buy a majority of outstanding shares).

These show that a simple sensitivity approach – e.g., calculate and tabulate all deltas and gammas and let a portfolio manager play with them – is not appropriate. Further risk aggregation (e.g., beta management) and risk factors that capture the event risk are needed. For the latter, a quick solution is the so-called instantaneous default loss (IDL). The IDL expresses the loss incurred in a credit risk instrument in case of a credit event. For single-name CDS, this is simply the loss given default (LGD). However, for a portfolio derivative such as a mezzanine tranche, this figure does not directly refer to the LGD of the defaulted item, but to the changed subordination of the tranche because of the default. Hence, this figure allows one to aggregate various instruments with respect to credit events.
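
A minimal sketch of what delta-gamma aggregation plus an IDL figure could look like in code; the sensitivities, spread moves, and per-name IDL values are invented placeholders, and the quadratic P&L expansion is the generic textbook one rather than any particular desk’s methodology.

```python
# Minimal sketch of delta-gamma aggregation plus an instantaneous default loss
# (IDL) figure, as described above. Sensitivities, spread moves and per-name IDL
# values are invented placeholders, not real market data.
import numpy as np

def delta_gamma_pnl(delta: np.ndarray, gamma: np.ndarray, spread_moves: np.ndarray) -> float:
    """Second-order P&L approximation: delta'.dx + 0.5 * dx'.Gamma.dx."""
    return float(delta @ spread_moves + 0.5 * spread_moves @ gamma @ spread_moves)

def portfolio_idl(idl_by_name: dict, defaulted: list) -> float:
    """Aggregate the instantaneous default loss over the names assumed to default."""
    return sum(idl_by_name[name] for name in defaulted)

# two spread risk factors (e.g. the 5y spreads of two issuers), moves in decimals
delta = np.array([-12_000.0, -8_000.0])          # P&L per unit spread move (illustrative)
gamma = np.array([[150.0, 40.0], [40.0, 90.0]])  # second-order sensitivities (illustrative)
spread_moves = np.array([0.0050, -0.0010])       # +50 bp and -10 bp

idl_by_name = {"issuer_A": -350_000.0, "issuer_B": -120_000.0}  # loss if that name defaults

print("delta-gamma P&L estimate:", round(delta_gamma_pnl(delta, gamma, spread_moves), 2))
print("IDL if issuer_A defaults:", portfolio_idl(idl_by_name, ["issuer_A"]))
```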

Energy Trading: Asian Options.


Consider a risky asset (stock, commodity, a unit of energy) with the price S(t), where t ∈ [0, T], for a given T > 0. Consider an option with the payoff

Fu = Φ(u(·), S(·)) —– (1)

This payoff depends on a control process u(·) that is selected by an option holder from a certain class of admissible controls U. The mapping Φ : U × S → R is given; S is the set of paths of S(t). All processes from U have to be adapted to the current information flow, i.e., adapted to some filtration Ft that describes this information flow. We call the corresponding options controlled options.

For simplicity, we assume that all options give the right to the corresponding payoff of the amount Fu in cash rather than the right to buy or sell stock or commodities.

Consider a risky asset with the price S(t). Let T > 0 be given, and let g : R → R and f : R × [0, T] → R be some functions. Consider an option with the payoff at time T

Fu = g(∫0T u(t) f(S(t), t)dt) —– (2)

Here u(t) is the control process that is selected by the option holder. The process u(t) has to be adapted to the filtration Ft describing the information flow. In addition, it has to be selected such that

∫0T u(t)dt = 1

A possible modification is the option with the payoff

Fu = ∫0T u(t) f(S(t), t)dt + (1 – ∫0T u(t)dt) f(S(T), T)

In this case, the unused u(t) are accumulated and used at the terminal time. Let us consider some examples of possible selection of f and g. We denote x+ = max(0, x).

Important special cases are the options with g(x) = x, g(x) = (x − K)+, g(x) = (K − x)+, or g(x) = min(M, x), where M > 0 is the cap for benefits, and with

f(x, t) = x, f(x, t) = (x − K)+, f(x, t) = (K − x)+ —– (3)

or

f(x, t) = er(T−t)(x − K)+, f(x, t) = er(T−t)(K − x)+ —– (4)

where K > 0 is given and where r > 0 is the risk-free rate. Options (3) correspond to the case when the payments are made at current time t ∈ [0, T], and options (4) correspond to the case when the payment is made at terminal time T. This takes into account accumulation of interest up to time T on any payoff.

The option with payoff (2) with f(x, t) ≡ x represents a generalization of an Asian option where the weight u(t) is selected by the holder. It should be noted that an Asian option, also called an average option, is an option whose payoff depends on the average price of the underlying asset over a certain period of time, as opposed to its price at maturity. The option with payoff (2) with g(x) ≡ x represents a limit version of the multi-exercise options, when the distribution of exercise times approaches a continuous distribution. An additional restriction |u(t)| ≤ const would represent the continuous analog of the requirement for multi-exercise options that exercise times must be at some distance from each other. In an analog of the model without this condition, strategies may approach delta-functions.

These options can be used, for instance, for energy trading, with u(t) representing the quantity of energy purchased at time t for the fixed price K when the market price is above K. In this case, the option represents a modification of the multi-exercise call option with continuously distributed payoff time. For this model, the total amount of energy that can be purchased per option is limited. Therefore, the option holder may prefer to postpone the purchase if she expects better opportunities in the future.
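
To make the controlled payoff (2) concrete, here is a hedged Monte Carlo sketch: it assumes a geometric Brownian motion for S(t), takes f(x, t) = (x − K)+ and g(x) = x, and uses the simplest admissible control u(t) ≡ 1/T (which satisfies ∫0T u(t)dt = 1 and reduces the payoff to a uniformly weighted Asian-style average); the model, parameters, and choice of control are assumptions for illustration only.

```python
# Monte Carlo sketch of the controlled payoff (2) above,
#   F_u = g( integral_0^T u(t) f(S(t), t) dt ),  with  integral_0^T u(t) dt = 1.
# Assumptions for illustration only: S(t) follows a geometric Brownian motion,
# f(x, t) = max(x - K, 0), g(x) = x, and the control is the constant u(t) = 1/T,
# which reduces the payoff to a uniformly weighted Asian-style average.
import numpy as np

rng = np.random.default_rng(0)

S0, r, sigma, T, K = 100.0, 0.03, 0.25, 1.0, 100.0   # hypothetical parameters
n_steps, n_paths = 250, 50_000
dt = T / n_steps

def simulate_paths() -> np.ndarray:
    """Simulate GBM paths of shape (n_paths, n_steps)."""
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    return S0 * np.exp(np.cumsum(log_increments, axis=1))

def controlled_payoff(paths: np.ndarray) -> np.ndarray:
    """Riemann-sum approximation of integral_0^T u(t) f(S(t), t) dt with u = 1/T."""
    u = 1.0 / T                          # constant admissible control, integrates to 1
    f = np.maximum(paths - K, 0.0)       # f(S(t), t) = (S(t) - K)^+
    return u * f.sum(axis=1) * dt        # g(x) = x

paths = simulate_paths()
value = np.exp(-r * T) * controlled_payoff(paths).mean()
print("Monte Carlo value of the controlled (Asian-style) payoff:", round(value, 4))
```

Replacing the constant weight with any adapted, nonnegative u(t) integrating to one turns this into a sketch of the holder’s optimization problem described above.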

10 or 11 Dimensions? Phenomenological Conundrum. Drunken Risibility.


It is not the fact that we are living in a ten-dimensional world which forces string theory into a ten-dimensional description. It is that perturbative string theories are only anomaly-free in ten dimensions, and they contain gravitons only in a ten-dimensional formulation. The resulting question, how the four-dimensional spacetime of phenomenology emerges from ten-dimensional perturbative string theories (or from their eleven-dimensional non-perturbative extension: the mysterious M-theory), led to the compactification idea and to the braneworld scenarios.

It is not the fact that empirical indications for supersymmetry were found that forces consistent string theories to include supersymmetry. Without supersymmetry, string theory has no fermions and no chirality, but there are tachyons which make the vacuum unstable; and supersymmetry has certain conceptual advantages: it very probably leads to the finiteness of the perturbation series, thereby avoiding the problem of non-renormalizability which haunted all former attempts at a quantization of gravity; and there is a close relation between supersymmetry and Poincaré invariance which seems reasonable for quantum gravity. But it is clear that not all conceptual advantages are necessarily part of nature – as the example of the elegant, but unsuccessful Grand Unified Theories demonstrates.

Apart from its ten (or eleven) dimensions and the inclusion of supersymmetry – both of which have more or less the character of conceptually, but not empirically, motivated ad hoc assumptions – string theory consists of a rather careful adaptation of the mathematical and model-theoretical apparatus of perturbative quantum field theory to the quantized, one-dimensionally extended, oscillating string (and, finally, of a minimal extension of its methods into the non-perturbative regime, for which the declarations of intent exceed by far the conceptual successes). Without any empirical data transcending the context of our established theories, there remains for string theory only the minimal conceptual integration of basic parts of the phenomenology already reproduced by these established theories. And a significant component of this phenomenology, namely the phenomenology of gravitation, was already used up in the selection of string theory as an interesting approach to quantum gravity. Only because string theory – containing gravitons as string states – reproduces in a certain way the phenomenology of gravitation is it taken seriously.

But consistency requirements, the minimal inclusion of basic phenomenological constraints, and the careful extension of the model-theoretical basis of quantum field theory are not sufficient to establish an adequate theory of quantum gravity. Shouldn’t the landscape scenario of string theory be understood as a clear indication, not only of fundamental problems with the reproduction of the gauge invariances of the standard model of quantum field theory (and the corresponding phenomenology), but of much more severe conceptual problems? Almost all attempts at a solution of the immanent and transcendental problems of string theory seem to end in the ambiguity and contingency of the multitude of scenarios of the string landscape. That no physically motivated basic principle is known for string theory and its model-theoretical procedures might be seen as a problem which possibly could be overcome in future developments. But, what about the use of a static background spacetime in string theory which falls short of the fundamental insights of general relativity and which therefore seems to be completely unacceptable for a theory of quantum gravity?

At least since the change of context (and strategy) from hadron physics to quantum gravity, the development of string theory has been dominated by immanent problems whose attempted solutions led ever deeper into the theory’s own structure. The result of this successively increasing self-referentiality is a more and more pronounced decoupling from phenomenological boundary conditions and necessities. The contact with the empirical does not increase, but gets weaker and weaker. The result of this process is a labyrinthine mathematical structure with a completely unclear physical relevance.