Tranche Declension.

[Figure: CDO structure diagram (FCIC/IMF)]

With the CDO (collateralized debt obligation) market picking up, it is important to build a stronger understanding of pricing and risk management models. The Gaussian copula model has well-known deficiencies and has been widely criticized, but it remains fundamental as a starting point. Here, we draw attention to the applicability of Gaussian inequalities in analyzing the sensitivity of tranche losses to correlation parameters within the Gaussian copula model.

We work with an R^N-valued Gaussian random variable X = (X_1, …, X_N), where each X_j is normalized to mean 0 and variance 1, and study the equity tranche loss

L_{[0,a]} = ∑_{m=1}^{N} l_m 1_{[X_m ≤ c_m]} − (∑_{m=1}^{N} l_m 1_{[X_m ≤ c_m]} − a)^+

where l_1, …, l_N > 0, a > 0, and c_1, …, c_N ∈ R are parameters. We establish an identity relating the sensitivity of E[L_{[0,a]}] to the correlation r_{jk} = E[X_j X_k] and the parameters c_j and c_k, from which we obtain the inequality

∂E[L_{[0,a]}]/∂r_{jk} ≤ 0

Applying this inequality to a CDO containing N names whose default behavior is governed by the Gaussian variables X_j shows that an increase in name-to-name correlation decreases the expected loss in an equity tranche. This is a generalization of the well-known result for Gaussian copulas with uniform correlation.

Consider a CDO consisting of N names, with τ_j denoting the (random) default time of the jth name. Let

X_j = φ^{-1}(F_j(τ_j))

where F_j is the distribution function of τ_j (relative to the market pricing measure), assumed to be continuous and strictly increasing, and φ is the standard Gaussian distribution function. Then for any x ∈ R we have

P[X_j ≤ x] = P[τ_j ≤ F_j^{-1}(φ(x))] = F_j(F_j^{-1}(φ(x))) = φ(x)

which means that X_j has a standard Gaussian distribution. The Gaussian copula model posits that the joint distribution of the X_j is Gaussian; thus,

X = (X_1, …, X_N)

is an R^N-valued Gaussian variable whose marginals are all standard Gaussian. The correlation

r_{jk} = E[X_j X_k]

reflects the default correlation between the names j and k. Now let

p_j = P[τ_j ≤ T] = P[X_j ≤ c_j]

be the probability that the jth name defaults within a time horizon T, which is held constant, and

c_j = φ^{-1}(F_j(T))

is the default threshold of the jth name.
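As a minimal illustration of this construction (not part of the original derivation), the sketch below computes p_j and c_j under the simplifying assumption that each default time is exponentially distributed with a flat hazard rate λ_j; the horizon T and the hazard rates are made-up inputs.

```python
# Minimal sketch (illustrative assumptions): exponential default times with flat
# hazard rates lambda_j, so F_j(T) = 1 - exp(-lambda_j * T), and the default
# thresholds are c_j = phi^{-1}(F_j(T)).
import numpy as np
from scipy.stats import norm

T = 5.0                                       # assumed time horizon (years)
hazard_rates = np.array([0.01, 0.02, 0.03])   # assumed flat hazard rates lambda_j

default_probs = 1.0 - np.exp(-hazard_rates * T)   # p_j = P[tau_j <= T] = F_j(T)
thresholds = norm.ppf(default_probs)              # c_j = phi^{-1}(F_j(T))

print("p_j:", np.round(default_probs, 4))
print("c_j:", np.round(thresholds, 4))
```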

Schematically, the essential phenomenon is this: the default of name j, which happens if the default time τ_j falls within the time horizon T, results in a loss of amount l_j > 0 in the CDO portfolio. Thus, the total loss during the time period [0, T] is

L = ∑_{m=1}^{N} l_m 1_{[X_m ≤ c_m]}

Here we are essentially working with a one-period CDO, ignoring discounting from the random time of actual default. A tranche is simply a range of loss for the portfolio; it is specified by a closed interval [a, b] with 0 ≤ a ≤ b. If the loss x is less than a, the tranche is unaffected, whereas if x ≥ b the entire tranche value b − a is eaten up by the loss; in between, if a ≤ x ≤ b, the loss to the tranche is x − a. Thus, the tranche loss function t_{[a,b]} is given by

t_{[a,b]}(x) = 0, if x < a;  = x − a, if x ∈ [a, b];  = b − a, if x > b

or compactly,

t_{[a,b]}(x) = (x − a)^+ − (x − b)^+

From this it is clear that t_{[a,b]}(x) is continuous in (a, b, x), and it is a non-decreasing function of x. Thus, the loss in an equity tranche [0, a] is given by

t_{[0,a]}(L) = L − (L − a)^+

with a > 0.
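To make the claim ∂E[L_{[0,a]}]/∂r_{jk} ≤ 0 concrete, here is a hedged Monte Carlo sketch (not from the text): it assumes a one-factor Gaussian copula with uniform correlation ρ, N identical names with unit losses l_m = 1, and an illustrative detachment point a, and estimates E[L − (L − a)^+] for several values of ρ. The expected equity tranche loss should decrease as ρ increases.

```python
# Monte Carlo sketch (illustrative assumptions: N identical names, one-factor
# Gaussian copula with uniform correlation rho, unit losses l_m = 1).
# Estimates E[L_{[0,a]}] = E[L - (L - a)^+] for several correlation levels.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, n_sims = 100, 50_000
p = 0.05                      # assumed single-name default probability over [0, T]
c = norm.ppf(p)               # common default threshold c_m
a = 5.0                       # equity tranche detachment point (in loss units)

def expected_equity_loss(rho):
    # one-factor representation: X_m = sqrt(rho) * Z + sqrt(1 - rho) * eps_m
    Z = rng.standard_normal((n_sims, 1))
    eps = rng.standard_normal((n_sims, N))
    X = np.sqrt(rho) * Z + np.sqrt(1.0 - rho) * eps
    L = (X <= c).sum(axis=1).astype(float)       # portfolio loss with l_m = 1
    equity = L - np.maximum(L - a, 0.0)          # t_{[0,a]}(L) = L - (L - a)^+
    return equity.mean()

for rho in [0.0, 0.1, 0.3, 0.5, 0.7]:
    print(f"rho = {rho:.1f}:  E[L_[0,a]] ~ {expected_equity_loss(rho):.3f}")
```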

Knowledge Limited for Dummies….Didactics.


Bertrand Russell, together with Alfred North Whitehead, aimed in the Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, developing the full structure of mathematics so that every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion, when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (and therefore fled to the Institute for Advanced Study in Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, consider a statement like: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its negation is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that imbed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, that the combined precision of a measurement of an electron’s location and of its momentum is bounded: the product of the two uncertainties cannot fall below a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength the greater the amount of energy that hits the electron, and the greater the energy hitting the electron the greater the impact on its velocity.

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, propagated a slight error in the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
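The effect Lorenz stumbled upon is easy to reproduce. The sketch below is an illustrative stand-in (it uses the standard Lorenz ’63 equations and a crude Euler integrator, not Lorenz’s original weather model): two runs start from initial conditions that differ only by rounding to three decimal places, and their separation grows until the trajectories bear no resemblance to each other.

```python
# Illustrative stand-in (Lorenz '63 system, crude Euler integration, not the
# original weather model): restart from initial conditions rounded to three
# decimals and watch the two trajectories separate.
import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

dt, steps = 0.001, 40_000
exact = np.array([1.234567, 2.345678, 20.123456])   # "full precision" state
rounded = np.round(exact, 3)                         # state as read off a 3-decimal printout

a, b = exact.copy(), rounded.copy()
for i in range(steps + 1):
    if i % 8_000 == 0:
        print(f"t = {i * dt:5.1f}   separation = {np.linalg.norm(a - b):.6f}")
    a = lorenz_step(a, dt)
    b = lorenz_step(b, dt)
```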

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
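For flavor only, here is a toy sketch in the spirit of such artificial markets (it is not the Santa Fe Institute’s model; the agent rules, weights, and the linear price-impact assumption are all invented for illustration): trend followers, value traders, and random liquidity traders submit net demand each period, and the price responds to the imbalance.

```python
# Toy market sketch (invented rules and weights, not the Santa Fe model):
# trend followers, value traders, and random liquidity traders generate net
# demand; price moves in proportion to the imbalance.
import numpy as np

rng = np.random.default_rng(1)
steps, value, impact = 5_000, 100.0, 0.02    # assumed horizon, perceived value, price impact
price = np.empty(steps)
price[0] = value

for t in range(1, steps):
    trend = price[t - 1] - price[max(t - 20, 0)]            # momentum signal for trend followers
    value_gap = value - price[t - 1]                        # value traders buy below / sell above value
    noise = rng.standard_normal()                           # liquidity-driven random trades
    demand = 0.3 * np.sign(trend) + 0.5 * np.tanh(0.1 * value_gap) + noise
    price[t] = price[t - 1] * (1.0 + impact * demand)       # linear impact on returns

returns = np.diff(np.log(price))
excess_kurtosis = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3
print(f"return std = {returns.std():.4f}, excess kurtosis = {excess_kurtosis:.2f}")
```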

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever set of equations might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Credit Risk Portfolio. Note Quote.


Recent developments in credit markets are characterized by a flood of innovative credit-risky structures. State-of-the-art portfolios contain derivative instruments ranging from simple, nearly commoditized contracts such as credit default swaps (CDS), to first-generation portfolio derivatives such as first-to-default (FTD) baskets and collateralized debt obligation (CDO) tranches, up to complex structures involving spread options and different asset classes (hybrids). These new structures allow portfolio managers to implement multidimensional investment strategies, which seamlessly conform to their market view. Moreover, the exploding liquidity in credit markets makes tactical (short-term) overlay management very cost efficient. While the outperformance potential of active portfolio management will put old-school investment strategies (such as buy-and-hold) under enormous pressure, managing a highly complex credit portfolio requires the introduction of new optimization technologies.

New derivatives allow the decoupling of business processes in the risk management industry (in banking, as well as in asset management), since credit treasury units are now able to manage specific parts of credit risk actively and independently. The traditional feedback loop between risk management and sales, which was needed to structure the desired portfolio characteristics only by selective business acquisition, is now outdated. Strategic cross asset management will gain in importance, as a cost-efficient overlay management can now be implemented by combining liquid instruments from the credit universe.

In any case, all these developments force portfolio managers to adopt an integrated approach. All involved risk factors (spread term structures including curve effects, spread correlations, implied default correlations, and implied spread volatilities) have to be captured and integrated into appropriate risk figures. We take a look at constant proportion debt obligations (CPDOs) as a leveraged exposure to credit indices, constant proportion portfolio insurance (CPPI) as a capital-guaranteed instrument, CDO tranches to tap the correlation market, and equity futures to include exposure to stock markets in the portfolio.

For an integrated credit portfolio management approach, it is of central importance to aggregate risks over various instruments with different payoff characteristics. In this chapter, we will see that a state-of-the-art credit portfolio contains not only linear risks (CDS and CDS index contracts) but also nonlinear risks (such as FTD baskets, CDO tranches, or credit default swaptions). From a practitioner’s point of view there is a simple solution for this risk aggregation problem, namely delta-gamma management. In such a framework, one approximates the risks of all instruments in a portfolio by their first- and second-order sensitivities and aggregates these sensitivities to the portfolio level. Obviously, for a proper aggregation of risk factors, one has to take the correlation of these risk factors into account. However, for credit risky portfolios, a simplistic sensitivity approach will be inappropriate, as the following characteristics of credit portfolio risks show:

  • Credit risky portfolios usually involve a larger number of reference entities. Hence, one has to take a large number of sensitivities into account. However, this is a phenomenon that is already well known from the management of stock portfolios. The solution is to split the risk for each constituent into a systematic risk (e.g., a beta with a portfolio hedging tool) and an alpha component which reflects the idiosyncratic part of the risk.

  • However, in contrast to equities, credit risk is not one dimensional (i.e., one risky security per issuer) but at least two dimensional (i.e., a set of instruments with different maturities). This is reflected in the fact that there is a whole term structure of credit spreads. Moreover, taking also different subordination levels (with different average recovery rates) into account, credit risk becomes a multidimensional object for each reference entity.
  • While most market risks can be satisfactorily approximated by diffusion processes, for credit risk the consideration of events (i.e., jumps) is imperative. The most apparent reason for this is that the dominating element of credit risk is event risk. However, in a market perspective, there are more events than the ultimate default event that have to be captured. Since one of the main drivers of credit spreads is the structure of the underlying balance sheet, a change (or the risk of a change) in this structure usually triggers a large movement in credit spreads. The best-known example for such an event is a leveraged buyout (LBO).
  • For credit market players, correlation is a very special topic, as a central pricing parameter is named implied correlation. However, there are two kinds of correlation parameters that impact a credit portfolio: price correlation and event correlation. While the former simply deals with the dependency between two price (i.e., spread) time series under normal market conditions, the latter aims at describing the dependency between two price time series in case of an event. In its simplest form, event correlation can be seen as default correlation: what is the risk that company B defaults given that company A has defaulted? While it is already very difficult to model this default correlation, for practitioners event correlation is even more complex, since there are other events than just the default event, as already mentioned above. Hence, we can modify the question above: what is the risk that spreads of company B blow out given that spreads of company A have blown out? In addition, the notion of event correlation can also be used to capture the risk in capital structure arbitrage trades (i.e., trading stock versus bonds of one company). In this example, the question might be: what is the risk that the stock price of company A jumps given that its bond spreads have blown out? The complicated task in this respect is that we do not only have to model the joint event probability but also the direction of the jumps. A brief example highlights why this is important. In case of a default event, spreads will blow out accompanied by a significant drop in the stock price. This means that there is a negative correlation between spreads and stock prices. However, in case of an LBO event, spreads will blow out (reflecting the deteriorated credit quality because of the higher leverage), while stock prices rally (because of the fact that the acquirer usually pays a premium to buy a majority of outstanding shares).

These show that a simple sensitivity approach – e.g., calculate and tabulate all deltas and gammas and let a portfolio manager play with them – is not appropriate. Further risk aggregation (e.g., beta management) and risk factors that capture the event risk are needed. For the latter, a quick solution is the so-called instantaneous default loss (IDL). The IDL expresses the loss incurred in a credit risk instrument in case of a credit event. For single-name CDS, this is simply the loss given default (LGD). However, for a portfolio derivative such as a mezzanine tranche, this figure does not directly refer to the LGD of the defaulted item, but to the changed subordination of the tranche because of the default. Hence, this figure allows one to aggregate various instruments with respect to credit events.
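A stylized sketch of how these two layers of risk figures might sit side by side (the positions, sensitivities, and IDL numbers below are invented for illustration, not a prescribed methodology): each instrument carries delta and gamma sensitivities to a spread factor for normal-market aggregation, plus an IDL per reference entity for event risk.

```python
# Stylized sketch (all numbers invented): delta-gamma aggregation of spread risk
# across instruments, plus instantaneous default loss (IDL) per reference entity
# as a separate event-risk figure.
from dataclasses import dataclass, field

@dataclass
class Position:
    name: str
    delta: float                 # P&L per 1 bp parallel spread widening
    gamma: float                 # second-order P&L per (1 bp)^2
    idl: dict = field(default_factory=dict)   # loss if a given entity defaults instantly

portfolio = [
    Position("CDS index hedge",   delta=-4000.0, gamma=0.5,  idl={"NameA": -60_000, "NameB": -60_000}),
    Position("Mezzanine tranche", delta=-1500.0, gamma=12.0, idl={"NameA": -250_000, "NameB": -250_000}),
    Position("Sold swaption",     delta=800.0,   gamma=-30.0),
]

def delta_gamma_pnl(ds_bp: float) -> float:
    # P&L ~ sum_i (delta_i * ds + 0.5 * gamma_i * ds^2)
    return sum(p.delta * ds_bp + 0.5 * p.gamma * ds_bp ** 2 for p in portfolio)

def portfolio_idl(entity: str) -> float:
    # event risk: aggregate instantaneous default loss for one reference entity
    return sum(p.idl.get(entity, 0.0) for p in portfolio)

print("delta-gamma P&L for a +10 bp move:", delta_gamma_pnl(10.0))
print("IDL if NameA defaults:            ", portfolio_idl("NameA"))
```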

Optimal Hedging…..


Risk management is important in the practices of financial institutions and other corporations. Derivatives are popular instruments to hedge exposures due to currency, interest rate and other market risks. An important step of risk management is to use these derivatives in an optimal way. The most popular derivatives are forwards, options and swaps. They are basic blocks for all sorts of other more complicated derivatives, and should be used prudently. Several parameters need to be determined in the processes of risk management, and it is necessary to investigate the influence of these parameters on the aims of the hedging policies and the possibility of achieving these goals.

The problem of determining the optimal strike price and optimal hedging ratio is considered, where a put option is used to hedge market risk under a budget constraint. The chosen option is supposed to finish in-the-money at maturity, such that the predicted loss of the hedged portfolio is different from the realized loss. The aim of hedging is to minimize the potential loss of the investment under a specified level of confidence. In other words, the optimal hedging strategy is to minimize the Value-at-Risk (VaR) under a specified level of risk.

A stock is supposed to be bought at time zero with price S_0, and to be sold at time T with uncertain price S_T. In order to hedge the market risk of the stock, the company decides to choose one of the available put options written on the same stock with maturity at time τ, where τ is prior and close to T, and the n available put options are specified by their strike prices K_i (i = 1, 2, ..., n). As the prices of different put options are also different, the company needs to determine an optimal hedge ratio h (0 ≤ h ≤ 1) with respect to the chosen strike price. The cost of hedging should be less than or equal to the predetermined hedging budget C. In other words, the company needs to determine the optimal strike price and hedging ratio under the constraint of the hedging budget.

Suppose the market price of the stock is S_0 at time zero, the hedge ratio is h, the price of the put option is P_0, and the riskless interest rate is r. At time T, the time value of the hedging portfolio is

S_0 e^{rT} + h P_0 e^{rT} —– (1)

and the market price of the portfolio is

S_T + h(K − S_τ)^+ e^{r(T−τ)} —– (2)

therefore the loss of the portfolio is

L = (S_0 e^{rT} + h P_0 e^{rT}) − (S_T + h(K − S_τ)^+ e^{r(T−τ)}) —– (3)

where x^+ = max(x, 0), so that (K − S_τ)^+ is the payoff of the put option at its maturity.

For a given threshold v, the probability that the amount of loss exceeds v is denoted as

α = Prob{L ≥ v} —– (4)

in other words, v is the Value-at-Risk (VaR) at α percentage level. There are several alternative measures of risk, such as CVaR (Conditional Value-at-Risk), ESF (Expected Shortfall), CTE (Conditional Tail Expectation), and other coherent risk measures. The criterion of optimality is to minimize the VaR of the hedging strategy.

The mathematical model of stock price is chosen to be a geometric Brownian motion, i.e.

dS_t/S_t = μ dt + σ dB_t —– (5)

where S_t is the stock price at time t (0 < t ≤ T), μ and σ are the drift and the volatility of the stock price, and B_t is a standard Brownian motion. The solution of the stochastic differential equation is

S_t = S_0 e^{σB_t + (μ − σ²/2)t} —– (6)

where B_0 = 0, and S_t is lognormally distributed.

Proposition:

For a given threshold of loss v, the probability that the loss exceeds v is

Prob{L ≥ v} = E[I{X ≤ c_1} F_Y(g(X) − X)] + E[I{X ≥ c_1} F_Y(c_2 − X)] —– (7)

where E[·] denotes expectation, I{X < c} is the indicator function of the event {X < c}, equal to 1 when {X < c} is true and 0 otherwise, F_Y(y) is the cumulative distribution function of the random variable Y, and

c_1 = (1/σ)[ln(K/S_0) − (μ − σ²/2)τ],

g(X) = (1/σ)[ln(((S_0 + hP_0)e^{rT} − h(K − f(X))e^{r(T−τ)} − v)/S_0) − (μ − σ²/2)T],

f(X) = S_0 e^{σX + (μ − σ²/2)τ},

c_2 = (1/σ)[ln(((S_0 + hP_0)e^{rT} − v)/S_0) − (μ − σ²/2)T]

X and Y are both normally distributed, with X ∼ N(0, √τ) and Y ∼ N(0, √(T−τ)).

For a specified hedging strategy, Q(v) = Prob {L ≥ v} is a decreasing function of v. The VaR under α level can be obtained from equation

Q(v) = α —– (8)

The expectations in the Proposition can be calculated with Monte Carlo simulation methods, and the optimal hedging strategy with the smallest VaR can be obtained from equation (8) by numerical search methods….
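A minimal sketch of such a numerical search (all parameters, the candidate strike grid, and the use of a Black-Scholes put price for P_0 are illustrative assumptions; it estimates Prob{L ≥ v} by simulating the loss in equation (3) directly rather than evaluating the expectations in (7)):

```python
# Minimal Monte Carlo sketch (illustrative parameters and assumptions): for each
# candidate strike K, price the put with Black-Scholes, take the largest hedge
# ratio h the budget C allows, simulate the loss L of equation (3) under the
# GBM of equation (6), and pick the strike with the smallest VaR at level alpha.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
S0, mu, sigma, r = 100.0, 0.08, 0.25, 0.03   # assumed market parameters
T, tau = 1.0, 0.9                            # sale date T and option maturity tau < T
C, alpha, n_sims = 2.0, 0.05, 200_000        # hedging budget, VaR level, paths

def put_price(K):
    # Black-Scholes price of a put maturing at tau (assumed pricing model)
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return K * np.exp(-r * tau) * norm.cdf(-d2) - S0 * norm.cdf(-d1)

def value_at_risk(K, h, P0):
    # simulate B_tau and B_T, then S_tau and S_T via equation (6)
    B_tau = np.sqrt(tau) * rng.standard_normal(n_sims)
    B_T = B_tau + np.sqrt(T - tau) * rng.standard_normal(n_sims)
    S_tau = S0 * np.exp(sigma * B_tau + (mu - 0.5 * sigma**2) * tau)
    S_T = S0 * np.exp(sigma * B_T + (mu - 0.5 * sigma**2) * T)
    # loss of the hedged portfolio, equation (3)
    loss = (S0 + h * P0) * np.exp(r * T) - (S_T + h * np.maximum(K - S_tau, 0.0) * np.exp(r * (T - tau)))
    return np.quantile(loss, 1.0 - alpha)    # v such that Prob{L >= v} = alpha

best = None
for K in np.arange(80.0, 110.0, 2.5):        # illustrative strike grid
    P0 = put_price(K)
    h = min(1.0, C / P0)                     # budget constraint: h * P0 <= C
    v = value_at_risk(K, h, P0)
    if best is None or v < best[2]:
        best = (K, h, v)

print(f"optimal strike ~ {best[0]:.1f}, hedge ratio ~ {best[1]:.2f}, VaR ~ {best[2]:.2f}")
```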