Lévy Process as Combination of a Brownian Motion with Drift and Infinite Sum of Independent Compound Poisson Processes: Introduction to Martingales. Part 4.

Every piecewise constant Lévy process Xt0 can be represented in the form Xt0 = ∑s∈[0,t] ΔXs0 = ∫[0,t]×Rd x J(ds × dx) for some Poisson random measure J with intensity measure of the form ν(dx)dt, where ν is a finite measure defined by

ν(A) = E[#{t ∈ [0,1] : ∆Xt0 ≠ 0, ∆Xt0 ∈ A}], A ∈ B(Rd) —– (1)

Given a Brownian motion with drift γt + Wt, independent from X0, the sum Xt = Xt0 + γt + Wt defines another Lévy process, which can be decomposed as:

Xt = γt + Wt + ∑s∈[0,t] ΔXs = γt + Wt + ∫[0,t]×Rd x JX(ds × dx) —– (2)

where JX is a Poisson random measure on [0,∞[×Rd with intensity ν(dx)dt.

Can every Lévy process be represented in this form? Given a Lévy process Xt, we can still define its Lévy measure ν as above. ν(A) is still finite for any compact set A such that 0 ∉ A: if this were not true, the process would have an infinite number of jumps of finite size on [0, T], which contradicts the cadlag property. So ν defines a Radon measure on Rd \ {0}. But ν is not necessarily a finite measure: the above restriction still allows it to blow up at zero and X may have an infinite number of small jumps on [0, T]. In this case the sum of the jumps becomes an infinite series and its convergence imposes some conditions on the measure ν, under which we obtain a decomposition of X.
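
Definition (1) lends itself to a direct numerical check in the compound Poisson case, where ν(A) = λ f(A). The following minimal Python sketch (with illustrative assumptions λ = 4, jump law f = N(0,1) and A = [0.5, 1.5]) counts jumps on [0,1] landing in A across many paths:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Empirical check of definition (1) for a compound Poisson process:
# nu(A) = E[ number of jumps on [0, 1] with jump size in A ].
# For intensity lam and jump size law f, theory gives nu(A) = lam * f(A).
lam = 4.0          # jump intensity (illustrative)
A = (0.5, 1.5)     # the set A = [0.5, 1.5]

counts = []
for _ in range(20_000):
    n_jumps = rng.poisson(lam)                 # N_1 ~ Poisson(lam * 1)
    sizes = rng.normal(0.0, 1.0, n_jumps)      # jump law f = N(0, 1)
    counts.append(np.sum((sizes >= A[0]) & (sizes <= A[1])))

Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard normal CDF
print("empirical   nu(A):", np.mean(counts))
print("theoretical nu(A):", lam * (Phi(A[1]) - Phi(A[0])))
```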

Let (Xt)t≥0 be a Lévy process on Rd and ν its Lévy measure.

ν is a Radon measure on Rd \ {0} and verifies:

∫|x|≤1 |x|2 ν(dx) < ∞

The jump measure of X, denoted by JX, is a Poisson random measure on [0,∞[×Rd with intensity measure ν(dx)dt.

∃ a vector γ and a d-dimensional Brownian motion (Bt)t≥0 with covariance matrix A such that

Xt = γt + Bt + Xtl + limε↓0 X’εt —– (3)

where

Xtl = ∫|x|≥1,s∈[0,t] x JX(ds × dx)

X’εt = ∫ε≤|x|<1,s∈[0,t] x {JX(ds × dx) – ν(dx)ds}

≡ ∫ε≤|x|<1,s∈[0,t] x J’X(ds × dx)

The terms in (3) are independent and the convergence in the last term is almost sure and uniform in t on [0,T].

The Lévy-Itô decomposition entails that for every Lévy process ∃ a vector γ, a positive definite matrix A and a positive measure ν that uniquely determine its distribution. The triplet (A,ν,γ) is called the characteristic triplet or Lévy triplet of the process Xt. γt + Bt is a continuous Gaussian Lévy process; every Gaussian Lévy process is continuous, can be written in this form, and is described by two parameters: the drift γ and the covariance matrix of the Brownian motion, denoted by A. The other two terms are discontinuous processes incorporating the jumps of Xt and are described by the Lévy measure ν. The condition ∫|y|≥1 ν(dy) < ∞ means that X has a finite number of jumps with absolute value larger than 1. So the sum

Xtl = ∑0≤s≤t, |∆Xs|≥1 ∆Xs

contains almost surely a finite number of terms and Xtl is a compound Poisson process. There is nothing special about the threshold ∆X = 1: for any ε > 0, the sum of jumps with amplitude between ε and 1:

Xεt = ∑0≤s≤t, ε≤|∆Xs|<1 ∆Xs = ∫ε≤|x|<1,s∈[0,t] x JX(ds × dx) —– (4)

is again a well-defined compound Poisson process. However, contrary to the compound Poisson case, ν can have a singularity at zero: there can be infinitely many small jumps and their sum does not necessarily converge. This prevents us from letting ε go to 0 directly in (4). In order to obtain convergence we have to center the remainder term, i.e., replace the jump integral by its compensated version,

X’εt = ∫ε≤|x|<1,s∈[0,t] x J’X(ds × dx) —– (5)

which is a martingale. While Xεt can be interpreted as an infinite superposition of independent Poisson processes, X’εt should be seen as an infinite superposition of independent compensated, i.e., centered, Poisson processes to which a central-limit-type argument can be applied to show convergence. An important implication of the Lévy-Itô decomposition is that every Lévy process is a combination of a Brownian motion with drift and a possibly infinite sum of independent compound Poisson processes. This also means that every Lévy process can be approximated with arbitrary precision by a jump-diffusion process, that is, by the sum of a Brownian motion with drift and a compound Poisson process.
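
This approximation is straightforward to implement. The sketch below assumes a hypothetical symmetric Lévy measure ν(dx) = |x|−1−α dx on 0 < |x| ≤ 1 (infinite mass near zero but ∫|x|≤1 x2 ν(dx) < ∞) and simulates γt + σWt plus the compound Poisson process of jumps with ε ≤ |x| ≤ 1; by symmetry the compensator ∫ x ν(dx) vanishes, so no centering term is needed in this particular example. All parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric Levy measure with a singularity at zero:
# nu(dx) = |x|**(-1 - alpha) dx on 0 < |x| <= 1, with 0 < alpha < 2,
# so nu has infinite total mass but int_{|x|<=1} x^2 nu(dx) < infinity.
alpha = 0.7

def levy_intensity(eps):
    # lambda_eps = nu({eps <= |x| <= 1}) = 2 * int_eps^1 x^(-1-alpha) dx
    return 2.0 * (eps**(-alpha) - 1.0) / alpha

def sample_jumps(n, eps):
    # Inverse-CDF sampling of |x| on [eps, 1], with a random sign
    # (the measure is symmetric, so the compensator int x nu(dx) is zero).
    u = rng.random(n)
    mag = (eps**(-alpha) - u * (eps**(-alpha) - 1.0)) ** (-1.0 / alpha)
    return rng.choice([-1.0, 1.0], size=n) * mag

def jump_diffusion_path(T, n_steps, gamma, sigma, eps):
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    # Continuous part: drift plus Brownian motion.
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)
    cont = gamma * t + np.concatenate([[0.0], np.cumsum(sigma * dW)])
    # Jump part: compound Poisson process of jumps with eps <= |x| <= 1.
    dN = rng.poisson(levy_intensity(eps) * dt, n_steps)
    dJ = np.array([sample_jumps(k, eps).sum() for k in dN])
    return t, cont + np.concatenate([[0.0], np.cumsum(dJ)])

t, X = jump_diffusion_path(T=1.0, n_steps=2000, gamma=0.1, sigma=0.3, eps=1e-3)
print("X_1 =", X[-1], "  intensity of jumps kept:", levy_intensity(1e-3))
```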


Convergence in Probability Implying Stochastic Continuity. Part 3.

A compound Poisson process with intensity λ > 0 and jump size distribution f is a stochastic process Xt defined as

Xt = ∑i=1Nt Yi

where jump sizes Yi are independent and identically distributed with distribution f and (Nt) is a Poisson process with intensity λ, independent from (Yi)i≥1.

The following properties of a compound Poisson process are now deduced

  1. The sample paths of X are cadlag piecewise constant functions.
  2. The jump times (Ti)i≥1 have the same law as the jump times of the Poisson process Nt: they can be expressed as partial sums of independent exponential random variables with parameter λ.
  3. The jump sizes (Yi)i≥1 are independent and identically distributed with law f.

The Poisson process itself can be seen as a compound Poisson process on R such that Yi ≡ 1. This explains the origin of the term “compound Poisson” in the definition.

Let R(n), n ≥ 0 be a random walk with step size distribution f: R(n) = ∑i=0n Yi. The compound Poisson process Xt can be obtained by time-changing R with an independent Poisson process Nt: Xt = R(Nt). Xt thus describes the position of a random walk after a random number of time steps, given by Nt. Compound Poisson processes are Lévy processes (see Part 2) and they are the only Lévy processes with piecewise constant sample paths.
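
Both views translate directly into simulation code. A minimal sketch, assuming Gaussian jump sizes f = N(0,1) and intensity λ = 5 (arbitrary choices): the function builds a path from exponential waiting times, and the last lines draw the terminal value again via the time-changed random walk Xt = R(Nt):

```python
import numpy as np

rng = np.random.default_rng(42)

def compound_poisson_path(T, lam, jump_sampler):
    """One path of X_t = sum_{i <= N_t} Y_i on [0, T].

    Jump times are partial sums of independent Exp(lam) variables;
    between jumps the path is constant (piecewise constant, cadlag).
    """
    jump_times = []
    t = rng.exponential(1.0 / lam)
    while t <= T:
        jump_times.append(t)
        t += rng.exponential(1.0 / lam)
    jump_sizes = jump_sampler(len(jump_times))
    return np.array(jump_times), np.cumsum(jump_sizes)

# Gaussian jump size law f = N(0, 1), intensity lam = 5 (illustrative).
T, lam = 10.0, 5.0
times, values = compound_poisson_path(T, lam, lambda n: rng.normal(0.0, 1.0, n))
print(len(times), "jumps on [0, 10]; X_T =", values[-1] if len(times) else 0.0)

# Equivalent view: time-change the random walk R(n) by an independent
# Poisson process, X_T = R(N_T) with N_T ~ Poisson(lam * T).
N_T = rng.poisson(lam * T)
R = np.cumsum(rng.normal(0.0, 1.0, N_T))
print("X_T via the time-changed random walk:", R[-1] if N_T else 0.0)
```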

(Xt)t≥0 is a compound Poisson process if and only if it is a Lévy process and its sample paths are piecewise constant functions.

Let (Xt)t≥0 be a Lévy process with piecewise constant paths. We can construct, path by path, a process (Nt, t ≥ 0) which counts the jumps of X:

Nt = #{0 < s ≤ t: Xs− ≠ Xs} —– (1)

Since the trajectories of X are piecewise constant, X has a finite number of jumps in any finite interval which entails that Nt is finite ∀ finite t. Hence, it is a counting process. Let h < t. Then

Nt − Nh = #{h < s ≤ t: Xs− ≠ Xs} = #{h < s ≤ t: Xs− − Xh ≠ Xs − Xh}

Hence, Nt − Nh depends only on (Xs − Xh), h ≤ s ≤ t. Therefore, from the independence and stationarity of increments of (Xt) it follows that (Nt) also has independent and stationary increments. Using the process N, we can compute the jump sizes of X: Yn = XSn − XSn− where Sn = inf{t: Nt ≥ n}. Let us first see why the increments of X are independent conditionally on the trajectory of N. Let t > s and consider the following four events:

A1 ∈ σ(Xs)

A2 ∈ σ(Xt − Xs)

B1 ∈ σ(Nr, r ≤ s)

B2 ∈ σ(Nr − Ns, r > s)

such that P(B1) > 0 and P(B2) > 0. The independence of increments of X implies that processes (Xr − Xs, r > s) and (Xr, r ≤ s) are independent. Hence,

P[A1 ∩ B1 ∩ A2 ∩ B2] = P[A1 ∩ B1]P[A2 ∩ B2]

Moreover,

– A1 and B1 are independent from B2.

– A2 and B2 are independent from B1.

– B1 and B2 are independent from each other.

Therefore, the conditional probability of interest can be expressed as:

P[A1 ∩ A2 | B1 ∩ B2] = (P[A1 ∩ B1]P[A2 ∩ B2])/(P[B1]P[B2])

= (P[A1 ∩ B1 ∩ B2]P[A2 ∩ B1 ∩ B2])/(P[B1]2P[B2]2) = P[A1 | B1 ∩ B2]P[A2 | B1 ∩ B2]

This proves that Xt − Xs and Xs are independent conditionally on the trajectory of N. In particular, choosing B1 = {Ns = 1} and B2 = {Nt − Ns = 1}

we obtain that Y1 and Y2 are independent. Since we could have taken any number of increments of X and not just two of them, this proves that (Yi)i≥1 are independent. The jump sizes have the same law because the two-dimensional process (Xt, Nt) has stationary increments. Therefore, for every n ≥ 0 and for every s > h > 0,

E[ƒ(Xh) | Nh = 1, Ns − Nh = n] = E[ƒ(Xs+h – Xs) | Ns+h – Ns = 1, Ns – Nh = n],

where ƒ is any bounded Borel function. This entails that for every n ≥ 0, Y1 and Yn+2 have the same law.

Let (Xt)t≥0 be a compound Poisson process.

Independence of increments. Let 0 < r < s and let ƒ and g be bounded Borel functions on Rd. To ease the notation, we prove only that Xr is independent from Xs − Xr, but the same reasoning applies to any finite number of increments. We must show that

E[ƒ(Xr)g(Xs − Xr)] = E[ƒ(Xr)]E[g(Xs − Xr)]

From the representations Xr = ∑i=1Nr Yi and Xs − Xr = ∑i=Nr+1Ns Yi the following observations are made:

– Conditionally on the trajectory of Nt for t ∈ [0, s], Xr and Xs − Xr are independent because the first expression only depends on Yi for i ≤ Nr and the second expression only depends on Yi for i > Nr.
– The expectation E[ƒ(Xr) | Nt, t ≤ s] depends only on Nr and the expectation E[g(Xs − Xr) | Nt, t ≤ s] depends only on Ns − Nr.

On using the independence of increments of the Poisson process, we can write:

E[ƒ(Xr)g(Xs – Xr)] = E[E[ƒ(Xr)g(Xs – Xr) | Nt, t ≤ s]]

= E[E[ƒ(Xr) | Nt, t ≤ s] E[g(Xs – Xr) |  Nt, t ≤ s]]

= E[E[ƒ(Xr) | Nt, t ≤ s]] E[E[g(Xs – Xr) |  Nt, t ≤ s]]

= E[ƒ(Xr)] E[g(Xs – Xr)]

Stationarity of increments. Let 0 < r < s and let ƒ be a bounded Borel function.

E[ƒ(Xs – Xr)] = E[E[ƒ(∑i=Nr+1Ns Yi) | Nt, t ≤ s]]

= E[E[ƒ(∑i=1Ns-Nr Yi) | Nt, t ≤ s]] = E[E[ƒ(∑i=1Ns-r Yi) | Nt, t ≤ s]] = E[ƒ(Xs-r)]

Stochastic continuity. Xt only jumps if Nt does.

P(Ns → Nt as s → t) = 1

Hence, for every t > 0,

P(Xs → Xt as s → t) = 1

Since almost sure convergence entails convergence in probability, this implies stochastic continuity. Also, since any cadlag function may be approximated by a piecewise constant function, one may expect that general Lévy processes can be well approximated by compound Poisson ones and that by studying compound Poisson processes one can gain an insight into the properties of Lévy processes.

Stochasticities. Lévy processes. Part 2.

Define the characteristic function of Xt:

Φt(z) ≡ ΦXt(z) ≡ E[eiz.Xt], z ∈ Rd

For t, s ≥ 0, by writing Xt+s = Xs + (Xt+s − Xs) and using the fact that Xt+s − Xs is independent of Xs, we obtain that t ↦ Φt(z) is a multiplicative function.

Φt+s(z) = ΦXt+s(z) = ΦXs(z) ΦXt+s−Xs(z) = ΦXs(z) ΦXt(z) = Φs(z) Φt(z)

The stochastic continuity of t ↦ Xt implies in particular that Xs → Xt in distribution when s → t. Therefore, ΦXs(z) → ΦXt(z) when s → t, so t ↦ Φt(z) is a continuous function of t. Together with the multiplicative property Φs+t(z) = Φs(z).Φt(z), this implies that t ↦ Φt(z) is an exponential function.

Let (Xt)t≥0 be a Lévy process on Rd. ∃ a continuous function ψ : Rd → C called the characteristic exponent of X, such that:

E[eiz.Xt] = etψ(z), z ∈ Rd

ψ is the cumulant generating function of X1: ψ = ΨX1, and the cumulant generating function of Xt varies linearly in t: ΨXt = tΨX1 = tψ. The law of Xt is therefore determined by the knowledge of the law of X1: the only degree of freedom we have in specifying a Lévy process is to specify the distribution of Xt for a single time (say, t = 1).
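
Both the exponential form E[eiz.Xt] = etψ(z) and the multiplicative property can be checked by Monte Carlo. A sketch for a jump-diffusion Xt = γt + σWt + ∑i≤Nt Yi with Yi ~ N(0,1), for which the characteristic exponent is ψ(z) = iγz − σ2z2/2 + λ(e−z2/2 − 1); all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Jump diffusion X_t = gamma t + sigma W_t + sum_{i <= N_t} Y_i with
# Y_i ~ N(0, 1) and N_t ~ Poisson(lam t).  Parameter values are arbitrary.
gamma, sigma, lam = 0.2, 0.5, 3.0

def psi(z):
    # Characteristic exponent: Brownian part plus compound Poisson part,
    # lam * (E[exp(izY)] - 1), with E[exp(izY)] = exp(-z^2/2) for Y ~ N(0,1).
    return (1j * gamma * z - 0.5 * sigma**2 * z**2
            + lam * (np.exp(-0.5 * z**2) - 1.0))

def sample_X(t, n):
    W = rng.normal(0.0, np.sqrt(t), n)
    N = rng.poisson(lam * t, n)
    J = np.array([rng.normal(0.0, 1.0, k).sum() for k in N])
    return gamma * t + sigma * W + J

z, t, s = 1.3, 0.7, 0.4
X = sample_X(t + s, 100_000)
print("Monte Carlo E[exp(izX_{t+s})] ~", np.exp(1j * z * X).mean())
print("exp((t+s) psi(z))             =", np.exp((t + s) * psi(z)))
print("exp(t psi(z)) exp(s psi(z))   =", np.exp(t * psi(z)) * np.exp(s * psi(z)))
```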


Cadlag Stochasticities: Lévy Processes. Part 1.


A compound Poisson process with a Gaussian distribution of jump sizes, and a jump diffusion of a Lévy process with Gaussian component and finite jump intensity.

A cadlag stochastic process (Xt)t≥0 on (Ω,F,P) with values in Rd such that X0 = 0 is called a Lévy process if it possesses the following properties:

1. Independent increments: for every increasing sequence of times t0 . . . tn, the random variables Xt0, Xt1 − Xt0 , . . . , Xtn − Xtn−1 are independent.

2. Stationary increments: the law of Xt+h − Xt does not depend on t.

3. Stochastic continuity: ∀ε > 0, limh→0 P(|Xt+h − Xt| ≥ ε) = 0.

A sample function x on a well-ordered set T is cadlag if it is continuous from the right and limited from the left at every point. That is, for every t0 ∈ T, t ↓ t0 implies x(t) → x(t0), and for t ↑ t0, limt↑t0 x(t) exists, but need not be x(t0). A stochastic process X is cadlag if almost all its sample paths are cadlag.

The third condition does not imply in any way that the sample paths are continuous; indeed, it is verified by the Poisson process. It serves to exclude processes with jumps at fixed (nonrandom) times, which can be regarded as “calendar effects”, and means that for a given time t, the probability of seeing a jump at t is zero: discontinuities occur at random times.

If we sample a Lévy process at regular time intervals 0, ∆, 2∆, . . ., we obtain a random walk: defining Sn(∆) ≡ Xn∆, we can write Sn(∆) = ∑k=0n−1 Yk where Yk = X(k+1)∆ − Xk∆ are independent and identically distributed random variables whose distribution is the same as the distribution of X∆. Since this can be done for any sampling interval ∆, we see that by specifying a Lévy process one can specify a whole family of random walks Sn(∆).

Choosing n∆ = t, we see that for any t > 0 and any n ≥ 1, Xt = Sn(∆) can be represented as a sum of n independent and identically distributed random variables whose distribution is that of Xt/n: Xt can be “divided” into n independent and identically distributed parts. A distribution having this property is said to be infinitely divisible.

A probability distribution F on Rd is said to be infinitely divisible if for any integer n ≥ 2, ∃ n independent and identically distributed random variables Y1, …Yn such that Y1 + … + Yn has distribution F.

Since the distribution of a sum of independent and identically distributed terms is given by the convolution of the distributions of the summands, denoting by μ the distribution of the Yk, F = μ ∗ μ ∗ ··· ∗ μ is the nth convolution power of μ. So an infinitely divisible distribution can also be defined as a distribution F whose nth convolution root is still a probability distribution, for any n ≥ 2.


Thus, if X is a Lévy process, for any t > 0 the distribution of Xt is infinitely divisible. This puts a constraint on the possible choices of distributions for Xt: whereas the increments of a discrete-time random walk can have arbitrary distribution, the distribution of increments of a Lévy process has to be infinitely divisible.

The most common examples of infinitely divisible laws are: the Gaussian distribution, the gamma distribution, α-stable distributions and the Poisson distribution: a random variable having any of these distributions can be decomposed into a sum of n independent and identically distributed parts having the same distribution but with modified parameters. Conversely, given an infinitely divisible distribution F, it is easy to see that for any n ≥ 1, by chopping it into n independent and identically distributed components, we can construct a random walk model on a time grid with step size 1/n such that the law of the position at t = 1 is given by F. In the limit, this procedure can be used to construct a continuous-time Lévy process (Xt)t≥0 such that the law of X1 is given by F. Let (Xt)t≥0 be a Lévy process. Then for every t, Xt has an infinitely divisible distribution. Conversely, if F is an infinitely divisible distribution then ∃ a Lévy process (Xt) such that the distribution of X1 is given by F.
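
A concrete instance of this chopping construction, sketched in Python under the assumption F = Gamma(a, θ), whose nth convolution root is Gamma(a/n, θ):

```python
import numpy as np

rng = np.random.default_rng(7)

# F = Gamma(shape=a, scale=theta) is infinitely divisible: its n-th
# convolution root is Gamma(a/n, theta).  Chopping F into n iid pieces
# yields a random walk on the grid k/n whose position at t = 1 has law F.
a, theta, n = 2.0, 1.5, 1000

increments = rng.gamma(a / n, theta, size=n)   # iid draws from F^(*1/n)
path = np.cumsum(increments)                   # S_k plays the role of X_{k/n}

# Sanity check: endpoints of many such walks should match Gamma(a, theta).
endpoints = rng.gamma(a / n, theta, size=(5000, n)).sum(axis=1)
print("sample mean / var of X_1:", endpoints.mean(), endpoints.var())
print("Gamma(a, theta) mean/var:", a * theta, a * theta**2)
```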

Revisiting Financing Blue Economy


Blue Economy has suffered a definitional crisis ever since it started doing the rounds almost around the turn of the century. So much has it been plagued by this crisis, that even a working definition is acceptable only contextually, and is liable to paradigmatic shifts both littorally and political-economically. 

The United Nations defines Blue Economy as: 

A range of economic sectors and related policies that together determine whether the use of oceanic resources is sustainable. The “Blue Economy” concept seeks to promote economic growth, social inclusion, and the preservation or improvement of livelihoods while at the same time ensuring environmental sustainability of the oceans and coastal areas. 

This definition is subscribed to even by the World Bank, and has been commonly accepted as a standardized one since 2017. However, in 2014, the United Nations Conference on Trade and Development (UNCTAD) had described the Blue Economy as

The improvement of human well-being and social equity, while significantly reducing environmental risks and ecological scarcities…the concept of an oceans economy also embodies economic and trade activities that integrate the conservation and sustainable use and management of biodiversity including marine ecosystems, and genetic resources.

Preceding this by three years, the Pacific Small Islands Developing States (Pacific SIDS) referred to Blue Economy as the 

Sustainable management of ocean resources to support livelihoods, more equitable benefit-sharing, and ecosystem resilience in the face of climate change, destructive fishing practices, and pressures from sources external to the fisheries sector. 


As is noteworthy, these definitions across almost a decade show congruence and cohesion towards promoting economic growth, social inclusion and the preservation or improvement of livelihoods while ensuring environmental sustainability of oceanic and coastal areas. They differ markedly in domain, albeit only definitionally, for the concept from 2011 until its standardization in 2017 does not really knock out any of the diverse components, but rather adds on. Marine biotechnology and bioprospecting, seabed mining and extraction, aquaculture, and offshore renewable energy supplement the established traditional oceanic industries like fisheries, tourism, and maritime transportation into a giant financial and economic appropriation of resources the concept endorses and encompasses. But a term that threads through the above definitions is sustainability, which unfortunately happens to be another definitional dead-end. Still, mapping the contours of sustainability in a theoretical fashion would at least contextualize the working definition of Blue Economy, to which initiatives of financial investments, legal frameworks, ecological deflections, economic zones and trading lines, fisheries, biotechnology and bioprospecting could be approvingly applied. Though, as a caveat, such applications would be far from exhaustive, they at least potentially cohere onto underlying economic directions, opening up a spectrum of critiques.

If one were to follow global multinational institutions like the UN and the World Bank, prefixing sustainable to Blue Economy brings into perspective a coastal economy that balances itself with the long-term capacity of assets, goods and services and marine ecosystems, towards a global driver of economic, social and environmental prosperity accruing direct and indirect benefits to communities, both regionally and globally. Assuming this to be true, what guarantees financial investments as healthy, thus posing no risks to oceanic health and not rolling back such growth-led development into peril? This is the question that draws paramount importance, and is a hotbed for constructive critique of the whole venture: the question of finance, or financial viability for Blue Economy, or the viability thereof. What is seemingly the underlying principle of Blue Economy is the financialization of natural resources, which is nothing short of replacing environmental regulations with market-driven regulations. This commodification of the ocean is then packaged and traded on the markets, often amounting to transferring the stewardship of commons to financial interests. Marine ecology as a natural resource isn’t immune to commodification, and an array of financial agents are making it their indispensable destination, thrashing out new alliances converging around specific ideas about how maritime and coastal resources should be organized: to whose benefit, under which terms and to what end. There has been a systemic increase in financial speculation on commodities, mainly driven by the deregulation of derivative markets, the increasing involvement of investment banks, hedge funds and other institutional investors in commodity speculation, and the emergence of new instruments such as index funds and exchange-traded funds. Financial deregulation has successfully transformed commodities into financial assets, and has matured its penetration into commodity markets and their functioning. This maturity can be gauged from the fact that speculative capital is structurally intertwined with productive capital, which in the case of Blue Economy means commodities and natural resources, most generically.

But despite these fissures existing, the international organizations are relentlessly following up on attracting finances, and in a manner that could at best be said to follow principles of transparency, accountability, compliance and right to disclosure. The European Commission (EC) is partnering with World Wildlife Fund (WWF) in bringing together public and private financing institutions to develop a set of Principles of Sustainable Investment within a Blue Economy Development Framework. But, the question remains: how stringently are these institutions tied to adhering to these Principles? 

Investors and policymakers are increasingly turning to the ocean for new opportunities and resources. According to OECD projections, by 2030 the “blue economy” could outperform the growth of the global economy as a whole, both in terms of value added and employment. But to get there, there will need to be a framework for ocean-related investment that is supported by policy incentives along the most sustainable pathways. Now, this might sound a bit rhetorical, and thus calls for unraveling. The international community has time and again reaffirmed its strong commitment to conserve and sustainably use the ocean and its resources, for which formations like the G7 and G20 acknowledge scaling up finance and ensuring the sustainability of such investments as fundamental to meeting their needs. Investment capital, both public and private, is therefore fundamental to unlocking the Blue Economy. Even if there is a growing recognition that following a “business as usual” trajectory neglects impacts on marine ecosystems and entails risks, these global bodies are of the view that investment decisions that incorporate sustainability elements ensure environmentally, economically and socially sustainable outcomes, securing the long-term health and integrity of the oceans and furthering the shared social, ecological and economic functions that depend on them. That financial institutions and markets can play this pivotal role only complicates the rhetorics further, even if financial markets and institutions expressly intend to implement the Sustainable Development Goals (SDGs), in particular Goal 14, which deals with conservation and sustainable use of the oceans, and even if such intentions are to be compliant with the IFC Performance Standards and the EIB Environmental and Social Principles and Standards.

So far, what is being seen is small ticket size deals, but there is a potential that this will shift on its axis. With mainstream banking getting engaged, capital flows will follow the projects, and thus the real challenge lies in building the pipeline. But here is a catch: there might be private capital in plenty seeking impact solutions, and financing needs from projects on the ground, but private capital is seeking private returns, and the majority of ocean-related projects are not private but public goods. For public finance, there is an opportunity to allocate more proceeds to sustainable ocean initiatives through a bond route, such as sovereign and municipal bonds, in order to finance coastal resilience projects. But such a route could also encounter a dead-end, in that many of the countries that are ripe for coastal infrastructure are emerging economies and would thus incur a high cost of funding. A de-risking is possible if institutions like the World Bank or the Overseas Private Investment Corporation undertake credit enhancements, a high probability considering these institutions have been engineering the Blue Economy on a priority basis. Global banks are contenders for financing the Blue Economy because of their geographic scope, but they are then also likely to be exposed to a new playing field. The largest economies by Exclusive Economic Zones, which are sea zones determined by the UN, don’t always stand out as the world’s largest economies, a fact that is liable to draw in domestic banks to collaborate, based on incentives offered, to be part of the solution. A significant challenge for the private sector will be to find enough cash-flow generating projects to bundle them into a liquid, at-scale investment vehicle. One way of resolving this challenge is by creating a specialized financial institution, like an Ocean Sustainability Bank, which could be modeled on the lines of the European Bank for Reconstruction and Development (EBRD). The plus envisaged by such a creation is arriving at scale rather quickly. An example is offering a larger institutional-sized approach by considering a coastal area as a single investment zone, thus bringing in an integrated infrastructure-based financing approach. With such an approach, insurance companies would be attracted by innovative financing for coastal resiliency, which is part and parcel of climate change concerns, food security, health, poverty reduction and livelihoods. Projects having high social impact but low/no Internal Rate of Return (IRR) may be provided funding in convergence with Governmental schemes. IRR is a metric used in capital budgeting to estimate the profitability of potential investments: it is the discount rate that makes the net present value (NPV) of all cash flows from a particular project equal to zero, where NPV is the difference between the present value of cash inflows and the present value of cash outflows over a period of time. IRR is sometimes referred to as “economic rate of return” or “discounted cash flow rate of return”; the use of “internal” refers to the omission of external factors, such as the cost of capital or inflation, from the calculation. The biggest concern, however, appears in the form of the immaturity of financial markets in emerging economies, which are purported to be major beneficiaries of the Blue Economy.
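
For concreteness, the NPV/IRR relationship just described can be computed directly; below is a minimal sketch with a purely hypothetical project cash-flow profile (all numbers invented for illustration):

```python
import numpy as np

def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at period t."""
    t = np.arange(len(cash_flows))
    return np.sum(np.asarray(cash_flows) / (1.0 + rate) ** t)

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection: the rate where NPV = 0."""
    f_lo = npv(lo, cash_flows)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = npv(mid, cash_flows)
        if abs(f_mid) < tol:
            return mid
        if (f_lo > 0) == (f_mid > 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical coastal-resilience project: 100 upfront, 30 per year for 5 years.
flows = [-100, 30, 30, 30, 30, 30]
print(f"IRR ~ {irr(flows):.4%}, NPV at 10% = {npv(0.10, flows):.2f}")
```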

The question then is: how viable or sustainable are these financial interventions? Financialization produces effects which can create long-term trends (such as those on functional income distribution) but can also change across different periods of economic growth, slowdown and recession. Interpreting the implications of financialization for sustainability, therefore, requires a methodologically diverse and empirical dual-track approach which combines different methods of investigation. Even times of prosperity, despite their fragile and vulnerable nature, can endure for several years before collapsing due to high levels of indebtedness, which in turn amplify the real effects of a financial crisis and hinder economic growth. Things get more complicated when financialization interferes with the environment and natural resources, for then the losses are not merely on a financial platform alone. Financialization has played a significant role in the recent price shocks in food and energy markets, while the wave of speculative investment in natural resources has produced, and is likely to keep producing, perverse environmental and social impacts. Moreover, the so-called financialization of environmental conservation tends to enhance the financial value of environmental resources, but it is selective: not all stakeholders have the same opportunities, and not all uses and values of natural resources and services are accounted for.

Incomplete Markets and Calibrations for Coherence with Hedged Portfolios. Thought of the Day 154.0

 


In complete market models such as the Black-Scholes model, probability does not really matter: the “objective” evolution of the asset is only there to define the set of “impossible” events and serves to specify the class of equivalent measures. Thus, two statistical models P1 ∼ P2 with equivalent measures lead to the same option prices in a complete market setting.

This is not true anymore in incomplete markets: probabilities matter and model specification has to be taken seriously since it will affect hedging decisions. This situation is more realistic but also more challenging and calls for an integrated approach between option pricing methods and statistical modeling. In incomplete markets, not only does probability matter but attitudes to risk also matter: utility based methods explicitly incorporate these into the hedging problem via utility functions. While these methods are focused on hedging with the underlying asset, common practice is to use liquid call/put options to hedge exotic options. In incomplete markets, options are not redundant assets; therefore, if options are available as hedging instruments they can and should be used to improve hedging performance.

While the lack of liquidity in the options market prevents one, in practice, from using dynamic hedges involving options, options are commonly used for static hedging: call options are frequently used for dealing with volatility or convexity exposures and for hedging barrier options.

What are the implications of hedging with options for the choice of a pricing rule? Consider a contingent claim H and assume that we have as hedging instruments a set of benchmark options with prices Ci, i = 1 . . . n and terminal payoffs Hi, i = 1 . . . n. A static hedge of H is a portfolio composed from the options Hi, i = 1 . . . n and the numeraire, in order to match as closely as possible the terminal payoff of H:

H = V0 + ∑i=1n xiHi + ∫0T φdS + ε —– (1)

where ε is a hedging error representing the nonhedgeable risk. Typically the Hi are payoffs of call or put options, which cannot be replicated using the underlying, so adding them to the hedge portfolio increases the span of hedgeable claims and reduces residual risk.
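
One simple way to compute such a static hedge numerically is to project the target payoff H onto the span of the benchmark payoffs Hi by least squares, which minimizes the variance of the residual ε in (1). A sketch under invented assumptions (a lognormal scenario cloud for ST, a capped call as the target claim, vanilla calls as benchmarks):

```python
import numpy as np

rng = np.random.default_rng(3)

# Terminal asset prices under some scenario-generating model (here a
# hypothetical lognormal cloud; any simulated S_T sample would do).
S_T = 100.0 * np.exp(rng.normal(-0.02, 0.2, 50_000))

# Target claim H: a capped call, payoff min(max(S_T - 100, 0), 20).
H = np.clip(S_T - 100.0, 0.0, 20.0)

# Benchmark instruments H_i: the numeraire plus vanilla calls at a few strikes.
strikes = [90.0, 100.0, 110.0, 120.0]
basis = np.column_stack([np.ones_like(S_T)] +
                        [np.maximum(S_T - K, 0.0) for K in strikes])

# Static hedge: choose (V_0, x_1..x_n) minimizing E[(H - V_0 - sum x_i H_i)^2],
# i.e., an ordinary least-squares projection of H on the span of the H_i.
coef, *_ = np.linalg.lstsq(basis, H, rcond=None)
residual = H - basis @ coef
print("hedge weights:", np.round(coef, 4))
print("residual std (nonhedgeable part epsilon):", residual.std())
```

Since a capped call is exactly a call spread, the regression recovers weights close to +1 and −1 on the 100 and 120 strikes and the residual collapses to (numerical) zero; for a genuinely exotic H, the residual standard deviation quantifies the nonhedgeable part.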

Consider a pricing rule Q. Assume that EQ[ε] = 0 (otherwise EQ[ε] can be added to V0). Then the claim H is valued under Q as:

e-rTEQ[H] = V0 + ∑i=1n xi e-rTEQ[Hi] —– (2)

since the stochastic integral term, being a Q-martingale, has zero expectation. On the other hand, the cost of setting up the hedging portfolio is:

V0 + ∑i=1n xi Ci —– (3)

So the value of the claim given by the pricing rule Q corresponds to the cost of the hedging portfolio if the model prices of the benchmark options Hi correspond to their market prices Ci:

∀i = 1, …, n

e-rTEQ[Hi] = Ci —– (4)

This condition is called calibration: a pricing rule Q verifying (4) is said to be calibrated to the option prices Ci, i = 1, . . . , n. This condition is necessary to guarantee the coherence between model prices and the cost of hedging with portfolios: if the model is not calibrated, then the model price for a claim H may have no relation to the effective cost of hedging it using the available options Hi. If a pricing rule Q is specified in an ad hoc way, the calibration conditions will not be verified; one way to ensure them is to incorporate them as constraints in the choice of the pricing measure Q.
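
In the simplest one-parameter case, calibration reduces to root-finding: pick the model parameter so that the model price e-rTEQ[Hi] matches the quoted Ci. A sketch assuming a Black-Scholes pricing rule and a single hypothetical benchmark quote (this is just the familiar implied-volatility computation, here by bisection):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes model price e^{-rT} E^Q[(S_T - K)^+]."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def calibrate_sigma(C_market, S0, K, r, T, lo=1e-4, hi=3.0):
    """Find sigma with bs_call(...) = C_market (model price is monotone in sigma)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S0, K, r, mid, T) < C_market:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical benchmark: the market quotes C = 10.45 for an at-the-money call.
sigma_star = calibrate_sigma(10.45, S0=100.0, K=100.0, r=0.05, T=1.0)
print(f"calibrated sigma = {sigma_star:.4f}")
print(f"model price at calibrated sigma: {bs_call(100, 100, 0.05, sigma_star, 1.0):.4f}")
```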

Self-Financing and Dynamically Hedged Portfolio – Robert Merton’s Option Pricing. Thought of the Day 153.0


As an alternative to the riskless hedging approach, Robert Merton derived the option pricing equation via the construction of a self-financing and dynamically hedged portfolio containing the risky asset, the option and the riskless asset (in the form of a money market account). Let QS(t) and QV(t) denote the number of units of asset and option in the portfolio, respectively, and MS(t) and MV(t) denote the currency value of QS(t) units of asset and QV(t) units of option, respectively. The self-financing portfolio is set up with zero initial net investment cost and no additional funds are added or withdrawn afterwards. The additional units acquired for one security in the portfolio are completely financed by the sale of another security in the same portfolio. The portfolio is said to be dynamic since its composition is allowed to change over time. For notational convenience, we drop the subscript t for the asset price process St, the option value process Vt and the standard Brownian process Zt. The portfolio value at time t can be expressed as

Π(t) = MS(t) + MV(t) + M(t) = QS(t)S + QV(t)V + M(t) —– (1)

where M(t) is the currency value of the riskless asset invested in a riskless money market account. Supposing the asset price process is governed by the geometric Brownian motion dS/S = μ dt + σ dZ, we apply the Ito lemma to obtain the differential of the option value V as:

dV = ∂V/∂t dt + ∂V/∂S dS + σ2/2 S22V/∂S2 dt = (∂V/∂t + μS ∂V/∂S + σ2/2 S22V/∂S2)dt + σS ∂V/∂S dZ —– (2)

If we formally write the stochastic dynamics of V as

dV/V = μV dt + σV dZ —– (3)

then μV and σV are given by

μV = (∂V/∂t + μS ∂V/∂S + σ2/2 S22V/∂S2)/V —– (4)

and

σV = (σS ∂V/∂S)/V —– (5)

The instantaneous currency return dΠ(t) of the above portfolio is attributed to the differential price changes of asset and option and interest accrued, and the differential changes in the amount of asset, option and money market account held. The differential of Π(t) is computed as:

dΠ(t) = [QS(t) dS + QV(t) dV + rM(t) dt] + [S dQS(t) + V dQV(t) + dM(t)] —– (6)

where rM(t)dt gives the interest amount earned from the money market account over dt and dM(t) represents the change in the money market account held due to net currency gained/lost from the sale of the underlying asset and option in the portfolio. And if the portfolio is self-financing, the sum of the last three terms in the above equation is zero. The instantaneous portfolio return dΠ(t) can then be expressed as:

dΠ(t) = QS(t) dS + QV(t) dV + rM(t) dt = MS(t) dS/S + MV(t) dV/V +  rM(t) dt —– (7)

Eliminating M(t) between (1) and (7) and expressing dS/S and dV/V in terms of their stochastic dynamics, we obtain

dΠ(t) = [(μ − r)MS(t) + (μV − r)MV(t)]dt + [σMS(t) + σV MV(t)]dZ —– (8)

How can we make the above self-financing portfolio instantaneously riskless so that its return is non-stochastic? This can be achieved by choosing an appropriate proportion of asset and option according to

σMS(t) + σV MV(t) = σS QS(t) + σS ∂V/∂S QV(t) = 0

that is, the number of units of asset and option in the self-financing portfolio must be in the ratio

QS(t)/QV(t) = -∂V/∂S —– (9)

at all times. The above ratio is time dependent, so continuous readjustment of the portfolio is necessary. We now have a dynamic replicating portfolio that is riskless and requires zero initial net investment, so the non-stochastic portfolio return dΠ(t) must be zero.

(8) becomes

0 = [(μ − r)MS(t) + (μV − r)MV(t)]dt

Substituting the ratio factor in the above equation, we get

(μ − r)S ∂V/∂S = (μV − r)V —– (10)

Now substituting μV from (4) into the above equation, we get the Black-Scholes equation for V,

∂V/∂t + σ2/2 S22V/∂S2 + rS ∂V/∂S – rV = 0

Suppose we take QV(t) = −1 in the above dynamically hedged self-financing portfolio, that is, the portfolio always shorts one unit of the option. By the ratio factor, the number of units of risky asset held is always kept at the level of ∂V/∂S units, which changes continuously over time. To maintain a self-financing hedged portfolio that constantly keeps shorting one unit of the option, we need to have both the underlying asset and the riskfree asset (money market account) in the portfolio. The net cash flow resulting from the buying/selling of the risky asset in the dynamic procedure of maintaining ∂V/∂S units of the risky asset is siphoned to the money market account.
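
This short-one-option portfolio can be simulated directly: rebalance to ∂V/∂S shares at each step and finance every trade through the money market account. A discrete-time sketch with daily rebalancing and invented parameters (note that the asset drift μ is deliberately set different from r, since it should not affect the hedge):

```python
import numpy as np
from math import log, sqrt, exp, erf

rng = np.random.default_rng(11)

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_price_delta(S, K, r, sigma, tau):
    """Black-Scholes call value V and hedge ratio dV/dS = N(d1)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2), norm_cdf(d1)

# Illustrative parameters; mu != r on purpose, as the drift should not matter.
S0, K, r, mu, sigma, T, n = 100.0, 100.0, 0.05, 0.12, 0.2, 1.0, 252
dt = T / n

S = S0
V0, delta = call_price_delta(S0, K, r, sigma, T)
cash = V0 - delta * S0            # short 1 call (premium in), hold delta shares
for k in range(1, n + 1):
    S *= exp((mu - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * rng.normal())
    cash *= exp(r * dt)           # money market account accrues interest
    if k < n:
        _, new_delta = call_price_delta(S, K, r, sigma, T - k * dt)
        cash -= (new_delta - delta) * S   # self-financing rebalance
        delta = new_delta

hedge_error = cash + delta * S - max(S - K, 0.0)
print("terminal hedging error per option:", hedge_error)
```

With daily rebalancing the terminal error is small relative to the option premium and shrinks as the rebalancing frequency increases, consistent with the continuous-trading idealization.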

Derivative Pricing Theory: Call, Put Options and “Black, Scholes'” Hedged Portfolio. Thought of the Day 152.0


Fischer Black and Myron Scholes revolutionized the pricing theory of options by showing how to hedge continuously the exposure on the short position of an option. Consider the writer of a call option on a risky asset. S/he is exposed to the risk of unlimited liability if the asset price rises above the strike price. To protect the writer’s short position in the call option, s/he should consider purchasing a certain amount of the underlying asset so that the loss in the short position in the call option is offset by the long position in the asset. In this way, the writer is adopting the hedging procedure. A hedged position combines an option with its underlying asset so as to achieve the goal that either the asset compensates the option against loss or the other way round. By adjusting the proportion of the underlying asset and option continuously in a portfolio, Black and Scholes demonstrated that investors can create a riskless hedging portfolio where the risk exposure associated with the stochastic asset price is eliminated. In an efficient market with no riskless arbitrage opportunity, a riskless portfolio must earn an expected rate of return equal to the riskless interest rate.

Black and Scholes made the following assumptions on the financial market.

  1. Trading takes place continuously in time.
  2. The riskless interest rate r is known and constant over time.
  3. The asset pays no dividend.
  4. There are no transaction costs in buying or selling the asset or the option, and no taxes.
  5. The assets are perfectly divisible.
  6. There are no penalties to short selling and the full use of proceeds is permitted.
  7. There are no riskless arbitrage opportunities.

The stochastic process of the asset price St is assumed to follow the geometric Brownian motion

dSt/St = μ dt + σ dZt —– (1)

where μ is the expected rate of return, σ is the volatility and Zt is the standard Brownian process. Both μ and σ are assumed to be constant. Consider a portfolio that involves short selling of one unit of a call option and long holding of Δt units of the underlying asset. The portfolio value Π (St, t) at time t is given by

Π = −c + Δt St —– (2)

where c = c(St, t) denotes the call price. Note that Δt changes with time t, reflecting the dynamic nature of hedging. Since c is a stochastic function of St, we apply the Ito lemma to compute its differential as follows:

dc = ∂c/∂t dt + ∂c/∂St dSt + σ2/2 St2 ∂2c/∂St2 dt

such that

-dc + Δt dSt = (-∂c/∂t – σ2/2 St22c/∂St2)dt + (Δt – ∂c/∂St)dSt

= [-∂c/∂t – σ2/2 St22c/∂St2 + (Δt – ∂c/∂St)μSt]dt + (Δt – ∂c/∂St)σSt dZt

The cumulative financial gain on the portfolio at time t is given by

G(Π(St, t)) = ∫0t -dc + ∫0t Δu dSu

= ∫0t [-∂c/∂u – σ2/2 Su22c/∂Su2 + (Δu – ∂c/∂Su)μSu]du + ∫0t (Δu – ∂c/∂Su)σSu dZu —– (3)

The stochastic component of the portfolio gain stems from the last term, ∫0t (Δu – ∂c/∂Su)σSu dZu. Suppose we adopt the dynamic hedging strategy by choosing Δu = ∂c/∂Su at all times u < t; then the financial gain becomes deterministic at all times. By virtue of no arbitrage, the financial gain should be the same as the gain from investing in the riskfree asset with a dynamic position whose value equals -c + Su ∂c/∂Su. The deterministic gain from this dynamic position of riskless asset is given by

Mt = ∫0t r(-c + Su ∂c/∂Su)du —– (4)

By equating these two deterministic gains, G(Π (St, t)) and Mt, we have

-∂c/∂u – σ2/2 Su22c/∂Su2 = r(-c + Su ∂c/∂Su), 0 < u < t

which is satisfied for any asset price S if c(S, t) satisfies the equation

∂c/∂t + σ2/2 S22c/∂S2 + rS ∂c/∂S – rc = 0 —– (5)

This parabolic partial differential equation is called the Black–Scholes equation. Strangely, the parameter μ, which is the expected rate of return of the asset, does not appear in the equation.

To complete the formulation of the option pricing model, let’s prescribe the auxiliary condition. The terminal payoff at time T of the call with strike price X is translated into the following terminal condition:

c(S, T ) = max(S − X, 0) —– (6)

for the differential equation.

Since both the equation and the auxiliary condition do not contain μ, one concludes that the call price does not depend on the actual expected rate of return of the asset price. The option pricing model involves five parameters: S, T, X, r and σ. Except for the volatility σ, all others are directly observable parameters. The independence of the pricing model from μ is related to the concept of risk neutrality. In a risk neutral world, investors do not demand extra returns above the riskless interest rate for bearing risks. This is in contrast to usual risk averse investors who would demand extra returns above r for risks borne in their investment portfolios. Apparently, the option is priced as if the rates of return on the underlying asset and the option are both equal to the riskless interest rate. This risk neutral valuation approach is viable if the risks from holding the underlying asset and option are hedgeable.
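
Risk-neutral valuation is easy to illustrate numerically: simulate ST with drift r in place of μ, discount the expected payoff at r, and compare with the closed-form solution of (5) subject to (6). A sketch with arbitrary parameter values:

```python
import numpy as np
from math import log, sqrt, exp, erf

rng = np.random.default_rng(5)

S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.25, 0.5   # arbitrary values

# Risk-neutral valuation: simulate S_T with drift r (mu is irrelevant)
# and discount the expected payoff at the riskless rate.
Z = rng.normal(size=1_000_000)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
mc_price = exp(-r * T) * np.maximum(S_T - K, 0.0).mean()

# Closed-form solution of the Black-Scholes equation with payoff (6).
N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * N(d1) - K * exp(-r * T) * N(d2)

print(f"Monte Carlo: {mc_price:.4f}   closed form: {bs_price:.4f}")
```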

The governing equation for a put option can be derived similarly and the same Black–Scholes equation is obtained. Let V (S, t) denote the price of a derivative security with dependence on S and t, it can be shown that V is governed by

∂V/∂t + σ2/2 S22V/∂S2 + rS ∂V/∂S – rV = 0 —– (7)

The price of a particular derivative security is obtained by solving the Black–Scholes equation subject to an appropriate set of auxiliary conditions that model the corresponding contractual specifications in the derivative security.

The original derivation of the governing partial differential equation by Black and Scholes focuses on the financial notion of riskless hedging but misses the precise analysis of the dynamic change in the value of the hedged portfolio. The inconsistencies in their derivation stem from the assumption of keeping the number of units of the underlying asset in the hedged portfolio to be instantaneously constant. They take the differential change of portfolio value Π to be

dΠ = −dc + Δt dSt,

which misses the effect arising from the differential change in Δt. The ability to construct a perfectly hedged portfolio relies on the assumption of continuous trading and a continuous asset price path. It has been commonly agreed that the assumed geometric Brownian process of the asset price may not truly reflect the actual behavior of the asset price process. The asset price may exhibit jumps upon the arrival of sudden news in the financial market. The interest rate is widely recognized to be fluctuating over time in an irregular manner rather than being constant. For an option on a risky asset, the interest rate appears only in the discount factor, so the assumption of a constant/deterministic interest rate is quite acceptable for a short-lived option. The Black-Scholes pricing approach assumes continuous hedging at all times. In the real world of trading with transaction costs, this would lead to infinite transaction costs in the hedging procedure.

Philosophical Equivariance – Sewing Holonomies Towards Equal Trace Endomorphisms.

In d-dimensional topological field theory one begins with a category S whose objects are oriented (d − 1)-manifolds and whose morphisms are oriented cobordisms. Physicists say that a theory admits a group G as a global symmetry group if G acts on the vector space associated to each (d−1)-manifold, and the linear operator associated to each cobordism is a G-equivariant map. When we have such a “global” symmetry group G we can ask whether the symmetry can be “gauged”, i.e., whether elements of G can be applied “independently” – in some sense – at each point of space-time. Mathematically the process of “gauging” has a very elegant description: it amounts to extending the field theory functor from the category S to the category SG whose objects are (d − 1)-manifolds equipped with a principal G-bundle, and whose morphisms are cobordisms with a G-bundle. We regard S as a subcategory of SG by equipping each (d − 1)-manifold S with the trivial G-bundle S × G. In SG the group of automorphisms of the trivial bundle S × G contains G, and so in a gauged theory G acts on the state space H(S): this should be the original “global” action of G. But the gauged theory has a state space H(S,P) for each G-bundle P on S: if P is non-trivial one calls H(S,P) a “twisted sector” of the theory. In the case d = 2, when S = S1 we have the bundle Pg → S1 obtained by attaching the ends of [0,2π] × G via multiplication by g. Any bundle is isomorphic to one of these, and Pg′ is isomorphic to Pg iff g′ is conjugate to g. But note that the state space depends on the bundle and not just its isomorphism class, so we have a twisted sector state space Cg = H(S,Pg) labelled by a group element g rather than by a conjugacy class.

We shall call a theory defined on the category SG a G-equivariant Topological Field Theory (TFT). It is important to distinguish the equivariant theory from the corresponding “gauged theory”. In physics, the equivariant theory is obtained by coupling to nondynamical background gauge fields, while the gauged theory is obtained by “summing” over those gauge fields in the path integral.

An alternative and equivalent viewpoint which is especially useful in the two-dimensional case is that SG is the category whose objects are oriented (d − 1)-manifolds S equipped with a map p : S → BG, where BG is the classifying space of G. In this viewpoint we have a bundle over the space Map(S,BG) whose fibre at p is Hp. To say that Hp depends only on the G-bundle p∗EG on S pulled back from the universal G-bundle EG on BG by p is the same as to say that the bundle on Map(S,BG) is equipped with a flat connection allowing us to identify the fibres at points in the same connected component by parallel transport; for the set of bundle isomorphisms p0∗EG → p1∗EG is the same as the set of homotopy classes of paths from p0 to p1. When S = S1 the connected components of the space of maps correspond to the conjugacy classes in G: each bundle Pg corresponds to a specific point pg in the mapping space, and a group element h defines a specific path from pg to phgh−1.

G-equivariant topological field theories are examples of “homotopy topological field theories”. We shall use Vladimir Turaev’s two main results: first, an attractive generalization of the theorem that a two-dimensional TFT “is” a commutative Frobenius algebra, and, secondly, a classification of the ways of gauging a given global G-symmetry of a semisimple TFT.


Definition of the product in the G-equivariant closed theory. The heavy dot is the basepoint on S1. To specify the morphism unambiguously we must indicate consistent holonomies along a set of curves whose complement consists of simply connected pieces. These holonomies are always along paths between points where by definition the fibre is G. This means that the product is not commutative. We need to fix a convention for holonomies of a composition of curves, i.e., whether we are using left or right path-ordering. We will take h(γ1 ◦ γ2) = h(γ1) · h(γ2).

A G-equivariant TFT gives us for each element g ∈ G a vector space Cg, associated to the circle equipped with the bundle pg whose holonomy is g. The usual pair-of-pants cobordism, equipped with the evident G-bundle which restricts to pg1 and pg2 on the two incoming circles, and to pg1g2 on the outgoing circle, induces a product

Cg1 ⊗ Cg2 → Cg1g2 —– (1)


making C := ⊕g∈G Cg into a G-graded algebra. Also there is a trace θ: C1 → C defined by the disk diagram with one ingoing circle. The holonomy around the boundary of the disk must be 1. Making the standard assumption that the cylinder corresponds to the unit operator we obtain a non-degenerate pairing

Cg ⊗ Cg−1 → C

A new element in the equivariant theory is that G acts as an automorphism group on C. That is, there is a homomorphism α : G → Aut(C) such that

αh : Cg → Chgh−1 —– (2)

Diagrammatically, αh is defined by the surface in the figure immediately above. Now let us note some properties of α. First, if φ ∈ Ch then αh(φ) = φ. The reason for this is shown diagrammatically in the figure below.


If the holonomy along path P2 is h then the holonomy along path P1 is 1. However, a Dehn twist around the inner circle maps P1 into P2. Therefore, αh(φ) = α1(φ) = φ, if φ ∈ Ch.

Next, while C is not commutative, it is “twisted-commutative” in the following sense. If φ1 ∈ Cg1 and φ2 ∈ Cg2 then

αg21) φ2 = φ2φ1 —– (3)

The necessity of this condition is illustrated in the figure below.


The trace of the identity map of Cg is the partition function of the theory on a torus with the bundle with holonomy (g,1). Cutting the torus the other way, we see that this is the trace of αg on C1. Similarly, by considering the torus with a bundle with holonomy (g,h), where g and h are two commuting elements of G, we see that the trace of αg on Ch is the trace of αh on Cg−1. But we need a strengthening of this property. Even when g and h do not commute we can form a bundle with holonomy (g,h) on a torus with one hole, around which the holonomy will be c = hgh−1g−1. We can cut this torus along either of its generating circles to get a cobordism operator from Cc ⊗ Ch to Ch or from Cg−1 ⊗ Cc to Cg−1. For ψ ∈ Chgh−1g−1, let us introduce two linear transformations Lψ, Rψ associated to left- and right-multiplication by ψ. On the one hand, Lψαg : φ ↦ ψαg(φ) is a map Ch → Ch. On the other hand Rψαh : φ ↦ αh(φ)ψ is a map Cg−1 → Cg−1. The last sewing condition states that these two endomorphisms must have equal traces:

TrCh [Lψαg] = TrCg−1 [Rψαh] —– (4)


(4) was taken by Turaev as one of his axioms. It can, however, be reexpressed in a way that we shall find more convenient. Let ∆g ∈ Cg ⊗ Cg−1 be the “duality” element corresponding to the identity cobordism of (S1,Pg) with both ends regarded as outgoing. We have ∆g = ∑ ξi ⊗ ξ^i, where the ξi and ξ^i run through dual bases of Cg and Cg−1. Let us also write

h = ∑ ηi ⊗ η^i ∈ Ch ⊗ Ch−1. Then (4) is easily seen to be equivalent to

∑ αhi) ξ^i = ∑ ηi αg(η^i) —– (5)

in which both sides are elements of Chgh−1g−1.
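
These axioms admit a standard concrete model: the group algebra C = C[G], graded by Cg = C·g, with αh acting by conjugation and the trace picking out the coefficient of the identity, so that ∆g = g ⊗ g−1. Under those assumptions both the twisted commutativity (3) and the sewing condition (5) reduce to identities between group elements, which the Python sketch below checks exhaustively for G = S3:

```python
from itertools import permutations

# G = S3; elements are tuples p with p[i] the image of i.
G = list(permutations(range(3)))

def mul(p, q):                    # composition: (p * q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inv(p):
    out = [0, 0, 0]
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

def alpha(h, g):                  # alpha_h : C_g -> C_{hgh^-1} on basis elements
    return mul(mul(h, g), inv(h))

# Twisted commutativity (3): alpha_{g2}(phi_1) phi_2 = phi_2 phi_1.
assert all(mul(alpha(g2, g1), g2) == mul(g2, g1) for g1 in G for g2 in G)

# Sewing condition (5) with Delta_g = g (x) g^-1 and Delta_h = h (x) h^-1:
# alpha_h(g) g^-1 = h alpha_g(h^-1), both sides lying in C_{hgh^-1g^-1}.
assert all(mul(alpha(h, g), inv(g)) == mul(h, alpha(g, inv(h)))
           for g in G for h in G)

print("C[S3]: twisted commutativity (3) and sewing condition (5) verified.")
```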

Playing in the Dark: How Online Games Provide Shelter for Criminal Organizations in the Surveillance Age?


The “architecture of the Internet also lends itself to vulnerabilities and makes it more difficult to wiretap” on a manageable scale. Expanding surveillance programs like CALEA (the Communications Assistance for Law Enforcement Act) to the Internet would consequently “require a different and more complicated protocol, which would create serious security problems.” Furthermore, because “[t]he Internet is easier to undermine than a telephone network due to its ‘flexibility and dynamism,'” incorporating means for surveying its use would “build security vulnerabilities into the communication protocols.” Attempts to add similar features in the past have “resulted in new, easily exploited security flaws rather than better law enforcement access.”

Moreover, Internet surveillance would likely cost a significant amount of money, much of which would be foisted upon online companies themselves. Consequently, not only would expanded surveillance lead to a “technology and security headache,” but the “hassles of implementation” and “the investigative burden and costs will shift to providers.”

Despite those concerns, however, online surveillance might be less costly and more effective than traditional wiretapping. Online surveillance allows for large quantities of data to be “gathered at minimal cost, either as it is produced or at some time later.” Additionally, though the development of computerized surveillance systems may be difficult, once created, they “may be duplicated at a fraction of the cost.” Further, online surveillance potentially makes identifying users easier because the content discovered often includes identifying information, like IP addresses. Finally, electronic surveillance may prove efficient for law enforcement because it does not require “contemporaneous listening.” Unlike traditional wiretapping, where agents listen to conversations live and stop recording if the conversations do not contain criminal content, electronic surveillance seems to require only “after-the-fact filtering,” which eliminates the need to have an agent monitor communications in real time. Thus, because online surveillance “offers cheaper, richer, and more reliable information with less risk,” its use might be more effective than other evidence-gathering techniques, especially “to the extent that law enforcement agents [can] focus their efforts on a particular person who spends time online.”

PLAYING IN THE DARK by Mathew Ruskin