The modern usage of the term infrastructure has gone through a series of permutations, from an early emphasis on logistics, organisation, and the expanding scope of technological networks to more recent interest in its intersections with landscape, ecology, and alternative theorisations of urban materiality. This piece explores questions relating to the meaning and conceptualization of urban infrastructures. The question of infrastructure will serve as an entry point for wider reflections on the changing experience of nature, modernity, and urban space.

# The Banking Business…Note Quote

Why is lending indispensable to banking? This not-so-new question has gathered steam, especially in the wake of the 2007-08 crisis. In India, the question has become quite a staple of CSOs purportedly carrying out research and analysis into what has, wrongly, come to be considered an offshoot of neoliberal capitalism: favoring cronyism on the one hand, and marginalizing the priority-sector focus of nationalized banks on the other. Though it would be a bit far-fetched to call this analysis mushrooming on artificially-tilled ground, it remains unjustified, for the leaps such analyses assume do not exist. The purpose of this piece is precisely to demystify, and to act as a corrective to, such erroneous thinking feeding activism.

The idea is to launch from the importance of lending practices to banking, and why, if such practices weren’t the norm, banking as a business would falter. Monetary and financial systems are creations of double-entry accounting, in that when banks lend, the process creates a matrix (or matrices) of new assets and new liabilities. The monetary system is a counterfactual: a bookkeeping mechanism for the intermediation of real economic activity, giving a semblance of reality to finance capitalism in substance and form. Say a bank A lends to a borrower. By this process, a new asset and a new liability are created for A: there is a debit under bank assets, and a simultaneous credit to the borrower’s account. These accounting entries enlarge both the bank’s and the borrower’s respective balance sheets, making the operation different from opening a bank account funded by a deposit. The bank now has an asset equal to the amount of the loan and a liability equal to the deposit. Put a bit differently, bank A writes a cheque or draft for the borrower, debiting the borrower’s loan account and crediting a payment liability account. Now, suppose this borrower deposits this cheque/draft at a different bank B; the balance sheet of B grows by the same amount, with a payment-due asset and a deposit liability. This is the complication referred to as a matrix (or matrices) at the beginning of this paragraph. The obvious complication is a duplication of the balance sheet across banks A and B, which stands in need of resolution. This duplication is categorized under the accounting principle of ‘float’. Float is the amount of time it takes for money to move from one account to another; the time period is significant because it is as if the funds are in two places at once.
The money is still in the cheque writer’s account, and the cheque recipient may have deposited the funds at their bank as well. Resolution is reached when bank B clears the cheque/draft and receives a reserve balance credit in exchange, at which point bank A sheds both reserve balances and its payment liability. What has happened is that the systemic balance sheet has grown by the amount of the original loan and deposit, even though these are domiciled in two different banks A and B. In other words, B’s balance sheet shows increased deposits and reserves, while A’s balance sheet is unchanged in size: the loan issued is offset by the decline in reserves. It needs to be noted that a reserve requirement is created here in addition to a capital requirement, the former with the creation of a deposit and the latter with the creation of a loan, implying that loans create capital requirements, whereas deposits create reserve requirements. *Pari passu*, bank A will seek to borrow new funding from the money markets and bank B could lend funds into these markets. This is a natural reaction to the fluctuating reserve distribution created at banks A and B, and this normalization of reserve fluctuations is a basic function of commercial bank reserve management. Though this is a stylized case involving just two banks, present-day banking involves a meshwork of different banks and their counterparties in such transactions, which is the complexity referred to earlier.
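The two-bank sequence above can be sketched as toy balance sheets. This is a minimal illustration: the account names, the starting reserve position, and the figures are invented for exposition, not actual bank ledger categories.

```python
# Toy double-entry sketch of the loan -> deposit -> cheque -> clearing cycle.
# Figures and account names are illustrative, not actual ledger categories.

def new_bank(reserves=0):
    return {"loans": 0, "reserves": reserves, "payment_due": 0,  # assets
            "deposits": 0, "payment_liability": 0}               # liabilities

A = new_bank(reserves=100)  # A starts with some equity-funded reserves
B = new_bank()

# 1. Bank A lends 100: a new loan asset and a new deposit liability.
A["loans"] += 100
A["deposits"] += 100

# 2. The borrower writes a cheque on A and deposits it at B.
#    During the float, the amount effectively sits on both balance sheets.
A["deposits"] -= 100
A["payment_liability"] += 100
B["payment_due"] += 100
B["deposits"] += 100

# 3. Clearing: B receives a reserve balance credit; A sheds reserves
#    and its payment liability (and may later borrow reserves back).
A["reserves"] -= 100
A["payment_liability"] -= 100
B["payment_due"] -= 100
B["reserves"] += 100

# The systemic balance sheet has grown by the original loan and deposit,
# domiciled in two different banks.
system_deposits = A["deposits"] + B["deposits"]
print(system_deposits, A["loans"], B["reserves"])
```

After clearing, the float entries net to zero, while the loan (at A) and the deposit (at B) persist as the system-wide growth described in the text.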

Now, there is something called the Cash Reserve Ratio (CRR), whereby banks in India (and elsewhere as well) are required to hold a certain proportion of their deposits in the form of cash. Banks do not hold all of this as cash with themselves; they keep it with the Reserve Bank of India (RBI), partly as balances with the RBI and partly in currency chests maintained on the RBI’s behalf. For example, if a bank’s deposits increase by Rs. 100, and if the CRR is 4% (the present CRR stipulated by the RBI), then the bank will have to hold Rs. 4 with the RBI, and will be able to use only Rs. 96 for investment and lending, or credit, purposes. Therefore, the higher the CRR, the lower the amount banks can use for lending and investment; the CRR is a tool used by the RBI to control liquidity in the banking system. Now, if bank A lends out Rs. 100, it incurs a reserve requirement of Rs. 4; in other words, for every Rs. 100 loan, a simultaneous reserve requirement of Rs. 4 is created. But there is a further ingredient to this banking complexity in the form of tier-1 and tier-2 capital as laid down by the BASEL Accords, to which India is a signatory. Under the accords, a bank’s capital consists of tier-1 and tier-2 capital, where tier-1 is the bank’s core capital and tier-2 is supplementary, the sum of the two being the bank’s total capital. This is a crucial component and is considered highly significant by regulators (like the RBI, for instance), for the capital ratio is used to determine and rank a bank’s capital adequacy. Tier-1 capital consists of shareholders’ equity and retained earnings, and measures the extent to which the bank can absorb losses without ceasing business operations. BASEL-3 sets the minimum tier-1 capital ratio at 6%, calculated by dividing the bank’s tier-1 capital by its total risk-weighted assets.
Tier-2 capital includes revaluation reserves, hybrid capital instruments and subordinated term debt, general loan-loss reserves, and undisclosed reserves. Tier-2 capital is supplementary since it is less reliable than tier-1 capital. According to BASEL-3, the minimum total capital ratio is 8%, which leaves up to 2% to be met with tier-2 capital on top of the 6% tier-1 minimum. Going by these norms, a well-capitalized bank in India must have an 8% combined tier-1 and tier-2 capital ratio, meaning that for every Rs. 100 of loans, a simultaneous regulatory capital liability of Rs. 8 of tier-1/tier-2 capital is generated. Further, if a Rs. 100 loan has created a Rs. 100 deposit, it has created an asset of Rs. 100 for the bank and, at the same time, claims of Rs. 112 on the liability side: the sum of the deposit (Rs. 100), required reserves (Rs. 4) and required capital (Rs. 8). On the face of it, this looks like a losing deal for the bank. But there is more than meets the eye here.
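On the numbers used in this section (4% CRR, 8% combined capital), the per-loan requirements can be tallied in a few lines. This is a minimal sketch; `loan_impact` is an invented helper, not an RBI or Basel formula.

```python
# Sketch: regulatory impact of a Rs. 100 loan creating a Rs. 100 deposit,
# under a 4% CRR and an 8% combined tier-1/tier-2 capital requirement.

CRR = 0.04            # cash reserve ratio, held with the RBI
CAPITAL_RATIO = 0.08  # minimum combined tier-1 + tier-2 capital

def loan_impact(loan, crr=CRR, capital_ratio=CAPITAL_RATIO):
    deposit = loan                        # the loan credits a deposit
    reserve_req = crr * deposit           # created by the deposit
    capital_req = capital_ratio * loan    # created by the (risk) asset
    return {"asset": loan,
            "deposit": deposit,
            "reserve_requirement": reserve_req,
            "capital_requirement": capital_req,
            "liability_side": deposit + reserve_req + capital_req}

impact = loan_impact(100)
print(impact["liability_side"])  # deposit + reserves + capital: Rs. 112
```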

Assume bank A lends Mr. Amit Modi Rs. 100 by crediting Mr. Modi’s deposit account held at A with Rs. 100. Two new requirements are immediately created that need urgent addressing, viz. the reserve and the capital requirement. One way to raise the Rs. 8 of required capital is for bank A to sell shares, raise equity-like debt, or retain earnings. The other is to attach an origination fee of, say, 10% (an excessively high figure, kept at 10% for the sake of brevity). This 10% origination fee adds to retained earnings and helps satisfy capital requirements. What happens here might look unique, but it is key to any banking business of lending: bank A meets its capital requirement by discounting a deposit it created from its own loan, thereby reducing its liability without actually reducing its asset. To put it differently, bank A extracts a 10% fee from the Rs. 100 it loans, depositing an actual sum of only Rs. 90. With this, A’s reserve requirement decreases to Rs. 3.6 (remember, 4% is the CRR). This in turn means that the loan of Rs. 100 made by A actually creates liabilities worth Rs. 101.6 (90 + 3.6 + 8), rather than Rs. 112. The RBI, which imposes the reserve requirement, will follow up new deposit creation with a systemic injection sufficient to accommodate the requirement of the bank B that has issued the deposit. This new requirement is what is termed the targeted asset for the bank. It will fund this asset in the normal course of its asset-liability management process, just as it would any other asset. At the margin, the bank actually has to compete for funding that will draw new reserve balances into its position with the RBI. This action is of course commingled with numerous other such transactions that occur in the normal course of reserve management. The sequence includes a time lag between the creation of the deposit and the activation of the corresponding reserve requirement against that deposit.
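The fee arithmetic can be laid out explicitly. This is a minimal sketch: the 10% fee and the netting of requirements follow the stylized example above, not any regulatory formula.

```python
# Sketch of the origination-fee variant: bank A lends Rs. 100 but credits
# only Rs. 90 of deposit, keeping a 10% fee as retained earnings.
# All figures are the stylized ones from the text, not regulatory numbers.

CRR, CAPITAL_RATIO, FEE = 0.04, 0.08, 0.10

loan = 100.0
fee = FEE * loan                      # 10.0, booked to retained earnings
deposit = loan - fee                  # 90.0 actually credited
reserve_req = CRR * deposit           # 3.6 instead of 4.0
capital_req = CAPITAL_RATIO * loan    # 8.0, partly met out of the fee

liability_side = deposit + reserve_req + capital_req
print(liability_side)                 # about 101.6, versus 112 without the fee
```

The fee shrinks both the deposit and the reserve requirement while supplying retained earnings against the capital requirement, which is the “discounting a deposit it created of its own loan” point in the text.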
A bank can in theory be temporarily at rest in terms of balance sheet growth, and still be experiencing continuous shifts in its mix of asset and liability types, including shifts of deposits. Part of this deposit shifting is inherent in a private sector banking system that fosters competition for deposit funding. The birth of a demand deposit is one thing; retaining it through competition is another. Moreover, the fork in the road taken in constructing a private sector banking system implies that the RBI is not a mere slush fund that provides unlimited funding to the banking system.

The originating accounting entries in the above case are simple: a loan asset and a deposit liability. But this is only the start of the story. Commercial bank ‘asset-liability management’ functions oversee the comprehensive flow of funds in and out of individual banks. They control exposure to the basic banking risks of liquidity and interest rate sensitivity. Somewhat separately, but still connected within an overarching risk management framework, banks manage credit risk by linking line lending functions directly to the process of internal risk assessment and capital allocation. Banks require capital, especially equity capital, to take risk, and to take credit risk in particular. Interest rate risk and interest margin management are critical aspects of bank asset-liability management. The asset-liability management function provides pricing guidance for deposit products and related funding costs for lending operations. This function helps coordinate the operations of the left- and right-hand sides of the balance sheet. For example, a central bank interest rate change becomes a cost-of-funds signal that transmits to commercial bank balance sheets as a marginal pricing influence. The asset-liability management function is the commercial bank coordination function for this transmission process, as the pricing signal ripples out to various balance sheet categories. Loan and deposit pricing is directly affected because the cost of funds that anchors all pricing in finance has changed. In other cases, a change in the term structure of market interest rates requires similar coordination of commercial bank pricing implications. And this reset in pricing has implications for commercial bank strategies and targets for the compositional mix of assets and liabilities. The life of deposits is more dynamic than their birth or death. Deposits move around the banking system as banks compete to retain or attract them. Deposits also change form.
Demand deposits can convert to term deposits, as banks seek a supply of longer-duration funding for asset-liability matching purposes. And they can convert to new debt or equity securities issued by a particular bank, as buyers of these instruments draw down their deposits to pay for them. All of these changes happen across different banks, which can lead to temporary imbalances in the nominal matching of assets and liabilities, which in turn requires active management of the reserve account level, with appropriate liquidity management responses through money market operations in the short term, or longer-term strategic adjustment in approaches to loan and deposit market share. The key idea here is that banks compete for deposits that currently exist in the system, including deposits that can be withdrawn on demand, or at maturity in the case of term deposits. And this competition extends more comprehensively to other liability forms such as debt, as well as to the asset side of the balance sheet through market share strategies for various lending categories. All of this balance sheet flux occurs across different banks, and requires that individual banks actively manage their balance sheets to ensure that assets are appropriately and efficiently funded with liabilities and equity. The ultimate purpose of reserve management is not reserve positioning *per se*. The end goal is that balance sheets are in balance. The reserve system records the effect of this balance sheet activity. And even if loan books remain temporarily unchanged, all manner of other banking system assets and liabilities may be in motion. This includes securities portfolios, deposits, debt liabilities, and the status of the common equity and retained earnings account. And of course, loan books don’t remain unchanged for very long, in which case the loan/deposit growth dynamic comes directly into play on a recurring basis.

Commercial banks’ ability to create money is constrained by capital. When a bank creates a new loan, with an associated new deposit, the bank’s balance sheet size increases, and the proportion of the balance sheet that is made up of equity (shareholders’ funds, as opposed to customer deposits, which are debt, not equity) decreases. If the bank lends so much that its equity slice approaches zero, as happened in some banks prior to the financial crisis, even a very small fall in asset prices is enough to render it insolvent. Regulatory capital requirements are intended to ensure that banks never reach such a fragile position. In contrast, central banks’ ability to create money is constrained by the willingness of their government to back them, and the ability of that government to tax the population. In practice, most central bank money these days is asset-backed, since central banks create new money when they buy assets in open market operations or *Quantitative Easing*, and when they lend to banks. However, in theory a central bank could literally spirit money from thin air without asset purchases or lending to banks. This is Milton Friedman’s famous **helicopter drop**. The central bank would become technically insolvent as a result, but provided the government is able to tax the population, that wouldn’t matter. The ability of the government to tax the population depends on the credibility of the government and the productive capacity of the economy. Hyperinflation can occur when the supply side of the economy collapses, rendering the population unable and/or unwilling to pay taxes. It can also occur when people distrust a government and its central bank so much that they refuse to use the currency that the central bank creates. Distrust can come about because people think the government is corrupt and/or irresponsible, or because they think that the government is going to fall and the money it creates will become worthless. But nowhere in the genesis of hyperinflation does central bank insolvency feature.

# Lévy Process as Combination of a Brownian Motion with Drift and Infinite Sum of Independent Compound Poisson Processes: Introduction to Martingales. Part 4.

Every piecewise constant Lévy process X_{t}^{0} can be represented in the form

X_{t}^{0} = ∑_{s∈[0,t]} ∆X_{s} = ∫_{[0,t]×R^{d}} xJ_{X} (ds × dx)

for some Poisson random measure J_{X} with intensity measure of the form ν(dx)dt, where ν is a finite measure, defined by

ν(A) = E[#{t ∈ [0,1] : ∆X_{t}^{0} ≠ 0, ∆X_{t}^{0} ∈ A}], A ∈ B(R^{d}) —– (1)

Given a Brownian motion with drift γt + W_{t}, independent from X^{0}, the sum X_{t} = X_{t}^{0} + γt + W_{t} defines another Lévy process, which can be decomposed as:

X_{t} = γt + W_{t} + ∑_{s∈[0,t]} ∆X_{s} = γt + W_{t} + ∫_{[0,t]×R^{d}} xJ_{X} (ds × dx) —– (2)

where J_{X} is a Poisson random measure on [0,∞[×R^{d} with intensity ν(dx)dt.

Can every Lévy process be represented in this form? Given a Lévy process X_{t}, we can still define its Lévy measure ν as above. ν(A) is still finite for any compact set A such that 0 ∉ A: if this were not true, the process would have an infinite number of jumps of finite size on [0, T], which contradicts the cadlag property. So ν defines a Radon measure on R^{d} \ {0}. But ν is not necessarily a finite measure: the above restriction still allows it to blow up at zero and X may have an infinite number of small jumps on [0, T]. In this case the sum of the jumps becomes an infinite series and its convergence imposes some conditions on the measure ν, under which we obtain a decomposition of X.

Let (X_{t})_{t≥0} be a Lévy process on R^{d} and ν its Lévy measure.

ν is a Radon measure on R^{d} \ {0} and verifies:

∫_{|x|≤1} |x|^{2} v(dx) < ∞

The jump measure of X, denoted by J_{X}, is a Poisson random measure on [0,∞[×R^{d} with intensity measure ν(dx)dt.

∃ a vector γ and a d-dimensional Brownian motion (B_{t})_{t≥0} with covariance matrix A such that

X_{t} = γt + B_{t} + X_{t}^{l} + lim_{ε↓0} X’^{ε}_{t} —– (3)

where

X_{t}^{l} = ∫_{|x|≥1,s∈[0,t]} xJ_{X} (ds × dx)

X’^{ε}_{t} = ∫_{ε≤|x|<1,s∈[0,t]} x{J_{X} (ds × dx) – ν(dx)ds}

≡ ∫_{ε≤|x|<1,s∈[0,t]} xJ’_{X} (ds × dx)

The terms in (3) are independent and the convergence in the last term is almost sure and uniform in t on [0,T].

The Lévy-Itô decomposition entails that for every Lévy process ∃ a vector γ, a positive definite matrix A and a positive measure ν that uniquely determine its distribution. The triplet (A,ν,γ) is called the characteristic triplet or Lévy triplet of the process X_{t}. γt + B_{t} is a continuous Gaussian Lévy process, and every Gaussian Lévy process is continuous, can be written in this form, and can be described by two parameters: the drift γ and the covariance matrix of the Brownian motion, denoted by A. The other two terms are discontinuous processes incorporating the jumps of X_{t} and are described by the Lévy measure ν. The condition ∫_{|y|≥1} ν(dy) < ∞ means that X has a finite number of jumps with absolute value larger than 1. So the sum

X_{t}^{l} = ∑^{|∆Xs|≥1}_{0≤s≤t} ∆X_{s}

contains almost surely a finite number of terms and X_{t}^{l} is a compound Poisson process. There is nothing special about the threshold ∆X = 1: for any ε > 0, the sum of jumps with amplitude between ε and 1:

X^{ε}_{t} = ∑^{1>|∆Xs|≥ε}_{0≤s≤t} ∆X_{s} = ∫_{ε≤|x|<1,s∈[0,t]} xJ_{X}(ds × dx) —– (4)

is again a well-defined compound Poisson process. However, contrary to the compound Poisson case, ν can have a singularity at zero: there can be infinitely many small jumps and their sum does not necessarily converge. This prevents us from making ε go to 0 directly in (4). In order to obtain convergence we have to center the remainder term, i.e., replace the jump integral by its compensated version,

X’^{ε}_{t} = ∫_{ε≤|x|<1,s∈[0,t]} xJ’_{X} (ds × dx) —– (5)

which is a martingale. While X^{ε} can be interpreted as an infinite superposition of independent Poisson processes, X’^{ε}_{t} should be seen as an infinite superposition of independent compensated, i.e., centered, Poisson processes to which a central-limit-type argument can be applied to show convergence. An important implication of the Lévy-Itô decomposition is that every Lévy process is a combination of a Brownian motion with drift and a possibly infinite sum of independent compound Poisson processes. This also means that every Lévy process can be approximated with arbitrary precision by a jump-diffusion process, that is, by the sum of a Brownian motion with drift and a compound Poisson process.
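The decomposition suggests a direct simulation recipe: approximate a Lévy process by a jump-diffusion, i.e. Brownian motion with drift plus a compound Poisson part. The sketch below assumes Gaussian jump sizes; the function name and all parameter values are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def jump_diffusion_path(T=1.0, n=1000, gamma=0.1, sigma=0.2,
                        lam=5.0, jump_mu=0.0, jump_sigma=0.3):
    """Approximate Lévy path: gamma*t + sigma*W_t + compound Poisson jumps."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    # Brownian motion with drift, built from i.i.d. Gaussian increments
    dW = rng.normal(0.0, np.sqrt(dt), n)
    X = gamma * t + np.concatenate(([0.0], np.cumsum(sigma * dW)))
    # Compound Poisson part: lam jumps per unit time, Gaussian jump sizes
    n_jumps = rng.poisson(lam * T)
    jump_times = rng.uniform(0.0, T, n_jumps)
    jump_sizes = rng.normal(jump_mu, jump_sigma, n_jumps)
    for s, y in zip(jump_times, jump_sizes):
        X[t >= s] += y   # each jump shifts the path from its jump time onward
    return t, X

t, X = jump_diffusion_path()
print(t[-1], X[-1])
```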

# Convergence in Probability Implying Stochastic Continuity. Part 3.

A compound Poisson process with intensity λ > 0 and jump size distribution f is a stochastic process X_{t} defined as

X_{t} = ∑_{i=1}^{Nt}Y_{i}

where the jump sizes Y_{i} are independent and identically distributed with distribution f and (N_{t}) is a Poisson process with intensity λ, independent from (Y_{i})_{i≥1}.

The following properties of a compound Poisson process are now deduced

- The sample paths of X are cadlag piecewise constant functions.
- The jump times (T_{i})_{i≥1} have the same law as the jump times of the Poisson process N_{t}: they can be expressed as partial sums of independent exponential random variables with parameter λ.
- The jump sizes (Y_{i})_{i≥1} are independent and identically distributed with law f.

The Poisson process itself can be seen as a compound Poisson process on R such that Y_{i} ≡ 1. This explains the origin of the term “compound Poisson” in the definition.

Let R(n), n ≥ 0 be a random walk with step size distribution f: R(n) = ∑_{i=0}^{n} Y_{i}. The compound Poisson process X_{t} can be obtained by changing the time of R with an independent Poisson process N_{t}: X_{t} = R(N_{t}). X_{t} thus describes the position of a random walk after a random number of time steps, given by N_{t}. Compound Poisson processes are *Lévy processes*, and they are the only Lévy processes with piecewise constant sample paths:

(X_{t})_{t≥0} is a compound Poisson process if and only if it is a Lévy process and its sample paths are piecewise constant functions.
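The time-change construction X_{t} = R(N_{t}) can be simulated directly. In this minimal sketch, the Exp(1) jump-size law f and the Monte Carlo check of E[X_t] = λtE[Y] (Wald's identity) are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def compound_poisson(t, lam, jump_sampler):
    """X_t = R(N_t): a random walk R evaluated at an independent Poisson time N_t."""
    n_t = rng.poisson(lam * t)      # N_t ~ Poisson(lam * t)
    jumps = jump_sampler(n_t)       # Y_1, ..., Y_{N_t} i.i.d. with law f
    return jumps.sum()

lam, t = 3.0, 2.0
samples = np.array([compound_poisson(t, lam,
                                     lambda n: rng.exponential(1.0, n))
                    for _ in range(20000)])
# Wald's identity: E[X_t] = lam * t * E[Y] = 3 * 2 * 1 = 6 for Exp(1) jumps
print(samples.mean())
```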

Let (X_{t})_{t≥0} be a Lévy process with piecewise constant paths. We can construct, path by path, a process (N_{t}, t ≥ 0) which counts the jumps of X:

N_{t} = #{0 < s ≤ t: X_{s−} ≠ X_{s}} —– (1)

Since the trajectories of X are piecewise constant, X has a finite number of jumps in any finite interval which entails that N_{t} is finite ∀ finite t. Hence, it is a counting process. Let h < t. Then

N_{t} − N_{h} = #{h < s ≤ t: X_{s−} ≠ X_{s}} = #{h < s ≤ t: X_{s−} − X_{h} ≠ X_{s} − X_{h}}

Hence, N_{t} − N_{h} depends only on (X_{s} − X_{h}), h ≤ s ≤ t. Therefore, from the independence and stationarity of increments of (X_{t}) it follows that (N_{t}) also has independent and stationary increments. Using the process N, we can compute the jump sizes of X: Y_{n} = X_{Sn} − X_{Sn−}, where S_{n} = inf{t: N_{t} ≥ n}. Let us first show that, conditionally on the trajectory of N, the increments of X are independent. Let t > s and consider the following four sets:

A_{1} ∈ σ(X_{s})

A_{2} ∈ σ(X_{t} − X_{s})

B_{1} ∈ σ(N_{r}, r ≤ s)

B_{2} ∈ σ(N_{r} − N_{s}, r > s)

such that P(B_{1}) > 0 and P(B_{2}) > 0. The independence of increments of X implies that processes (X_{r} − X_{s}, r > s) and (X_{r}, r ≤ s) are independent. Hence,

P[A_{1} ∩ B_{1} ∩ A_{2} ∩ B_{2}] = P[A_{1} ∩ B_{1}]P[A_{2} ∩ B_{2}]

Moreover,

– A_{1} and B_{1} are independent from B_{2}.

– A_{2} and B_{2} are independent from B_{1}.

– B_{1} and B_{2} are independent from each other.

Therefore, the conditional probability of interest can be expressed as:

P[A_{1} ∩ A_{2} | B_{1} ∩ B_{2}] = (P[A_{1} ∩ B_{1}]P[A_{2} ∩ B_{2}])/P[B_{1}]P[B_{2}]

= (P[A_{1} ∩ B_{1} ∩ B_{2}]P[A_{2} ∩ B_{1} ∩ B_{2}])/P[B_{1}]^{2}P[B_{2}]^{2} = P[A_{1} | B_{1} ∩ B_{2}]P[A_{2} | B_{1} ∩ B_{2}]

This proves that X_{t} − X_{s} and X_{s} are independent conditionally on the trajectory of N. In particular, choosing B_{1} = {N_{s} = 1} and B_{2} = {N_{t} − N_{s} = 1}, we obtain that Y_{1} and Y_{2} are independent. Since we could have taken any number of increments of X and not just two of them, this proves that (Y_{i})_{i≥1} are independent. The jump sizes have the same law, in that the two-dimensional process (X_{t}, N_{t}) has stationary increments. Therefore, for every n ≥ 0 and for every s > h > 0,

E[ƒ(X_{h}) | N_{h} = 1, N_{s} – N_{h} = n] = E[ƒ(X_{s+h} – X_{s}) | N_{s+h} – N_{s} = 1, N_{s} – N_{h} = n],

where ƒ is any *bounded Borel function*. This entails that for every n ≥ 0, Y_{1} and Y_{n+2} have the same law.

Let (X_{t})_{t≥0} be a compound Poisson process.

Independence of increments. Let 0 < r < s and let ƒ and g be bounded Borel functions on R^{d}. To ease the notation, we prove only that X_{r} is independent from X_{s} − X_{r}, but the same reasoning applies to any finite number of increments. We must show that

E[ƒ(X_{r})g(X_{s} − X_{r})] = E[ƒ(X_{r})]E[g(X_{s} − X_{r})]

From the representation X_{r} = ∑_{i=1}^{Nr} Y_{i} and X_{s} − X_{r} = ∑_{i=Nr+1}^{Ns} Y_{i} the following observations are made:

– Conditionally on the trajectory of N_{t} for t ∈ [0, s], X_{r} and X_{s} − X_{r} are independent because the first expression only depends on Y_{i} for i ≤ N_{r} and the second expression only depends on Y_{i} for i > N_{r}.

– The expectation E[ƒ(X_{r}) | N_{t}, t ≤ s] depends only on N_{r} and the expectation E[g(X_{s} − X_{r}) | N_{t}, t ≤ s] depends only on N_{s} − N_{r}.

On using the independence of increments of the Poisson process, we can write:

E[ƒ(X_{r})g(X_{s} – X_{r})] = E[E[ƒ(X_{r})g(X_{s} – X_{r}) | N_{t}, t ≤ s]]

= E[E[ƒ(X_{r}) | N_{t}, t ≤ s] E[g(X_{s} – X_{r}) | N_{t}, t ≤ s]]

= E[E[ƒ(X_{r}) | N_{t}, t ≤ s]] E[E[g(X_{s} – X_{r}) | N_{t}, t ≤ s]]

= E[ƒ(X_{r})] E[g(X_{s} – X_{r})]

Stationarity of increments. Let 0 < r < s and let ƒ be a bounded Borel function.

E[ƒ(X_{s} – X_{r})] = E[E[ƒ(∑_{i=Nr+1}^{Ns} Y_{i}) | N_{t}, t ≤ s]]

= E[E[ƒ(∑_{i=1}^{Ns-Nr} Y_{i}) | N_{t}, t ≤ s]] = E[E[ƒ(∑_{i=1}^{Ns-r} Y_{i}) | N_{t}, t ≤ s]] = E[ƒ(X_{s-r})]

Stochastic continuity. X_{t} only jumps if N_{t} does.

P(lim_{s→t, s<t} N_{s} = N_{t}) = 1

Hence, for every t > 0,

P(lim_{s→t, s<t} X_{s} = X_{t}) = 1

Since almost sure convergence entails convergence in probability, this implies stochastic continuity. Also, since any cadlag function may be approximated by a piecewise constant function, one may expect that general Lévy processes can be well approximated by compound Poisson ones and that by studying compound Poisson processes one can gain an insight into the properties of Lévy processes.
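The stochastic-continuity argument can be checked numerically: the probability of seeing a jump in a shrinking window vanishes. This is a minimal sketch; the intensity λ and the window sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# For a compound Poisson process, |X_{t+h} - X_t| >= eps requires at least
# one jump in (t, t+h], so P(|X_{t+h} - X_t| >= eps) <= 1 - exp(-lam*h),
# which vanishes as h -> 0: stochastic continuity, despite the jumps.
lam = 4.0
probs = []
for h in [0.1, 0.01, 0.001]:
    n_jumps = rng.poisson(lam * h, 100_000)   # jumps in a window of length h
    p_jump = (n_jumps > 0).mean()             # empirical P(at least one jump)
    probs.append(p_jump)
    print(h, p_jump, 1 - np.exp(-lam * h))    # empirical vs exact probability
```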

# Stochasticities. Lévy processes. Part 2.

Define the characteristic function of X_{t}:

Φ_{t}(z) ≡ Φ_{Xt}(z) ≡ E[e^{iz.Xt}], z ∈ R^{d}

For t > s, by writing X_{t+s} = X_{s} + (X_{t+s} − X_{s}) and using the fact that X_{t+s} − X_{s} is independent of X_{s}, we obtain that t ↦ Φ_{t}(z) is a multiplicative function.

Φ_{t+s}(z) = Φ_{Xt+s}(z) = Φ_{Xs}(z) Φ_{Xt+s−Xs}(z) = Φ_{s}(z) Φ_{t}(z)

The stochastic continuity of t ↦ X_{t} implies in particular that X_{t} → X_{s} in distribution when s → t. Therefore, Φ_{Xs}(z) → Φ_{Xt}(z) when s → t so t ↦ Φ_{t}(z) is a continuous function of t. Together with the multiplicative property Φ_{s+t}(z) = Φ_{s}(z).Φ_{t}(z), this implies that t ↦ Φ_{t}(z) is an exponential function.

Let (X_{t})_{t≥0} be a *Lévy process* on R^{d}. ∃ a continuous function ψ : R^{d} ↦ R called the characteristic exponent of X, such that:

E[e^{iz.Xt}] = e^{tψ(z)}, z ∈ R^{d}

ψ is the cumulant generating function of X_{1}: ψ = Ψ_{X1}, and the cumulant generating function of X_{t} varies linearly in t: Ψ_{Xt} = tΨ_{X1} = tψ. The law of X_{t} is therefore determined by the knowledge of the law of X_{1}: the only degree of freedom we have in specifying a Lévy process is to specify the distribution of X_{t} for a single time (say, t = 1).
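The exponential form E[e^{iz.Xt}] = e^{tψ(z)} can be verified by simulation for a compound Poisson process, whose characteristic exponent is ψ(z) = λ(E[e^{izY}] − 1). The sketch below assumes standard Gaussian jump sizes; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Check E[exp(i z X_t)] = exp(t * psi(z)) for a compound Poisson process
# with intensity lam and standard Gaussian jump sizes, where
# psi(z) = lam * (E[exp(i z Y)] - 1) = lam * (exp(-z**2 / 2) - 1).
lam, t, z = 2.0, 1.5, 0.7

n_t = rng.poisson(lam * t, 200_000)        # N_t for each simulated path
# Given N_t = n, X_t is a sum of n i.i.d. N(0,1) jumps, i.e. N(0, n).
x_t = np.sqrt(n_t) * rng.standard_normal(n_t.shape)
empirical = np.exp(1j * z * x_t).mean()
theoretical = np.exp(t * lam * (np.exp(-z**2 / 2) - 1))
print(abs(empirical - theoretical))        # small sampling error only
```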

*This lecture covers stochastic processes, including continuous-time stochastic processes and standard Brownian motion by Choongbum Lee*

# Cadlag Stochasticities: Lévy Processes. Part 1.

*A compound Poisson process with a Gaussian distribution of jump sizes, and a jump diffusion of a Lévy process with Gaussian component and finite jump intensity.*

A cadlag stochastic process (X_{t})_{t≥0} on (Ω,F,P) with values in R^{d} such that X_{0} = 0 is called a Lévy process if it possesses the following properties:

1. Independent increments: for every increasing sequence of times t_{0} . . . t_{n}, the random variables X_{t0}, X_{t1} − X_{t0} , . . . , X_{tn} − X_{tn−1} are independent.

2. Stationary increments: the law of X_{t+h} − X_{t} does not depend on t.

3. Stochastic continuity: ∀ε > 0, lim_{h→0} P(|X_{t+h} − X_{t}| ≥ ε) = 0.

A sample function x on a well-ordered set T is cadlag if it is continuous from the right and limited from the left at every point. That is, for every t_{0} ∈ T, t ↓ t_{0} implies x(t) → x(t_{0}), and for t ↑ t_{0}, lim_{t↑t0} x(t) exists, but need not be x(t_{0}). A stochastic process X is cadlag if almost all its sample paths are cadlag.

The third condition does not imply in any way that the sample paths are continuous, and is verified by the Poisson process. It serves to exclude processes with jumps at fixed (nonrandom) times, which can be regarded as “calendar effects” and means that for given time t, the probability of seeing a jump at t is zero: discontinuities occur at random times.

If we sample a Lévy process at regular time intervals 0, ∆, 2∆, . . ., we obtain a random walk: defining S_{n}(∆) ≡ X_{n∆}, we can write S_{n}(∆) = ∑_{k=0}^{n−1} Y_{k} where Y_{k} = X_{(k+1)∆} − X_{k∆} are independent and identically distributed random variables whose distribution is the same as the distribution of X_{∆}. Since this can be done for any sampling interval ∆, we see that by specifying a Lévy process one specifies a whole family of random walks S_{n}(∆).

Choosing n∆ = t, we see that for any t > 0 and any n ≥ 1, X_{t} = S_{n}(∆) can be represented as a sum of n independent and identically distributed random variables whose distribution is that of X_{t/n}: X_{t} can be “divided” into n independent and identically distributed parts. A distribution having this property is said to be infinitely divisible.

A probability distribution F on R^{d} is said to be infinitely divisible if for any integer n ≥ 2, ∃ n independent and identically distributed random variables Y_{1}, …Y_{n} such that Y_{1} + … + Y_{n} has distribution F.

Since the distribution of independent and identically distributed sums is given by convolution of the distribution of the summands, denoting by μ the distribution of the Y_{k}’s, F = μ ∗ μ ∗ ··· ∗ μ is the n^{th} convolution power of μ. So an infinitely divisible distribution can also be defined as a distribution F whose n^{th} convolution root is still a probability distribution, for any n ≥ 2.

Thus, if X is a Lévy process, for any t > 0 the distribution of X_{t} is infinitely divisible. This puts a constraint on the possible choices of distributions for X_{t}: whereas the increments of a discrete-time random walk can have arbitrary distribution, the distribution of increments of a Lévy process has to be infinitely divisible.

The most common examples of infinitely divisible laws are the Gaussian distribution, the gamma distribution, α-stable distributions and the Poisson distribution: a random variable having any of these distributions can be decomposed into a sum of n independent and identically distributed parts having the same distribution but with modified parameters. Conversely, given an infinitely divisible distribution F, it is easy to see that for any n ≥ 1, by chopping it into n independent and identically distributed components, we can construct a random walk model on a time grid with step size 1/n such that the law of the position at t = 1 is given by F. In the limit, this procedure can be used to construct a continuous-time Lévy process (X_{t})_{t≥0} such that the law of X_{1} is given by F. Let (X_{t})_{t≥0} be a Lévy process. Then for every t, X_{t} has an infinitely divisible distribution. Conversely, if F is an infinitely divisible distribution then ∃ a Lévy process (X_{t}) such that the distribution of X_{1} is given by F.
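Infinite divisibility can be checked numerically for the Gaussian case: an N(μ, σ²) variable decomposes into n i.i.d. N(μ/n, σ²/n) parts. A minimal sketch, with arbitrary illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

# Infinite divisibility of the Gaussian law: X ~ N(mu, sigma**2) can be
# written as a sum of n i.i.d. N(mu/n, sigma**2/n) parts, for any n.
mu, sigma, n = 1.0, 2.0, 7
parts = rng.normal(mu / n, sigma / np.sqrt(n), (500_000, n))
x = parts.sum(axis=1)        # sum of the n i.i.d. parts
print(x.mean(), x.std())     # should match mu and sigma up to sampling error
```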

# Revisiting Financing Blue Economy

*Blue Economy* has suffered a definitional crisis ever since it started doing the rounds around the turn of the century. So much has it been plagued by this crisis that even a working definition is acceptable only contextually, and is liable to paradigmatic shifts both littorally and political-economically.

The United Nations defines Blue Economy as:

A range of economic sectors and related policies that together determine whether the use of oceanic resources is sustainable. The “Blue Economy” concept seeks to promote economic growth, social inclusion, and the preservation or improvement of livelihoods while at the same time ensuring environmental sustainability of the oceans and coastal areas.

This definition is subscribed to even by the World Bank, and has been commonly accepted as the standard since 2017. However, in 2014, the United Nations Conference on Trade and Development (UNCTAD) had defined the Blue Economy as

The improvement of human well-being and social equity, while significantly reducing environmental risks and ecological scarcities…the concept of an oceans economy also embodies economic and trade activities that integrate the conservation and sustainable use and management of biodiversity including marine ecosystems, and genetic resources.

Preceding this by three years, the Pacific Small Islands Developing States (Pacific SIDS) referred to Blue Economy as the

Sustainable management of ocean resources to support livelihoods, more equitable benefit-sharing, and ecosystem resilience in the face of climate change, destructive fishing practices, and pressures from sources external to the fisheries sector.

Notably, these definitions, spanning almost a decade, cohere around promoting economic growth, social inclusion, and the preservation or improvement of livelihoods while ensuring the environmental sustainability of oceanic and coastal areas. The shifts between 2011 and the 2017 standardization are differences of domain rather than substance: none of the successive definitions knocks out any of the diverse components; each rather adds on. Marine biotechnology and bioprospecting, seabed mining and extraction, aquaculture, and offshore renewable energy supplement the established traditional oceanic industries like fisheries, tourism, and maritime transportation in the giant financial and economic appropriation of resources the concept endorses and encompasses. But a term that threads through the above definitions is sustainability, which unfortunately happens to be another definitional dead-end. Still, mapping the contours of sustainability in a theoretical fashion would at least contextualize a working definition of Blue Economy, to which initiatives of financial investment, legal frameworks, ecological deflections, economic zones and trading lines, fisheries, biotechnology and bioprospecting could be approvingly applied. As a caveat, such applications would be far from exhaustive, but they at least cohere around underlying economic directions and open up a spectrum of critiques.

If one follows global multilateral institutions like the UN and the World Bank, prefixing “sustainable” to Blue Economy brings into perspective a coastal economy that balances itself against the long-term capacity of assets, goods, services and marine ecosystems, acting as a global driver of economic, social and environmental prosperity that accrues direct and indirect benefits to communities, both regionally and globally. Assuming this to be true, what guarantees that financial investments remain healthy, pose no risk to oceanic health, and do not roll such growth-led development back into peril? This question draws paramount importance, and is a hotbed for constructive critique of the whole venture: the question of finance, or the financial viability of the Blue Economy. The underlying principle of the Blue Economy is, seemingly, the financialization of natural resources, which is nothing short of replacing environmental regulations with market-driven regulations. This commodification of the ocean is then packaged and traded on the markets, often amounting to transferring the stewardship of the commons to financial interests. Marine ecology as a natural resource isn’t immune to commodification, and an array of financial agents are making it their indispensable destination, thrashing out new alliances converging around specific ideas about how maritime and coastal resources should be organized: to whose benefit, under which terms, and to what end? There has been a systemic increase in financial speculation on commodities, driven mainly by the deregulation of derivative markets, the growing involvement of investment banks, hedge funds and other institutional investors in commodity speculation, and the emergence of new instruments such as index funds and exchange-traded funds. Financial deregulation has transformed commodities into financial assets and has matured its penetration into commodity markets and their functioning.
This maturity can be gauged from the fact that speculative capital is structurally intertwined with productive capital, which in the case of the Blue Economy means, most generically, commodities and natural resources.

But despite these fissures, international organizations are relentlessly following up on attracting finance, in a manner that could at best be said to follow principles of transparency, accountability, compliance and the right to disclosure. The European Commission (EC) is partnering with the World Wildlife Fund (WWF) to bring together public and private financing institutions to develop a set of Principles of Sustainable Investment within a Blue Economy Development Framework. But the question remains: how stringently are these institutions tied to adhering to these Principles?

Investors and policymakers are increasingly turning to the ocean for new opportunities and resources. According to OECD projections, by 2030 the “blue economy” could outperform the growth of the global economy as a whole, both in terms of value added and employment. But to get there, a framework for ocean-related investment will be needed, supported by policy incentives along the most sustainable pathways. Now, this might sound a bit rhetorical, and thus calls for unraveling. The international community has time and again reaffirmed its strong commitment to conserve and sustainably use the ocean and its resources, for which formations like the G7 and G20 acknowledge that scaling up finance and ensuring the sustainability of such investments are fundamental to meeting their needs. Investment capital, both public and private, is therefore fundamental to unlocking the Blue Economy. Even as there is growing recognition that following a “business as usual” trajectory neglects impacts on marine ecosystems and entails risks, these global bodies hold that investment decisions incorporating sustainability elements ensure environmentally, economically and socially sustainable outcomes, securing the long-term health and integrity of the oceans and furthering the shared social, ecological and economic functions that depend on them. That financial institutions and markets can play this pivotal role only complicates the rhetoric further. Even if financial markets and institutions expressly intend to implement the Sustainable Development Goals (SDGs), in particular Goal 14, which deals with conservation and sustainable use of the oceans, such intentions have to be compliant with the IFC Performance Standards and the EIB Environmental and Social Principles and Standards.

So far, what is being seen is small-ticket-size deals, but there is a potential that this will shift on its axis. With mainstream banking getting engaged, capital flows will follow the projects, and thus the real challenge lies in building the pipeline. But here is a catch: there might be plentiful private capital seeking impact solutions, and financing needs from projects on the ground, but private capital seeks private returns, and the majority of ocean-related projects are not private but public goods. For public finance, there is an opportunity to allocate more proceeds to sustainable ocean initiatives through a bond route, such as sovereign and municipal bonds, in order to finance coastal resilience projects. But such a route could also encounter a dead-end, in that many of the countries ripe for coastal infrastructure are emerging economies and would thus incur a high cost of funding. A de-risking is possible if institutions like the World Bank or the Overseas Private Investment Corporation undertake credit enhancements, a high probability considering these institutions have been engineering the Blue Economy on a priority basis. Global banks are contenders for financing the Blue Economy because of their geographic scope, but they are also likely to be exposed to a new playing field. The largest economies by Exclusive Economic Zones, which are sea zones determined by the UN, don’t always stand out as the world’s largest economies, a fact that is liable to draw in domestic banks to collaborate, based on incentives offered to be part of the solution. A significant challenge for the private sector will be to find enough cash-flow-generating projects to bundle into a liquid, at-scale investment vehicle. One way of resolving this challenge is to create a specialized financial institution, like an Ocean Sustainability Bank, which could be modeled on the lines of the European Bank for Reconstruction and Development (EBRD).
The advantage envisaged in such a creation is arriving at scale rather quickly. One example is offering a larger, institutional-sized approach by treating a coastal area as a single investment zone, thus bringing in an integrated, infrastructure-based financing approach. With such an approach, insurance companies would be attracted to innovative financing for coastal resiliency, which is part and parcel of climate change concerns, food security, health, poverty reduction and livelihoods. Projects with high social impact but low or no Internal Rate of Return (IRR) may be provided funding in convergence with governmental schemes. IRR is a metric used in capital budgeting to estimate the profitability of potential investments: it is the discount rate that makes the net present value (NPV) of all cash flows from a particular project equal to zero, where NPV is the difference between the present value of cash inflows and the present value of cash outflows over a period of time. IRR is sometimes referred to as the “economic rate of return” or “discounted cash flow rate of return”; the “internal” refers to the omission of external factors, such as the cost of capital or inflation, from the calculation. The biggest concern, however, appears in the form of the immaturity of financial markets in emerging economies, which are purported to be the major beneficiaries of the Blue Economy.
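The IRR/NPV relationship described above can be made concrete with a short numerical sketch; the bisection solver and the example cash flows below are illustrative assumptions, not a reference implementation:

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] discounted at `rate` (t = 0, 1, 2, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Discount rate making NPV zero, found by bisection.

    Assumes NPV changes sign exactly once on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid    # root lies in [lo, mid]
        else:
            lo = mid    # root lies in [mid, hi]
    return (lo + hi) / 2

# A hypothetical coastal project: 1000 invested today, five annual inflows of 300.
flows = [-1000, 300, 300, 300, 300, 300]
rate = irr(flows)
print(round(rate, 4))   # ~0.1524: discounting at this rate makes the NPV zero
```

A project whose IRR falls below a funder’s cost of capital is exactly the “high social impact but low or no IRR” case the paragraph above assigns to convergence with governmental schemes.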

The question then is: how viable or sustainable are these financial interventions? Financialization produces effects which can create long-term trends (such as those on functional income distribution) but which can also change across different periods of economic growth, slowdown and recession. Interpreting the implications of financialization for sustainability therefore requires a methodologically diverse and empirical dual-track approach which combines different methods of investigation. Even times of prosperity, despite their fragile and vulnerable nature, can endure for several years before collapsing under high levels of indebtedness, which in turn amplify the real effects of a financial crisis and hinder economic growth. Things get more complicated when financialization interferes with the environment and natural resources, for then the losses are not merely financial. Financialization has played a significant role in the recent price shocks in food and energy markets, while the wave of speculative investment in natural resources has produced, and is likely to keep producing, perverse environmental and social impacts. Moreover, the so-called financialization of environmental conservation tends to enhance the financial value of environmental resources, but it is selective: not all stakeholders have the same opportunities, and not all uses and values of natural resources and services are accounted for.