Derivative Pricing Theory: Call, Put Options and the Black–Scholes Hedged Portfolio. Thought of the Day 152.0


Fischer Black and Myron Scholes revolutionized the pricing theory of options by showing how to hedge continuously the exposure on the short position of an option. Consider the writer of a call option on a risky asset. S/he is exposed to the risk of unlimited liability if the asset price rises above the strike price. To protect the writer's short position in the call option, s/he should consider purchasing a certain amount of the underlying asset so that the loss in the short position in the call option is offset by the long position in the asset. In this way, the writer is adopting a hedging procedure. A hedged position combines an option with its underlying asset so that a loss on one is compensated by a gain on the other. By adjusting the proportion of the underlying asset and the option continuously in a portfolio, Black and Scholes demonstrated that investors can create a riskless hedging portfolio in which the risk exposure associated with the stochastic asset price is eliminated. In an efficient market with no riskless arbitrage opportunity, a riskless portfolio must earn an expected rate of return equal to the riskless interest rate.

Black and Scholes made the following assumptions on the financial market.

  1. Trading takes place continuously in time.
  2. The riskless interest rate r is known and constant over time.
  3. The asset pays no dividend.
  4. There are no transaction costs in buying or selling the asset or the option, and no taxes.
  5. The assets are perfectly divisible.
  6. There are no penalties to short selling and the full use of proceeds is permitted.
  7. There are no riskless arbitrage opportunities.

The stochastic process of the asset price St is assumed to follow the geometric Brownian motion

dSt/St = μ dt + σ dZt —– (1)

where μ is the expected rate of return, σ is the volatility and Zt is the standard Brownian process. Both μ and σ are assumed to be constant. Consider a portfolio that involves short selling of one unit of a call option and long holding of Δt units of the underlying asset. The portfolio value Π (St, t) at time t is given by

Π = −c + Δt St —– (2)

where c = c(St, t) denotes the call price. Note that Δt changes with time t, reflecting the dynamic nature of hedging. Since c is a stochastic function of St, we apply the Ito lemma to compute its differential as follows:

dc = ∂c/∂t dt + ∂c/∂St dSt + (σ²/2) St² ∂²c/∂St² dt

such that

−dc + Δt dSt = (−∂c/∂t − (σ²/2) St² ∂²c/∂St²) dt + (Δt − ∂c/∂St) dSt

= [−∂c/∂t − (σ²/2) St² ∂²c/∂St² + (Δt − ∂c/∂St) μSt] dt + (Δt − ∂c/∂St) σSt dZt

The cumulative financial gain on the portfolio at time t is given by

G(Π(St, t)) = ∫0t −dc + ∫0t Δu dSu

= ∫0t [−∂c/∂u − (σ²/2) Su² ∂²c/∂Su² + (Δu − ∂c/∂Su) μSu] du + ∫0t (Δu − ∂c/∂Su) σSu dZu —– (3)

The stochastic component of the portfolio gain stems from the last term, ∫0t (Δu − ∂c/∂Su) σSu dZu. Suppose we adopt the dynamic hedging strategy by choosing Δu = ∂c/∂Su at all times u < t; then the financial gain becomes deterministic at all times. By virtue of no arbitrage, this gain should be the same as the gain from investing in the risk-free asset with a dynamic position whose value equals −c + Su ∂c/∂Su. The deterministic gain from this dynamic position in the riskless asset is given by

Mt = ∫0t r(−c + Su ∂c/∂Su) du —– (4)

By equating these two deterministic gains, G(Π (St, t)) and Mt, we have

−∂c/∂u − (σ²/2) Su² ∂²c/∂Su² = r(−c + Su ∂c/∂Su), 0 < u < t

which is satisfied for any asset price S if c(S, t) satisfies the equation

∂c/∂t + (σ²/2) S² ∂²c/∂S² + rS ∂c/∂S − rc = 0 —– (5)

This parabolic partial differential equation is called the Black–Scholes equation. Strangely, the parameter μ, which is the expected rate of return of the asset, does not appear in the equation.

To complete the formulation of the option pricing model, let’s prescribe the auxiliary condition. The terminal payoff at time T of the call with strike price X is translated into the following terminal condition:

c(S, T ) = max(S − X, 0) —– (6)

for the differential equation.

Since neither the equation nor the auxiliary condition contains μ, one concludes that the call price does not depend on the actual expected rate of return of the asset price. The option pricing model involves five parameters: S, T, X, r and σ. Except for the volatility σ, all of them are directly observable. The independence of the pricing model from μ is related to the concept of risk neutrality. In a risk-neutral world, investors do not demand extra returns above the riskless interest rate for bearing risks. This is in contrast to the usual risk-averse investors, who would demand extra returns above r for the risks borne in their investment portfolios. Apparently, the option is priced as if the rates of return on the underlying asset and on the option are both equal to the riskless interest rate. This risk-neutral valuation approach is viable if the risks from holding the underlying asset and the option are hedgeable.
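
Solving (5) subject to the terminal condition (6) yields the well-known closed-form call price in terms of the five parameters just listed. A minimal sketch in Python (the function name bs_call and the numerical values are illustrative, not taken from the text):

```python
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, X, T, r, sigma):
    """Black-Scholes price of a European call: the solution of PDE (5) with payoff (6).
    S: current asset price, X: strike, T: time to expiry, r: riskless rate, sigma: volatility.
    Note that the expected rate of return mu does not appear anywhere."""
    N = NormalDist().cdf
    d1 = (log(S / X) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - X * exp(-r * T) * N(d2)

# Illustrative values: S = 100, X = 100, T = 1 year, r = 5%, sigma = 20% -> about 10.45
print(bs_call(100.0, 100.0, 1.0, 0.05, 0.2))
```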

The governing equation for a put option can be derived similarly, and the same Black–Scholes equation is obtained. Let V(S, t) denote the price of a derivative security with dependence on S and t; it can be shown that V is governed by

∂V/∂t + (σ²/2) S² ∂²V/∂S² + rS ∂V/∂S − rV = 0 —– (7)

The price of a particular derivative security is obtained by solving the Black–Scholes equation subject to an appropriate set of auxiliary conditions that model the corresponding contractual specifications in the derivative security.
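
A sketch of what "solving the Black–Scholes equation subject to auxiliary conditions" can look like numerically: the explicit finite-difference scheme below marches equation (7) backwards in time from a terminal payoff. The grid sizes, boundary treatment and the call payoff are my own illustrative assumptions, not prescriptions from the text:

```python
import numpy as np

def bs_pde_explicit(payoff, S_max=300.0, T=1.0, r=0.05, sigma=0.2, M=300, N=20000):
    """Explicit finite-difference solution of the Black-Scholes PDE (7),
    stepped backwards from the terminal condition V(S, T) = payoff(S)."""
    dS, dt = S_max / M, T / N
    S = np.linspace(0.0, S_max, M + 1)
    V = payoff(S)                                    # terminal condition at t = T
    for _ in range(N):                               # march from T back towards 0
        dVdS = (V[2:] - V[:-2]) / (2 * dS)           # first derivative on interior nodes
        d2VdS2 = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS ** 2
        V_new = V.copy()
        V_new[1:-1] = V[1:-1] + dt * (0.5 * sigma ** 2 * S[1:-1] ** 2 * d2VdS2
                                      + r * S[1:-1] * dVdS - r * V[1:-1])
        V_new[0] = V[0] * (1 - r * dt)               # at S = 0 the PDE reduces to dV/dt = rV
        V_new[-1] = 2 * V_new[-2] - V_new[-3]        # assume a linear payoff for large S
        V = V_new
    return S, V

S, V = bs_pde_explicit(lambda s: np.maximum(s - 100.0, 0.0))   # call payoff, strike 100
print(np.interp(100.0, S, V))   # close to the closed-form value above (about 10.45)
```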

The original derivation of the governing partial differential equation by Black and Scholes focuses on the financial notion of riskless hedging but misses the precise analysis of the dynamic change in the value of the hedged portfolio. The inconsistencies in their derivation stem from the assumption of keeping the number of units of the underlying asset in the hedged portfolio to be instantaneously constant. They take the differential change of portfolio value Π to be

dΠ = −dc + Δt dSt,

which misses the effect arising from the differential change in Δt. The ability to construct a perfectly hedged portfolio relies on the assumption of continuous trading and a continuous asset price path. It has been commonly agreed that the assumed geometric Brownian motion of the asset price may not truly reflect the actual behavior of the asset price process. The asset price may exhibit jumps upon the arrival of sudden news in the financial market. The interest rate is widely recognized to fluctuate over time in an irregular manner rather than being constant. For an option on a risky asset, the interest rate appears only in the discount factor, so the assumption of a constant/deterministic interest rate is quite acceptable for a short-lived option. The Black–Scholes pricing approach assumes continuous hedging at all times. In the real world of trading with transaction costs, this would lead to infinite transaction costs in the hedging procedure.
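
A small simulation makes the last point concrete: rebalancing Δt = ∂c/∂St only at discrete dates under the geometric Brownian motion (1) leaves a residual hedging error that shrinks as rebalancing becomes more frequent, while the number of trades, and hence any transaction cost, keeps growing. This is a sketch under illustrative parameter assumptions, not a reproduction of Black and Scholes' own analysis:

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def d1(S, X, tau, r, sigma):
    return (log(S / X) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))

def bs_call(S, X, tau, r, sigma):
    return S * N(d1(S, X, tau, r, sigma)) - X * exp(-r * tau) * N(d1(S, X, tau, r, sigma) - sigma * sqrt(tau))

def hedge_once(n_steps, S0=100.0, X=100.0, T=1.0, r=0.05, mu=0.08, sigma=0.2, seed=0):
    """Write one call at t = 0 and delta-hedge at n_steps equally spaced dates under GBM (1)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S, delta = S0, N(d1(S0, X, T, r, sigma))
    cash = bs_call(S0, X, T, r, sigma) - delta * S            # premium received, shares bought
    for i in range(1, n_steps + 1):
        S *= exp((mu - 0.5 * sigma ** 2) * dt + sigma * sqrt(dt) * rng.standard_normal())
        cash *= exp(r * dt)                                   # cash account accrues interest
        tau = T - i * dt
        delta_new = N(d1(S, X, tau, r, sigma)) if tau > 0 else float(S > X)
        cash -= (delta_new - delta) * S                       # rebalance to the new Delta
        delta = delta_new
    return cash + delta * S - max(S - X, 0.0)                 # liquidate and pay the option off

for n in (12, 52, 250):
    errors = [hedge_once(n, seed=k) for k in range(200)]
    print(f"{n:4d} rebalancing dates: std of hedging error = {np.std(errors):.3f}")
```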


Defaultable Bonds. Thought of the Day 133.0


Defaultable bonds are bonds that have a positive probability of default. Most corporate bonds and some government bonds are defaultable. When a bond defaults, its coupon and principal payments are altered; most of the time only a portion of the principal, and sometimes also a portion of the coupon, is paid. A defaultable (T, x) – bond with maturity T > 0 and credit rating x ∈ I ⊆ [0, 1] is a financial contract which pays its holder 1 unit of currency at time T provided that the writer of the bond has not gone bankrupt by time T. The set I stands for all possible credit ratings. Bankruptcy is modeled with the use of a so-called loss process {L(t), t ≥ 0} which starts from zero, increases and takes values in the interval [0, 1]. The bond is worthless if the loss process exceeds its credit rating. Thus the payoff profile of the (T, x) – bond is of the form

1{LT ≤ x}

The price P(t, T, x) of the (T, x) – bond is a stochastic process defined by

P(t, T, x) = 1{Lt ≤ x} exp(−∫tT f(t, u, x) du), t ∈ [0, T] —– (1)

where f(·, ·, x) stands for an x-forward rate. The value x = 1 corresponds to the risk-free bond, and f(·, ·, 1) determines the short rate process via r(t) = f(t, t, 1), t ≥ 0.
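
A minimal numerical illustration of formula (1), assuming a flat, rating-dependent x-forward curve and a given loss level at time t; both the curve and the numbers are illustrative assumptions rather than part of the model above:

```python
import numpy as np

def tx_bond_price(t, T, x, L_t, forward_curve, n=200):
    """Price of a (T, x)-bond per equation (1): zero once the loss process exceeds x,
    otherwise the exponential of minus the integral of the x-forward rate from t to T."""
    if L_t > x:
        return 0.0
    du = (T - t) / n
    u = t + (np.arange(n) + 0.5) * du                 # midpoint rule for the integral
    return float(np.exp(-np.sum(forward_curve(u, x)) * du))

# Illustrative curve: lower ratings (smaller x) carry a higher forward rate.
flat_curve = lambda u, x: (0.03 + 0.05 * (1.0 - x)) * np.ones_like(u)

print(tx_bond_price(0.0, 5.0, 1.0, 0.00, flat_curve))   # risk-free bond, x = 1
print(tx_bond_price(0.0, 5.0, 0.3, 0.10, flat_curve))   # rating x = 0.3, 10% of pool lost
print(tx_bond_price(0.0, 5.0, 0.3, 0.40, flat_curve))   # loss exceeds the rating: price is 0
```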

The (T, x) – bond market is thus fully determined by the family of x-forward rates and the loss process L. This is an extension of the classical non-defaultable bond market which can be identified with the case when I is a singleton, that is, when I = {1}.

The model of (T, x) – bonds does not correspond to defaultable bonds which are directly traded on a real market. For instance, in this setting the bankruptcy of the (T, x2) – bond automatically implies the bankruptcy of the (T, x1) – bond if x1 < x2. In reality, a bond with a higher credit rating may, however, default earlier than one with a lower rating. The (T, x) – bonds are basic instruments related to the pool of defaultable assets called Collateralized Debt Obligations (CDOs), which are widely traded on the market. In the CDO market model, the loss process L(t) describes the part of the pool which has defaulted up to time t > 0, and F(LT), where F is some function, specifies the CDO payoff at time T > 0. In particular, (T, x) – bonds may be identified with digital-type CDO payoffs with the function F of the form

F(z) = Fx(z) := 1[0,x](z), x ∈ I, z ∈ [0,1]

Then the price of that payoff pt(Fx(LT)) at time t ≤ T equals P(t, T, x). Moreover, each regular CDO claim can be replicated, and thus also priced, with a portfolio consisting of a certain combination of (T, x) – bonds. Thus it follows that the model of (T, x) – bonds determines the structure of the CDO payoffs. The induced family of prices

P(t, T, x), T ≥ 0, x ∈ I

is called a CDO term structure. On real markets, the price of a claim which pays more is always higher. This implies

P(t, T, x1) = pt(Fx1(LT)) ≤ pt(Fx2(LT)) = P(t, T, x2), t ∈ [0, T], x1 < x2, x1, x2 ∈ I —– (2)

which means that the prices of (T, x) – bonds are increasing in x. Similarly, if the claim is paid earlier, then it has a higher value and hence

P(t, T1, x) = pt(Fx(LT1)) ≥ pt(Fx(LT2)) = P(t, T2, x), t ∈ [0, T1], T1 < T2, x ∈ I —– (3)

which means that the (T, x) – bond prices are decreasing in T. The CDO term structure is monotone if both (2) and (3) are satisfied. Surprisingly, monotonicity of the (T, x) – bond prices is not always preserved in mathematical models even if they satisfy severe no-arbitrage conditions.
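
Conditions (2) and (3) are straightforward to test on any model-generated price surface. A small sketch, with the array layout and the toy surface as illustrative assumptions:

```python
import numpy as np

def is_monotone_term_structure(P):
    """Check (2) and (3) for an array P[i, j] = P(t, T_i, x_j) at a fixed time t,
    with maturities T_i increasing in i and ratings x_j increasing in j."""
    increasing_in_x = bool(np.all(np.diff(P, axis=1) >= 0))   # condition (2)
    decreasing_in_T = bool(np.all(np.diff(P, axis=0) <= 0))   # condition (3)
    return increasing_in_x and decreasing_in_T

# Toy surface: discounting in T times a survival factor that increases with the rating x.
T = np.array([1.0, 2.0, 5.0])
x = np.array([0.3, 0.7, 1.0])
P = np.exp(-0.03 * T)[:, None] * (0.6 + 0.4 * x)[None, :]
print(is_monotone_term_structure(P))   # True for this toy surface
```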

Delta Hedging.


The principal investors in most convertible securities are hedge funds that engage in convertible arbitrage strategies. These investors typically purchase the convertible and simultaneously sell short a certain number of the issuer’s common shares that underlie the convertible. The number of shares they sell short as a percent of the shares underlying the convertible is approximately equal to the risk-neutral probability at that point in time (as determined by a convertible pricing model that uses binomial option pricing as its foundation) that the investor will eventually convert the security into common shares. This probability is then applied to the number of common shares the convertible security could convert into to determine the number of shares the hedge fund investor should sell short (the “hedge ratio”).

As an example, assume a company’s share price is $10 at the time of its convertible issuance. A hedge fund purchases a portion of the convertible, which gives the right to convert into 100 common shares of the issuer. If the hedge ratio is 65%, the hedge fund may sell short 65 shares of the issuer’s stock on the same date as the convertible purchase. During the life span of the convertible, the hedge fund investor may sell more shares short or buy shares, based on the changing hedge ratio. To illustrate, if one month after purchasing the convertible (and establishing a 65-share short position) the issuer’s share price decreases to $9, the hedge ratio may drop from 65 to 60%. To align the hedge ratio with the shares sold short as a percent of shares the investor has the right to convert the security into, the hedge fund investor will need to buy five shares in the open market from other shareholders and deliver those shares to the parties who had lent the shares originally. “Covering” five shares of their short position leaves the hedge fund with a new short position of 60 shares. If the issuer’s share price two months after issuance increases to $11, the hedge ratio may increase to 70%. In this case, the hedge fund investor may want to be short 70 shares. The investor achieves this position by borrowing 10 more shares and selling them short, which increases the short position from 60 to 70 shares. This process of buying low and selling high continues until the convertible either converts or matures.
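
The arithmetic of this example can be traced in a few lines; the helper below is illustrative, with the share counts and hedge ratios taken from the passage above:

```python
def rebalance(shares_convertible_into, current_short, new_hedge_ratio):
    """Trade needed to realign the short position with the hedge ratio.
    Positive = sell short additional shares, negative = buy shares to cover."""
    target_short = round(shares_convertible_into * new_hedge_ratio)
    return target_short - current_short, target_short

trade, short = rebalance(100, 65, 0.60)    # share price drops to $9, hedge ratio 65% -> 60%
print(trade, short)                        # -5, 60: buy five shares to cover
trade, short = rebalance(100, short, 0.70) # share price rises to $11, hedge ratio -> 70%
print(trade, short)                        # +10, 70: borrow and sell short ten more shares
```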

The end result is that the hedge fund investor is generating trading profits throughout the life of the convertible by buying stock to reduce the short position when the issuer’s share price drops, and borrowing and selling shares short when the issuer’s share price increases. This dynamic trading process is called “delta hedging,” which is a well-known and consistently practiced strategy by hedge funds. Since hedge funds typically purchase between 60% and 80% of most convertible securities in the public markets, a significant amount of trading in the issuer’s stock takes place throughout the life of a convertible security. The purpose of all this trading in the convertible issuer’s common stock is to hedge share price risk embedded in the convertible and create trading profits that offset the opportunity cost of purchasing a convertible that has a coupon that is substantially lower than a straight bond from the same issuer with the same maturity.

In order for hedge funds to invest in convertible securities, there needs to be a substantial amount of the issuer’s common shares available for hedge funds to borrow, and adequate liquidity in the issuer’s stock for hedge funds to buy and sell shares in relation to their delta hedging activity. If there are insufficient shares available to be borrowed or inadequate trading volume in the issuer’s stock, a prospective issuer is generally discouraged from issuing a convertible security in the public markets, or is required to issue a smaller convertible, because hedge funds may not be able to participate. Alternatively, an issuer could attempt to privately place a convertible with a single non-hedge fund investor. However, it may be impossible to find such an investor, and even if found, the required pricing for the convertible is likely to be disadvantageous for the issuer.

When a new convertible security is priced in the public capital markets, it is generally the case that the terms of the security imply a theoretical value of between 102% and 105% of face value, based on a convertible pricing model. The convertible is usually sold at a price of 100% to investors, and is therefore underpriced compared to its theoretical value. This practice provides an incentive for hedge funds to purchase the security, knowing that, by delta hedging their investment, they should be able to extract trading profits at least equal to the difference between the theoretical value and “par” (100%). For a public market convertible with atypical characteristics (e.g., an oversized issuance relative to market capitalization, an issuer with limited stock trading volume, or an issuer with limited stock borrow availability), hedge fund investors normally require an even higher theoretical value (relative to par) as an inducement to invest.

Convertible pricing models incorporate binomial trees to determine the theoretical value of convertible securities. These models consider the following factors that influence the theoretical value: current common stock price; anticipated volatility of the common stock return during the life of the convertible security; risk-free interest rate; the company’s stock borrow cost and common stock dividend yield; the company’s credit risk; maturity of the convertible security; and the convertible security’s coupon or dividend rate and payment frequency, conversion premium, and length of call protection.
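
Convertible models layer credit risk, call protection and the conversion right onto such a lattice. As a hedged sketch of just the lattice mechanics, here is a plain Cox-Ross-Rubinstein binomial tree for a European call; an actual convertible model would replace the terminal payoff and the backward-induction rule with the convertible's contractual terms:

```python
import numpy as np

def crr_european_call(S0, X, T, r, sigma, steps=500):
    """Cox-Ross-Rubinstein binomial tree for a European call, the lattice
    that convertible pricing models elaborate on."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    q = (np.exp(r * dt) - d) / (u - d)        # risk-neutral probability of an up move
    disc = np.exp(-r * dt)
    j = np.arange(steps + 1)
    values = np.maximum(S0 * u ** j * d ** (steps - j) - X, 0.0)   # terminal payoffs
    for _ in range(steps):                    # roll back through the tree
        values = disc * (q * values[1:] + (1 - q) * values[:-1])
    return float(values[0])

print(crr_european_call(100.0, 100.0, 1.0, 0.05, 0.2))   # converges to about 10.45
```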

Cryptocurrency and Efficient Market Hypothesis. Drunken Risibility.

According to the traditional definition, a currency has three main properties: (i) it serves as a medium of exchange, (ii) it is used as a unit of account and (iii) it allows one to store value. Throughout economic history, monies were related to political power. In the beginning, coins were minted in precious metals, so the value of a coin was intrinsically determined by the value of the metal itself. Later, money was printed as paper bank notes, but its value was linked to a quantity of gold guarded in the vault of a central bank. Nation states have used their political power to regulate the use of currencies and to impose one currency (usually the one issued by the same nation state) as legal tender for obligations within their territory. In the twentieth century a major change took place: the abandonment of the gold standard. The detachment of currencies (especially the US dollar) from the gold standard was a recognition that the value of a currency (especially in a world of fractional banking) is not related to its content or representation in gold, but to a broader concept: the confidence in the economy in which such currency is based. At this moment, the value of a currency reflects the best judgment about the monetary policy and the "health" of its economy.

In recent years a new type of currency, a synthetic one, emerged. We call this new type "synthetic" because it is not the decision of a nation state, nor does it represent any underlying asset or tangible wealth source. It appears as a new tradable asset resulting from a private agreement and facilitated by the anonymity of the internet. Among these synthetic currencies, Bitcoin (BTC) emerges as the most important one, with a market capitalization a few hundred million short of $80 billion.


Bitcoin Price Chart from Bitstamp

There are other cryptocurrencies based on blockchain technology, such as Litecoin (LTC), Ethereum (ETH) and Ripple (XRP). The website https://coinmarketcap.com/currencies/ counts up to 641 such currencies. However, as we can observe in the figure below, Bitcoin represents 89% of the capitalization of the market of all cryptocurrencies.


Cryptocurrencies. Share of market capitalization of each currency.

One open question today is whether Bitcoin is in fact, or may be considered, a currency. So far, Bitcoin does not fulfill the main properties of a standard currency. It is barely (though increasingly!) accepted as a medium of exchange (e.g. to buy some products online), it is not used as a unit of account (there are no financial statements valued in Bitcoins), and we can hardly believe that, given the great swings in price, anyone can consider Bitcoin a suitable option to store value. Given these characteristics, Bitcoin could fit as an ideal asset for speculative purposes. There is no underlying asset to relate its value to, and there is an open platform to operate round the clock.


Bitcoin returns, sampled every 5 hours.

Speculation has a long history and seems inherent to capitalism. One common feature of speculative assets throughout history has been the difficulty of valuation. Tulipmania, the South Sea bubble and many other episodes reflect, on one side, greedy human behavior and, on the other, the difficulty of setting an objective value for an asset. All these speculative episodes were reflected in super-exponential growth of the price time series.

Cryptocurrencies can be seen as the libertarian response to central bank failure to manage financial crises, such as the one that occurred in 2008. Cryptocurrencies can also bypass national restrictions on international transfers, probably at a cheaper cost. Bitcoin was created by a person or group of persons under the pseudonym Satoshi Nakamoto. The discussion of Bitcoin has several perspectives. The computer science perspective deals with the strengths and weaknesses of blockchain technology. In fact, according to R. Ali et al., the introduction of a "distributed ledger" is the key innovation. Traditional means of payment (e.g. a credit card) rely on a central clearing house that validates operations, acting as a "middleman" between buyer and seller. On the contrary, the payment validation system of Bitcoin is decentralized. There is a growing army of miners who put their computing power at the disposal of the network, validating transactions by gathering them into blocks, adding them to the ledger and forming a 'block chain'. This work is remunerated by giving the miners Bitcoins, which (so far) makes the validation costs cheaper than in a centralized system. Validation is performed by solving a kind of algorithmic puzzle. Over time the puzzle becomes harder, since the whole ledger must be validated, and consequently it takes more time to solve. Contrary to traditional currencies, the total number of Bitcoins to be issued is fixed in advance at 21 million. In fact, the issuance rate of Bitcoins is expected to diminish over time. According to Laursen and Kyed, validating the public ledger was initially rewarded with 50 Bitcoins, but the protocol foresaw halving this quantity every four years. At the current pace, the maximum number of Bitcoins will be reached in 2140. Taking into account its decentralized character, Bitcoin transactions seem secure. All transactions are recorded in several computer servers around the world. In order to commit fraud, a person would have to alter and validate (simultaneously) several ledgers, which is almost impossible. In addition, ledgers are public, with encrypted identities of the parties, making transactions "pseudonymous, not anonymous". The legal perspective on Bitcoin is fuzzy. Bitcoin is not issued, nor endorsed, by a nation state. It is not an illegal substance. As such, its transaction is not regulated.

In particular, the nonexistence of savings accounts in Bitcoin, and consequently the absence of a Bitcoin interest rate, precludes the idea of studying the price behavior in relation to cash flows generated by Bitcoins. As a consequence, the natural theoretical framework for the underlying dynamics of the price signal is the Efficient Market Hypothesis. The Efficient Market Hypothesis (EMH) is the cornerstone of financial economics. One of the seminal works on the stochastic dynamics of speculative prices is due to L. Bachelier, who in his doctoral thesis developed the first mathematical model of the behavior of stock prices. The systematic study of informational efficiency began in the 1960s, when financial economics was born as a new area within economics. The classical definition due to Eugene Fama (Foundations of Finance: Portfolio Decisions and Securities Prices, 1976) says that a market is informationally efficient if it "fully reflects all available information". Therefore, the key element in assessing efficiency is to determine the appropriate set of information that drives prices. Following Efficient Capital Markets, informational efficiency can be divided into three categories: (i) weak efficiency, if prices reflect the information contained in the past series of prices, (ii) semi-strong efficiency, if prices reflect all public information and (iii) strong efficiency, if prices reflect all public and private information. As a corollary of the EMH, one cannot accept the presence of long memory in financial time series, since its existence would allow a riskless profitable trading strategy. If markets are informationally efficient, arbitrage prevents the possibility of such strategies. If we consider the financial market as a dynamical structure, short term memory can exist (to some extent) without contradicting the EMH. In fact, the presence of some mispriced assets is the necessary stimulus for individuals to trade and reach an (almost) arbitrage free situation. However, the presence of long range memory is at odds with the EMH, because it would allow stable trading rules to beat the market.

The presence of long range dependence in financial time series has generated a vivid debate. Whereas the presence of short term memory can stimulate investors to exploit small extra returns, making them disappear, long range correlations pose a challenge to the established financial model. As recognized by Ciaian et al., the Bitcoin price is not driven by macro-financial indicators. Consequently a detailed analysis of the underlying dynamics (the Hurst exponent) becomes important for understanding its emerging behavior. There are several methods (both parametric and non-parametric) to calculate the Hurst exponent, which becomes a mandatory framework for tackling BTC trading.
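
One of the simplest non-parametric estimators is rescaled-range (R/S) analysis. The following compact sketch estimates the Hurst exponent from a return series; the window sizes and the synthetic input are illustrative assumptions:

```python
import numpy as np

def hurst_rs(returns, window_sizes=(8, 16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis.
    H near 0.5 suggests no memory; H > 0.5 suggests long-range persistence."""
    returns = np.asarray(returns, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(returns) - n + 1, n):
            w = returns[start:start + n]
            dev = np.cumsum(w - w.mean())          # cumulative deviation from the window mean
            r = dev.max() - dev.min()              # range of the cumulative deviation
            s = w.std(ddof=1)                      # standard deviation of the window
            if s > 0:
                rs.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(log_n, log_rs, 1)        # R/S grows roughly like n**H
    return float(slope)

rng = np.random.default_rng(42)
print(hurst_rs(rng.standard_normal(4096)))         # i.i.d. noise: near 0.5, with a small upward bias
```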

Arbitrage, or Tensors thereof…


What is an arbitrage? Basically it means "to get something from nothing": a free lunch, after all. A stricter definition states that an arbitrage is an operational opportunity to make a risk-free profit with a rate of return higher than the risk-free interest rate accrued on a deposit.

Arbitrage appears in the theory when we consider the curvature of the connection. The rate of excess return for an elementary arbitrage operation (the difference between the rate of return of the operation and the risk-free interest rate) is an element of the curvature tensor calculated from the connection. This can be understood by keeping in mind that the elements of a curvature tensor are related to the difference between two results of infinitesimal parallel transports performed in different orders. In financial terms it means that the curvature tensor elements measure a difference in gains accrued from two financial operations with the same initial and final points or, in other words, a gain from an arbitrage operation.

In a certain sense, the rate of excess return for an elementary arbitrage operation is an analogue of the electromagnetic field. In the absence of any uncertainty (or, in other words, in the absence of random walks of prices, exchange and interest rates) the only state realised is the state of zero arbitrage. However, if we place uncertainty in the game, prices and rates move, and some virtual arbitrage possibilities to get more rather than less appear. Therefore we can say that uncertainty plays the same role in the developing theory as quantization did for quantum gauge theory.

What, then, of the "matter" fields, which interact through the connection? The "matter" fields are money flow fields, which have to be gauged by the connection. Dilatations of money units (which do not change real wealth) play the role of gauge transformations; the effect of a dilatation is eliminated by a proper retuning of the connection (interest rates, exchange rates, prices and so on), exactly as the Fisher formula does for the real interest rate in the case of inflation. The invariance of real wealth under local dilatations of money units (security splits and the like) is the gauge symmetry of the theory.

A theory may contain several types of "matter" fields which may differ, for example, in the sign of the connection term, as for positive and negative charges in electrodynamics. In the financial setting this means different preferences of investors. An investor's strategy is not always optimal, partially because of incomplete information at hand and the choice procedure, and partially because of the investor's (or manager's) internal objectives. (Physics of Finance)


Momentum of Accelerated Capital. Note Quote.


Distinct types of high frequency trading firms include independent proprietary firms, which use private funds and specific strategies that remain secretive, and which may act as market makers generating automatic buy and sell orders continuously throughout the day. Broker-dealer proprietary desks are part of traditional broker-dealer firms but are not related to their client business, and are operated by the largest investment banks. Thirdly, hedge funds focus on complex statistical arbitrage, taking advantage of pricing inefficiencies between asset classes and securities.

Today strategies using algorithmic trading and High Frequency Trading play a central role on financial exchanges, alternative markets, and banks' internalized (over-the-counter) dealings:

High frequency traders typically act in a proprietary capacity, making use of a number of strategies and generating a very large number of trades every single day. They leverage technology and algorithms from end-to-end of the investment chain – from market data analysis and the operation of a specific trading strategy to the generation, routing, and execution of orders and trades. What differentiates HFT from algorithmic trading is the high frequency turnover of positions as well as its implicit reliance on ultra-low latency connection and speed of the system.

The use of algorithms in computerised exchange trading has experienced a long evolution with the increasing digitalisation of exchanges:

Over time, algorithms have continuously evolved: while initial first-generation algorithms – fairly simple in their goals and logic – were pure trade execution algos, second-generation algorithms – strategy implementation algos – have become much more sophisticated and are typically used to produce own trading signals which are then executed by trade execution algos. Third-generation algorithms include intelligent logic that learns from market activity and adjusts the trading strategy of the order based on what the algorithm perceives is happening in the market. HFT is not a strategy per se, but rather a technologically more advanced method of implementing particular trading strategies. The objective of HFT strategies is to seek to benefit from market liquidity imbalances or other short-term pricing inefficiencies.

While algorithms are employed by most traders in contemporary markets, the intense focus on speed and the momentary holding periods are the unique practices of the high frequency traders. The defence of high frequency trading is built around the principles that it increases liquidity, narrows spreads and improves market efficiency: the high number of trades made by HFT traders results in greater liquidity in the market; algorithmic trading has resulted in the prices of securities being updated more quickly, with more competitive bid-ask prices and narrowing spreads; and HFT enables prices to reflect information more quickly and accurately, ensuring accurate pricing at smaller time intervals. But there are critical differences between high frequency traders and traditional market makers:

  1. HFT do not have an affirmative market making obligation; that is, they are not obliged to provide liquidity by constantly displaying two-sided quotes, which may translate into a lack of liquidity during volatile conditions.
  2. HFT contribute little market depth due to the marginal size of their quotes, which may result in larger orders having to transact with many small orders, and this may impact on overall transaction costs.
  3. HFT quotes are barely accessible due to the extremely short duration for which the liquidity is available when orders are cancelled within milliseconds.

Besides the shallowness of the HFT contribution to liquidity, there are real fears of how HFT can compound and magnify risk through the rapidity of its actions:

There is evidence that high-frequency algorithmic trading also has some positive benefits for investors by narrowing spreads – the difference between the price at which a buyer is willing to purchase a financial instrument and the price at which a seller is willing to sell it – and by increasing liquidity at each decimal point. However, a major issue for regulators and policymakers is the extent to which high-frequency trading, unfiltered sponsored access, and co-location amplify risks, including systemic risk, by increasing the speed at which trading errors or fraudulent trades can occur.

Although there have always been occasional trading errors and episodic volatility spikes in markets, the speed, automation and interconnectedness of today's markets create a different scale of risk. These risks demand that exchanges and market participants employ effective quality management systems and sophisticated risk mitigation controls adapted to these new dynamics in order to protect against potential threats to market stability arising from technology malfunctions or episodic illiquidity. However, there are more deliberate aspects of HFT strategies which may present serious problems for market structure and functioning, and where conduct may be illegal. For example, order anticipation seeks to ascertain the existence of large buyers or sellers in the marketplace and then to trade ahead of those buyers and sellers in anticipation that their large orders will move market prices. A momentum strategy involves initiating a series of orders and trades in an attempt to ignite a rapid price move. HFT strategies can resemble traditional forms of market manipulation that violate the Exchange Act:

  1. Spoofing and layering occur when traders create a false appearance of market activity by entering multiple non-bona fide orders on one side of the market at increasing or decreasing prices in order to induce others to buy or sell the stock at a price altered by the bogus orders.
  2. Painting the tape involves placing successive small buy orders at increasing prices in order to stimulate increased demand.
  3. Quote stuffing and price fade are additional dubious HFT practices: quote stuffing floods the market with huge numbers of orders and cancellations in rapid succession, which may generate buying or selling interest or compromise the trading position of other market participants. Order or price fade involves the rapid cancellation of orders in response to other trades.

The World Federation of Exchanges insists: "Exchanges are committed to protecting market stability and promoting orderly markets, and understand that a robust and resilient risk control framework adapted to today's high speed markets is a cornerstone of enhancing investor confidence." However, this robust and resilient risk control framework seems lacking, including in the dark pools now established for trading that were initially proposed as safer than the open market.

Accelerated Capital as an Anathema to the Principles of Communicative Action. A Note Quote on the Reciprocity of Capital and Ethicality of Financial Economics


Markowitz portfolio theory explicitly observes that portfolio managers are not (expected) utility maximisers, as they diversify, and offers the hypothesis that a desire for reward is tempered by a fear of uncertainty. The model concludes that all investors should hold the same portfolio; their individual risk-reward objectives are satisfied by the weighting of this 'index portfolio' against riskless cash in the bank, a point on the Capital Market Line. The slope of the Capital Market Line is the market price of risk, which is an important parameter in arbitrage arguments.

Merton had initially attempted to provide an alternative to Markowitz based on utility maximisation employing stochastic calculus. He was only able to resolve the problem by employing the hedging arguments of Black and Scholes, and in doing so built a model that was based on the absence of arbitrage, free of turpe-lucrum. The prescriptive statement "it should not be possible to make sure profits" is explicit in the Efficient Markets Hypothesis and in employing an Arrow security in the context of the Law of One Price. Based on these observations, we conjecture that the whole paradigm of financial economics is built on the principle of balanced reciprocity. In order to explore this conjecture we shall examine the relationship between commerce and themes in Pragmatic philosophy. Specifically, we highlight Robert Brandom's (Making It Explicit: Reasoning, Representing, and Discursive Commitment) position that there is a pragmatist conception of norms: a notion of primitive correctnesses of performance, implicit in practice, that precede and are presupposed by their explicit formulation in rules and principles.

The 'primitive correctnesses' of commercial practices were recognised by Aristotle when he investigated the nature of Justice in the context of commerce, and then by Olivi when he looked favourably on merchants. It is exhibited in the doux-commerce thesis; compare Fourcade and Healey's contemporary description of the thesis, "Commerce teaches ethics mainly through its communicative dimension, that is, by promoting conversations among equals and exchange between strangers", with Putnam's description of Habermas' communicative action as "based on the norm of sincerity, the norm of truth-telling, and the norm of asserting only what is rationally warranted … [and] is contrasted with manipulation" (Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays).

There are practices (that should be) implicit in commerce that make it an exemplar of communicative action. A further expression of markets as centres of communication is manifested in the Asian description of a market, which brings to mind Donald Davidson's (Subjective, Intersubjective, Objective) argument that knowledge is not the product of bipartite conversations but of a tripartite relationship between two speakers and their shared environment. Replacing the negotiation between market agents with an algorithm that delivers a theoretical price replaces 'knowledge', generated through communication, with dogma. The problem with the performativity that Donald MacKenzie (An Engine, Not a Camera: How Financial Models Shape Markets) is concerned with is one of monism. In employing pricing algorithms, the markets cannot perform to something that comes close to 'true belief', which can only be identified through communication between sapient humans. This is an almost trivial observation to (successful) market participants, but difficult to appreciate by spectators who seek to attain 'objective' knowledge of markets from a distance. The relevance to financial crises is the position that 'true belief' is about establishing coherence through myriad triangulations centred on an asset, rather than relying on a theoretical model.

Shifting gears now: unless the martingale measure is a by-product of a hedging approach, the price given by such martingale measures is not related to the cost of a hedging strategy, and therefore the meaning of such 'prices' is not clear. If the hedging argument cannot be employed, as in the markets studied by Cont and Tankov (Financial Modelling with Jump Processes), there is no conceptual framework supporting the prices obtained from the Fundamental Theorem of Asset Pricing. This lack of meaning can be interpreted as a consequence of the strict fact/value dichotomy in contemporary mathematics that came with the eclipse of Poincaré's Intuitionism by Hilbert's Formalism and Bourbaki's Rationalism. The practical problem of supporting the social norms of market exchange has been replaced by a theoretical problem of developing formal models of markets. These models then legitimate the actions of agents in the market without having to make reference to explicitly normative values.

The Efficient Market Hypothesis is based on the axiom that the market price is determined by the balance between supply and demand, and so an increase in trading facilitates the convergence to equilibrium. If this axiom is replaced by the axiom of reciprocity, the justification for speculative activity in support of efficient markets disappears. In fact, the axiom of reciprocity would de-legitimise 'true' arbitrage opportunities as being unfair. This would not necessarily make the activities of actual market arbitrageurs illicit, since there are rarely strategies that are without the risk of a loss; however, it would place more emphasis on the risks of speculation and inhibit the hubris that has been associated with the prelude to the recent Crisis. These points raise the question of the legitimacy of speculation in the markets. In an attempt to understand this issue, Gabrielle and Reuven Brenner identify three types of market participant. 'Investors' are preoccupied with future scarcity and so defer income. Because uncertainty exposes the investor to the risk of loss, investors wish to minimise uncertainty at the cost of potential profits; this is the basis of classical investment theory. 'Gamblers' will bet on an outcome taking odds that have been agreed on by society, such as with a sporting bet or in a casino, and relate to de Moivre's and Montmort's 'taming of chance'. 'Speculators' bet on a mis-calculation of the odds quoted by society, and the reason why speculators are regarded as socially questionable is that they have opinions that are explicitly at odds with the consensus: they are practitioners who rebel against a theoretical 'Truth'. This is captured in Arjun Appadurai's argument that the leading agents in modern finance "believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control … [they] are not those who wish to 'tame chance' but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]".

In the context of Pragmatism, financial speculators embody pluralism, a concept essential to Pragmatic thinking and an antidote to the problem of radical uncertainty. Appadurai was motivated to study finance by Marcel Mauss' essay Le Don (The Gift), exploring the moral force behind reciprocity in primitive and archaic societies, and goes on to say that the contemporary financial speculator is "betting on the obligation of return", and that this is the fundamental axiom of contemporary finance. David Graeber (Debt: The First 5,000 Years) also recognises the fundamental position reciprocity has in finance, but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores uncertainty in his analysis, which ends with the conclusion that "we don't 'all' have to pay our debts". In advocating that reciprocity need not be honoured, Graeber is not just challenging contemporary capitalism but also the foundations of the civitas, based on equality and reciprocity. The origins of Graeber's argument lie in the first half of the nineteenth century. In 1836 John Stuart Mill defined political economy as being concerned with "[man] solely as a being who desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end".

In Principles of Political Economy with Some of Their Applications to Social Philosophy, Mill defended Thomas Malthus' An Essay on the Principle of Population, which focused on scarcity. Mill was writing at a time when Europe was struck by the Cholera pandemic of 1829–1851 and the famines of 1845–1851, and while Lord Tennyson was describing nature as "red in tooth and claw". At this time, society's fear of uncertainty seems to have been replaced by a fear of scarcity, and these standards of objectivity dominated economic thought through the twentieth century. Almost a hundred years after Mill, Lionel Robbins defined economics as "the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses". Dichotomies emerged in the aftermath of the Cartesian revolution, which aimed to remove doubt from philosophy. Theory and practice, subject and object, facts and values, means and ends are all separated. In this environment ex cathedra norms, in particular utility (profit) maximisation, encroach on commercial practice.

In order to set boundaries on commercial behaviour motivated by profit maximisation, particularly when market uncertainty returned after the Nixon shock of 1971, society imposes regulations on practice. As a consequence, two competing ethics, a functional Consequentialist ethic guiding market practices and a regulatory Deontological ethic attempting to stabilise the system, vie for supremacy. It is in this debilitating competition between two essentially theoretical ethical frameworks that we offer an explanation for the Financial Crisis of 2007-2009: profit maximisation, not speculation, is destabilising in the presence of radical uncertainty, and regulation cannot keep up with motivated profit maximisers who can justify their actions through abstract mathematical models that bear little resemblance to actual markets. An implication of reorienting financial economics to focus on markets as centres of 'communicative action' is that markets could become self-regulating, in the same way that the legal or medical spheres are self-regulated through professions. This is not a 'libertarian' argument based on freeing the Consequentialist ethic from a Deontological brake. Rather it argues that being a market participant entails accepting restricting norms, such as sincerity and truth telling, that support the creation of knowledge, of asset prices, within a broader objective of social cohesion. This immediately calls into question the legitimacy of algorithmic/high-frequency trading, which seems an anathema with regard to the principles of communicative action.

Fundamental Theorem of Asset Pricing: Tautological Meeting of Mathematical Martingale and Financial Arbitrage by the Measure of Probability.


The Fundamental Theorem of Asset Pricing (FTAP hereafter) has two broad tenets, viz.

1. A market admits no arbitrage, if and only if, the market has a martingale measure.

2. Every contingent claim can be hedged, if and only if, the martingale measure is unique.

The FTAP is a theorem of mathematics, and the use of the term ‘measure’ in its statement places the FTAP within the theory of probability formulated by Andrei Kolmogorov (Foundations of the Theory of Probability) in 1933. Kolmogorov’s work took place in a context captured by Bertrand Russell, who observed that

It is important to realise the fundamental position of probability in science. . . . As to what is meant by probability, opinions differ.

In the 1920s the idea of randomness, as distinct from a lack of information, was becoming substantive in the physical sciences because of the emergence of the Copenhagen Interpretation of quantum mechanics. In the social sciences, Frank Knight argued that uncertainty was the only source of profit, and the concept was pervading John Maynard Keynes' economics (Robert Skidelsky, Keynes: The Return of the Master).

Two mathematical theories of probability had become ascendant by the late 1920s. Richard von Mises (brother of the Austrian economist Ludwig) attempted to lay down the axioms of classical probability within a framework of Empiricism, the 'frequentist' or 'objective' approach. To counterbalance von Mises, the Italian actuary Bruno de Finetti presented a more Pragmatic approach, characterised by his claim that "Probability does not exist" because it was only an expression of the observer's view of the world. This 'subjectivist' approach was closely related to the less well-known position taken by the Pragmatist Frank Ramsey, who developed an argument against Keynes' Realist interpretation of probability presented in the Treatise on Probability.

Kolmogorov addressed the trichotomy of mathematical probability by generalising, so that Realist, Empiricist and Pragmatist probabilities were all examples of 'measures' satisfying certain axioms. In doing this, a random variable became a function while an expectation became an integral: probability became a branch of Analysis, not Statistics. Von Mises criticised Kolmogorov's generalised framework as unnecessarily complex. About a decade and a half back, the physicist Edwin Jaynes (Probability Theory: The Logic of Science) championed Leonard Savage's subjectivist Bayesianism as having a "deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science".

The objections to measure theoretic probability for empirical scientists can be accounted for as a lack of physicality. Frequentist probability is based on the act of counting; subjectivist probability is based on a flow of information, which, following Claude Shannon, is now an observable entity in Empirical science. Measure theoretic probability is based on abstract mathematical objects unrelated to sensible phenomena. However, the generality of Kolmogorov’s approach made it flexible enough to handle problems that emerged in physics and engineering during the Second World War and his approach became widely accepted after 1950 because it was practically more useful.

In the context of the first statement of the FTAP, a ‘martingale measure’ is a probability measure, usually labelled Q, such that the (real, rather than nominal) price of an asset today, X0, is the expectation, using the martingale measure, of its (real) price in the future, XT. Formally,

X0 = EQ[XT]

The abstract probability distribution Q is defined so that this equality exists, not on any empirical information of historical prices or subjective judgement of future prices. The only condition placed on the relationship that the martingale measure has with the ‘natural’, or ‘physical’, probability measures usually assigned the label P, is that they agree on what is possible.

The term 'martingale' in this context derives from doubling strategies in gambling, and it was introduced into mathematics by Jean Ville in a development of von Mises' work. The idea that asset prices have the martingale property was first proposed by Benoit Mandelbrot in response to an early formulation of Eugene Fama's Efficient Market Hypothesis (EMH), the two concepts being combined by Fama. For Mandelbrot and Fama the key consequence of prices being martingales was that the current price was independent of the future price and technical analysis would not prove profitable in the long run. In developing the EMH there was no discussion of the nature of the probability under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure. While the FTAP employs modern terminology in the context of value-neutrality, the idea of equating a current price with a future, uncertain, price has ethical ramifications.

The other technical term in the first statement of the FTAP, arbitrage, has long been used in financial mathematics. In Liber Abaci, Fibonacci (Laurence Sigler, Fibonacci's Liber Abaci) discusses 'Barter of Merchandise and Similar Things': 20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. In this case there are three commodities, arms of cloth, rolls of cotton and Pisan pounds, and Fibonacci solves the problem by having Pisan pounds 'arbitrate', or 'mediate' as Aristotle might say, between the other two commodities.

Within neo-classical economics, the Law of One Price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gérard Debreu and Lionel McKenzie in the context of general equilibrium, in particular through the introduction of the Arrow Security, which, employing the Law of One Price, could be used to price any asset. It was on this principle that Black and Scholes believed the value of warrants could be deduced by employing a hedging portfolio; in introducing their work with the statement that "it should not be possible to make sure profits" they were invoking the arbitrage argument, which had an eight-hundred-year history. In the context of the FTAP, 'an arbitrage' has developed into the ability to formulate a trading strategy such that the probability, under a natural or martingale measure, of a loss is zero, but the probability of a positive profit is not.

To understand the connection between the financial concept of arbitrage and the mathematical idea of a martingale measure, consider the most basic case of a single asset whose current price is X0 and whose price at time T > 0 in the future can take one of two values, XTD < XTU. In this case an arbitrage would exist if X0 ≤ XTD < XTU: buying the asset now, at a price that is less than or equal to the future pay-offs, would lead to a possible profit at the end of the period, with the guarantee of no loss. Similarly, if XTD < XTU ≤ X0, short selling the asset now, and buying it back later, would also lead to an arbitrage. So, for there to be no arbitrage opportunities we require that

XTD < X0 < XTU

This implies that there is a number, 0 < q < 1, such that

X0 = XTD + q(XTU − XTD)

= qXTU + (1−q)XTD

The price now, X0, lies between the future prices, XTU and XTD, in the ratio q : (1 − q) and represents some sort of ‘average’. The first statement of the FTAP can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (real) future price”.

If X0 < XTD ≤ XTU we have that q < 0, whereas if XTD ≤ XTU < X0 then q > 1, and in both cases q does not represent a probability measure, which by Kolmogorov's axioms must lie between 0 and 1. In either of these cases an arbitrage exists and a trader can make a riskless profit: the market involves 'turpe lucrum'. This account gives an insight as to why James Bernoulli, in his moral approach to probability, considered situations where probabilities did not sum to 1: he was considering problems that were pathological not because they failed the rules of arithmetic but because they were unfair. It follows that if there are no arbitrage opportunities then the quantity q can be seen as representing the 'probability' that the XTU price will materialise in the future. Formally

X0 = qXTU + (1−q) XTD ≡ EQ[XT]
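
The algebra of this one-period example fits in a few lines: given the two possible future prices and the current price, the implied q either is a probability (no arbitrage, and X0 = EQ[XT]) or is not (an arbitrage exists). The numbers are illustrative:

```python
def implied_martingale_weight(X0, X_down, X_up):
    """Solve X0 = q*X_up + (1 - q)*X_down for q, as in the one-period example."""
    return (X0 - X_down) / (X_up - X_down)

for X0 in (95.0, 85.0, 115.0):                 # with X_down = 90 and X_up = 110
    q = implied_martingale_weight(X0, 90.0, 110.0)
    status = "no arbitrage" if 0.0 < q < 1.0 else "arbitrage: q is not a probability"
    print(f"X0 = {X0}: q = {q:.2f} -> {status}")
    if 0.0 < q < 1.0:
        assert abs(q * 110.0 + (1 - q) * 90.0 - X0) < 1e-12   # X0 = E_Q[X_T]
```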

The connection between the financial concept of arbitrage and the mathematical object of a martingale is essentially a tautology: both statements mean that the price today of an asset must lie between its future minimum and maximum possible values. This first statement of the FTAP was anticipated by Frank Ramsey when he defined 'probability' in the Pragmatic sense of 'a degree of belief' and argued that 'degrees of belief' are measured through betting odds. On this basis he formulated some axioms of probability, including that a probability must lie between 0 and 1. He then goes on to say that

These are the laws of probability, …If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event.

This is a Pragmatic argument that identifies the absence of the martingale measure with the existence of arbitrage, and today it forms the basis of the standard argument as to why arbitrages do not exist: if they did, other market participants would bankrupt the agent who was mis-pricing the asset. This has become known in philosophy as the 'Dutch Book' argument and, as a consequence of the fact/value dichotomy, it is often presented as a 'matter of fact'. However, ignoring the fact/value dichotomy, the Dutch Book argument is a variant of the 'Golden Rule' – "Do to others as you would have them do to you" – it is infused with the moral concepts of fairness and reciprocity (Jeffrey Wattles, The Golden Rule).

Embedded in the FTAP is the ethical concept of Justice, capturing the social norms of reciprocity and fairness. This is significant in the context of Granovetter's discussion of embeddedness in economics. It is conventional to assume that mainstream economic theory is 'undersocialised': agents are rational calculators seeking to maximise an objective function. The argument presented here is that a central theorem in contemporary economics, the FTAP, is deeply embedded in social norms, despite being presented as an undersocialised mathematical object. This embeddedness is a consequence of the origins of mathematical probability being in the ethical analysis of commercial contracts: the feudal shackles are still binding this most modern of economic theories.

Ramsey goes on to make an important point

Having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you.

Ramsey is arguing that an agent needs to employ the same measure in pricing all assets in a market, and this is the key result in contemporary derivative pricing. Having identified the martingale measure on the basis of a ‘primal’ asset, it is then applied across the market, in particular to derivatives on the primal asset, on the basis of the well-known result that if two assets offer different ‘market prices of risk’, an arbitrage exists. This explains why the market price of risk appears in the Radon-Nikodym derivative and the Capital Market Line: it enforces Ramsey’s consistency in pricing.

The second statement of the FTAP is concerned with incomplete markets, which appear in relation to Arrow-Debreu prices. Mathematically, in the special case that there are as many, or more, assets in a market as there are possible future, uncertain states, a unique pricing vector can be deduced for the market by Cramer’s Rule. If the elements of the pricing vector satisfy the axioms of probability, specifically each element is positive and they all sum to one, then the market precludes arbitrage opportunities. This is the case covered by the first statement of the FTAP. In the more realistic situation that there are more possible future states than assets, the market can still be arbitrage free, but the pricing vector, the martingale measure, might not be unique. The agent can still be consistent in selecting which particular martingale measure they choose to use, but another agent might choose a different measure, such that the two do not agree on a price. In the context of the Law of One Price, this means that we cannot hedge, replicate or cover a position in the market such that the portfolio is riskless. The significance of the second statement of the FTAP is that it tells us that in the sensible world of imperfect knowledge and transaction costs, a model within the framework of the FTAP cannot give a precise price. When faced with incompleteness in markets, agents need alternative ways to price assets, and behavioural techniques have come to dominate financial theory. This feature was already recognised in The Port Royal Logic when it noted the role of transaction costs in lotteries.
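A small sketch illustrates the incompleteness point, under assumed numbers and a zero interest rate for simplicity: two traded assets (a bond paying 1 in every state and a stock worth 100 today paying 120, 100 or 80 across three future states) leave one degree of freedom in the pricing vector, so a call struck at 100 has a whole interval of arbitrage-free prices rather than a unique one.

```python
import numpy as np

# Martingale conditions on (q1, q2, q3):
#   q1 + q2 + q3 = 1                    (bond priced correctly)
#   120*q1 + 100*q2 + 80*q3 = 100       (stock priced correctly)
# These force q1 = q3 = (1 - q2) / 2, with q2 free in (0, 1).

def call_price(q2: float) -> float:
    q1 = q3 = (1.0 - q2) / 2.0
    payoff = np.array([20.0, 0.0, 0.0])   # call struck at 100
    q = np.array([q1, q2, q3])
    return float(payoff @ q)

for q2 in (0.1, 0.5, 0.9):
    print(f"q2 = {q2:.1f} -> call price {call_price(q2):.2f}")
# Every value in (0, 10) is consistent with no-arbitrage: the model alone
# cannot pin down a single price.
```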

Gauge Theory of Arbitrage, or Financial Markets Resembling Screening in Electrodynamics


When a mispricing appears in a market, market speculators and arbitrageurs rectify the mistake by obtaining a profit from it. In the case of profitable fluctuations they move into profitable assets, leaving comparably less profitable ones. This affects prices in such a way that all assets of similar risk become equally attractive, i.e. the speculators restore the equilibrium. If this process occurs infinitely rapidly, then the market corrects the mispricing instantly and current prices fully reflect all relevant information. In this case one says that the market is efficient. However, this is clearly an idealization and does not hold at sufficiently short time scales.

The general picture, sketched above, of the restoration of equilibrium in financial markets resembles screening in electrodynamics. Indeed, in the case of electrodynamics, negative charges move into the region of the positive electric field, positive charges get out of the region and thus screen the field. Comparing this with the financial market we can say that a local virtual arbitrage opportunity with a positive excess return plays the role of the positive electric field, speculators in the long position behave as negative charges, whilst the speculators in the short position behave as positive ones. Movements of positive and negative charges screen out a profitable fluctuation and restore the equilibrium, so that there is no arbitrage opportunity any more, i.e. the speculators have eliminated it.

The analogy might appear superficial, but it is not. It emerges naturally in the framework of the Gauge Theory of Arbitrage (GTA). The theory treats the calculation of net present values and the buying and selling of assets as a parallel transport of money in some curved space, and interprets interest rates, exchange rates and asset prices as the corresponding connection components. This structure is exactly equivalent to the geometrical structure underlying electrodynamics, where the components of the vector potential are connection components responsible for the parallel transport of charges. The components of the corresponding curvature tensors are the electromagnetic field in the case of electrodynamics and the excess rate of return in the case of GTA. The presence of uncertainty is equivalent to the introduction of noise in electrodynamics, i.e. quantization of the theory. It allows one to map the theory of the capital market onto the theory of a quantized gauge field interacting with matter (money flow) fields. The gauge transformations of the matter field correspond to a change of the par value of the asset units, whose effect is eliminated by a gauge tuning of prices and rates. Free quantum gauge field dynamics (in the absence of money flows) is described by a geometrical random walk for asset prices with the log-normal probability distribution. In the general case the construction maps the capital market onto Quantum Electrodynamics, where the price walks are affected by money flows.
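A hedged sketch of the “connection and curvature” picture, using the standard covered-interest-parity loop as a stand-in (all rates below are made-up inputs, not taken from the source): discount factors and the exchange rate act as link variables, and the product of links around a closed loop plays the role of the curvature; when it differs from 1 there is an excess return, i.e. an arbitrage around the loop.

```python
def plaquette_curvature(r_dom: float, r_for: float,
                        spot: float, spot_later: float, dt: float) -> float:
    """Product of 'link' factors around the loop: borrow domestic, convert,
    lend foreign, convert back, repay. spot is domestic units per foreign unit."""
    borrow_domestic = 1.0 / (1.0 + r_dom * dt)   # transport backwards in time
    convert_out = 1.0 / spot                      # domestic -> foreign now
    lend_foreign = 1.0 + r_for * dt               # transport forwards in time
    convert_back = spot_later                     # foreign -> domestic later
    return borrow_domestic * convert_out * lend_foreign * convert_back

# The covered-interest-parity forward rate makes the loop 'flat' (curvature = 1):
spot, r_dom, r_for, dt = 1.10, 0.05, 0.03, 1.0
parity_forward = spot * (1 + r_dom * dt) / (1 + r_for * dt)
print(plaquette_curvature(r_dom, r_for, spot, parity_forward, dt))  # ~1.0
print(plaquette_curvature(r_dom, r_for, spot, 1.15, dt))            # != 1 -> arbitrage
```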

Electrodynamical model of quasi-efficient financial market

Financial Forward Rate “Strings” (Didactic 1)


Imagine that Julie wants to invest $1 for two years. She can devise two possible strategies. The first one is to put the money in a one-year bond at an interest rate r1. At the end of the year, she must take her money and find another one-year bond, with interest rate r1/2, the one-year rate that will prevail in one year’s time on a loan maturing in two years. The final payoff of this strategy is simply (1 + r1)(1 + r1/2). The problem is that Julie cannot know for sure what the one-period interest rate r1/2 of next year will be. Thus, she can only estimate a return by guessing the expectation of r1/2.

Instead of making two separate investments of one year each, Julie could invest her money today in a bond that pays off in two years with interest rate r2. The final payoff is then (1 + r2)2. This second strategy is riskless as she knows her return for sure. Now, this strategy can be reinterpreted along the lines of the first strategy as follows. It consists in investing for one year at the rate r1 and for the second year at a forward rate f2. The forward rate is like the r1/2 rate, with the essential difference that it is guaranteed: by buying the two-year bond, Julie can “lock in” an interest rate f2 for the second year.
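Julie’s lock-in can be put in numbers (the rates below are illustrative, not from the text): equating the two strategies, (1 + r2)2 = (1 + r1)(1 + f2), pins down the forward rate f2 for the second year.

```python
r1 = 0.03   # one-year spot rate (assumed)
r2 = 0.035  # two-year spot rate, annualised (assumed)

f2 = (1 + r2) ** 2 / (1 + r1) - 1          # forward rate locked in for year two
payoff_two_year_bond = (1 + r2) ** 2
payoff_rolled = (1 + r1) * (1 + f2)

print(f"implied forward rate f2 = {f2:.4%}")
print(payoff_two_year_bond, payoff_rolled)  # identical by construction: the lock-in
```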

This simple example illustrates that the set of all possible bonds traded on the market is equivalent to the so-called forward rate curve. The forward rate f(t,x) is thus the interest rate that can be contracted at time t for instantaneously riskless borrowing or lending at time t + x. It is thus a function or curve of the time-to-maturity x, where x plays the role of a “length” variable, that deforms with time t. Its knowledge is completely equivalent to the set of bond prices P(t,x) at time t that expire at time t + x. The shape of the forward rate curve f(t,x) incessantly fluctuates as a function of time t. These fluctuations are due to a combination of factors, including future expectation of the short-term interest rates, liquidity preferences, market segmentation and trading. It is obvious that the forward rate f(t, x+δx) for δx small cannot be very different from f(t,x). It is thus tempting to see f(t,x) as a “string” characterized by a kind of tension which prevents too large local deformations that would not be financially acceptable. This superficial analogy is in the follow-up of the repeated intersections between finance and physics, starting with Bachelier, who solved the diffusion equation of Brownian motion as a model of stock market price fluctuations five years before Einstein, continuing with Mandelbrot’s discovery of the relevance of Lévy laws for cotton price fluctuations, which can be compared with the present interest in such power laws for the description of physical and natural phenomena. The present investigation delves into how to formalize mathematically this analogy between the forward rate curve and a string. We formulate the term structure of interest rates as the solution of a stochastic partial differential equation (SPDE), following the physical analogy of a continuous curve (string) whose shape moves stochastically through time.
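The asserted equivalence between the forward curve f(t,x) and the bond prices P(t,x) can be sketched as follows, under the usual continuous-compounding convention P(t,x) = exp(−∫0x f(t,u) du); the toy upward-sloping curve below is just a stand-in shape for illustration.

```python
import numpy as np

def bond_prices_from_forward_curve(f, x_grid):
    """Integrate the instantaneous forward curve over time-to-maturity
    to recover zero-coupon bond prices P(t, x) = exp(-integral of f)."""
    rates = np.array([f(x) for x in x_grid])
    # cumulative trapezoidal integral of f between 0 and each maturity
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (rates[1:] + rates[:-1]) * np.diff(x_grid)))
    )
    return np.exp(-integral)

forward = lambda x: 0.02 + 0.005 * x          # toy upward-sloping forward curve
maturities = np.linspace(0.0, 10.0, 41)
prices = bond_prices_from_forward_curve(forward, maturities)
print(dict(zip(maturities[::8], prices[::8].round(4))))
```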

The equation of motion of macroscopic physical strings is derived from conservation laws. The fundamental equations of motion of microscopic strings formulated to describe the fundamental particles derive from global symmetry principles and dualities between long-range and short-range descriptions. Are there similar principles that can guide the determination of the equations of motion of the more down-to-earth financial forward rate “strings”?

Suppose that in the middle ages, before Copernicus and Galileo, the Earth really was stationary at the centre of the universe, and only began moving later on. Imagine that during the nineteenth century, when everyone believed classical physics to be true, it really was true, and quantum phenomena were non-existent. These are not philosophical musings, but an attempt to portray how physics might look if it actually behaved like the financial markets. Indeed, the financial world is such that any insight is almost immediately used to trade for a profit. As the insight spreads among traders, the “universe” changes accordingly. As G. Soros has pointed out, market players are “actors observing their own deeds”. As E. Derman, head of quantitative strategies at Goldman Sachs, puts it, in physics you are playing against God, who does not change his mind very often. In finance, you are playing against God’s creatures, whose feelings are ephemeral, at best unstable, and the news on which they are based keeps streaming in. Value clearly derives from human beings, while mass, charge and electromagnetism apparently do not. This has led to suggestions that a fruitful framework to study finance and economy is to use evolutionary models inspired from biology and genetics.

This does not however guide us much for the determination of “fundamental” equations, if any. Here, we propose to use the condition of absence of arbitrage opportunity and show that this leads to strong constraints on the structure of the governing equations. The basic idea is that, if there are arbitrage opportunities (free lunches), they cannot live long or must be quite subtle, otherwise traders would act on them and arbitrage them away. The no-arbitrage condition is an idealization of a self-consistent dynamical state of the market resulting from the incessant actions of the traders (arbitragers). It is not the out-of-fashion equilibrium approximation sometimes described but rather embodies a very subtle cooperative organization of the market.

We consider this condition as the fundamental backbone for the theory. The idea to impose this requirement is not new and is in fact the prerequisite of most models developed in the academic finance community. Modigliani and Miller [here and here] have indeed emphasized the critical role played by arbitrage in determining the value of securities. It is sometimes suggested that transaction costs and other market imperfections make the no-arbitrage condition irrelevant. Let us briefly address this question.

Transaction costs in option replication and other hedging activities have been extensively investigated, since they (or other market “imperfections”) clearly disturb the risk-neutral argument and set option theory back a few decades. Transaction costs induce, for obvious reasons, dynamic incompleteness, thus preventing valuation as we know it since Black and Scholes. However, the most efficient dynamic hedgers (market makers) incur essentially no transaction costs when owning options. These specialized market makers compete with each other to provide liquidity in option instruments, and maintain inventories in them. They rationally limit their dynamic replication to their residual exposure, not their global exposure. In addition, the fact that they do not hold options until maturity greatly reduces their costs of dynamic hedging. They have an incentive in the acceleration of financial intermediation. Furthermore, as options are rarely replicated until maturity, the expected transaction costs of the short options depend mostly on the dynamics of the order flow in the option markets – not on the direct costs of transacting. For the efficient operators (and those operators only), markets are more dynamically complete than anticipated. This is not true for a second category of traders, those who merely purchase or sell financial instruments that are subjected to dynamic hedging. They, accordingly, neither are equipped for dynamic hedging, nor have the need for it, thanks to the existence of specialized and more efficient market makers. Examining the transaction costs they would incur if they decided to dynamically replicate their options is of no real theoretical interest. A second important point is that the existence of transaction costs should not be invoked as an excuse for disregarding the no-arbitrage condition, but should rather be constructively invoked to study its impact on the models…..