Bullish or Bearish. Note Quote.

The term spread refers to the difference in premiums between the purchase and sale of options. An option spread is the simultaneous purchase of one or more options contracts and the sale of an equivalent number of options contracts in a different series of the same class of options. A spread on the same underlying could involve: 

  •  Buying and selling calls, or 
  •  Buying and selling puts.

Combining puts and calls into groups of two or more makes it feasible to design derivatives with interesting payoff profiles. The profit and loss outcomes depend on the options used (puts or calls); the positions taken (long or short); whether their strike prices are identical or different; and the similarity or difference of their exercise dates. Among directional positions are bullish vertical call spreads, bullish vertical put spreads, bearish vertical call spreads, and bearish vertical put spreads. 

If the long position has a higher premium than the short position, this is known as a debit spread, and the investor will be required to deposit the difference in premiums. If the long position has a lower premium than the short position, this is a credit spread, and the investor will be allowed to withdraw the difference in premiums. The spread is even if the premiums on each side are the same. 

A potential loss in an option spread is determined by two factors: 

  • Strike price 
  • Expiration date 

If the strike price of the long call is greater than the strike price of the short call, or if the strike price of the long put is less than the strike price of the short put, a margin is required because adverse market moves can cause the short option to suffer a loss before the long option can show a profit.

A margin is also required if the long option expires before the short option. The reason is that once the long option expires, the trader holds an unhedged short position. A good way of looking at margin requirements is that they foretell potential loss. Here are, in a nutshell, the main option spreads.

A calendar, horizontal, or time spread is the simultaneous purchase and sale of options of the same class with the same exercise prices but with different expiration dates. A vertical, or price or money, spread is the simultaneous purchase and sale of options of the same class with the same expiration date but with different exercise prices. A bull, or call, spread is a type of vertical spread that involves the purchase of the call option with the lower exercise price while selling the call option with the higher exercise price. The result is a debit transaction because the lower exercise price will have the higher premium.

  • The maximum risk is the net debit: the long option premium minus the short option premium. 
  • The maximum profit potential is the difference in the strike prices minus the net debit. 
  • The breakeven is equal to the lower strike price plus the net debit. 

A trader will typically buy a vertical bull call spread when he is mildly bullish. Essentially, he gives up unlimited profit potential in return for reducing his risk. In a vertical bull call spread, the trader is expecting the spread premium to widen because the lower strike price call comes into the money first. 
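To make the arithmetic concrete, here is a minimal Python sketch computing the net debit, maximum risk, maximum profit, and breakeven of a vertical bull call spread; the strikes and premiums below are hypothetical illustrations, not market quotes.

```python
import numpy as np

def bull_call_spread(k_low, k_high, prem_low, prem_high, prices):
    """Long call at k_low, short call at k_high (a debit spread)."""
    net_debit = prem_low - prem_high              # premium paid up front
    pnl = (np.maximum(prices - k_low, 0.0)
           - np.maximum(prices - k_high, 0.0) - net_debit)
    return {
        "max_risk": net_debit,                    # lost if both calls expire worthless
        "max_profit": (k_high - k_low) - net_debit,
        "breakeven": k_low + net_debit,
        "pnl": pnl,
    }

# Hypothetical example: buy the 100 call at 5.00, sell the 110 call at 2.00
prices = np.linspace(90, 120, 7)
result = bull_call_spread(100, 110, 5.0, 2.0, prices)
print(result["max_risk"], result["max_profit"], result["breakeven"])
print(dict(zip(prices, result["pnl"])))
```

With these numbers the net debit is 3, so the maximum risk is 3, the maximum profit is 10 − 3 = 7, and the breakeven is 103, matching the bullet points above.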

Vertical spreads are the more common of the directional strategies, and they may be bullish or bearish to reflect the holder’s view of the market’s anticipated direction. Bullish vertical put spreads are a combination of a long put with a low strike and a short put with a higher strike. Because the short position is struck closer to the money, this generates a premium credit. 

Bearish vertical call spreads are the inverse of bullish vertical call spreads. They are created by combining a short call with a low strike and a long call with a higher strike. Bearish vertical put spreads are the inverse of bullish vertical put spreads, generated by combining a short put with a low strike and a long put with a higher strike. This is a bearish position taken when a trader or investor expects the market to fall. 

The bull or sell put spread is a type of vertical spread involving the purchase of a put option with the lower exercise price and sale of a put option with the higher exercise price. Theoretically, this is the same action that a bull call spreader would take. The difference between a call spread and a put spread is that the net result will be a credit transaction because the higher exercise price will have the higher premium. 

  • The maximum risk is the difference in the strike prices minus the net credit. 
  • The maximum profit potential equals the net credit. 
  • The breakeven equals the higher strike price minus the net credit. 

The bear or sell call spread involves selling the call option with the lower exercise price and buying the call option with the higher exercise price. The net result is a credit transaction because the lower exercise price will have the higher premium.

A bear put spread (or buy spread) involves selling the put option with the lower exercise price and buying the put option with the higher exercise price. This is the same action that a bear call spreader would take. The difference between a call spread and a put spread, however, is that the net result will be a debit transaction because the higher exercise price will have the higher premium. 

  • The maximum risk is equal to the net debit. 
  • The maximum profit potential is the difference in the strike prices minus the net debit. 
  • The breakeven equals the higher strike price minus the net debit.

An investor or trader would buy a vertical bear put spread because he or she is mildly bearish, giving up an unlimited profit potential in return for a reduction in risk. In a vertical bear put spread, the trader is expecting the spread premium to widen because the higher strike price put comes into the money first. 

In conclusion, investors and traders who are bullish on the market will either buy a bull call spread or sell a bull put spread. But those who are bearish on the market will either buy a bear put spread or sell a bear call spread. When the investor pays more for the long option than she receives in premium for the short option, then the spread is a debit transaction. In contrast, when she receives more than she pays, the spread is a credit transaction. Credit spreads typically require a margin deposit. 

Tranche Declension.

CDO diagram (FCIC and IMF)

With the CDO (collateralized debt obligation) market picking up, it is important to build a stronger understanding of pricing and risk management models. The Gaussian copula model has well-known deficiencies and has been criticized, but it remains fundamental as a starting point. Here, we draw attention to the applicability of Gaussian inequalities in analyzing tranche loss sensitivity to correlation parameters within the Gaussian copula model.

We work with an RN-valued Gaussian random variable X = (X1, … , XN), where each Xj is normalized to mean 0 and variance 1, and study the equity tranche loss

L[0,a] = ∑m=1N lm1[Xm≤cm] – {∑m=1N lm1[Xm≤cm] – a}+

where l1, …, lN > 0, a > 0, and c1, …, cN ∈ R are parameters. We thus establish an identity between the sensitivity of E[L[0,a]] to the correlation rjk = E[XjXk] and the parameters cj and ck, from which we subsequently obtain the inequality

∂E[L[0,a]]/∂rjk ≤ 0

Applying this inequality to a CDO containing N names whose default behavior is governed by the Gaussian variables Xj shows that an increase in name-to-name correlation decreases expected loss in an equity tranche. This is a generalization of the well-known result for Gaussian copulas with uniform correlation.
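Before developing the model in detail, a quick numerical check of this monotonicity: a minimal Monte Carlo sketch under a one-factor Gaussian copula with uniform correlation ρ, in which the estimated E[L[0,a]] should be non-increasing in ρ. The portfolio size, default probability, notionals, and attachment point below are illustrative assumptions, not calibrated values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def expected_equity_tranche_loss(rho, n_names=100, p_default=0.05,
                                 notional=1.0, a=2.0, n_sims=50_000):
    """Monte Carlo estimate of E[L_[0,a]] under a one-factor Gaussian copula."""
    c = norm.ppf(p_default)                       # common default threshold c_j
    z = rng.standard_normal(n_sims)               # systematic factor
    eps = rng.standard_normal((n_sims, n_names))  # idiosyncratic factors
    x = np.sqrt(rho) * z[:, None] + np.sqrt(1.0 - rho) * eps
    loss = notional * (x <= c).sum(axis=1)        # portfolio loss L
    tranche_loss = loss - np.maximum(loss - a, 0.0)   # equity tranche loss L - (L - a)^+
    return tranche_loss.mean()

for rho in (0.0, 0.2, 0.4, 0.6, 0.8):
    print(f"rho = {rho:.1f}  E[L_[0,a]] ~ {expected_equity_tranche_loss(rho):.4f}")
```

Higher correlation piles probability mass onto the zero-default scenario, which is exactly what depletes the expected loss of the thin equity slice.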

Consider a CDO consisting of N names, with τj denoting the (random) default time of the jth name. Let

Xj = φj-1(Fj(τj))

where Fj is the distribution function of τj (relative to the market pricing measure), assumed to be continuous and strictly increasing, and φj is the standard Gaussian distribution function. Then for any x ∈ R we have

P[Xj ≤ x] = P[τj ≤ Fj-1(φj(x))] = Fj(Fj-1(φj(x))) = φj(x)

which means that Xj has standard Gaussian distribution. The Gaussian copula model posits that the joint distribution of the Xj is Gaussian; thus,

X = (X1, …, XN)

is an RN-valued Gaussian variable whose marginals are all standard Gaussian. The correlation

rjk = E[XjXk]

reflects the default correlation between the names j and k. Now let

pj = P[τj ≤ T] = P[Xj ≤ cj]

be the probability that the jth name defaults within a fixed time horizon T, where

cj = φj−1(Fj(T))

is the default threshold of the jth name.

Schematically, the essential phenomenon is that the default of name j, which happens if the default time τj falls within the time horizon T, results in a loss of amount lj > 0 in the CDO portfolio. Thus, the total loss during the time period [0, T] is

L = ∑m=1N lm1[Xm≤cm]

This is where we are essentially working with a one-period CDO, and ignoring discounting from the random time of actual default. A tranche is simply a range of loss for the portfolio; it is specified by a closed interval [a, b] with 0 ≤ a ≤ b. If the loss x is less than a, then this tranche is unaffected, whereas if x ≥ b then the entire tranche value b − a is eaten up by loss; in between, if a ≤ x ≤ b, the loss to the tranche is x − a. Thus, the tranche loss function t[a, b] is given by

t[a, b](x) = 0, if x < a; = x – a, if x ∈ [a, b]; = b – a, if x > b

or compactly,

t[a, b](x) = (x – a)+ – (x – b)+

From this, it is clear that t[a, b](x) is continuous in (a, b, x), and we see that it is a non-decreasing function of x. Thus, the loss in an equity tranche [0, a] is given by

t[0,a](L) = L − (L − a)+

with a > 0.

Haircuts and Collaterals.

In a repo-style securities financing transaction, the repo buyer or lender is exposed to the borrower’s default risk for the whole duration, with a market-contingent exposure framed on a short window for default settlement. The margin period of risk (MPR) is the time period starting from the last date when margin is met to the date when the defaulting counterparty is closed out with completion of collateral asset disposal. The MPR could cover a number of events or processes, including collateral valuation, margin calculation, margin call, valuation dispute and resolution, default notification and default grace period, and finally the time to sell the collateral to recover the lent principal and accrued interest. If the sales proceeds are not sufficient, the deficiency could be made a claim against the borrower’s estate, unless the repo is non-recourse. The lender’s exposure in a repo during the MPR is simply principal plus accrued and unpaid interest. Since the accrued and unpaid interest is usually margined in cash, repo exposure in the MPR is flat.

A flat exposure could apply to OTC derivatives as well. For an OTC netting set, the mark-to-market of the derivatives could fluctuate as the underlying prices move. The derivatives exposure is formally set on the early termination date, which could be days after the point of default. The surviving counterparty, however, could have delta hedged against market factors following the default, so that the derivative exposure remains a more manageable gamma exposure. For developing a collateral haircut model, a constant exposure during the MPR is therefore generally assumed.

The primary driver of haircuts is asset volatility. Market liquidity risk is another significant one, as liquidation of the collateral assets might negatively impact the market, if the collateral portfolio is illiquid, large, or concentrated in certain asset sectors or classes. Market prices could be depressed, bid/ask spreads could widen, and some assets might have to be sold at a steep discount. This is particularly pronounced with private securitization and lower grade corporates, which trade infrequently and often rely on valuation services rather than actual market quotations. A haircut model therefore needs to capture liquidity risk, in addition to asset volatility.

In an idealized setting, we therefore consider a counterparty (or borrower) C’s default time at t, when the margin is last met, an MPR of u during which there is no margin posting, and the collateral assets are sold at time t+u instantaneously on the market, with a possible liquidation discount g.

Let us denote the collateral market value as B(t) and the exposure to the defaulting counterparty C as E(t). At time t, one share of the asset is margined properly, i.e., E(t) = (1 − h)B(t), where h is a constant haircut, 1 > h ≥ 0. The margin agreement is assumed to have a zero minimum transfer amount. The lender would have a residual exposure (E(t) − B(t+u)(1 − g))+, where g is a constant, 1 > g ≥ 0. Exposure to C is assumed flat after t. We can write the loss function from holding the collateral as follows,

L(t + u) = Et(1 – Bt+u/Bt (1 – g)/(1 – h))+ = (1 – g)Bt(1 – Bt+u/Bt – (h – g)/(1 – g))+ —– (1)

Conditional on default happening at time t, the above determines a one-period loss distribution driven by the asset price return B(t+u)/B(t). For repos, this loss function is slightly different from the lender’s ultimate loss, which would be lessened by the claim and recovery process. In the regulatory context, the haircut is viewed as a mitigant of counterparty exposure and is set independently of the counterparty, so recovery from the defaulting party is not considered.

Let y = (1 – Bt+u/Bt) be the price decline. If g = 0, Pr(y > h) equals Pr(L(u) > 0). There is no loss if the price decline is less than or equal to h; a first loss will occur only if y > h. The haircut h thus provides a cushion before a loss is incurred. Given a target rating class’s default probability p, the first-loss haircut can be written as

hp = inf{h > 0:Pr(L(u) > 0) ≤ p} —– (2)

Let VaRq denote the VaR of holding the asset, an amount which the price decline will not exceed at a given confidence level q, say 99%. In light of the adoption of expected shortfall (ES) in the Basel IV market risk capital standard, we can also define the haircut as the ES of the price decline beyond the q-quantile,

hES = ESq = E[y|y > VaRq]

VaRq = inf{y0 > 0 : Pr(y > y0) ≤ 1 − q} —– (3)

Without the liquidity discount, hp is the same as VaRq (with p = 1 − q). If haircuts are set to VaRq or hES, the market risk capital for holding the asset for the given MPR, defined as a multiple of VaR or ES, is zero. This implies that we can define a haircut to meet a minimum economic capital (EC) requirement C0,

hEC = inf{h ∈ R+: EC[L|h] ≤ C0} —– (4)

where EC is measured as either VaR or ES less expected loss (EL). For rating criteria employing an EL-based target per rating class, we can introduce one more definition of the haircut, based on an EL target L0,

hEL = inf{h ∈ R+: E[L|h] ≤ L0} —– (5)

The expected loss target L0 can be set based on the EL criteria of a designated high credit rating, whether bank-internal or external. With an external rating such as Moody’s, for example, a firm can set the haircut to a level such that the expected (cumulative) loss satisfies the expected loss tolerance L0 of some predetermined Moody’s rating target, e.g., ‘Aaa’ or ‘Aa1’. In (4) and (5), the holding period of the loss L does not have to be an MPR. In fact, these two definitions apply to the general trading book credit risk capital approach, where the standard horizon is one year with a 99.9% confidence level for default risk.

Unlike VaRq, the definitions hp, hEL, and hEC are based on a loss distribution generated solely by the collateral’s market risk exposure. As such, we no longer apply the usual wholesale credit risk terminology of probability of default (PD) and loss given default (LGD) to determine EL as the product of PD and LGD. Here EL is computed directly from a loss distribution originating in market risk, and the haircut is intended to be counterparty independent. For real repo transactions, where repo haircuts are known to be counterparty dependent, these definitions remain applicable provided the loss distribution incorporates the counterparty’s credit quality.
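To make these definitions concrete, here is a rough Monte Carlo sketch of (2), (3) and (5), assuming purely for illustration that the collateral price ratio over the MPR is lognormal; the MPR length, volatility, liquidation discount, and loss targets below are invented parameters, not calibrated values, and the loss function (1) is used per unit of collateral for the EL-based haircut.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions (not calibrated): 10-day MPR, 20% annualized volatility,
# zero drift, 5% liquidation discount, lognormal price ratio B_{t+u}/B_t.
u, sigma, mu, g = 10 / 250, 0.20, 0.0, 0.05
n_sims = 200_000
ratio = np.exp((mu - 0.5 * sigma**2) * u
               + sigma * np.sqrt(u) * rng.standard_normal(n_sims))
y = 1.0 - ratio                                   # price decline over the MPR

# (2) first-loss haircut for a target default probability p (g = 0 case: Pr(L > 0) = Pr(y > h))
p = 0.01
h_p = np.quantile(y, 1 - p)

# (3) VaR- and ES-based haircuts at confidence level q
q = 0.99
var_q = np.quantile(y, q)
h_es = y[y > var_q].mean()

# (5) EL-based haircut: smallest h with E[L | h] <= L0, using loss function (1) per unit collateral
L0 = 1e-4                                         # illustrative expected-loss tolerance
def expected_loss(h):
    return np.maximum((1.0 - h) - (1.0 - g) * ratio, 0.0).mean()

h_el = next(h for h in np.linspace(0.0, 0.5, 501) if expected_loss(h) <= L0)

print(f"h_p = {h_p:.4f}  VaR_q = {var_q:.4f}  h_ES = {h_es:.4f}  h_EL = {h_el:.4f}")
```

With q = 1 − p the first-loss haircut and VaR-based haircut coincide, while the ES- and EL-based haircuts come out higher because they penalize the size of the tail loss, not just its probability.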

Long Term Capital Management. Note Quote.

Long Term Capital Management, or LTCM, was a hedge fund founded in 1994 by John Meriwether, the former head of Salomon Brothers’s domestic fixed-income arbitrage group. Meriwether had grown the arbitrage group to become Salomon’s most profitable group by 1991, when it was revealed that one of the traders under his purview had astonishingly submitted a false bid in a U.S. Treasury bond auction. Despite reporting the trade immediately to CEO John Gutfreund, the outcry from the scandal forced Meriwether to resign.

Meriwether revived his career several years later with the founding of LTCM. Amidst the beginning of one of the greatest bull markets the global markets had ever seen, Meriwether assembled a team of some of the world’s most respected economic theorists to join other refugees from the arbitrage group at Salomon. The board of directors included Myron Scholes, a coauthor of the famous Black-Scholes formula used to price option contracts, and MIT Sloan professor Robert Merton, both of whom would later share the 1997 Nobel Prize for Economics. The firm’s impressive brain trust, collectively considered geniuses by most of the financial world, set out to raise a $1 billion fund by explaining to investors that their profoundly complex computer models allowed them to price securities according to risk more accurately than the rest of the market, in effect “vacuuming up nickels that others couldn’t see.”

One typical LTCM trade concerned the divergence in price between otherwise similar long-term U.S. Treasury bonds. Despite offering fundamentally the same (minimal) default risk, those issued more recently – known as “on-the-run” securities – traded more heavily than the “off-the-run” securities issued just months previously. Heavier trading meant greater liquidity, which in turn resulted in ever-so-slightly higher prices. As “on-the-run” securities become “off-the-run” upon the issuance of a new tranche of Treasury bonds, the price discrepancy generally disappears with time. LTCM sought to exploit that price convergence by shorting the more expensive “on-the-run” bond while purchasing the “off-the-run” security.

By early 1998 the intellectual firepower of its board members and the aggressive trading practices that had made the arbitrage group at Salomon so successful had allowed LTCM to flourish, growing its initial $1 billion of investor equity to $4.72 billion. However, the minuscule spreads earned on arbitrage trades could not provide the type of returns sought by hedge fund investors. In order to make transactions such as these worth their while, LTCM had to employ massive leverage in order to magnify its returns. Ultimately, the fund’s equity component sat atop more than $124.5 billion in borrowings for total assets of more than $129 billion. These borrowings were merely the tip of the iceberg; LTCM also held off-balance-sheet derivative positions with a notional value of more than $1.25 trillion.

The fund’s success began to pose its own problems. The market lacked sufficient capacity to absorb LTCM’s bloated size, as trades that had been profitable initially became impossible to conduct on a massive scale. Moreover, a flood of arbitrage imitators tightened the spreads on LTCM’s “bread-and-butter” trades even further. The pressure to continue delivering returns forced LTCM to find new arbitrage opportunities, and the fund diversified into areas where it could not pair its theoretical insights with trading experience. Soon LTCM had made large bets in Russia and in other emerging markets, on S&P futures, and in yield curve, junk bond, merger, and dual-listed securities arbitrage.

Combined with its style drift, the fund’s leverage of more than 26 to 1 put LTCM in an increasingly precarious bubble, which was eventually burst by a combination of factors that forced the fund into a liquidity crisis. In contrast to Scholes’s comments about plucking invisible, riskless nickels from the sky, financial theorist Nassim Taleb later compared the fund’s aggressive risk taking to “picking up pennies in front of a steamroller,” a steamroller that finally came in the form of 1998’s market panic. The departure of frequent LTCM counterparty Salomon Brothers from the arbitrage market that summer put downward pressure on many of the fund’s positions, and Russia’s default on its government-issued bonds threw international credit markets into a downward spiral. Panicked investors around the globe demonstrated a “flight to quality,” selling the risky securities in which LTCM traded and purchasing U.S. Treasury securities, further driving up Treasury prices and preventing the price convergence on which the fund had bet so heavily.

None of LTCM’s sophisticated theoretical models had contemplated such an internationally correlated credit market collapse, and the fund began hemorrhaging money, losing nearly 20% of its equity in May and June alone. Day after day, every market in which LTCM traded turned against it. Its powerless brain trust watched in horror as its equity shrank to $600 million in early September without any reduction in borrowing, resulting in an unfathomable leverage ratio of more than 200 to 1. Sensing the fund’s liquidity crunch, Bear Stearns refused to continue acting as a clearinghouse for the fund’s trades, throwing LTCM into a panic. Without the short-term credit that enabled its entire trading operations, the fund could not continue, and its longer-term securities grew more illiquid by the day.

Obstinate in their refusal to unwind what they still considered profitable trades hammered by short-term market irrationality, LTCM’s partners refused a buyout offer of $250 million by Goldman Sachs, ING Barings, and Warren Buffett’s Berkshire Hathaway. However, LTCM’s role as a counterparty in thousands of derivatives trades that touched investment firms around the world threatened to provoke a wider collapse in international securities markets if the fund went under, so the U.S. Federal Reserve stepped in to maintain order. Wishing to avoid the precedent of a government bailout of a hedge fund and the moral hazard it could subsequently encourage, the Fed invited every major investment bank on Wall Street to an emergency meeting in New York and dictated the terms of the $3.625 billion bailout that would preserve market liquidity. The Fed convinced Bankers Trust, Barclays, Chase, Credit Suisse First Boston, Deutsche Bank, Goldman Sachs, Merrill Lynch, J.P. Morgan, Morgan Stanley, Salomon Smith Barney, and UBS – many of whom were investors in the fund – to contribute $300 million apiece, with $125 million coming from Société Générale and $100 million from Lehman Brothers and Paribas. Eventually the market crisis passed, and each bank managed to liquidate its position at a slight profit. Only one bank contacted by the Fed refused to join the syndicate and share the burden in the name of preserving market integrity.

That bank was Bear Stearns.

Bear’s dominant trading position in bonds and derivatives had won it the profitable business of acting as a settlement house for nearly all of LTCM’s trading in those markets. On September 22, 1998, just days before the Fed-organized bailout, Bear put the final nail in the LTCM coffin by calling in a short-term debt in the amount of $500 million in an attempt to limit its own exposure to the failing hedge fund, rendering it insolvent in the process. Ever the maverick in investment banking circles, Bear stubbornly refused to contribute to the eventual buyout, even in the face of a potentially apocalyptic market crash and despite the millions in profits it had earned as LTCM’s prime broker. In typical Bear fashion, James Cayne ignored the howls from other banks that failure to preserve confidence in the markets through a bailout would bring them all down in flames, famously growling through a chewed cigar as the Fed solicited contributions for the emergency financing, “Don’t go alphabetically if you want this to work.”

Market analysts were nearly unanimous in describing the lessons learned from LTCM’s implosion; in effect, the fund’s profound leverage had placed it in such a precarious position that it could not wait for its positions to turn profitable. While its trades were sound in principle, LTCM’s predicted price convergence was not realized until long after its equity had been wiped out completely. A less leveraged firm, they explained, might have realized lower profits than the 40% annual return LTCM had offered investors up until the 1998 crisis, but could have weathered the storm once the market turned against it. In the words of economist John Maynard Keynes, the market had remained irrational longer than LTCM could remain solvent. The crisis further illustrated the importance not merely of liquidity but of perception in the less regulated derivatives markets. Once LTCM’s ability to meet its obligations was called into question, its demise became inevitable, as it could no longer find counterparties with whom to trade and from whom it could borrow to continue operating.

The thornier question of the Fed’s role in bailing out an overly aggressive investment fund in the name of market stability remained unresolved, despite the Fed’s insistence on private funding for the actual buyout. Though impossible to foresee at the time, the issue would be revisited anew less than ten years later, and it would haunt Bear Stearns. With negative publicity from Bear’s $38.5 million settlement with the SEC regarding charges that it had ignored fraudulent behavior by a client for whom it cleared trades and LTCM’s collapse behind it, Bear Stearns continued to grow under Cayne’s leadership, with its stock price appreciating some 600% from his assumption of control in 1993 until 2008. However, a rapid-fire sequence of negative events began to unfurl in the summer of 2007 that would push Bear into a liquidity crunch eerily similar to the one that felled LTCM.

Cryptocurrency and Efficient Market Hypothesis. Drunken Risibility.

According to the traditional definition, a currency has three main properties: (i) it serves as a medium of exchange, (ii) it is used as a unit of account and (iii) it serves as a store of value. Throughout economic history, money has been tied to political power. In the beginning, coins were minted in precious metals, so the value of a coin was intrinsically determined by the value of the metal itself. Later, money was printed on paper bank notes, but its value was still linked to a quantity of gold guarded in the vault of a central bank. Nation states have used their political power to regulate the use of currencies and to impose one currency (usually the one issued by the same nation state) as legal tender for obligations within their territory. In the twentieth century a major change took place: the abandonment of the gold standard. The detachment of currencies (especially the US dollar) from the gold standard amounted to a recognition that the value of a currency (especially in a world of fractional banking) is not related to its content or representation in gold, but to a broader notion: confidence in the economy in which that currency is based. Today, the value of a currency reflects the best judgment about the monetary policy and the “health” of its economy.

In recent years a new type of currency, a synthetic one, emerged. We call this new type “synthetic” because it is not created by the decision of a nation state, nor does it represent any underlying asset or tangible source of wealth. It appears as a new tradable asset resulting from a private agreement and facilitated by the anonymity of the internet. Among these synthetic currencies, Bitcoin (BTC) emerges as the most important one, with a market capitalization a few hundred million short of $80 billion.

Bitcoin Price Chart from Bitstamp

There are other cryptocurrencies based on blockchain technology, such as Litecoin (LTC), Ethereum (ETH) and Ripple (XRP). The website https://coinmarketcap.com/currencies/ lists as many as 641 such currencies. However, as we can observe in the figure below, Bitcoin represents 89% of the total market capitalization of all cryptocurrencies.

Cryptocurrencies. Share of market capitalization of each currency.

One open question today is whether Bitcoin is in fact, or may be considered, a currency. So far, Bitcoin does not fulfill the main properties of a standard currency. It is barely (though increasingly!) accepted as a medium of exchange (e.g. to buy some products online), it is not used as a unit of account (there are no financial statements valued in Bitcoin), and, given the great swings in its price, we can hardly believe that anyone considers Bitcoin a suitable option to store value. Given these characteristics, Bitcoin fits rather as an ideal asset for speculative purposes: there is no underlying asset to relate its value to, and there is an open platform on which to operate round the clock.

Bitcoin returns, sampled every 5 hours.

Speculation has a long history and seems inherent to capitalism. One common feature of speculative assets through history has been the difficulty of valuation. Tulipmania, the South Sea Bubble, and many other episodes reflect, on one side, human greed and, on the other, the difficulty of setting an objective value for an asset. All of these speculative episodes were reflected in super-exponential growth of the price time series.

Cryptocurrencies can be seen as the libertarian response to central bank failure to manage financial crises, such as the one that occurred in 2008. Cryptocurrencies can also bypass national restrictions on international transfers, probably at a cheaper cost. Bitcoin was created by a person or group of persons under the pseudonym Satoshi Nakamoto. The discussion of Bitcoin has several perspectives. The computer science perspective deals with the strengths and weaknesses of blockchain technology. In fact, according to R. Ali et al., the introduction of a “distributed ledger” is the key innovation. Traditional means of payment (e.g. a credit card) rely on a central clearing house that validates operations, acting as a “middleman” between buyer and seller. By contrast, the payment validation system of Bitcoin is decentralized. There is a growing army of miners who put their computing power at the disposal of the network, validating transactions by gathering them into blocks, adding these to the ledger and so forming a ‘block chain’. This work is remunerated by giving the miners Bitcoins, which (so far) makes validation cheaper than in a centralized system. Validation is performed by solving a computational problem, which becomes harder over time since the whole ledger must be validated; consequently it takes longer to solve. Contrary to traditional currencies, the total number of Bitcoins to be issued is fixed in advance at 21 million. In fact, the issuance rate of Bitcoins is expected to diminish over time. According to Laursen and Kyed, validating the public ledger was initially rewarded with 50 Bitcoins, but the protocol foresaw halving this quantity every four years. At the current pace, the maximum number of Bitcoins will be reached in 2140. Given its decentralized character, Bitcoin transactions appear secure. All transactions are recorded on several computer servers around the world, so in order to commit fraud a person would have to change and validate (simultaneously) several ledgers, which is practically impossible. Additionally, ledgers are public, with encrypted identities of the parties, making transactions “pseudonymous, not anonymous”. The legal perspective on Bitcoin is fuzzy: Bitcoin is not issued, nor endorsed, by a nation state; it is not an illegal substance; and as such its transaction is not regulated.

In particular, the nonexistence of savings accounts in Bitcoin, and consequently the absence of a Bitcoin interest rate, precludes the idea of studying price behavior in relation to cash flows generated by Bitcoin. As a consequence, the underlying dynamics of the price signal are naturally framed by the Efficient Market Hypothesis. The Efficient Market Hypothesis (EMH) is the cornerstone of financial economics. One of the seminal works on the stochastic dynamics of speculative prices is due to Louis Bachelier, who in his doctoral thesis developed the first mathematical model of the behavior of stock prices. The systematic study of informational efficiency began in the 1960s, when financial economics was born as a new area within economics. The classical definition due to Eugene Fama (Foundations of Finance: Portfolio Decisions and Securities Prices, 1976) says that a market is informationally efficient if it “fully reflects all available information”. Therefore, the key element in assessing efficiency is to determine the appropriate set of information that impels prices. Following Fama’s Efficient Capital Markets, informational efficiency can be divided into three categories: (i) weak efficiency, if prices reflect the information contained in the past series of prices, (ii) semi-strong efficiency, if prices reflect all public information and (iii) strong efficiency, if prices reflect all public and private information. As a corollary of the EMH, one cannot accept the presence of long memory in financial time series, since its existence would allow a riskless profitable trading strategy. If markets are informationally efficient, arbitrage prevents the possibility of such strategies. If we consider the financial market as a dynamical structure, short-term memory can exist (to some extent) without contradicting the EMH: the presence of some mispriced assets is the necessary stimulus for individuals to trade and reach an (almost) arbitrage-free situation. However, the presence of long-range memory is at odds with the EMH, because it would allow stable trading rules to beat the market.

The presence of long-range dependence in financial time series has generated a lively debate. Whereas the presence of short-term memory can stimulate investors to exploit small extra returns, making them disappear, long-range correlations pose a challenge to the established financial model. As recognized by Ciaian et al., Bitcoin’s price is not driven by macro-financial indicators. Consequently, a detailed analysis of the underlying dynamics (the Hurst exponent) becomes important to understand its emergent behavior. There are several methods (both parametric and non-parametric) for calculating the Hurst exponent, and such an analysis becomes a mandatory part of any framework for tackling BTC trading.
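The text leaves the choice of estimator open; as one possibility, here is a minimal sketch of the non-parametric rescaled-range (R/S) method, applied to simulated i.i.d. Gaussian returns rather than actual BTC data. For such a series the estimate should come out near 0.5, up to well-known small-sample bias.

```python
import numpy as np

def hurst_rs(returns, min_block=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    sizes, rs_vals = [], []
    size = min_block
    while size <= n // 2:
        rs = []
        for i in range(n // size):
            block = returns[i * size:(i + 1) * size]
            dev = np.cumsum(block - block.mean())    # cumulative deviation from the block mean
            r = dev.max() - dev.min()                # range of the cumulative deviations
            s = block.std(ddof=1)                    # block standard deviation
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    # The slope of log(R/S) against log(block size) estimates the Hurst exponent.
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

# Illustrative check on i.i.d. Gaussian returns, where H should be close to 0.5
rng = np.random.default_rng(42)
print(round(hurst_rs(rng.standard_normal(8192)), 3))
```

An estimate persistently above 0.5 on real return data would indicate long-range memory of the kind the paragraph above argues is at odds with the EMH.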

Stock Hedging Loss and Risk

A stock is bought at time zero at price S0 and is to be sold at time T at an uncertain price ST. In order to hedge the market risk of the stock, the company decides to choose one of the available put options written on the same stock with maturity at time τ, where τ is prior and close to T, and the n available put options are specified by their strike prices Ki (i = 1, 2, ···, n). As the prices of different put options differ, the company needs to determine an optimal hedge ratio h (0 ≤ h ≤ 1) with respect to the chosen strike price. The cost of hedging should be less than or equal to the predetermined hedging budget C. In other words, the company needs to determine the optimal strike price and hedge ratio under the constraint of the hedging budget. The chosen put option is assumed to finish in the money at maturity, and the hedging expenditure constraint is assumed to be binding.

Suppose the market price of the stock is S0 at time zero, the hedge ratio is h, the price of the put option is P0, and the riskless interest rate is r. At time T, the time value of the hedging portfolio is

S0erT + hP0erT —– (1)

and the market price of the portfolio is

ST + h(K − Sτ)+ er(T − τ) —— (2)

therefore the loss of the portfolio is

L = S0erT + hP0erT − (ST + h(K − Sτ)+ er(T − τ)) —– (3)

where x+ = max(x, 0), so that (K − Sτ)+ is the payoff of the put option at maturity. For a given threshold v, the probability that the loss exceeds v is denoted as

α = Prob{L ≥ v} —– (4)

in other words, v is the Value-at-Risk (VaR) at α percentage level. There are several alternative measures of risk, such as CVaR (Conditional Value-at-Risk), ESF (Expected Shortfall), CTE (Conditional Tail Expectation), and other coherent risk measures.

The mathematical model of stock price is chosen to be a geometric Brownian motion

dSt/St = μdt + σdBt —– (5)

where St is the stock price at time t (0 < t ≤ T), μ and σ are the drift and the volatility of stock price, and Bt is a standard Brownian motion. The solution of the stochastic differential equation is

St = S0 eσBt + (μ − 1/2σ2)t —– (6)

where B0 = 0, and St is lognormally distributed.

For a given threshold of loss v, the probability that the loss exceeds v is

Prob {L ≥ v} = E [I{X≤c1}FY(g(X) − X)] + E [I{X≥c1}FY (c2 − X)] —– (7)

where E[X] is the expectation of random variable X. I{X<c} is the indicator function of X such that I{X<c} = 1 when {X < c} is true, otherwise I{X<c} = 0. FY(y) is the cumulative distribution function of random variable Y, and

c1 = 1/σ [ln(K/S0) – (μ – 1/2σ2)τ]

g(X) = 1/σ [ln(((S0 + hP0)erT − h(K − f(X))er(T − τ) − v)/S0) – (μ – 1/2σ2)T]

f(X) = S0 eσX + (μ − 1/2σ2)τ

c2 = 1/σ [ln(((S0 + hP0)erT − v)/S0) – (μ – 1/2σ2)T]

X and Y are both normally distributed, where X ∼ N(0, √τ), Y ∼ N(0, √(T−τ)).

For a specified hedging strategy, Q(v) = Prob {L ≥ v} is a decreasing function of v. The VaR under α level can be obtained from equation

Q(v) = α —– (8)

The expectations can be calculated with Monte Carlo simulation methods, and the optimal hedging strategy which has the smallest VaR can be obtained from (8) by numerical searching methods.
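As a rough sketch of that procedure, the following Monte Carlo simulates (6) directly, evaluates the loss (3) on each path, and searches a small grid of strikes with the budget constraint binding. The market parameters and candidate strikes are illustrative assumptions, and the Black-Scholes formula is used only as a stand-in for the quoted put premium P0, which the text does not specify.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Illustrative parameters (not from the text): spot, drift, volatility, rate, horizons, budget.
S0, mu, sigma, r = 100.0, 0.08, 0.25, 0.03
T, tau = 1.0, 0.75
alpha, C, n_sims = 0.05, 3.0, 500_000

def bs_put(S0, K, r, sigma, tau):
    """Black-Scholes put premium, used here only as a proxy for the quoted price P0."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return K * np.exp(-r * tau) * norm.cdf(-d2) - S0 * norm.cdf(-d1)

# Simulate B_tau and B_T, then S_tau and S_T from equation (6).
b_tau = np.sqrt(tau) * rng.standard_normal(n_sims)
b_T = b_tau + np.sqrt(T - tau) * rng.standard_normal(n_sims)
S_tau = S0 * np.exp(sigma * b_tau + (mu - 0.5 * sigma**2) * tau)
S_T = S0 * np.exp(sigma * b_T + (mu - 0.5 * sigma**2) * T)

best = None
for K in (85.0, 90.0, 95.0, 100.0):              # hypothetical candidate strikes K_i
    P0 = bs_put(S0, K, r, sigma, tau)
    h = min(1.0, C / P0)                         # binding hedging budget: h * P0 = C
    # Loss (3): time value of the cost minus market value of the hedged position.
    L = ((S0 + h * P0) * np.exp(r * T)
         - (S_T + h * np.maximum(K - S_tau, 0.0) * np.exp(r * (T - tau))))
    var_alpha = np.quantile(L, 1 - alpha)        # v solving Prob{L >= v} = alpha, i.e. (8)
    if best is None or var_alpha < best[0]:
        best = (var_alpha, K, h)

print(f"min VaR = {best[0]:.2f} at strike K = {best[1]:.0f}, hedge ratio h = {best[2]:.2f}")
```

A finer grid over strikes (and, if the budget is not binding, over h as well) turns this sketch into the numerical search described above.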

Momentum of Accelerated Capital. Note Quote.

Distinct types of high frequency trading firms include independent proprietary firms, which use private funds and specific strategies that remain secretive, and which may act as market makers generating automatic buy and sell orders continuously throughout the day. Broker-dealer proprietary desks are part of traditional broker-dealer firms but are not related to their client business, and are operated by the largest investment banks. Thirdly, hedge funds focus on complex statistical arbitrage, taking advantage of pricing inefficiencies between asset classes and securities.

Today, strategies using algorithmic trading and High Frequency Trading play a central role on financial exchanges, alternative markets, and banks’ internalized (over-the-counter) dealings:

High frequency traders typically act in a proprietary capacity, making use of a number of strategies and generating a very large number of trades every single day. They leverage technology and algorithms from end-to-end of the investment chain – from market data analysis and the operation of a specific trading strategy to the generation, routing, and execution of orders and trades. What differentiates HFT from algorithmic trading is the high frequency turnover of positions as well as its implicit reliance on ultra-low latency connection and speed of the system.

The use of algorithms in computerised exchange trading has experienced a long evolution with the increasing digitalisation of exchanges:

Over time, algorithms have continuously evolved: while initial first-generation algorithms – fairly simple in their goals and logic – were pure trade execution algos, second-generation algorithms – strategy implementation algos – have become much more sophisticated and are typically used to produce their own trading signals which are then executed by trade execution algos. Third-generation algorithms include intelligent logic that learns from market activity and adjusts the trading strategy of the order based on what the algorithm perceives is happening in the market. HFT is not a strategy per se, but rather a technologically more advanced method of implementing particular trading strategies. The objective of HFT strategies is to seek to benefit from market liquidity imbalances or other short-term pricing inefficiencies.

While algorithms are employed by most traders in contemporary markets, the intense focus on speed and the momentary holding periods are practices unique to high frequency traders. The defence of high frequency trading is built around the principles that it increases liquidity, narrows spreads, and improves market efficiency: the high number of trades made by HFT traders results in greater liquidity in the market; algorithmic trading has resulted in the prices of securities being updated more quickly, with more competitive bid-ask prices and narrowing spreads; and HFT enables prices to reflect information more quickly and accurately, ensuring accurate pricing at smaller time intervals. But there are critical differences between high frequency traders and traditional market makers:

  1. HFT do not have an affirmative market making obligation, that is, they are not obliged to provide liquidity by constantly displaying two-sided quotes, which may translate into a lack of liquidity during volatile conditions.
  2. HFT contribute little market depth due to the marginal size of their quotes, which may result in larger orders having to transact with many small orders, and this may impact on overall transaction costs.
  3. HFT quotes are barely accessible due to the extremely short duration for which the liquidity is available when orders are cancelled within milliseconds.

Beyond the shallowness of the HFT contribution to liquidity, there are real fears about how HFT can compound and magnify risk through the rapidity of its actions:

There is evidence that high-frequency algorithmic trading also has some positive benefits for investors by narrowing spreads – the difference between the price at which a buyer is willing to purchase a financial instrument and the price at which a seller is willing to sell it – and by increasing liquidity at each decimal point. However, a major issue for regulators and policymakers is the extent to which high-frequency trading, unfiltered sponsored access, and co-location amplify risks, including systemic risk, by increasing the speed at which trading errors or fraudulent trades can occur.

Although there have always been occasional trading errors and episodic volatility spikes in markets, the speed, automation and interconnectedness of today’s markets create a different scale of risk. These risks demand that exchanges and market participants employ effective quality management systems and sophisticated risk mitigation controls adapted to these new dynamics in order to protect against potential threats to market stability arising from technology malfunctions or episodic illiquidity. However, there are more deliberate aspects of HFT strategies which may present serious problems for market structure and functioning, and where conduct may be illegal. For example, order anticipation seeks to ascertain the existence of large buyers or sellers in the marketplace and then to trade ahead of those buyers and sellers in anticipation that their large orders will move market prices. A momentum strategy involves initiating a series of orders and trades in an attempt to ignite a rapid price move. HFT strategies can resemble traditional forms of market manipulation that violate the Exchange Act:

  1. Spoofing and layering occur when traders create a false appearance of market activity by entering multiple non-bona fide orders on one side of the market at increasing or decreasing prices in order to induce others to buy or sell the stock at a price altered by the bogus orders.
  2. Painting the tape involves placing a succession of small buy orders at increasing prices in order to stimulate increased demand.

  3. Quote stuffing and price fade are additional dubious HFT practices: quote stuffing floods the market with huge numbers of orders and cancellations in rapid succession, which may generate buying or selling interest or compromise the trading position of other market participants. Order or price fade involves the rapid cancellation of orders in response to other trades.

The World Federation of Exchanges insists: “Exchanges are committed to protecting market stability and promoting orderly markets, and understand that a robust and resilient risk control framework adapted to today’s high speed markets is a cornerstone of enhancing investor confidence.” However, this ‘robust and resilient risk control framework’ seems lacking, including in the dark pools now established for trading that were initially proposed as safer than the open market.

Accelerated Capital as an Anathema to the Principles of Communicative Action. A Note Quote on the Reciprocity of Capital and Ethicality of Financial Economics

Markowitz portfolio theory explicitly observes that portfolio managers are not (expected) utility maximisers, as they diversify, and offers the hypothesis that a desire for reward is tempered by a fear of uncertainty. The model concludes that all investors should hold the same portfolio; their individual risk-reward objectives are satisfied by weighting this ‘index portfolio’ against riskless cash in the bank, a point on the capital market line. The slope of the Capital Market Line is the market price of risk, which is an important parameter in arbitrage arguments.
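As a small numerical aside, a sketch of the construction being summarised: the tangency (‘index’) portfolio and the slope of the capital market line for a toy two-asset market, with the expected returns, covariance matrix, and riskless rate invented purely for illustration.

```python
import numpy as np

# Illustrative inputs: two risky assets plus riskless cash.
mu = np.array([0.08, 0.12])                  # expected returns
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])               # covariance matrix
rf = 0.02                                    # riskless rate

# Tangency (index) portfolio: weights proportional to inv(cov) @ (mu - rf), normalised to sum to 1.
w = np.linalg.solve(cov, mu - rf)
w = w / w.sum()

port_mu = w @ mu
port_sigma = np.sqrt(w @ cov @ w)
market_price_of_risk = (port_mu - rf) / port_sigma   # slope of the capital market line

print(w, round(port_mu, 4), round(port_sigma, 4), round(market_price_of_risk, 4))
```

Every investor in this toy market holds the same risky weights w and dials risk up or down only through the cash allocation, which is the point the paragraph above makes.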

Merton had initially attempted to provide an alternative to Markowitz based on utility maximisation employing stochastic calculus. He was only able to resolve the problem by employing the hedging arguments of Black and Scholes, and in doing so built a model that was based on the absence of arbitrage, free of turpe-lucrum. The prescriptive statement “it should not be possible to make sure profits” is explicit in the Efficient Markets Hypothesis and in the use of an Arrow security in the context of the Law of One Price. Based on these observations, we conjecture that the whole paradigm of financial economics is built on the principle of balanced reciprocity. In order to explore this conjecture we shall examine the relationship between commerce and themes in Pragmatic philosophy. Specifically, we highlight Robert Brandom’s (Making It Explicit: Reasoning, Representing, and Discursive Commitment) position that there is a pragmatist conception of norms – a notion of primitive correctnesses of performance implicit in practice that precede and are presupposed by their explicit formulation in rules and principles.

The ‘primitive correctnesses’ of commercial practices were recognised by Aristotle when he investigated the nature of Justice in the context of commerce, and then by Olivi when he looked favourably on merchants. They are exhibited in the doux-commerce thesis: compare Fourcade and Healey’s contemporary description of the thesis, “Commerce teaches ethics mainly through its communicative dimension, that is, by promoting conversations among equals and exchange between strangers”, with Putnam’s description of Habermas’ communicative action as based on “the norm of sincerity, the norm of truth-telling, and the norm of asserting only what is rationally warranted … [and] is contrasted with manipulation” (Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays).

There are practices (that should be) implicit in commerce that make it an exemplar of communicative action. A further expression of markets as centres of communication is manifested in the Asian description of a market, which brings to mind Donald Davidson’s (Subjective, Intersubjective, Objective) argument that knowledge is not the product of bipartite conversations but of a tripartite relationship between two speakers and their shared environment. Replacing the negotiation between market agents with an algorithm that delivers a theoretical price replaces ‘knowledge’, generated through communication, with dogma. The problem with the performativity that Donald MacKenzie (An Engine, Not a Camera: How Financial Models Shape Markets) is concerned with is one of monism. In employing pricing algorithms, the markets cannot perform to something that comes close to ‘true belief’, which can only be identified through communication between sapient humans. This is an almost trivial observation to (successful) market participants, but difficult to appreciate by spectators who seek to attain ‘objective’ knowledge of markets from a distance. The relevance to financial crises lies in the position that ‘true belief’ is about establishing coherence through myriad triangulations centred on an asset, rather than relying on a theoretical model.

Shifting gears now: unless the martingale measure is a by-product of a hedging approach, the price given by such martingale measures is not related to the cost of a hedging strategy, and therefore the meaning of such ‘prices’ is not clear. If the hedging argument cannot be employed, as in the markets studied by Cont and Tankov (Financial Modelling with Jump Processes), there is no conceptual framework supporting the prices obtained from the Fundamental Theorem of Asset Pricing. This lack of meaning can be interpreted as a consequence of the strict fact/value dichotomy in contemporary mathematics that came with the eclipse of Poincaré’s Intuitionism by Hilbert’s Formalism and Bourbaki’s Rationalism. The practical problem of supporting the social norms of market exchange has been replaced by a theoretical problem of developing formal models of markets. These models then legitimate the actions of agents in the market without having to make reference to explicitly normative values.

The Efficient Market Hypothesis is based on the axiom that the market price is determined by the balance between supply and demand, and so an increase in trading facilitates the convergence to equilibrium. If this axiom is replaced by the axiom of reciprocity, the justification for speculative activity in support of efficient markets disappears. In fact, the axiom of reciprocity would de-legitimise ‘true’ arbitrage opportunities as being unfair. This would not necessarily make the activities of actual market arbitrageurs illicit, since there are rarely strategies that are without the risk of a loss; however, it would place more emphasis on the risks of speculation and inhibit the hubris that has been associated with the prelude to the recent Crisis. These points raise the question of the legitimacy of speculation in the markets. In an attempt to understand this issue, Gabrielle and Reuven Brenner identify three types of market participant. ‘Investors’ are preoccupied with future scarcity and so defer income. Because uncertainty exposes the investor to the risk of loss, investors wish to minimise uncertainty at the cost of potential profits; this is the basis of classical investment theory. ‘Gamblers’ will bet on an outcome taking odds that have been agreed on by society, as with a sporting bet or in a casino, and this relates to de Moivre’s and Montmort’s ‘taming of chance’. ‘Speculators’ bet on a mis-calculation of the odds quoted by society, and the reason speculators are regarded as socially questionable is that they hold opinions that are explicitly at odds with the consensus: they are practitioners who rebel against a theoretical ‘Truth’. This is captured in Arjun Appadurai’s argument that the leading agents in modern finance “believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control . . . [they] are not those who wish to ‘tame chance’ but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]”.

In the context of Pragmatism, financial speculators embody pluralism, a concept essential to Pragmatic thinking and an antidote to the problem of radical uncertainty. Appadurai was motivated to study finance by Marcel Mauss’ essay Le Don (The Gift), which explores the moral force behind reciprocity in primitive and archaic societies, and he goes on to say that the contemporary financial speculator is “betting on the obligation of return”, and that this is the fundamental axiom of contemporary finance. David Graeber (Debt: The First 5,000 Years) also recognises the fundamental position reciprocity has in finance, but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores uncertainty in his analysis, which ends with the conclusion that “we don’t ‘all’ have to pay our debts”. In advocating that reciprocity need not be honoured, Graeber is not just challenging contemporary capitalism but also the foundations of the civitas, based on equality and reciprocity. The origins of Graeber’s argument lie in the first half of the nineteenth century. In 1836 John Stuart Mill defined political economy as being concerned with “[man] solely as a being who desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end”.

In Principles of Political Economy With Some of Their Applications to Social Philosophy, Mill defended Thomas Malthus’ An Essay on the Principle of Population, which focused on scarcity. Mill was writing at a time when Europe was struck by the Cholera pandemic of 1829–1851 and the famines of 1845–1851 and while Lord Tennyson was describing nature as “red in tooth and claw”. At this time, society’s fear of uncertainty seems to have been replaced by a fear of scarcity, and these standards of objectivity dominated economic thought through the twentieth century. Almost a hundred years after Mill, Lionel Robbins defined economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses”. Dichotomies emerge in the aftermath of the Cartesian revolution that aims to remove doubt from philosophy. Theory and practice, subject and object, facts and values, means and ends are all separated. In this environment ex cathedra norms, in particular utility (profit) maximisation, encroach on commercial practice.

In order to set boundaries on commercial behaviour motivated by profit maximisation, particularly when market uncertainty returned after the Nixon shock of 1971, society imposes regulations on practice. As a consequence, two competing ethics, a functional Consequentialist ethic guiding market practices and a regulatory Deontological ethic attempting to stabilise the system, vie for supremacy. It is in this debilitating competition between two essentially theoretical ethical frameworks that we offer an explanation for the Financial Crisis of 2007-2009: profit maximisation, not speculation, is destabilising in the presence of radical uncertainty, and regulation cannot keep up with motivated profit maximisers who can justify their actions through abstract mathematical models that bear little resemblance to actual markets. An implication of reorienting financial economics to focus on markets as centres of ‘communicative action’ is that markets could become self-regulating, in the same way that the legal or medical spheres are self-regulated through professions. This is not a ‘libertarian’ argument based on freeing the Consequentialist ethic from a Deontological brake. Rather, it argues that being a market participant entails accepting restricting norms, such as sincerity and truth telling, that support the creation of knowledge, of asset prices, within a broader objective of social cohesion. This immediately calls into question the legitimacy of algorithmic/high-frequency trading, which seems an anathema in regard to the principles of communicative action.

Fundamental Theorem of Asset Pricing: Tautological Meeting of Mathematical Martingale and Financial Arbitrage by the Measure of Probability.

The Fundamental Theorem of Asset Pricing (FTAP hereafter) has two broad tenets, viz.

1. A market admits no arbitrage, if and only if, the market has a martingale measure.

2. Every contingent claim can be hedged, if and only if, the martingale measure is unique.

The FTAP is a theorem of mathematics, and the use of the term ‘measure’ in its statement places the FTAP within the theory of probability formulated by Andrei Kolmogorov (Foundations of the Theory of Probability) in 1933. Kolmogorov’s work took place in a context captured by Bertrand Russell, who observed that

It is important to realise the fundamental position of probability in science. . . . As to what is meant by probability, opinions differ.

In the 1920s the idea of randomness, as distinct from a lack of information, was becoming substantive in the physical sciences because of the emergence of the Copenhagen Interpretation of quantum mechanics. In the social sciences, Frank Knight argued that uncertainty was the only source of profit and the concept was pervading John Maynard Keynes’ economics (Robert Skidelsky Keynes the return of the master).

Two mathematical theories of probability had become ascendant by the late 1920s. Richard von Mises (brother of the Austrian economist Ludwig) attempted to lay down the axioms of classical probability within a framework of Empiricism, the ‘frequentist’ or ‘objective’ approach. To counterbalance von Mises, the Italian actuary Bruno de Finetti presented a more Pragmatic approach, characterised by his claim that “Probability does not exist” because it was only an expression of the observer’s view of the world. This ‘subjectivist’ approach was closely related to the less well-known position taken by the Pragmatist Frank Ramsey who developed an argument against Keynes’ Realist interpretation of probability presented in the Treatise on Probability.

Kolmogorov addressed the trichotomy of mathematical probability by generalising so that Realist, Empiricist and Pragmatist probabilities were all examples of ‘measures’ satisfying certain axioms. In doing this, a random variable became a function while an expectation became an integral: probability became a branch of Analysis, not Statistics. Von Mises criticised Kolmogorov’s generalised framework as unnecessarily complex. About a decade and a half back, the physicist Edwin Jaynes (Probability Theory: The Logic of Science) championed Leonard Savage’s subjectivist Bayesianism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science”.

The objections of empirical scientists to measure-theoretic probability can be accounted for by its lack of physicality. Frequentist probability is based on the act of counting; subjectivist probability is based on a flow of information, which, following Claude Shannon, is now an observable entity in Empirical science. Measure-theoretic probability is based on abstract mathematical objects unrelated to sensible phenomena. However, the generality of Kolmogorov’s approach made it flexible enough to handle problems that emerged in physics and engineering during the Second World War, and his approach became widely accepted after 1950 because it was practically more useful.

In the context of the first statement of the FTAP, a ‘martingale measure’ is a probability measure, usually labelled Q, such that the (real, rather than nominal) price of an asset today, X0, is the expectation, using the martingale measure, of its (real) price in the future, XT. Formally,

X0 = EQ XT

The abstract probability distribution Q is defined so that this equality holds; it is not based on any empirical information about historical prices or on subjective judgement of future prices. The only condition placed on the relationship between the martingale measure and the ‘natural’, or ‘physical’, probability measure, usually assigned the label P, is that they agree on what is possible.

The term ‘martingale’ in this context derives from doubling strategies in gambling, and it was introduced into mathematics by Jean Ville in a development of von Mises’ work. The idea that asset prices have the martingale property was first proposed by Benoit Mandelbrot in response to an early formulation of Eugene Fama’s Efficient Market Hypothesis (EMH), the two concepts being combined by Fama. For Mandelbrot and Fama the key consequence of prices being martingales was that the expected future price, given current information, is just the current price, so technical analysis would not prove profitable in the long run. In developing the EMH there was no discussion of the nature of the probability under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure. While the FTAP employs modern terminology in the context of value-neutrality, the idea of equating a current price with a future, uncertain, price has ethical ramifications.

The other technical term in the first statement of the FTAP, arbitrage, has long been used in financial mathematics. In Liber Abaci, Fibonacci (Laurence Sigler, Fibonacci’s Liber Abaci) discusses ‘Barter of Merchandise and Similar Things’: 20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. In this case there are three commodities, arms of cloth, rolls of cotton and Pisan pounds, and Fibonacci solves the problem by having Pisan pounds ‘arbitrate’, or ‘mediate’ as Aristotle might say, between the other two commodities.
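For concreteness, Fibonacci’s calculation can be worked through directly: the cloth is first valued in Pisan pounds and the pounds are then converted into cotton, giving 63 rolls for 50 arms. A minimal sketch (the variable and function names are illustrative, not Fibonacci’s):

```python
# Fibonacci's barter problem: Pisan pounds 'arbitrate' between cloth and cotton.
# 20 arms of cloth = 3 Pisan pounds; 42 rolls of cotton = 5 Pisan pounds.
POUNDS_PER_ARM = 3 / 20        # value of one arm of cloth in Pisan pounds
ROLLS_PER_POUND = 42 / 5       # rolls of cotton obtained for one Pisan pound

def cloth_to_cotton(arms: float) -> float:
    """Convert cloth into cotton, with Pisan pounds mediating between the two."""
    pounds = arms * POUNDS_PER_ARM      # 50 arms -> 7.5 Pisan pounds
    return pounds * ROLLS_PER_POUND     # 7.5 pounds -> 63 rolls of cotton

print(cloth_to_cotton(50))              # 63.0
```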

Within neo-classical economics, the Law of One Price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gérard Debreu and Lionel McKenzie in the context of general equilibrium, in particular with the introduction of the Arrow Security, which, employing the Law of One Price, could be used to price any asset. It was on this principle that Black and Scholes believed the value of warrants could be deduced by employing a hedging portfolio; in introducing their work with the statement that “it should not be possible to make sure profits”, they were invoking an arbitrage argument with an eight-hundred-year history. In the context of the FTAP, ‘an arbitrage’ has developed into the ability to formulate a trading strategy such that the probability, under a natural or martingale measure, of a loss is zero, but the probability of a positive profit is not.

To understand the connection between the financial concept of arbitrage and the mathematical idea of a martingale measure, consider the most basic case of a single asset whose current price is X0 and whose price at time T > 0 will take one of two values, XTD < XTU. In this case an arbitrage would exist if X0 ≤ XTD < XTU: buying the asset now, at a price that is less than or equal to the future pay-offs, would lead to a possible profit at the end of the period, with a guarantee of no loss. Similarly, if XTD < XTU ≤ X0, short selling the asset now and buying it back at time T would also lead to an arbitrage. So, for there to be no arbitrage opportunities we require that

XTD < X0 < XTU

This implies that there is a number, 0 < q < 1, such that

X0 = XTD + q(XTU − XTD)

= qXTU + (1−q)XTD

The price now, X0, lies between the future prices, XTU and XTD, in the ratio q : (1 − q) and represents some sort of ‘average’. The first statement of the FTAP can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (real) future price”.

If X0 < XTD ≤ XTU we have that q < 0, whereas if XTD ≤ XTU < X0 then q > 1, and in both cases q does not represent a probability, which, by Kolmogorov’s axioms, must lie between 0 and 1. In either of these cases an arbitrage exists and a trader can make a riskless profit; the market involves ‘turpe lucrum’. This account gives an insight as to why James Bernoulli, in his moral approach to probability, considered situations where probabilities did not sum to 1: he was considering problems that were pathological not because they failed the rules of arithmetic but because they were unfair. It follows that if there are no arbitrage opportunities then the quantity q can be seen as representing the ‘probability’ that the XTU price will materialise in the future. Formally

X0 = qXTU + (1−q) XTD ≡ EQ XT
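The argument can be checked numerically. The sketch below, with purely illustrative prices, recovers q by inverting the equation above and shows that q falls outside (0, 1) exactly when one of the arbitrage configurations described earlier holds:

```python
def implied_martingale_probability(x0: float, x_t_down: float, x_t_up: float) -> float:
    """Solve X0 = q*XTU + (1 - q)*XTD for q in the one-period, two-state model."""
    if not x_t_down < x_t_up:
        raise ValueError("the two future prices must satisfy XTD < XTU")
    return (x0 - x_t_down) / (x_t_up - x_t_down)

# Illustrative prices: today's price 100, future prices 90 (down) or 120 (up).
print(implied_martingale_probability(100.0, 90.0, 120.0))   # 0.333..., a genuine probability: no arbitrage
print(implied_martingale_probability(85.0, 90.0, 120.0))    # q < 0: buying the asset now is an arbitrage
print(implied_martingale_probability(125.0, 90.0, 120.0))   # q > 1: short selling now is an arbitrage
```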

The connection between the financial concept of arbitrage and the mathematical object of a martingale is essentially a tautology: both statements mean that the price today of an asset must lie between its future minimum and maximum possible value. This first statement of the FTAP was anticipated by Frank Ramsey when he defined ‘probability’ in the Pragmatic sense of ‘a degree of belief’ and argued that degrees of belief can be measured through betting odds. On this basis he formulated some axioms of probability, including the requirement that a probability must lie between 0 and 1. He then goes on to say that

These are the laws of probability, …If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event.

This is a Pragmatic argument that identifies the absence of the martingale measure with the existence of arbitrage, and today it forms the basis of the standard argument as to why arbitrages do not exist: if they did, other market participants would bankrupt the agent who was mis-pricing the asset. This has become known in philosophy as the ‘Dutch Book’ argument, and as a consequence of the fact/value dichotomy it is often presented as a ‘matter of fact’. However, setting the fact/value dichotomy aside, the Dutch Book argument is an alternative formulation of the ‘Golden Rule’ – “Do to others as you would have them do to you” – and is infused with the moral concepts of fairness and reciprocity (Jeffrey Wattles, The Golden Rule).

Embedded in the FTAP, then, is the ethical concept of Justice, capturing the social norms of reciprocity and fairness. This is significant in the context of Granovetter’s discussion of embeddedness in economics. It is conventional to assume that mainstream economic theory is ‘undersocialised’: agents are rational calculators seeking to maximise an objective function. The argument presented here is that a central theorem in contemporary economics, the FTAP, is deeply embedded in social norms, despite being presented as an undersocialised mathematical object. This embeddedness is a consequence of the origins of mathematical probability lying in the ethical analysis of commercial contracts: the feudal shackles are still binding this most modern of economic theories.

Ramsey goes on to make an important point:

Having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you.

Ramsey is arguing that an agent needs to employ the same measure in pricing all assets in a market, and this is the key result in contemporary derivative pricing. Having identified the martingale measure on the basis of a ‘primal’ asset, it is then applied across the market, in particular to derivatives on the primal asset, because of the well-known result that if two assets offer different ‘market prices of risk’, an arbitrage exists. This explains why the market price of risk appears in the Radon-Nikodym derivative and the Capital Market Line: it enforces Ramsey’s consistency in pricing.

The second statement of the FTAP is concerned with incomplete markets, which appear in relation to Arrow-Debreu prices. In the special case that there are as many, or more, assets in a market as there are possible future, uncertain, states, a unique pricing vector can be deduced for the market by Cramer’s Rule. If the elements of the pricing vector satisfy the axioms of probability, specifically that each element is positive and they all sum to one, then the market precludes arbitrage opportunities. This is the case covered by the first statement of the FTAP. In the more realistic situation that there are more possible future states than assets, the market can still be arbitrage free, but the pricing vector, the martingale measure, might not be unique. An agent can still be consistent in which particular martingale measure they choose to use, but another agent might choose a different measure, such that the two do not agree on a price. In the context of the Law of One Price, this means that we cannot hedge, replicate or cover a position in the market such that the portfolio is riskless.

The significance of the second statement of the FTAP is that it tells us that in the sensible world of imperfect knowledge and transaction costs, a model within the framework of the FTAP cannot give a precise price. When faced with incompleteness in markets, agents need alternative ways to price assets, and behavioural techniques have come to dominate financial theory. This feature was already recognised in The Port Royal Logic, which noted the role of transaction costs in lotteries.
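The non-uniqueness in the second statement can be made concrete with a toy one-period market: two primal assets (cash at an assumed zero interest rate, and a stock) but three future states leave one degree of freedom in the martingale measure, so different agents can price a derivative differently while both remaining arbitrage free. A minimal sketch, with purely illustrative numbers:

```python
import numpy as np

# One period, three future states, zero interest rate (illustrative numbers).
stock_payoffs = np.array([80.0, 100.0, 130.0])    # stock price in each state at time T
stock_price = 100.0                                # stock price today

def martingale_measure(q3: float) -> np.ndarray:
    """One member of the family of measures (q1, q2, q3) satisfying
    q1 + q2 + q3 = 1 and q . stock_payoffs = stock_price."""
    return np.array([1.5 * q3, 1.0 - 2.5 * q3, q3])

call_payoffs = np.array([0.0, 0.0, 30.0])          # a call option struck at 100

for q3 in (0.1, 0.2, 0.3):                         # three distinct, valid martingale measures
    q = martingale_measure(q3)
    assert np.isclose(q.sum(), 1.0)                # it is a probability measure
    assert np.isclose(q @ stock_payoffs, stock_price)  # the stock is priced correctly
    print(q, "call price:", q @ call_payoffs)      # same primal prices, different call prices
```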

Malthusian Catastrophe.


As long as wealth is growing exponentially, it does not matter that some of the surplus labor is skimmed. If the production of the laborers grows at x% and their wealth grows at y% – even if y% < x%, and the wealth of the capital grows faster, at z% with z% > x% – everybody is happy. The workers minimally increase their wealth, even though their productivity has increased tremendously. Nearly all of the increased labor production has been confiscated by the capital; the exorbitant bonuses of bank managers are an example. (Managers, by the way, by definition do not ‘produce’ anything, but only help skim the production of others; it is ‘work’, but not ‘production’. As long as the skimming [money in] is larger than the cost of their work [money out], they will be hired by the capital – for instance, if they can move the workers into producing more for equal pay. If not, out they go.)

If the economy is growing at a steady pace of x%, resulting in exponential growth (1+x/100)^n, then effectively today’s life can be paid for with (promises of) tomorrow’s earnings, ‘borrowing from the future’. (In a shrinking economy, the opposite occurs: tomorrow’s life is paid for with today’s earnings, leaving nothing to live on today.)

Let’s put that in an equation. The economy today, Ei, is defined in terms of the growth of the economy itself, the difference between tomorrow’s economy and today’s economy, Ei+1 − Ei,

Ei = α(Ei+1 − Ei) —– (1)

with α related to the growth rate, GR ≡ (Ei+1 − Ei)/Ei = 1/α. In a time-differential equation:

E(t) = αdE(t)/dt —– (2)

which has as solution

E(t) = E0 e^(t/α) —– (3)

exponential growth.
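A quick numerical check of equations (1)–(3), with an illustrative growth rate of 3% per period (so α ≈ 33.3): iterating the difference equation gives compound growth (1 + 1/α)^i, which tracks the continuous-time solution E0 e^(t/α) closely for small growth rates.

```python
import math

growth_rate = 0.03            # x% = 3% per period (illustrative)
alpha = 1.0 / growth_rate     # borrowing-from-tomorrow factor: GR = 1/alpha
E0 = 100.0                    # today's economy, in arbitrary units

E = E0
for i in range(1, 11):
    E *= 1.0 + 1.0 / alpha                      # equation (1) rearranged: E_{i+1} = E_i (1 + 1/alpha)
    continuous = E0 * math.exp(i / alpha)       # equation (3): E(t) = E0 e^(t/alpha)
    print(i, round(E, 2), round(continuous, 2))
```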

The problem is that eternal growth of x% is not possible. Our entire society depends on continuous growth; it is the fiber of our system. When growth stops, everything collapses: if the derivative dE(t)/dt becomes negative, the economy itself becomes negative and we start destroying things (E < 0) instead of producing things. If growth slows, E itself gets smaller, assuming a steady borrowing-from-tomorrow factor α (equation (2) above). But that is a contradiction: if E gets smaller, the derivative must be negative, and then, by equation (2), E must be negative as well. The only consistent conclusion is that if E shrinks, E becomes immediately negative! This is what is called a Malthusian Catastrophe.

Now we seem to have reached saturation in our production: we no longer have x% growth, but something closer to 0. The capital, however, has inertia (viz. the continuing culture in the financial world of huge bonuses, often justified as “well, that is the market, what can we do?!”). The capital continues to increase its skimming of the surplus labor at the same z%. The laborers, therefore, now see their wealth decrease at close to z%. (Note that the capital cannot accept a decline, a negative z%, because it would refuse to do anything that does not make a profit.)
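The arithmetic of this stagnation case can be made explicit with illustrative numbers: if total wealth is flat while the capital’s claim keeps compounding at z%, the labor share must absorb the entire loss. A minimal sketch (the 60/40 split and z = 5% are assumptions for illustration only):

```python
# Illustrative split: total wealth 100, of which capital holds 60 and labor 40.
total, capital = 100.0, 60.0
z = 0.05                                  # the capital keeps skimming at z = 5% per period

for year in range(1, 6):
    capital *= 1.0 + z                    # capital's claim compounds at z% regardless of growth
    labor = total - capital               # with zero overall growth, labor absorbs the loss
    print(year, round(capital, 2), round(labor, 2))
```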

Many things that we took for granted before – free health care for all, early pensions, free education, cheap or free transport (no road tolls, etc.) – are more and more under discussion, with the argument that they are “becoming unaffordable”. This label is utter nonsense, when you think about it, since

1) Before, apparently, they were affordable.

2) We have increased productivity of our workers.

1 + 2 = 3) Things should be becoming more and more affordable – unless they are becoming unaffordable for some (the workers) and not for others (the capitalists).

It might well be that soon we discover that living itself is unaffordable. The new money M’ in Marx’s equation is used as the starting point of a new cycle M → M’. This eternal cycle causes a condensation of wealth towards the capital, away from the labor power. M keeps growing and growing. Anything that does not accumulate capital, M’ – M < 0, goes bankrupt. Anything that does not grow fast enough, M’ – M ≈ 0, is bought by something that does and is reconfigured to make M’ – M large again. Note that these reconfigurations – optimizations of skimming (the laborers never profit from the reconfigurations; rather, they are sacked as a result of them) – are presented by the media as something good, where words such as ‘increased synergy’ are used to defend mergers, etc. This says something about who sponsors the messages that reach us. Next time you read the word ‘synergy’ in these communications, just replace it with ‘fleecing’.

The capital actually ‘refuses’ to do something if it does not make a profit. If M’ is not bigger than M in a step, the step would simply not be done, implying also that no Labour Power is used and no payment for Labour Power is made. Ignoring philanthropists for the moment, in the capitalistic Utopia capital cannot but grow. If the economy is not growing, the cost therefore always falls on labor! Humans do not have this option of not doing things: it is better to get 99 paise while living costs 1 rupee – a ‘loss’ – than to get no paisa at all while living still costs one rupee (excuse the folly of quixotic living!). Death by slow starvation is chosen over rapid death.

In an exponentially growing system, everything is OK: capital grows, and the reward to labor grows as well. When the economy stagnates, only the labor power (humans) pays the price. A point of revolution is reached when the skimming of Labour Power is so large that this Labour Power (humans) can no longer keep itself alive. Famous is the situation of Marie-Antoinette (representing the capital), wife of King Louis XVI of France, who responded to the outcry of the public (Labour Power) demanding bread (sic!) by saying “They do not have bread? Let them eat cake!” A revolution of the labor power is unavoidable in a capitalist system when it reaches saturation, because the unavoidable increment of the capital is paid for by a reduction in the wealth of the labor power. That is a mathematical certainty.