(Il)liquid Hedge Lock-Ups. Thought of the Day 107.0


Hedge funds have historically limited their participation in illiquid investments, preferring to match their investment horizon to the typical one-year lock-up periods that their investors agree to. However, many hedge funds have increasingly invested in illiquid assets in an effort to augment returns. For example, they have invested in private investments in public equity (PIPEs), acquiring large minority holdings in public companies. Their purchases of collateralized debt obligations (CDOs) and collateralized loan obligations (CLOs) are also somewhat illiquid, since these fixed income securities are difficult to price and there is a limited secondary market during times of crisis. In addition, hedge funds have participated in loans, and invested in physical assets. Sometimes, investments that were intended to be held for less than one year have become long-term, illiquid assets when the assets depreciated and hedge funds decided to continue holding the assets until values recovered, rather than selling at a loss. It is estimated that more than 20% of total assets under management by hedge funds are illiquid, hard-to-price assets. This makes hedge fund asset valuation difficult, and has created a mismatch between hedge fund assets and liabilities, giving rise to significant problems when investors attempt to withdraw their cash at the end of lock-up periods.

Hedge funds generally focus their investment strategies on financial assets that are liquid and able to be readily priced based on reported prices in the market for those assets or by reference to comparable assets that have a discernible price. Since most of these assets can be valued and sold over a short period of time to generate cash, hedge funds permit investors to invest in or withdraw money from the fund at regular intervals and managers receive performance fees based on quarterly mark-to-market valuations. However, in order to match up maturities of assets and liabilities for each investment strategy, most hedge funds have the ability to prevent invested capital from being withdrawn during certain periods of time. They achieve this through “lock-up” and “gate” provisions that are included in investment agreements with their investors.

A lock-up provision provides that during an initial investment period of, typically, one to two years, an investor is not allowed to withdraw any money from the fund. Generally, the lock-up period is a function of the investment strategy that is being pursued. Sometimes, lock-up periods are modified for specific investors through the use of a side letter agreement. However, this can become problematic because of the resulting different effective lock-up periods that apply to different investors who invest at the same time in the same fund. Also, this can trigger “most favored nations” provisions in other investor agreements.

A gate is a restriction that limits the amount of withdrawals during a quarterly or semi-annual redemption period after the lock-up period expires. Typically, gates are percentages of a fund’s capital that can be withdrawn on a scheduled redemption date. A gate of 10 to 20% is common. A gate provision allows the hedge fund to increase exposure to illiquid assets without facing a liquidity crisis. In addition, it offers some protection to investors who do not attempt to withdraw funds, because if withdrawals are too high, assets might have to be sold by the hedge fund at disadvantageous prices, causing a potential reduction in investment returns for remaining investors. During 2008 and 2009, as many hedge fund investors attempted to withdraw money based on poor returns and concerns about the financial crisis, there was considerable frustration and some litigation directed at hedge fund gate provisions.
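
As a concrete illustration of the mechanics described above, the following sketch applies a gate on a scheduled redemption date, scaling requests back pro rata once they exceed the gate. The fund size, the 15% gate and the pro-rata rule are illustrative assumptions, not the terms of any particular fund.

```python
# Minimal sketch: apply a redemption gate on a scheduled redemption date.
# Fund size, gate fraction and the pro-rata scale-back rule are assumptions.

def apply_gate(fund_nav, gate_fraction, requests):
    """Scale redemption requests back pro rata when they exceed the gate."""
    cap = gate_fraction * fund_nav                 # most cash that may leave the fund
    total = sum(requests.values())
    if total <= cap:
        return requests                            # everyone is paid in full
    scale = cap / total                            # pro-rata scale-back factor
    return {investor: amount * scale for investor, amount in requests.items()}

requests = {"A": 30.0, "B": 10.0, "C": 5.0}        # requested redemptions, in $m
print(apply_gate(fund_nav=200.0, gate_fraction=0.15, requests=requests))
# gate cap = 30.0 against 45.0 requested, so every request is scaled by 2/3
```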

Hedge funds sometimes use a “side pocket” account to house comparatively illiquid or hard-to-value assets. Once an asset is designated for inclusion in a side pocket, new investors don’t participate in the returns from this asset. When existing investors withdraw money from the hedge fund, they remain as investors in the side pocket asset until it either is sold or becomes liquid through a monetization event such as an IPO. Management fees are typically charged on side pocket assets based on their cost, rather than a mark-to-market value of the asset. Incentive fees are charged based on realized proceeds when the asset is sold. Usually, there is no requirement to force the sale of side pocket investments by a specific date. Sometimes, investors accuse hedge funds of putting distressed assets that were intended to be sold during a one-year horizon into a side pocket account to avoid dragging down the returns of the overall fund. Investors are concerned about unexpected illiquidity arising from a side pocket and the potential for even greater losses if a distressed asset that has been placed there continues to decline in value. Fund managers sometimes use even more drastic options to limit withdrawals, such as suspending all redemption rights (but only in the most dire circumstances).

Credit Default Swaps.


Credit default swaps are the most liquid instruments in the credit derivatives markets, accounting for nearly half of the total outstanding notional worldwide, and up to 85% of total outstanding notional of contracts with reference to emerging market issuers. In a CDS, the protection buyer pays a premium to the protection seller in exchange for a contingent payment in case a credit event involving a reference security occurs during the contract period.


The premium (default swap spread) reflects the credit risk of the bond issuer, and is usually quoted as a spread over a reference rate such as LIBOR or the swap rate, to be paid either up front, quarterly, or semiannually. The contingent payment can be settled either by physical delivery of the reference security or an equivalent asset, or in cash. With physical settlement, the protection buyer delivers the reference security (or equivalent one) to the protection seller and receives the par amount. With cash settlement, the protection buyer receives a payment equal to the difference between par and the recovery value of the reference security, the latter determined from a dealer poll or from price quote services. Contracts are typically subject to physical settlement. This allows protection sellers to benefit from any rebound in prices caused by the rush to purchase deliverable bonds by protection buyers after the realization of the credit event.
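
The settlement mechanics above can be summarised in a few lines. The sketch below uses invented notional, spread, recovery and bond-price figures; it simply restates “par minus recovery” (cash settlement), “par against delivery of the bond” (physical settlement), and a quarterly premium payment for a spread quoted in basis points.

```python
# Illustrative sketch of the CDS settlement conventions and premium leg
# described above; all numerical inputs are invented.

def cds_cash_settlement(notional, recovery_rate):
    """Cash settlement: the protection buyer receives par minus recovery value."""
    return notional * (1.0 - recovery_rate)

def cds_physical_settlement(notional, delivered_bond_price):
    """Physical settlement: the buyer delivers the bond (price per unit of par)
    and receives par, so the economic gain is par minus the bond's value."""
    return notional * (1.0 - delivered_bond_price)

def quarterly_premium(notional, spread_bps):
    """One quarterly premium payment for a spread quoted in basis points."""
    return notional * spread_bps / 10_000 / 4

print(cds_cash_settlement(10_000_000, recovery_rate=0.40))             # 6,000,000
print(cds_physical_settlement(10_000_000, delivered_bond_price=0.35))  # 6,500,000
print(quarterly_premium(10_000_000, spread_bps=250))                   # 62,500
```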

In mature markets, trading is highly concentrated in 5-year contracts, and to a certain extent market participants consider these contracts a ‘‘commodity.’’ Usual contract maturities are 1, 2, 5, and 10 years. The coexistence of markets for default swaps and bonds raises the issue of whether prices in the former merely mirror market expectations already reflected in bond prices. If credit risk were the only factor affecting the CDS spread, with credit risk characterized by the probability of default and the expected loss given default, the CDS spread and the bond spread should be approximately equal, as a portfolio of a default swap contract and a defaultable bond is essentially a risk-free asset.

However, market frictions and some embedded options in the CDS contract, such as the cheapest-to-deliver option, cause CDS spreads and bond spreads to diverge. The difference between these two spreads is referred to as the default swap basis. The default swap basis is positive when the CDS spread trades at a premium relative to the bond spread, and negative when the CDS spread trades at a discount.

Several factors contribute to the widening of the basis, either by widening the CDS spread or tightening the bond spread. Factors that tend to widen the CDS spread include: (1) the cheapest-to-deliver option, since protection sellers must charge a higher premium to account for the possibility of being delivered a less valuable asset in physically settled contracts; (2) the issuance of new bonds and/or loans, as increased hedging by market makers in the bond market pushes up the price of protection, and the number of potential cheapest-to-deliver assets increases; (3) the ability to short default swaps rather than bonds when the bond issuer’s credit quality deteriorates, leading to increased protection buying in the market; and (4) bond prices trading less than par, since the protection seller is guaranteeing the recovery of the par amount rather than the lower current bond price.

Factors that tend to tighten bond spreads include: (1) bond clauses allowing the coupon to step up if the issue is downgraded, as they provide additional benefits to the bondholder not enjoyed by the protection buyer; and (2) the zero lower bound for default swap premiums, which causes the basis to be positive when bond issuers can trade below the LIBOR curve, as is often the case for higher rated issues.

Similarly, factors that contribute to the tightening of the basis include: (1) existence of greater counterparty risk to the protection buyer than to the protection seller, so buyers are compensated by paying less than the bond spread; (2) the removal of funding risk for the protection seller, as selling protection is equivalent to funding the asset at LIBOR. Less risk demands less compensation and hence, a tightening in the basis; and (3) the increased supply of structured products such as CDS-backed collateralized debt obligations (CDOs), as they increase the supply of protection in the market.

Movements in the basis depend also on whether the market is mainly dominated by high cost investors or low cost investors. A long credit position, i.e., holding the credit risk, can be obtained either by selling protection or by financing the purchase of the risky asset. The CDS remains a viable alternative if its premium does not exceed the difference between the asset yield and the funding cost. The higher the funding cost, the lower the premium and hence, the tighter the basis. Thus, when the market share of low cost investors is relatively high and the average funding costs are below LIBOR, the basis tends to widen. Finally, relative liquidity also plays a role in determining whether the basis narrows or widens, as investors need to be compensated by wider spreads in the less liquid market. Hence, if the CDS market is more liquid than the corresponding underlying bond market (cash market), the basis will narrow and vice versa.

Credit Risk Portfolio. Note Quote.


The recent development in credit markets is characterized by a flood of innovative credit risky structures. State-of-the-art portfolios contain derivative instruments ranging from simple, nearly commoditized contracts such as credit default swaps (CDS), to first-generation portfolio derivatives such as first-to-default (FTD) baskets and collateralized debt obligation (CDO) tranches, up to complex structures involving spread options and different asset classes (hybrids). These new structures allow portfolio managers to implement multidimensional investment strategies, which seamlessly conform to their market view. Moreover, the exploding liquidity in credit markets makes tactical (short-term) overlay management very cost efficient. While the outperformance potential of active portfolio management will put old-school investment strategies (such as buy-and-hold) under enormous pressure, managing a highly complex credit portfolio requires the introduction of new optimization technologies.

New derivatives allow the decoupling of business processes in the risk management industry (in banking, as well as in asset management), since credit treasury units are now able to manage specific parts of credit risk actively and independently. The traditional feedback loop between risk management and sales, which was needed to structure the desired portfolio characteristics only by selective business acquisition, is now outdated. Strategic cross asset management will gain in importance, as a cost-efficient overlay management can now be implemented by combining liquid instruments from the credit universe.

In any case, all these developments force portfolio managers to adopt an integrated approach. All involved risk factors (spread term structures including curve effects, spread correlations, implied default correlations, and implied spread volatilities) have to be captured and integrated into appropriate risk figures. We take a look at constant proportion debt obligations (CPDOs) as a leveraged exposure on credit indices, constant proportion portfolio insurance (CPPI) as a capital-guaranteed instrument, CDO tranches to tap the correlation market, and equity futures to include exposure to stock markets in the portfolio.

For an integrated credit portfolio management approach, it is of central importance to aggregate risks over various instruments with different payoff characteristics. In this chapter, we will see that a state-of-the-art credit portfolio contains not only linear risks (CDS and CDS index contracts) but also nonlinear risks (such as FTD baskets, CDO tranches, or credit default swaptions). From a practitioner’s point of view there is a simple solution for this risk aggregation problem, namely delta-gamma management. In such a framework, one approximates the risks of all instruments in a portfolio by their first- and second-order sensitivities and aggregates these sensitivities to the portfolio level (a minimal sketch of such an aggregation follows the list below). Apparently, for a proper aggregation of risk factors, one has to take the correlation of these risk factors into account. However, for credit risky portfolios, a simplistic sensitivity approach will be inappropriate, as can be seen from the following characteristics of credit portfolio risks:

  • Credit risky portfolios usually involve a larger number of reference entities. Hence, one has to take a large number of sensitivities into account. However, this is a phenomenon that is already well known from the management of stock portfolios. The solution is to split the risk for each constituent into a systematic risk (e.g., a beta with a portfolio hedging tool) and an alpha component which reflects the idiosyncratic part of the risk.

  • However, in contrast to equities, credit risk is not one dimensional (i.e., one risky security per issuer) but at least two dimensional (i.e., a set of instruments with different maturities). This is reflected in the fact that there is a whole term structure of credit spreads. Moreover, taking also different subordination levels (with different average recovery rates) into account, credit risk becomes a multidimensional object for each reference entity.
  • While most market risks can be satisfactorily approximated by diffusion processes, for credit risk the consideration of events (i.e., jumps) is imperative. The most apparent reason for this is that the dominating element of credit risk is event risk. However, in a market perspective, there are more events than the ultimate default event that have to be captured. Since one of the main drivers of credit spreads is the structure of the underlying balance sheet, a change (or the risk of a change) in this structure usually triggers a large movement in credit spreads. The best-known example for such an event is a leveraged buyout (LBO).
  • For credit market players, correlation is a very special topic, as a central pricing parameter is named implied correlation. However, there are two kinds of correlation parameters that impact a credit portfolio: price correlation and event correlation. While the former simply deals with the dependency between two price (i.e., spread) time series under normal market conditions, the latter aims at describing the dependency between two price time series in case of an event. In its simplest form, event correlation can be seen as default correlation: what is the risk that company B defaults given that company A has defaulted? While it is already very difficult to model this default correlation, for practitioners event correlation is even more complex, since there are other events than just the default event, as already mentioned above. Hence, we can modify the question above: what is the risk that spreads of company B blow out given that spreads of company A have blown out? In addition, the notion of event correlation can also be used to capture the risk in capital structure arbitrage trades (i.e., trading stock versus bonds of one company). In this example, the question might be: what is the risk that the stock price of company A jumps given that its bond spreads have blown out? The complicated task in this respect is that we do not only have to model the joint event probability but also the direction of the jumps. A brief example highlights why this is important. In case of a default event, spreads will blow out accompanied by a significant drop in the stock price. This means that there is a negative correlation between spreads and stock prices. However, in case of an LBO event, spreads will blow out (reflecting the deteriorated credit quality because of the higher leverage), while stock prices rally (because of the fact that the acquirer usually pays a premium to buy a majority of outstanding shares).
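
Before turning to why these characteristics defeat a naive treatment, here is the minimal delta-gamma aggregation sketch referred to above: per-factor deltas and gammas are aggregated to the portfolio level and combined with a risk-factor covariance matrix by simulation. All inputs (sensitivities, covariance, confidence level) are invented for illustration; this shows the generic technique, not a specific model from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

deltas = np.array([1.2e4, -0.8e4, 0.5e4])   # first-order sensitivities per spread factor
gammas = np.diag([3.0e2, 1.5e2, 0.7e2])     # second-order sensitivities (gamma matrix)
cov = np.array([[4.0, 1.0, 0.5],            # covariance of factor moves (bp^2);
                [1.0, 2.5, 0.8],            # this is where factor correlation enters
                [0.5, 0.8, 1.5]])

moves = rng.multivariate_normal(np.zeros(3), cov, size=100_000)  # simulated factor moves
pnl = moves @ deltas + 0.5 * np.einsum("ij,jk,ik->i", moves, gammas, moves)

print("99% delta-gamma VaR:", -np.percentile(pnl, 1))
```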

These characteristics show that a simple sensitivity approach – e.g., calculating and tabulating all deltas and gammas and letting a portfolio manager play with them – is not appropriate. Further risk aggregation (e.g., beta management) and risk factors that capture the event risk are needed. For the latter, a quick solution is the so-called instantaneous default loss (IDL). The IDL expresses the loss incurred in a credit risk instrument in case of a credit event. For single-name CDS, this is simply the loss given default (LGD). However, for a portfolio derivative such as a mezzanine tranche, this figure does not directly refer to the LGD of the defaulted item, but to the changed subordination of the tranche because of the default. Hence, this figure allows one to aggregate various instruments with respect to credit events.
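
A hypothetical sketch of the IDL idea: for a single-name CDS (protection sold) the IDL is the loss given default, while for a mezzanine tranche the relevant quantity is the subordination eaten away by a default rather than the defaulted name's LGD. The pool size, recovery rate and attachment point below are invented, and the tranche treatment is deliberately crude.

```python
# Hypothetical sketch of instantaneous default loss (IDL); all inputs invented.

def idl_single_name_cds(notional, recovery_rate):
    """Protection seller's loss if the reference entity defaults now (the LGD)."""
    return notional * (1.0 - recovery_rate)

def subordination_after_default(n_names, recovery_rate, attachment):
    """Subordination (as a fraction of the pool) left below a tranche after one
    equally weighted reference name defaults."""
    loss_fraction = (1.0 / n_names) * (1.0 - recovery_rate)
    return max(attachment - loss_fraction, 0.0)

print(idl_single_name_cds(10_000_000, recovery_rate=0.40))               # 6,000,000
print(subordination_after_default(125, recovery_rate=0.40, attachment=0.03))
# one default in a 125-name pool eats 0.48% of subordination: 3% -> 2.52%
```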

Velocity of Money


The most basic difference between the demand theory of money and the exchange theory of money lies in the understanding of the quantity equation

M . v = P . Y —– (1)

Here M is the money supply, P is the price and Y is real output; in addition, v is the (constant) velocity of money. The demand theory holds that (1) reflects the economic individual’s need for money, not only the meaning of exchange. Under the assumption of liquidity preference, the demand theory introduces the nominal interest rate into the demand function for money, thus exhibiting a richer economic picture than the traditional quantity theory does. Let us, however, concentrate on the economic movement through a linearization of the exchange theory, emphasizing the exchange-medium function of money.

Let us assume that the central bank provides a very small supply M of money, which implies that the value PY of products manufactured by the producer cannot be realized through only one transaction. The producer has to suspend the transaction until the purchasers possess money at hand again, which will elevate transaction costs and may even result in the bankruptcy of the producer. Then, will the producer do nothing and wait for bankruptcy?

In reality, producers would rather adjust the sales value, by raising or lowering the price or the quantity of product, in an attempt to realize the maximal sales value M, than hold a stock of products and leave sales subject to the limit of the velocity of money. In other words, the producer adjusts price or real output to control the velocity of money, since the velocity of money can influence the realization of the product value.

Every time money changes hands, a transaction is completed; thus the numerous turnovers of money among individuals during a given period of time constitute a macroeconomic exchange value ∑i piYi. If the prices pi can be replaced by an average price P, then we can rewrite the value of exchange as ∑i piYi = P . Y. In a real economy, the producer will manage to make P . Y as close to the money supply M as possible through adjusting the real output or its price.

For example, when a retailer comes to a strange community to sell her commodities, she always prefers to set a price through trial and error. If she finds that a higher price still promotes the sales value, then she will continue raising the price until the sales value hardly changes; on the other hand, if she finds that a lower price creates a greater sales value, then she will decrease the price of the commodity. Her pricing strategy depends on the price elasticity of demand for the commodity. However, the maximal sales value is determined by how much money the community can supply, so the retailer’s pricing will bring her sales close to this maximal sales value, namely the money available for consumption in the community. This explains why the same commodity can always be sold at a higher price in a richer area.

Equation (1) is not an identity but describes an equilibrium state of the exchange process in an economic system. Evidently, the difference M – P . Y between the supply of money and the present sales value provides a vacancy for elevating the sales value; in other words, the supply of money acts as a carrying capacity for the sales value. We assume that the vacancy is in direct proportion to the rate of increase of the sales value, and then derive a dynamical quantity equation

M(t) - P(t) . Y(t)  =  k . d[P(t) . Y(t)]/d(t) —– (2)

Here k is a positive constant and expresses a characteristic time with which the vacancy is filled. This is a postulated basic dynamical quantity equation of exchange by money. In reality, the money supply M(t) can usually be taken as given; (2) is then an evolution equation for the sales value P(t)Y(t), which can uniquely determine an evolving path of the price.
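
A minimal numerical sketch of equation (2): the gap between the money supply M(t) and the current sales value P(t)Y(t) drives the growth of the sales value with characteristic time k. The step in the money supply, the initial condition and the value of k are assumptions chosen only to show the sales value relaxing toward M(t).

```python
import numpy as np

k, dt = 2.0, 0.01
t = np.arange(0.0, 20.0, dt)
M = 100.0 + 20.0 * (t > 10.0)        # money supply, stepped up at t = 10 (assumed path)
S = np.empty_like(t)                 # sales value S(t) = P(t) * Y(t)
S[0] = 80.0                          # start below the money supply

for i in range(1, len(t)):
    dS = (M[i - 1] - S[i - 1]) / k   # equation (2): dS/dt = (M - S) / k
    S[i] = S[i - 1] + dS * dt        # explicit Euler step

for t_check in (0.0, 5.0, 10.0, 15.0, 20.0 - dt):
    i = int(round(t_check / dt))
    print(f"t = {t[i]:5.2f}  M = {M[i]:6.1f}  P*Y = {S[i]:6.2f}")
```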

The role of money in (2) can be seen as follows: money is only a medium of commodity exchange, just like chopsticks for eating or soap for washing. All needs for money are, or will be, in order to carry out commodity exchange. The behavior of economic individuals holding money implies a potential exchange in the future, whether for speculation or for the preservation of wealth, but it cannot directly determine the present price, because every realistic price always comes from commodity exchange: no exchange, no price. In other words, what we are concerned with is not the reason for money generation, but the form of money generation; namely, we are concerned with money generation as a function of time rather than as a function of income or the interest rate. The potential needs for money, which can be explained by various reasons, cannot contribute to the price as long as the money does not participate in exchange; thus money supply that is not used for exchange will not occur in (2).

On the other hand, the change in money supply would result in a temporary vacancy of sales value, although sales value will also be achieved through exchanging with the new money supply at the next moment, since the price or sales volume may change. For example, a group of residents spend M(t) to buy houses of P(t)Y(t) through the loan at time t, evidently M(t) = P(t)Y(t). At time t+1, another group of residents spend M(t+1) to buy houses of P(t+1)Y(t+1) through the loan, and M(t+1) = P(t+1)Y(t+1). Thus, we can consider M(t+1) – M(t) as increase in money supply, and this increase can cause a temporary vacancy of sales value M(t+1) – P(t)Y(t). It is this vacancy that encourages sellers to try to maximize sales through adjusting the price by trial and error and also real estate developers to increase or decrease their housing production. Ultimately, new prices and production are produced and the exchange is completed at the level of M(t+1) = P(t+1)Y(t+1). In reality, the gap between M(t+1) and M(t) is often much smaller than the vacancy M(t+1) – P(t)Y(t), therefore we can approximately consider M(t+1) as M(t) if the money supply function M(t) is continuous and smooth.

However, it is necessary to emphasize that (2) is not a generating equation for a demand function P(Y); rather, (2) is a unique equation determining the price (path), since, from the perspective of the monetary exchange theory, the evolution of the price depends only on money supply and production and arises from commodity exchange, rather than from the relationship between supply and demand of products in traditional economics, where the meaning of the exchange is not obvious. In addition, the velocity of money is not contained in this dynamical quantity equation, but its value PY/M will be endogenously exhibited by the system.

Malignant Acceleration in Tech-Finance. Some Further Rumination on Regulations. Thought of the Day 72.1


Regardless of the positive effects that HFT offers, such as reduced spreads, higher liquidity, and faster price discovery, its negative side is mostly what has caught people’s attention. Several notorious market failures and accidents in recent years all seem to be related to HFT practices. They showed how much risk HFT can involve and how huge the damage can be.

HFT heavily depends on the reliability of the trading algorithms that generate, route, and execute orders. High-frequency traders thus must ensure that these algorithms have been tested completely and thoroughly before they are deployed into the live systems of the financial markets. Any improperly tested or prematurely released algorithm may cause losses to both investors and the exchanges. Several examples demonstrate the extent of the ever-present vulnerabilities.

In August 2012, the Knight Capital Group implemented a new liquidity testing software routine into its trading system, which was running live on the NYSE. The system started making bizarre trading decisions, quadrupling the price of one company, Wizzard Software, as well as bidding up the price of much larger entities, such as General Electric. Within 45 minutes, the company lost USD 440 million. After this event and the weakening of Knight Capital’s capital base, it agreed to merge with another algorithmic trading firm, Getco, which is the biggest HFT firm in the U.S. today. This example emphasizes the importance of implementing precautions to ensure that trading algorithms are not mistakenly deployed.

Another example is Everbright Securities in China. In 2013, the state-owned brokerage firm Everbright Securities Co. sent more than 26,000 mistaken buy orders to the Shanghai Stock Exchange (SSE), totalling RMB 23.4 billion (USD 3.82 billion) and pushing its benchmark index up 6 % in two minutes. This resulted in a trading loss of approximately RMB 194 million (USD 31.7 million). In a follow-up evaluative study, the China Securities Regulatory Commission (CSRC) found that there were significant flaws in Everbright’s information and risk management systems.

The damage caused by HFT errors is not limited to the specific trading firms themselves, but may also involve stock exchanges and the stability of the related financial market. On Friday, May 18, 2012, the social network giant Facebook’s stock was issued on the NASDAQ exchange. This was the most anticipated initial public offering (IPO) in its history. However, technology problems with the opening made a mess of the IPO. The offering attracted HFT traders and very large order flows were expected, and before the IPO, NASDAQ was confident in its ability to deal with the high volume of orders.

But when the deluge of orders to buy, sell and cancel trades came, NASDAQ’s trading software began to fail under the strain. This resulted in a 30-minute delay on NASDAQ’s side, and a 17-second blackout for all stock trading at the exchange, causing further panic. Scrutiny of the problems immediately led to fines for the exchange and accusations that HFT traders bore some responsibility too. Problems persisted after opening, with many customer orders from institutional and retail buyers unfilled for hours or never filled at all, while others ended up buying more shares than they had intended. This incredible gaffe, which some estimates say cost traders USD 100 million, eclipsed NASDAQ’s achievement in winning Facebook’s IPO, the third largest in U.S. history.

Another instance occurred on May 6, 2010, when U.S. financial markets were surprised by what has been referred to ever since as the “Flash Crash”. Within less than 30 minutes, the main U.S. stock markets experienced their largest intraday price declines, with a decline of more than 5 % for many U.S.-based equity products. In addition, the Dow Jones Industrial Average (DJIA), at its lowest point that day, fell by nearly 1,000 points, although it was followed by a rapid rebound. This brief period of extreme intraday volatility demonstrated the weakness of the structure and stability of U.S. financial markets, as well as the opportunities for volatility-focused HFT traders. Although a subsequent investigation by the SEC cleared high-frequency traders of directly having caused the Flash Crash, they were still blamed for exaggerating market volatility and withdrawing liquidity for many U.S.-based equities (FLASH BOYS).

Since the mid-2000s, the average trade size in the U.S. stock market had plummeted, the markets had fragmented, and the gap in time between the public view of the markets and the view of high-frequency traders had widened. The rise of high-frequency trading had also been accompanied by a rise in stock market volatility – over and above the turmoil caused by the 2008 financial crisis. The price volatility within each trading day in the U.S. stock market between 2010 and 2013 was nearly 40 percent higher than the volatility between 2004 and 2006, for instance. There were days in 2011 in which volatility was higher than in the most volatile days of the dot-com bubble.

Although these different incidents have different causes, the effects were similar and some common conclusions can be drawn. The presence of algorithmic trading and HFT in the financial markets exacerbates the adverse impacts of trading-related mistakes. It may lead to much higher market volatility and surprises about suddenly diminished liquidity. This raises concerns for regulators about the stability and health of the financial markets. With the continuous and fast development of HFT, a larger and larger share of equity trades in the U.S. financial markets came from high-frequency traders, and there was mounting evidence that HFT-related errors disturbed market stability and caused significant financial losses. This led regulators to increase their attention and effort to provide the exchanges and traders with guidance on HFT practices. They also expressed concerns about high-frequency traders extracting profit at the cost of traditional investors and even manipulating the market. For instance, high-frequency traders can generate a large amount of orders within microseconds to exacerbate a trend. Other types of misconduct include ping orders, which use some orders to detect other hidden orders, and quote stuffing, which issues a large number of orders to create uncertainty in the market. HFT creates room for these kinds of market abuses, and its blazing speed and huge trade volumes make their detection difficult for regulators.

Regulators have taken steps to increase their regulatory authority over HFT activities. Some of the problems that arose in the mid-2000s led to regulatory hearings in the United States Senate on dark pools, flash orders and HFT practices. Another example occurred after the Facebook IPO problem, which led the SEC to call for a limit up-limit down mechanism at the exchanges to prevent trades in individual securities from occurring outside of a specified price range, so that market volatility would be under better control. These regulatory actions put stricter requirements on HFT practices, aiming to minimize market disturbance when many fast trading orders occur within a day.


Fundamental Theorem of Asset Pricing: Tautological Meeting of Mathematical Martingale and Financial Arbitrage by the Measure of Probability.


The Fundamental Theorem of Asset Pricing (FTAP hereafter) has two broad tenets, viz.

1. A market admits no arbitrage, if and only if, the market has a martingale measure.

2. Every contingent claim can be hedged, if and only if, the martingale measure is unique.

The FTAP is a theorem of mathematics, and the use of the term ‘measure’ in its statement places the FTAP within the theory of probability formulated by Andrei Kolmogorov (Foundations of the Theory of Probability) in 1933. Kolmogorov’s work took place in a context captured by Bertrand Russell, who observed that

It is important to realise the fundamental position of probability in science. . . . As to what is meant by probability, opinions differ.

In the 1920s the idea of randomness, as distinct from a lack of information, was becoming substantive in the physical sciences because of the emergence of the Copenhagen Interpretation of quantum mechanics. In the social sciences, Frank Knight argued that uncertainty was the only source of profit and the concept was pervading John Maynard Keynes’ economics (Robert Skidelsky, Keynes: The Return of the Master).

Two mathematical theories of probability had become ascendant by the late 1920s. Richard von Mises (brother of the Austrian economist Ludwig) attempted to lay down the axioms of classical probability within a framework of Empiricism, the ‘frequentist’ or ‘objective’ approach. To counterbalance von Mises, the Italian actuary Bruno de Finetti presented a more Pragmatic approach, characterised by his claim that “Probability does not exist” because it was only an expression of the observer’s view of the world. This ‘subjectivist’ approach was closely related to the less well-known position taken by the Pragmatist Frank Ramsey, who developed an argument against Keynes’ Realist interpretation of probability presented in the Treatise on Probability.

Kolmogorov addressed the trichotomy of mathematical probability by generalising so that Realist, Empiricist and Pragmatist probabilities were all examples of ‘measures’ satisfying certain axioms. In doing this, a random variable became a function while an expectation became an integral: probability became a branch of Analysis, not Statistics. Von Mises criticised Kolmogorov’s generalised framework as unnecessarily complex. About a decade and a half back, the physicist Edwin Jaynes (Probability Theory: The Logic of Science) championed Leonard Savage’s subjectivist Bayesianism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science”.

The objections to measure theoretic probability for empirical scientists can be accounted for as a lack of physicality. Frequentist probability is based on the act of counting; subjectivist probability is based on a flow of information, which, following Claude Shannon, is now an observable entity in Empirical science. Measure theoretic probability is based on abstract mathematical objects unrelated to sensible phenomena. However, the generality of Kolmogorov’s approach made it flexible enough to handle problems that emerged in physics and engineering during the Second World War and his approach became widely accepted after 1950 because it was practically more useful.

In the context of the first statement of the FTAP, a ‘martingale measure’ is a probability measure, usually labelled Q, such that the (real, rather than nominal) price of an asset today, X0, is the expectation, using the martingale measure, of its (real) price in the future, XT. Formally,

X0 = EQ XT

The abstract probability distribution Q is defined so that this equality exists, not on any empirical information of historical prices or subjective judgement of future prices. The only condition placed on the relationship that the martingale measure has with the ‘natural’, or ‘physical’, probability measures usually assigned the label P, is that they agree on what is possible.

The term ‘martingale’ in this context derives from doubling strategies in gambling and it was introduced into mathematics by Jean Ville in a development of von Mises’ work. The idea that asset prices have the martingale property was first proposed by Benoit Mandelbrot in response to an early formulation of Eugene Fama’s Efficient Market Hypothesis (EMH), the two concepts being combined by Fama. For Mandelbrot and Fama the key consequence of prices being martingales was that the current price was independent of the future price and technical analysis would not prove profitable in the long run. In developing the EMH there was no discussion of the nature of the probability under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure. While the FTAP employs modern terminology in the context of value-neutrality, the idea of equating a current price with a future, uncertain, price has ethical ramifications.

The other technical term in the first statement of the FTAP, arbitrage, has long been used in financial mathematics. In the Liber Abaci, Fibonacci (Laurence Sigler, Fibonacci’s Liber Abaci) discusses the ‘Barter of Merchandise and Similar Things’: 20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. In this case there are three commodities, arms of cloth, rolls of cotton and Pisan pounds, and Fibonacci solves the problem by having Pisan pounds ‘arbitrate’, or ‘mediate’ as Aristotle might say, between the other two commodities.
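
The arithmetic of Fibonacci’s barter problem, with Pisan pounds ‘arbitrating’ between the two commodities, works out as follows.

```python
# Worked arithmetic for the barter problem quoted above.

pounds_per_arm  = 3 / 20          # 20 arms of cloth are worth 3 Pisan pounds
rolls_per_pound = 42 / 5          # 42 rolls of cotton are worth 5 Pisan pounds

pounds_for_50_arms = 50 * pounds_per_arm               # 7.5 Pisan pounds
rolls_for_50_arms = pounds_for_50_arms * rolls_per_pound
print(rolls_for_50_arms)                               # 63.0 rolls of cotton
```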

Within neo-classical economics, the Law of One Price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gérard Debreu and Lionel McKenzie in the context of general equilibrium, in particular through the introduction of the Arrow Security, which, employing the Law of One Price, could be used to price any asset. It was on this principle that Black and Scholes believed the value of warrants could be deduced by employing a hedging portfolio; in introducing their work with the statement that “it should not be possible to make sure profits”, they were invoking the arbitrage argument, which had an eight-hundred-year history. In the context of the FTAP, ‘an arbitrage’ has developed into the ability to formulate a trading strategy such that the probability, under a natural or martingale measure, of a loss is zero, but the probability of a positive profit is not.

To understand the connection between the financial concept of arbitrage and the mathematical idea of a martingale measure, consider the most basic case of a single asset whose current price is X0 and whose future price at time T > 0 can take one of two (present) values, XTD < XTU. In this case an arbitrage would exist if X0 ≤ XTD < XTU: buying the asset now, at a price that is less than or equal to the future pay-offs, would lead to a possible profit at the end of the period, with the guarantee of no loss. Similarly, if XTD < XTU ≤ X0, short selling the asset now, and buying it back later, would also lead to an arbitrage. So, for there to be no arbitrage opportunities we require that

XTD < X0 < XTU

This implies that there is a number, 0 < q < 1, such that

X0 = XTD + q(XTU − XTD)

= qXTU + (1−q)XTD

The price now, X0, lies between the future prices, XTU and XTD, in the ratio q : (1 − q) and represents some sort of ‘average’. The first statement of the FTAP can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (real) future price”.

If X0 < XTD ≤ XTU we have that q < 0, whereas if XTD ≤ XTU < X0 then q > 1, and in both cases q does not represent a probability measure, which, by Kolmogorov’s axioms, must lie between 0 and 1. In either of these cases an arbitrage exists and a trader can make a riskless profit; the market involves ‘turpe lucrum’. This account gives an insight as to why James Bernoulli, in his moral approach to probability, considered situations where probabilities did not sum to 1: he was considering problems that were pathological not because they failed the rules of arithmetic but because they were unfair. It follows that if there are no arbitrage opportunities then the quantity q can be seen as representing the ‘probability’ that the XTU price will materialise in the future. Formally

X0 = qXTU + (1−q) XTD ≡ EQ XT
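
A minimal sketch of the one-period argument: recover q from the three prices and check that it is a probability exactly when X0 lies strictly between XTD and XTU. The prices used are arbitrary illustrative numbers.

```python
def implied_q(x0, x_down, x_up):
    """Solve X0 = q * XTU + (1 - q) * XTD for q."""
    return (x0 - x_down) / (x_up - x_down)

for x0 in (95.0, 105.0, 125.0):                  # below, inside and above the band
    q = implied_q(x0, x_down=100.0, x_up=120.0)
    print(x0, q, "no arbitrage" if 0.0 < q < 1.0 else "arbitrage exists")
```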

The connection between the financial concept of arbitrage and the mathematical object of a martingale is essentially a tautology: both statements mean that the price today of an asset must lie between its future minimum and maximum possible value. This first statement of the FTAP was anticipated by Frank Ramsey when he defined ‘probability’ in the Pragmatic sense of ‘a degree of belief’ and argued that ‘degrees of belief’ can be measured through betting odds. On this basis he formulated some axioms of probability, including that a probability must lie between 0 and 1. He then goes on to say that

These are the laws of probability, …If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event.

This is a Pragmatic argument that identifies the absence of the martingale measure with the existence of arbitrage, and today it forms the basis of the standard argument as to why arbitrages do not exist: if they did, other market participants would bankrupt the agent who was mis-pricing the asset. This has become known in philosophy as the ‘Dutch Book’ argument and, as a consequence of the fact/value dichotomy, is often presented as a ‘matter of fact’. However, ignoring the fact/value dichotomy, the Dutch Book argument is an alternative formulation of the ‘Golden Rule’ – “Do to others as you would have them do to you” – it is infused with the moral concepts of fairness and reciprocity (Jeffrey Wattles, The Golden Rule).

The FTAP thus embodies the ethical concept of Justice, capturing the social norms of reciprocity and fairness. This is significant in the context of Granovetter’s discussion of embeddedness in economics. It is conventional to assume that mainstream economic theory is ‘undersocialised’: agents are rational calculators seeking to maximise an objective function. The argument presented here is that a central theorem in contemporary economics, the FTAP, is deeply embedded in social norms, despite being presented as an undersocialised mathematical object. This embeddedness is a consequence of the origins of mathematical probability being in the ethical analysis of commercial contracts: the feudal shackles are still binding this most modern of economic theories.

Ramsey goes on to make an important point

Having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you.

Ramsey is arguing that an agent needs to employ the same measure in pricing all assets in a market, and this is the key result in contemporary derivative pricing. Having identified the martingale measure on the basis of a ‘primal’ asset, it is then applied across the market, in particular to derivatives on the primal asset; the well-known result is that if two assets offer different ‘market prices of risk’, an arbitrage exists. This explains why the market price of risk appears in the Radon-Nikodym derivative and the Capital Market Line: it enforces Ramsey’s consistency in pricing.

The second statement of the FTAP is concerned with incomplete markets, which appear in relation to Arrow-Debreu prices. In mathematics, in the special case that there are as many, or more, assets in a market as there are possible future, uncertain, states, a unique pricing vector can be deduced for the market because of Cramer’s Rule. If the elements of the pricing vector satisfy the axioms of probability, specifically each element is positive and they all sum to one, then the market precludes arbitrage opportunities. This is the case covered by the first statement of the FTAP. In the more realistic situation that there are more possible future states than assets, the market can still be arbitrage free, but the pricing vector, the martingale measure, might not be unique. The agent can still be consistent in selecting which particular martingale measure they choose to use, but another agent might choose a different measure, such that the two do not agree on a price. In the context of the Law of One Price, this means that we cannot hedge, replicate or cover a position in the market such that the portfolio is riskless. The significance of the second statement of the FTAP is that it tells us that in the sensible world of imperfect knowledge and transaction costs, a model within the framework of the FTAP cannot give a precise price. When faced with incompleteness in markets, agents need alternative ways to price assets, and behavioural techniques have come to dominate financial theory. This feature was realised in The Port Royal Logic when it recognised the role of transaction costs in lotteries.
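
To make the complete-market case above concrete, here is a minimal sketch, with invented payoffs and prices, of backing out the unique state-price vector when the payoff matrix is square and invertible. The riskless bond is priced at par so that discounting can be ignored; if every element of the solution is positive, the vector behaves like the martingale measure of the first statement of the FTAP.

```python
import numpy as np

payoffs = np.array([[1.0, 1.0, 1.0],    # riskless bond pays 1 in every state
                    [2.0, 1.0, 0.5],    # risky asset A (payoffs are assumptions)
                    [0.0, 1.0, 2.0]])   # risky asset B
prices = np.array([1.0, 1.1, 0.95])     # current prices of the three assets

# price_i = sum over states of payoffs[i, s] * state_price[s]
state_prices = np.linalg.solve(payoffs, prices)   # unique: the matrix is invertible
print(state_prices, state_prices.sum())
# here the solution is [0.15, 0.75, 0.10]: all positive and summing to one,
# so it is a candidate martingale measure for this toy market
```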

Conjuncted: Speculatively Accelerated Capital – Trading Outside the Pit.


High Frequency Traders (HFTs hereafter) may anticipate the trades of a mutual fund, for instance, if the mutual fund splits large orders into a series of smaller ones and the initial trades reveal information about the mutual funds’ future trading intentions. HFTs might also forecast order flow if traditional asset managers with similar trading demands do not all trade at the same time, allowing the possibility that the initiation of a trade by one mutual fund could forecast similar future trades by other mutual funds. If an HFT were able to forecast a traditional asset managers’ order flow by either these or some other means, then the HFT could potentially trade ahead of them and profit from the traditional asset manager’s subsequent price impact.

There are two main empirical implications of HFTs engaging in such a trading strategy. The first implication is that HFT trading should lead non-HFT trading – if an HFT buys a stock, non-HFTs should subsequently come into the market and buy those same stocks. Second, since the HFT’s objective would be to profit from non-HFTs’ subsequent price impact, it should be the case that the prices of the stocks they buy rise and those of the stocks they sell fall. These two patterns, together, are consistent with HFTs trading stocks in order to profit from non-HFTs’ future buying and selling pressure. 

While HFTs may in aggregate anticipate non-HFT order flow, it is also possible that among HFTs, some firms’ trades are strongly correlated with future non-HFT order flow, while other firms’ trades have little or no correlation with non-HFT order flow. This may be the case if certain HFTs focus more on strategies that anticipate order flow or if some HFTs are more skilled than other firms. If certain HFTs are better at forecasting order flow or if they focus more on such a strategy, then these HFTs’ trades should be consistently more strongly correlated with future non-HFT trades than are trades from other HFTs. Additionally, if these HFTs are more skilled, then one might expect these HFTs’ trades to be more strongly correlated with future returns. 

Another implication of the anticipatory trading hypothesis is that the correlation between HFT trades and future non-HFT trades should be stronger at times when non-HFTs are impatient. The reason is anticipating buying and selling pressure requires forecasting future trades based on patterns in past trades and orders. To make anticipating their order flow difficult, non-HFTs typically use execution algorithms to disguise their trading intentions. But there is a trade-off between disguising order flow and trading a large position quickly. When non-HFTs are impatient and focused on trading a position quickly, they may not hide their order flow as well, making it easier for HFTs to anticipate their trades. At such times, the correlation between HFT trades and future non-HFT trades should be stronger. 
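
A sketch, on synthetic data, of the lead-lag test implied above: correlate HFT net order flow in one interval with non-HFT net order flow in the next. The data-generating process is an assumption built so that HFTs observe a signal one interval before non-HFTs act on it; a real test would use signed trade data per stock and per firm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

signal = rng.normal(size=n)                         # latent order-flow signal
hft_flow = signal + 0.5 * rng.normal(size=n)        # HFTs trade on it immediately
non_hft_flow = np.roll(signal, 1) + rng.normal(size=n)   # non-HFTs arrive one interval later

lagged_corr = np.corrcoef(hft_flow[:-1], non_hft_flow[1:])[0, 1]
print("corr(HFT flow at t, non-HFT flow at t+1):", round(lagged_corr, 3))
```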

Financial Entanglement and Complexity Theory. An Adumbration on Financial Crisis.


The complex system approach in finance could be described through the concept of entanglement. The concept of entanglement bears the same features as the definition of a complex system given by a group of physicists working in the field of finance (Stanley et al,). As they defined it, in a complex system all depends upon everything. Just as in a complex system, the notion of entanglement is a statement acknowledging the interdependence of all the counterparties in financial markets, including financial and non-financial corporations, the government and the central bank. How to identify entanglement empirically? Stanley H.E. et al formulated the process of scientific study in finance as a search for patterns. Such a search, going on under the auspices of “econophysics”, could exemplify a thorough analysis of a complex and unstructured assemblage of actual data, finalized in the discovery and experimental validation of an appropriate pattern. On the other side of the spectrum, some patterns underlying the actual processes might be discovered by synthesizing a vast amount of historical and anecdotal information and applying appropriate reasoning and logical deliberation. The Austrian School of economic thought, which in its extreme form rejects the application of any formalized system or modeling of any kind, could be viewed as an example. A logical question follows from this comparison: does there exist any intermediate way of searching for regular patterns in finance and economics?

Importantly, patterns could be discovered by developing rather simple models of money and debt interrelationships. Debt cycles were studied extensively by many schools of economic thought (George A. Akerlof and Robert J. Shiller, Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism). The modern financial system worked by spreading risk, promoting economic efficiency and providing cheap capital. It had been formed over the years as bull markets in shares and bonds originated in the early 1990s. These markets were propelled by an abundance of money, falling interest rates and new information technology. Financial markets, by combining debt and derivatives, could originate and distribute huge quantities of risky structured products and sell them to different investors. Meanwhile, financial sector debt, only a tenth of the size of non-financial-sector debt in 1980, became half as big by the beginning of the credit crunch in 2007. As liquidity grew, banks could buy more assets, borrow more against them, and enjoy their rising value. By 2007 financial services were making 40% of America’s corporate profits while employing only 5% of its private sector workers. Thanks to cheap money, banks could take on more debt and, by designing complex structured products, make their investments more profitable and more risky. Securitization, facilitating the emergence of the “shadow banking” system, simultaneously foments bubbles in different segments of the global financial market.

Yet over the past decade this system, or a big part of it, began to lose touch with its ultimate purpose: to reallocate deficit resources in accordance with social priorities. Instead of writing, managing and trading claims on future cashflows for the rest of the economy, finance became increasingly a game for fees and speculation. Due to disastrously lax regulation, investment banks did not lay aside enough capital in case something went wrong, and, as the crisis began in the middle of 2007, credit markets started to freeze up. Qualitatively, after the spectacular Lehman Brothers disaster in September 2008, laminar flows of financial activity came to an end. Banks began to suffer losses on their holdings of toxic securities and were reluctant to lend to one another, which led to funding shortages across the system. The strains had already appeared in late 2007, when Northern Rock, a British mortgage lender, experienced a bank run that started in the money markets. All of a sudden, liquidity was in short supply, debt was unwound, and investors were forced to sell and write down assets. For several years, up to now, market counterparties have no longer trusted each other. As Walter Bagehot, an authority on bank runs, once wrote:

Every banker knows that if he has to prove that he is worthy of credit, however good may be his arguments, in fact his credit is gone.

In an entangled financial system, this axiom should be stretched out to the whole market. And it means, precisely, financial meltdown or crisis. The most fascinating feature of the post-crisis era in financial markets has been the continuation of a ubiquitous liquidity expansion. To fight the market squeeze, all the major central banks have greatly expanded their balance sheets. The latter rose, roughly, from about 10 percent to 25-30 percent of GDP for the respective economies. For several years after the credit crunch of 2007-09, central banks bought trillions of dollars of toxic and government debt, thus increasing money issuance without any precedent in modern history. Paradoxically, this enormous credit expansion, though accelerating for several years, has been accompanied by a stagnating and depressed real economy. Yet, until now, central bankers have been worried mainly about downside risks and the threat of price deflation. At the same time, the hectic financial activity that goes on alongside unbounded credit expansion could be transformed by herding into an autocatalytic process that, subject to the accumulation of new debt, might drive the entire system to a total collapse. From a financial point of view, this systemic collapse appears to be a natural result of unbounded credit expansion that is ‘supported’ by zero real resources. Since the wealth of investors, as a whole, becomes nothing but ‘fool’s gold’, the financial process becomes a singular one, and the entire system collapses. In particular, three phases of investors’ behavior – hedge finance, speculation, and the Ponzi game – can easily be identified as a sequence of sub-cycles that unwind ultimately in the total collapse.

Capital as a Symbolic Representation of Power. Nitzan’s and Bichler’s Capital as Power: A Study of Order and Creorder.


The secret to understanding accumulation, lies not in the narrow confines of production and consumption, but in the broader processes and institutions of power. Capital, is neither a material object nor a social relationship embedded in material entities. It is not ‘augmented’ by power. It is, in itself, a symbolic representation of power….

Unlike the elusive liberals, Marxists try to deal with power head on – yet they too end up with a fractured picture. Unable to fit power into Marx’s value analysis, they have split their inquiry into three distinct branches: a neo-Marxian economics that substitutes monopoly for labour values; a cultural analysis whose extreme versions reject the existence of ‘economics’ altogether (and eventually also the existence of any ‘objective’ order); and a state theory that oscillates between two opposite positions – one that prioritizes state power by demoting the ‘laws’ of the economy, and another that endorses the ‘laws’ of the economy by annulling the autonomy of the state. Gradually, each of these branches has developed its own orthodoxies, academic bureaucracies and barriers. And as the fractures have deepened, the capitalist totality that Marx was so keen on uncovering has dissipated….

The commodified structure of capitalism, Marx argues, is anchored in the labour process: the accumulation of capital is denominated in prices; prices reflect labour values; and labour values are determined by the productive labour time necessary to make the commodities. This sequence is intuitively appealing and politically motivating, but it runs into logical and empirical impossibilities at every step of the way. First, it is impossible to differentiate productive from unproductive labour. Second, even if we knew what productive labour was, there would still be no way of knowing how much productive labour goes into a given commodity, and therefore no way of knowing the labour value of that commodity and the amount of surplus value it embodies. And finally, even if labour values were known, there would be no consistent way to convert them into prices and surplus value into profit. So, in the end, Marxism cannot explain the prices of commodities – not in detail and not even approximately. And without a theory of prices, there can be no theory of profit and accumulation and therefore no theory of capitalism….

Modern capitalists are removed from production: they are absentee owners. Their ownership, says Veblen, doesn’t contribute to industry; it merely controls it for profitable ends. And since the owners are absent from industry, the only way for them to exact their profit is by ‘sabotaging’ industry. From this viewpoint, the accumulation of capital is the manifestation not of productive contribution but of organized power.

To be sure, the process by which capitalists ‘translate’ qualitatively different power processes into quantitatively unified measures of earnings and capitalization isn’t very ‘objective’. Filtered through the conventional assessments of accountants and the future speculations of investors, the conversion is deeply inter-subjective. But it is also very real, extremely imposing and, as we shall see, surprisingly well-defined.

These insights can be extended into a broader metaphor of a ‘social hologram’: a framework that integrates the resonating productive interactions of industry with the dissonant power limitations of business. These hologramic spectacles allow us to theorize the power underpinnings of accumulation, explore their historical evolution and understand the ways in which various forms of power are imprinted on and instituted in the corporation…..

Business enterprise diverts and limits industry instead of boosting it; ‘business as usual’ needs and implies strategic limitation; most firms are not passive price takers but active price makers, and their autonomy makes ‘pure’ economics indeterminate; the ‘normal rate of return’, just like the ancient rate of interest, is a manifestation not of productive yield but of organized power; the corporation emerged not to enhance productivity but to contain it; equity and debt have little to do with material wealth and everything to do with systemic power; and, finally, there is little point talking about the deviations and distortions of ‘financial capital’ simply because there is no ‘productive capital’ to deviate from and distort.

Jonathan Nitzan and Shimshon Bichler, Capital as Power: A Study of Order and Creorder

 

Optimal Hedging…..

Risk management is important in the practices of financial institutions and other corporations. Derivatives are popular instruments for hedging exposures to currency, interest rate and other market risks. An important step in risk management is to use these derivatives in an optimal way. The most popular derivatives are forwards, options and swaps. They are the basic building blocks for all sorts of more complicated derivatives, and should be used prudently. Several parameters need to be determined in the risk-management process, and it is necessary to investigate how these parameters influence the aims of the hedging policy and the possibility of achieving those aims.

The problem considered here is to determine the optimal strike price and optimal hedging ratio when a put option is used to hedge market risk under a budget constraint. The chosen option may or may not finish in-the-money at maturity, so the predicted loss of the hedged portfolio can differ from the realized loss. The aim of hedging is to minimize the potential loss of the investment under a specified level of confidence. In other words, the optimal hedging strategy is the one that minimizes the Value-at-Risk (VaR) at a specified confidence level.

A stock is bought at time zero at price S_0 and is to be sold at time T at the uncertain price S_T. In order to hedge the market risk of the stock, the company chooses one of the available put options written on the same stock with maturity at time τ, where τ is prior to, and close to, T; the n available put options are specified by their strike prices K_i (i = 1, 2, ..., n). As the prices of the different put options differ, the company also needs to determine an optimal hedge ratio h (0 ≤ h ≤ 1) with respect to the chosen strike price. The cost of hedging should be less than or equal to the predetermined hedging budget C. In other words, the company needs to determine the optimal strike price and hedge ratio under the constraint of the hedging budget, as in the sketch below.
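
To make the budget constraint concrete, here is a minimal Python sketch. It assumes, and this is my reading rather than something stated in the text, that both the budget C and the option premiums P_0(K_i) are quoted per share of the stock being hedged; the strike and premium values are purely illustrative:

```python
import numpy as np

# Illustrative (hypothetical) strikes and premiums for the n available puts.
strikes = np.array([90.0, 95.0, 100.0, 105.0])   # K_i
premiums = np.array([1.2, 2.1, 3.5, 5.4])        # P_0(K_i), per share of stock
C = 3.0                                          # hedging budget, per share of stock

# Budget constraint: hedging cost h * P_0 <= C, together with 0 <= h <= 1.
max_h = np.minimum(1.0, C / premiums)
for K, h_max in zip(strikes, max_h):
    print(f"K = {K:5.1f}: feasible hedge ratios h in [0, {h_max:.2f}]")
```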

Suppose the market price of the stock is S_0 at time zero, the hedge ratio is h, the price of the put option is P_0, and the riskless interest rate is r. At time T, the time value of the hedging portfolio is

S_0 e^{rT} + h P_0 e^{rT} —– (1)

and the market price of the portfolio is

S_T + h (K − S_τ)^+ e^{r(T−τ)} —– (2)

therefore the loss of the portfolio is

L = (S_0 e^{rT} + h P_0 e^{rT}) − (S_T + h (K − S_τ)^+ e^{r(T−τ)}) —– (3)

where x^+ = max(x, 0), which is the payoff function of the put option at maturity.
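
As a minimal sketch of equations (1)–(3) in Python (the function and variable names are mine, not the text's), the loss of the hedged portfolio can be computed directly from the prices S_τ and S_T:

```python
import numpy as np

def portfolio_loss(S_T, S_tau, S0, P0, K, h, r, T, tau):
    """Loss of the hedged portfolio, equation (3): time value of the initial
    outlay minus the terminal value of the stock plus the put payoff."""
    time_value = (S0 + h * P0) * np.exp(r * T)                        # equation (1)
    put_payoff = h * np.maximum(K - S_tau, 0.0) * np.exp(r * (T - tau))
    market_value = S_T + put_payoff                                    # equation (2)
    return time_value - market_value
```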

For a given threshold v, the probability that the amount of loss exceeds v is denoted as

α = Prob{L ≥ v} —– (4)

in other words, v is the Value-at-Risk (VaR) at the α percentage level. There are several alternative measures of risk, such as CVaR (Conditional Value-at-Risk), ESF (Expected Shortfall), CTE (Conditional Tail Expectation), and other coherent risk measures. The criterion of optimality here is to minimize the VaR of the hedging strategy.
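
For orientation only, and as a generic estimator rather than anything given in the text, VaR at level α, and CVaR for comparison, can be read off a sample of simulated losses:

```python
import numpy as np

def empirical_var(losses, alpha):
    """v such that the fraction of sampled losses >= v is approximately alpha."""
    return np.quantile(losses, 1.0 - alpha)

def empirical_cvar(losses, alpha):
    """Average loss beyond the VaR threshold (Conditional Value-at-Risk)."""
    v = empirical_var(losses, alpha)
    tail = losses[losses >= v]
    return tail.mean() if tail.size else v
```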

The mathematical model of the stock price is chosen to be a geometric Brownian motion, i.e.

dS_t / S_t = μ dt + σ dB_t —– (5)

where S_t is the stock price at time t (0 < t ≤ T), μ and σ are the drift and the volatility of the stock price, and B_t is a standard Brownian motion. The solution of this stochastic differential equation is

S_t = S_0 e^{σ B_t + (μ − σ²/2) t} —– (6)

where B_0 = 0, and S_t is lognormally distributed.
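
A minimal simulation sketch of equation (6) (illustrative, with my own parameter names) draws the two independent Brownian increments X = B_τ and Y = B_T − B_τ and builds S_τ and S_T from them:

```python
import numpy as np

def simulate_prices(S0, mu, sigma, tau, T, n_paths, rng=None):
    """Simulate (S_tau, S_T) under the geometric Brownian motion of (5)-(6)."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.normal(0.0, np.sqrt(tau), n_paths)        # B_tau
    Y = rng.normal(0.0, np.sqrt(T - tau), n_paths)    # B_T - B_tau, independent of X
    S_tau = S0 * np.exp(sigma * X + (mu - 0.5 * sigma**2) * tau)
    S_T = S0 * np.exp(sigma * (X + Y) + (mu - 0.5 * sigma**2) * T)
    return S_tau, S_T
```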

Proposition:

For a given threshold of loss v, the probability that the loss exceeds v is

Prob{L ≥ v} = E[ I{X ≤ c_1} F_Y(g(X) − X) ] + E[ I{X ≥ c_1} F_Y(c_2 − X) ] —– (7)

where E[·] denotes expectation, I{·} is the indicator function of an event (equal to 1 when the event occurs and 0 otherwise), F_Y(y) is the cumulative distribution function of the random variable Y, and

c_1 = (1/σ) [ln(K/S_0) − (μ − σ²/2) τ],

g(X) = (1/σ) [ln( ((S_0 + h P_0) e^{rT} − h (K − f(X)) e^{r(T−τ)} − v) / S_0 ) − (μ − σ²/2) T],

f(X) = S_0 e^{σX + (μ − σ²/2) τ},

c_2 = (1/σ) [ln( ((S_0 + h P_0) e^{rT} − v) / S_0 ) − (μ − σ²/2) T].

X and Y are independent, normally distributed random variables, X ∼ N(0, √τ) and Y ∼ N(0, √(T − τ)), where the second argument denotes the standard deviation.
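
The Proposition suggests a semi-analytical Monte Carlo scheme: sample X, and evaluate the inner expectation over Y in closed form through the normal CDF F_Y. The following Python sketch (using numpy and scipy; the guards against logarithms of non-positive arguments are my own additions) estimates Prob{L ≥ v} under those assumptions:

```python
import numpy as np
from scipy.stats import norm

def prob_loss_exceeds(v, S0, P0, K, h, r, mu, sigma, tau, T,
                      n_samples=200_000, rng=None):
    """Estimate Prob{L >= v} via equation (7): Monte Carlo over X = B_tau,
    with the expectation over Y = B_T - B_tau taken analytically through
    the normal CDF F_Y (standard deviation sqrt(T - tau))."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.normal(0.0, np.sqrt(tau), n_samples)

    drift_tau = (mu - 0.5 * sigma**2) * tau
    drift_T = (mu - 0.5 * sigma**2) * T
    f_X = S0 * np.exp(sigma * X + drift_tau)                      # S_tau
    c1 = (np.log(K / S0) - drift_tau) / sigma

    F_Y = lambda y: norm.cdf(y, scale=np.sqrt(T - tau))

    # Thresholds on S_T implied by {L >= v}, with and without the put payoff.
    a_put = (S0 + h * P0) * np.exp(r * T) - h * (K - f_X) * np.exp(r * (T - tau)) - v
    a_no = (S0 + h * P0) * np.exp(r * T) - v

    # g(X) - X; if the threshold is non-positive, that branch contributes 0.
    g_minus_X = np.where(
        a_put > 0,
        (np.log(np.maximum(a_put, 1e-300) / S0) - drift_T) / sigma - X,
        -np.inf,
    )
    # c_2 - X; same guard for the branch where the put expires worthless.
    if a_no > 0:
        c2_minus_X = (np.log(a_no / S0) - drift_T) / sigma - X
    else:
        c2_minus_X = np.full_like(X, -np.inf)

    contrib = np.where(X <= c1, F_Y(g_minus_X), F_Y(c2_minus_X))
    return contrib.mean()
```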

For a specified hedging strategy, Q(v) = Prob{L ≥ v} is a decreasing function of v. The VaR at level α can therefore be obtained from the equation

Q(v) = α —– (8)

The expectations in the Proposition can be calculated with Monte Carlo simulation, and the optimal hedging strategy, the one with the smallest VaR, can be obtained from equation (8) by numerical search methods, as in the sketch below.
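
Putting the pieces together, and reusing the prob_loss_exceeds sketch above, one possible illustration of the numerical search (not the text's own algorithm) solves Q(v) = α by bisection, which is valid because Q is decreasing in v, and then scans a grid of strikes and budget-feasible hedge ratios for the smallest VaR; the grid resolution and tolerances are arbitrary choices of mine:

```python
import numpy as np

def var_from_alpha(alpha, prob_fn, v_lo=0.0, v_hi=None, tol=1e-4):
    """Solve Q(v) = alpha for v by bisection, equation (8).
    Assumes Q(v_lo) >= alpha; otherwise the search collapses to v_lo."""
    if v_hi is None:                        # expand until Q(v_hi) < alpha
        v_hi = 1.0
        while prob_fn(v_hi) > alpha:
            v_hi *= 2.0
    while v_hi - v_lo > tol:
        mid = 0.5 * (v_lo + v_hi)
        if prob_fn(mid) > alpha:
            v_lo = mid
        else:
            v_hi = mid
    return 0.5 * (v_lo + v_hi)

def optimal_hedge(alpha, strikes, premiums, C, S0, r, mu, sigma, tau, T,
                  h_grid=np.linspace(0.0, 1.0, 21)):
    """Grid search over strikes K_i and hedge ratios h (respecting the
    budget h * P_0 <= C) for the strategy with the smallest VaR."""
    best = None
    for K, P0 in zip(strikes, premiums):
        for h in h_grid:
            if h * P0 > C:
                continue                    # violates the hedging budget
            q = lambda v: prob_loss_exceeds(v, S0, P0, K, h, r, mu, sigma, tau, T)
            v = var_from_alpha(alpha, q)
            if best is None or v < best[0]:
                best = (v, K, h)
    return best                             # (VaR, optimal strike, optimal hedge ratio)
```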