Bullish or Bearish. Note Quote.

The term spread refers to the difference in premiums between the purchase and sale of options. An option spread is the simultaneous purchase of one or more options contracts and the sale of an equivalent number of options contracts in a different series of the same class of options. A spread could involve the same underlying:

  •  Buying and selling calls, or 
  •  Buying and selling puts.

Combining puts and calls into groups of two or more makes it feasible to design derivatives with interesting payoff profiles. The profit and loss outcomes depend on the options used (puts or calls); the positions taken (long or short); whether their strike prices are identical or different; and whether their exercise dates are the same or different. Directional positions include bullish vertical call spreads, bullish vertical put spreads, bearish vertical call spreads, and bearish vertical put spreads.

If the long position has a higher premium than the short position, this is known as a debit spread, and the investor will be required to deposit the difference in premiums. If the long position has a lower premium than the short position, this is a credit spread, and the investor will be allowed to withdraw the difference in premiums. The spread is even if the premiums on the two sides are the same.

A potential loss in an option spread is determined by two factors: 

  • Strike price 
  • Expiration date 

If the strike price of the long call is greater than the strike price of the short call, or if the strike price of the long put is less than the strike price of the short put, a margin is required because adverse market moves can cause the short option to suffer a loss before the long option can show a profit.

A margin is also required if the long option expires before the short option. The reason is that once the long option expires, the trader holds an unhedged short position. A good way of looking at margin requirements is that they foretell potential loss. Here are, in a nutshell, the main option spreads.

A calendar, horizontal, or time spread is the simultaneous purchase and sale of options of the same class with the same exercise prices but with different expiration dates. A vertical, or price or money, spread is the simultaneous purchase and sale of options of the same class with the same expiration date but with different exercise prices. A bull, or call, spread is a type of vertical spread that involves the purchase of the call option with the lower exercise price while selling the call option with the higher exercise price. The result is a debit transaction because the lower exercise price will have the higher premium.

  • The maximum risk is the net debit: the long option premium minus the short option premium. 
  • The maximum profit potential is the difference in the strike prices minus the net debit. 
  • The breakeven is equal to the lower strike price plus the net debit. 
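
These three relationships can be checked with a short payoff sketch at expiration. The strikes and premiums below are hypothetical illustration values, not market data:

```python
# Bull call spread at expiration: long a 100-strike call, short a 110-strike call.
# Strikes and premiums are hypothetical; the lower strike carries the higher premium.
long_strike, short_strike = 100.0, 110.0
long_premium, short_premium = 6.0, 2.5

net_debit = long_premium - short_premium   # 3.5, deposited up front

def payoff(underlying_price):
    """P&L of the spread when both calls expire."""
    long_call = max(underlying_price - long_strike, 0.0)
    short_call = -max(underlying_price - short_strike, 0.0)
    return long_call + short_call - net_debit

max_risk = net_debit                                   # 3.5
max_profit = (short_strike - long_strike) - net_debit  # 6.5
breakeven = long_strike + net_debit                    # 103.5

assert payoff(long_strike) == -max_risk     # worst case: both calls expire worthless
assert payoff(short_strike) == max_profit   # best case: market at or above 110
assert payoff(breakeven) == 0.0
```

The asserts confirm the bullets: loss is capped at the net debit, gain at the strike difference minus the debit, and the position breaks even at the lower strike plus the debit.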

A trader will typically buy a vertical bull call spread when he is mildly bullish. Essentially, he gives up unlimited profit potential in return for reducing his risk. In a vertical bull call spread, the trader is expecting the spread premium to widen because the lower strike price call comes into the money first. 

Vertical spreads are the more common of the directional strategies, and they may be bullish or bearish to reflect the holder’s view of the market’s anticipated direction. Bullish vertical put spreads are a combination of a long put with a low strike and a short put with a higher strike. Because the short position is struck closer to the money, this generates a premium credit.

Bearish vertical call spreads are the inverse of bullish vertical call spreads. They are created by combining a short call with a low strike and a long call with a higher strike. Bearish vertical put spreads are the inverse of bullish vertical put spreads, generated by combining a short put with a low strike and a long put with a higher strike. This is a bearish position taken when a trader or investor expects the market to fall. 

The bull put spread, or sell put spread, is a type of vertical spread involving the purchase of a put option with the lower exercise price and the sale of a put option with the higher exercise price. Directionally, this is the same stance that a bull call spreader takes. The difference between the call spread and the put spread is that the net result will be a credit transaction, because the higher exercise price will have the higher premium.

  • The maximum risk is the difference in the strike prices minus the net credit. 
  • The maximum profit potential equals the net credit. 
  • The breakeven equals the higher strike price minus the net credit. 

The bear call spread, or sell call spread, involves selling the call option with the lower exercise price and buying the call option with the higher exercise price. The net result is a credit transaction, because the lower exercise price will have the higher premium.

A bear put spread (or buy put spread) involves selling the put option with the lower exercise price and buying the put option with the higher exercise price. This is the same action that a bear call spreader would take. The difference between the call spread and the put spread, however, is that the net result will be a debit transaction, because the higher exercise price will have the higher premium.

  • The maximum risk is equal to the net debit. 
  • The maximum profit potential is the difference in the strike prices minus the net debit.
  • The breakeven equals the higher strike price minus the net debit.
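
The bear put spread bullets can be verified the same way, again with hypothetical strikes and premiums:

```python
# Bear put spread at expiration: short a 90-strike put, long a 100-strike put.
# Premiums are hypothetical; the higher strike put carries the higher premium.
low_strike, high_strike = 90.0, 100.0
short_premium, long_premium = 2.0, 5.5

net_debit = long_premium - short_premium              # 3.5, the maximum risk

def payoff(underlying_price):
    """P&L of the spread when both puts expire."""
    long_put = max(high_strike - underlying_price, 0.0)
    short_put = -max(low_strike - underlying_price, 0.0)
    return long_put + short_put - net_debit

max_profit = (high_strike - low_strike) - net_debit   # 6.5
breakeven = high_strike - net_debit                   # 96.5

assert payoff(low_strike) == max_profit    # best case: market at or below 90
assert payoff(high_strike) == -net_debit   # worst case: both puts expire worthless
assert payoff(breakeven) == 0.0
```
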

An investor or trader would buy a vertical bear put spread because he or she is mildly bearish, giving up an unlimited profit potential in return for a reduction in risk. In a vertical bear put spread, the trader is expecting the spread premium to widen because the higher strike price put comes into the money first. 

In conclusion, investors and traders who are bullish on the market will either buy a bull call spread or sell a bull put spread. But those who are bearish on the market will either buy a bear put spread or sell a bear call spread. When the investor pays more for the long option than she receives in premium for the short option, then the spread is a debit transaction. In contrast, when she receives more than she pays, the spread is a credit transaction. Credit spreads typically require a margin deposit. 

The Statistical Physics of Stock Markets. Thought of the Day 143.0

The externalist view argues that we can make sense of, and profit from, stock markets’ behavior, or at least a few crucial properties of it, by crunching numbers and looking for patterns and regularities in certain sets of data. The notion of data is hence a key element in such an understanding, and the quantitative side of the problem is prominent, even if this does not mean that qualitative analysis is ignored. The point here is that the outside view maintains that it provides a better understanding than the internalist view. To this end, it endorses a functional perspective on finance and on stock markets in particular.

The basic idea of the externalist view is that there are general properties and behaviors of stock markets that can be detected and studied through a mathematical lens, and that they do not depend much on contextual or domain-specific factors. The point at stake here is that financial systems can be studied and approached at different scales, and it is virtually impossible to produce all the equations describing, at a micro level, all the objects of the system and their relations. In response, this view focuses on those properties that allow us to understand the behavior of the system at a global level, without having to produce a detailed conceptual and mathematical account of the inner ‘machinery’ of the system. Hence the two roads: the first is to embrace an emergentist view of stock markets, that is, a specific metaphysical, ontological, and methodological thesis; the second is to embrace a heuristic view, that is, the idea that the choice to focus on those properties that are tractable by mathematical models is a pure problem-solving option.

A typical view of the externalist approach is the one provided, for instance, by statistical physics. In describing collective behavior, this discipline neglects all the conceptual and mathematical intricacies deriving from a detailed account of the inner, individual, micro-level functioning of a system. Concepts such as stochastic dynamics, self-similarity, correlations (both short- and long-range), and scaling are the tools used to this end. Econophysics is a stock example in this sense: it employs methods taken from mathematics and mathematical physics in order to detect and forecast the driving forces of stock markets and their critical events, such as bubbles, crashes, and their tipping points. In this respect, markets are not ‘dark boxes’: you can see their characteristics from the outside, or rather you can see specific dynamics that shape the trends of stock markets deeply and for a long time. Moreover, these dynamics are complex in the technical sense: this class of behavior encompasses timescales, ontology, types of agents, ecologies, regulations, laws, and so on, and can be detected, even if it is not strictly predictable. We can focus on the stock markets as a whole, or on a few of their critical events, looking at the data on prices (or other indexes) and ignoring all other details and factors, since these will be absorbed into the global dynamics. This view thus presents stock markets not as an unintelligible casino where wild gamblers face each other, but reveals the reasons and the properties of a system that serves mostly as a means of fluid transactions, enabling and easing the functioning of free markets.

Moreover, the study of complex systems theory and that of stock markets seem to offer mutual benefits. On one side, complex systems theory seems to offer a key to understanding some of the most salient properties of stock markets. On the other side, stock markets seem to provide a ‘stress test’ for complexity theory. Didier Sornette expresses how the analogies between stock markets and phase transitions, statistical mechanics, nonlinear dynamics, and disordered systems mold the view from outside:

Take our personal life. We are not really interested in knowing in advance at what time we will go to a given store or drive to a highway. We are much more interested in forecasting the major bifurcations ahead of us, involving the few important things, like health, love, and work, that count for our happiness. Similarly, predicting the detailed evolution of complex systems has no real value, and the fact that we are taught that it is out of reach from a fundamental point of view does not exclude the more interesting possibility of predicting phases of evolutions of complex systems that really count, like the extreme events. It turns out that most complex systems in natural and social sciences do exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their posterior evolution. Such extreme events express more than anything else the underlying “forces” usually hidden by almost perfect balance and thus provide the potential for a better scientific understanding of complex systems.

Phase transitions, critical points, and extreme events seem to be so pervasive in stock markets that they are the crucial concepts to explain and, where possible, foresee. And complexity theory provides us with a fruitful reading key to understand their dynamics, namely their generation, growth, and occurrence. Such a reading key proposes a clear-cut interpretation of them, which can be explained again by means of an analogy with physics, precisely with the unstable position of an object. Complexity theory suggests that critical or extreme events occurring at a large scale are the outcome of interactions occurring at smaller scales. In the case of stock markets, this means that, unlike many approaches that attempt to account for crashes by searching for ‘mechanisms’ that work at very short time scales, complexity theory indicates that crashes have causes that date back months or years before the event. On this reading, it is the increasing inner interaction between the agents inside the markets that builds up the unstable dynamics (typically the financial bubbles) that eventually end in a critical event, the crash. But the specific, final step that triggers the critical event, the collapse of prices, is not the key to its understanding: a crash occurs because the market is in an unstable phase, and any small interference or event may trigger it. The bottom line: the trigger can be virtually any event external to the markets. The real cause of the crash is the market’s overall unstable position; the proximate ‘cause’ is secondary and accidental. In other words, a crash can be fundamentally endogenous in nature, while an exogenous, external shock is simply its occasional triggering factor. The instability is built up by cooperative behavior among traders, who imitate each other (in this sense it is an endogenous process) and contribute to forming and reinforcing trends that converge up to a critical point.

The main advantage of this approach is that the system (the market) would anticipate the crash by releasing precursory fingerprints observable in the stock market prices: the market prices contain information on impending crashes and this implies that:

if the traders were to learn how to decipher and use this information, they would act on it and on the knowledge that others act on it; nevertheless, the crashes would still probably happen. Our results suggest a weaker form of the “weak efficient market hypothesis”, according to which the market prices contain, in addition to the information generally available to all, subtle information formed by the global market that most or all individual traders have not yet learned to decipher and use. Instead of the usual interpretation of the efficient market hypothesis in which traders extract and consciously incorporate (by their action) all information contained in the market prices, we propose that the market as a whole can exhibit “emergent” behavior not shared by any of its constituents.

In a nutshell, the critical events emerge in a self-organized and cooperative fashion as the macro result of the internal and micro interactions of the traders, their imitation and mirroring.

Synthetic Structured Financial Instruments. Note Quote.

An option is a common form of derivative: a contract, or a provision of a contract, that gives one party (the option holder) the right, but not the obligation, to perform a specified transaction with another party (the option issuer or option writer) according to specified terms. Options can be embedded into many kinds of contracts. For example, a corporation might issue a bond with an option that will allow the company to buy the bonds back in ten years at a set price. Standalone options trade on exchanges or over the counter (OTC). They are linked to a variety of underlying assets. Most exchange-traded options have stocks as their underlying asset, but OTC-traded options have a huge variety of underlyings (bonds, currencies, commodities, swaps, or baskets of assets). There are two main types of options, calls and puts:

  • Call options give the holder the right (but not the obligation) to purchase an underlying asset at a specified price (the strike price) for a certain period of time. If the stock fails to reach the strike price before the expiration date, the option expires worthless. Investors buy calls when they think the share price of the underlying security will rise, and sell calls when they think it will fall. Selling an option is also referred to as “writing” an option.
  • Put options give the holder the right to sell an underlying asset at a specified price (the strike price). The seller (or writer) of the put option is obligated to buy the stock at the strike price. Put options can be exercised at any time before the option expires. Investors buy puts if they think the share price of the underlying stock will fall, and sell them if they think it will rise. Put buyers, those who hold a “long” put, are either speculative buyers looking for leverage or “insurance” buyers who want to protect their long positions in a stock for the period of time covered by the option. Put sellers hold a “short” position, expecting the market to move upward (or at least stay stable). The worst-case scenario for a put seller is a downward market turn. The maximum profit is limited to the put premium received and is achieved when the price of the underlyer is at or above the option’s strike price at expiration. The maximum loss is unlimited for an uncovered put writer.

The coupon is the annual interest paid on a bond, expressed as a percentage of the face value.

Coupon rate or nominal yield = annual payments ÷ face value of the bond

Current yield = annual payments ÷ market value of the bond
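
As a quick numerical sketch of the two formulas above (the figures are hypothetical):

```python
# Coupon rate (nominal yield) vs. current yield, per the formulas above.
# Hypothetical figures: a 1,000 face-value bond paying 60 a year,
# currently trading at 950.
face_value = 1000.0
annual_payment = 60.0
market_value = 950.0

coupon_rate = annual_payment / face_value      # 0.06, i.e. a 6% nominal yield
current_yield = annual_payment / market_value  # ~0.0632: yield rises as price falls
```

The coupon rate is fixed at issue, while the current yield moves inversely with the bond’s market price.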

These terms are briefed here, through their Investopedia definitions, because they happen to be pillars of synthetic financial instruments, to which we now take a detour.

According to the International Financial Reporting Standards (IFRS), a synthetic instrument is a financial product designed, acquired, and held to emulate the characteristics of another instrument. For example, such is the case of a floating-rate long-term debt combined with an interest rate swap. This involves

  • Receiving floating payments
  • Making fixed payments, thereby synthesizing a fixed-rate long-term debt
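
The combination can be sketched period by period; the notional and rates below are hypothetical, and the point is simply that the floating legs cancel:

```python
# Floating-rate debt plus a pay-fixed/receive-floating swap behaves like
# fixed-rate debt. Notional and rates are hypothetical.
notional = 1_000_000.0
fixed_rate = 0.05
floating_fixings = [0.042, 0.055, 0.061]   # observed floating rate per period

for libor in floating_fixings:
    debt_interest = -libor * notional    # pay floating on the underlying debt
    swap_receive = libor * notional      # receive floating on the swap
    swap_pay = -fixed_rate * notional    # pay fixed on the swap
    net = debt_interest + swap_receive + swap_pay
    # The floating legs cancel exactly, leaving the fixed cost each period.
    assert net == -fixed_rate * notional
```

Whatever the floating fixing does, the all-in cost each period is the swap’s fixed rate, which is what “synthesizing a fixed-rate long-term debt” means here.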

Another example of a synthetic is the output of an option strategy followed by dealers who sell synthetic futures for a commodity they hold, using a combination of put and call options. By simultaneously buying a put option in a given commodity, say, gold, and selling the corresponding call option, a trader can construct a position analogous to a short sale in the commodity’s futures market.

Because the synthetic short sale seeks to take advantage of price disparities between call and put options, it tends to be more profitable when call premiums are greater than comparable put premiums. For example, the holder of a synthetic short future will profit if gold prices decrease and incur losses if gold prices increase.
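
A minimal sketch of the synthetic short future described above, with hypothetical gold strikes and premiums:

```python
# Synthetic short gold future: buy a put and sell a call at the same strike.
# Strike and premiums are hypothetical.
strike = 1800.0
put_premium, call_premium = 40.0, 45.0   # net credit of 5 when calls are richer

def payoff(gold_price):
    """P&L at expiration of long put + short call + net premium received."""
    long_put = max(strike - gold_price, 0.0)
    short_call = -max(gold_price - strike, 0.0)
    return long_put + short_call + (call_premium - put_premium)

# The combination gains point-for-point as gold falls and loses as it rises,
# just like a short position entered at the strike.
assert payoff(1700.0) == (1800.0 - 1700.0) + 5.0
assert payoff(1900.0) == (1800.0 - 1900.0) + 5.0
```

The extra 5 in both cases is the premium disparity the text mentions: the position is more profitable when call premiums exceed comparable put premiums.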

By analogy, a long position in a given commodity’s call option combined with a short sale of the same commodity’s futures creates price protection that is similar to that gained through purchasing put options. A synthetic put seeks to capitalize on disparities between call and put premiums.

Basically, synthetic products are covered options and certificates characterized by identical or similar profit and loss structures when compared with traditional financial instruments, such as equities or bonds. Basket certificates in equities are based on a specific number of selected stocks.

A covered option involves the purchase of an underlying asset, such as an equity, bond, currency, or other commodity, and the writing of a call option on that same asset. The writer is paid a premium, which limits his or her loss in the event of a fall in the market value of the underlying. However, his or her potential return from any increase in the asset’s market value is capped by the option’s strike price.

The concept underpinning synthetic covered options is that of duplicating traditional covered options, which can be achieved by both purchase of the underlying asset and writing of the call option. The purchase price of such a product is that of the underlying, less the premium received for the sale of the call option.
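
The pricing and payoff of such a covered (buy-write) structure can be sketched as follows, with hypothetical figures:

```python
# Covered option (buy-write): long the underlying, short a call on it.
# The product's purchase price is the underlying less the premium received.
# All figures are hypothetical.
underlying_price = 50.0
strike = 55.0
call_premium = 2.0

product_price = underlying_price - call_premium   # 48.0

def payoff(price_at_expiry):
    """P&L at expiration: stock move plus the written call's outcome."""
    stock = price_at_expiry - underlying_price
    short_call = call_premium - max(price_at_expiry - strike, 0.0)
    return stock + short_call

# Upside is capped once the strike is passed; the premium only cushions,
# and does not hedge, a fall in the underlying.
assert payoff(60.0) == payoff(70.0) == (strike - underlying_price) + call_premium
assert payoff(45.0) == (45.0 - underlying_price) + call_premium   # loss reduced
```

This mirrors the point in the text that the premium limits, but does not eliminate, losses from a decline in the underlying.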

Moreover, synthetic covered options do not contain a hedge against losses in market value of the underlying. A hedge might be emulated by writing a call option or by calculating the return from the sale of a call option into the product price. The option premium, however, tends to limit possible losses in the market value of the underlying.

Alternatively, a synthetic financial instrument is created through a certificate that accords a right based either on a number of underlyings or on a value derived from several indicators. This provides diversification over a range of risk factors. The main types are

  • Index certificates
  • Region certificates
  • Basket certificates

By being based on an official index, index certificates reflect a given market’s behavior. Region certificates are derived from a number of indexes or companies from a given region, usually involving developing countries. Basket certificates are derived from a selection of companies active in a certain industry sector.

An investment in index, region, or basket certificates fundamentally involves the same level of potential loss as a direct investment in the corresponding assets themselves. Their relative advantage is diversification within a given specified range; but risk is not eliminated. Moreover, certificates also carry credit risk associated with the issuer.

Also available in the market are compound financial instruments, a frequently encountered form being that of a debt product with an embedded conversion option. An example of a compound financial instrument is a bond that is convertible into ordinary shares of the issuer. As an accounting standard, the IFRS requires the issuer of such a financial instrument to present separately on the balance sheet the

  • Equity component
  • Liability component

On initial recognition, the fair value of the liability component is the present value of the contractually determined stream of future cash flows, discounted at the rate of interest applied at that time by the market to substantially similar cash flows. These should be characterized by practically the same terms, albeit without a conversion option. The fair value of the option comprises its

  • Time value
  • Intrinsic value (if any)
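
The valuation of the liability component described above can be sketched numerically. The coupon, maturity, and discount rate below are hypothetical:

```python
# Splitting a convertible bond at initial recognition: the liability component
# is the present value of the contractual cash flows, discounted at the market
# rate for similar debt WITHOUT the conversion option. Figures are hypothetical.
face = 1000.0
coupon = 40.0          # 4% annual coupon on the convertible
years = 3
market_rate = 0.06     # rate on comparable non-convertible debt

liability = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
liability += face / (1 + market_rate) ** years

issue_price = 1000.0                          # proceeds of the convertible issue
equity_component = issue_price - liability    # residual assigned to equity

print(round(liability, 2), round(equity_component, 2))   # ~946.54 and ~53.46
```

Because the convertible’s coupon is below the market rate for straight debt, the liability component is below par, and the residual is presented as equity.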

The IFRS requires that on conversion of a convertible instrument at maturity, the reporting company derecognizes the liability component and recognizes it as equity. Embedded derivatives are an interesting issue inasmuch as some contracts that themselves are not financial instruments may have financial instruments embedded in them. This is the case of a contract to purchase a commodity at a fixed price for delivery at a future date.

Contracts of this type have embedded in them a derivative that is indexed to the price of the commodity, which is essentially a derivative feature within a contract that is not a financial derivative. International Accounting Standard 39 (IAS 39) of the IFRS requires that under certain conditions an embedded derivative is separated from its host contract and treated as a derivative instrument. For instance, the IFRS specifies that each of the individual derivative instruments that together constitute a synthetic financial product represents a contractual right or obligation with its own terms and conditions. Under this perspective,

  • Each is exposed to risks that may differ from the risks to which other financial products are exposed.
  • Each may be transferred or settled separately.

Therefore, when one financial product in a synthetic instrument is an asset and another is a liability, the two do not offset each other. Consequently, they should not be presented on an entity’s balance sheet on a net basis unless they meet specific offsetting criteria outlined by the aforementioned accounting standards.

Like synthetics, structured financial products are derivatives. Many are custom-designed bonds, some of which (over the years) have presented a number of problems to their buyers and holders. This is particularly true for those investors who are not so versatile in modern complex instruments and their further-out impact.

Typically, instead of receiving a fixed coupon or principal, a person or company holding a structured note will receive an amount adjusted according to a fairly sophisticated formula. Structured instruments lack transparency; the market, however, seems to like them, the proof being that the amount of money invested in structured notes continues to increase. One of many examples of structured products is the principal exchange-rate-linked security (PERLS). These derivative instruments target changes in currency rates. They are disguised to look like bonds, by structuring them as if they were debt instruments, making it feasible for investors who are not permitted to play in currencies to place bets on the direction of exchange rates.

For instance, instead of just repaying principal, a PERLS may multiply such principal by the change in the value of the dollar against the euro, or by twice the change in the value of the dollar against the Swiss franc or the British pound. Because repayment is linked to the foreign exchange rates of different currencies, the investor might receive a lot more than an interest rate on the principal alone, but also a lot less, all the way to capital attrition. (Even capital protection notes can involve attrition since, in certain cases, no interest is paid over their, say, five-year life cycle.)
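
A stylized sketch of such a repayment formula; the currencies, leverage, and rates below are hypothetical, and actual PERLS terms varied by issue:

```python
# Stylized PERLS-style repayment: principal scaled by twice the change in
# the dollar against the Swiss franc. All figures are hypothetical.
principal = 1000.0
fx_at_issue = 0.90       # USD/CHF at issue
fx_at_maturity = 0.99    # the dollar strengthened by 10 percent
leverage = 2.0

change = (fx_at_maturity - fx_at_issue) / fx_at_issue   # +0.10
repayment = principal * (1 + leverage * change)         # 1200: well above par

# Had the dollar weakened 10 percent instead, the same formula would repay
# only 800, i.e. capital attrition with no floor from the structure itself.
```

The leverage factor is what turns a bond-shaped note into a currency bet, which is exactly the disguise the text describes.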

Structured note trading is a concept that has been subject to several interpretations, depending on the time frame within which the product has been brought to the market. Many traders tend to distinguish between three different generations of structured notes. The oldest, or first, generation usually consists of structured instruments based on just one index, including

  • Bull market vehicles, such as inverse floaters and cap floaters
  • Bear market instruments, which are characteristically more leveraged, an example being the superfloaters

Bear market products became popular in 1993 and 1994. A typical superfloater might pay twice the London Interbank Offered Rate (LIBOR) minus 7 percent for two years. At currently prevailing rates, this means that the superfloater has a small coupon at the beginning that improves only if the LIBOR rises. Theoretically, a coupon that is below current market levels until the LIBOR goes higher is much harder to sell than a big coupon that gets bigger every time rates drop. Still, bear plays find customers.
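The superfloater formula quoted above can be sketched numerically. The `superfloater_coupon` helper is illustrative, and the floor at zero is an assumption (coupons are not normally negative); the loop shows the "small coupon at the beginning that improves only if the LIBOR rises" behaviour:

```python
def superfloater_coupon(libor, multiplier=2.0, spread=-0.07, floor=0.0):
    """Annual coupon of a hypothetical superfloater: multiplier * LIBOR + spread,
    floored (by assumption) at zero."""
    return max(multiplier * libor + spread, floor)

# "Twice LIBOR minus 7 percent": tiny coupon at low rates, growing as rates rise.
for libor in (0.03, 0.04, 0.05, 0.06):
    print(f"LIBOR {libor:.0%} -> coupon {superfloater_coupon(libor):.2%}")
```

At a 3% LIBOR the coupon is zero; only when rates climb does the note start to pay, which is exactly why such bear plays are hard to sell.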

Second-generation structured notes involve different types of exotic options; or, more precisely, they are yet more exotic than superfloaters, which are exotic enough in themselves. There are serious risks embedded in these instruments, and such risks have never been fully appreciated. Second-generation examples are

  • Range notes, with embedded binary or digital options
  • Quanto notes, which allow investors to take a bet on, say, sterling London Interbank Offered Rates, but get paid in dollars.

There are different versions of such instruments, like you-choose range notes for a bear market. Every quarter the investor has to choose the “range,” a job that requires considerable market knowledge and skill. For instance, if the range width is set to 100 basis points, the investor has to determine at the start of the period the high and low limits of that range, which is far from a straightforward task.
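The accrual mechanics of such a range note can be sketched as follows. The `range_note_accrual` helper is illustrative, and the convention assumed here (the enhanced coupon accrues only on fixing days when the reference rate stays inside the chosen range, and pays nothing otherwise) is one common digital-option structure, not the only one:

```python
def range_note_accrual(daily_fixings, low, high, coupon_rate):
    """Fraction of an enhanced coupon earned on a hypothetical range note:
    interest accrues only on days the reference rate stays inside [low, high]."""
    days_in = sum(1 for r in daily_fixings if low <= r <= high)
    return coupon_rate * days_in / len(daily_fixings)

# The investor picked a 100bp range of 4.00%-5.00%; the rate drifted out near the end.
fixings = [0.042, 0.045, 0.048, 0.051, 0.053]
print(f"{range_note_accrual(fixings, 0.04, 0.05, 0.08):.3%}")  # 4.800%
```

Three of the five fixings fall inside the range, so only 3/5 of the 8% coupon is earned; a mistaken choice of range silently erodes the return, which is why the quarterly "you-choose" decision demands so much skill.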

Surprisingly enough, there are investors who like this, because sometimes they are given an option to change their mind; they also figure that their risk period is really only one quarter. In this they are badly mistaken. In reality, even for banks, you-choose notes are much more difficult to hedge than regular range notes because, as very few people appreciate, the hedges are both

  • Dynamic
  • Imperfect

There are also third-generation notes offering investors exposure to commodity or equity prices in a cross-category sense. Such notes usually appeal to a different class than fixed-income investors. For instance, third-generation notes are sometimes purchased by fund managers who are in the fixed-income market but want to diversify their exposure. Although the increasing sophistication and lack of transparency of structured financial instruments mean that they are too often misunderstood, and although they are highly risky, a horde of equity-linked and commodity-linked notes are being structured and sold to investors. Examples are LIBOR floaters designed so that the coupon is “LIBOR plus.”

The pros say that flexibly structured options can be useful to sophisticated investors seeking to manage particular portfolio and trading risks. However, as a result of the exposure being assumed, and also because of the likelihood that there is no secondary market, transactions in flexibly structured options are not suitable for investors who are not

  • In a position to understand the behavior of their intrinsic value
  • Financially able to bear the risks embedded in them when worst comes to worst

It is the price of novelty, customization, and flexibility offered by synthetic and structured financial instruments that can be expressed in one four-letter word: risk. Risk taking is welcome when we know how to manage our exposure, but it can be a disaster when we don’t – hence the wisdom of learning, ahead of investing, the challenges posed by derivatives and how to be in charge of risk control.

Malignant Acceleration in Tech-Finance. Some Further Rumination on Regulations. Thought of the Day 72.1


Despite the positive effects HFT offers, such as reduced spreads, higher liquidity, and faster price discovery, it is mostly its negative side that has caught people’s attention. Several notorious market failures and accidents in recent years all seem to be related to HFT practices. They showed how much risk HFT can involve and how huge the damage can be.

HFT heavily depends on the reliability of the trading algorithms that generate, route, and execute orders. High-frequency traders thus must ensure that these algorithms have been tested completely and thoroughly before they are deployed into the live systems of the financial markets. Any improperly-tested, or prematurely-released algorithms may cause losses to both investors and the exchanges. Several examples demonstrate the extent of the ever-present vulnerabilities.

In August 2012, the Knight Capital Group implemented a new liquidity-testing software routine into its trading system, which was running live on the NYSE. The system started making bizarre trading decisions, quadrupling the price of one company, Wizzard Software, as well as bidding up the price of much larger entities, such as General Electric. Within 45 minutes, the company lost USD 440 million. After this event and the weakening of Knight Capital’s capital base, it agreed to merge with another algorithmic trading firm, Getco, which is the biggest HFT firm in the U.S. today. This example emphasizes the importance of precautions to ensure that algorithms are not mistakenly deployed.

Another example is Everbright Securities in China. In 2013, the state-owned brokerage firm Everbright Securities Co. sent more than 26,000 mistaken buy orders, worth RMB 23.4 billion (USD 3.82 billion), to the Shanghai Stock Exchange (SSE), pushing its benchmark index up 6% in two minutes. This resulted in a trading loss of approximately RMB 194 million (USD 31.7 million). In a follow-up evaluative study, the China Securities Regulatory Commission (CSRC) found significant flaws in Everbright’s information and risk management systems.

The damage caused by HFT errors is not limited to the trading firms themselves; it may also involve the stock exchanges and the stability of the related financial market. On Friday, May 18, 2012, the stock of the social network giant Facebook was issued on the NASDAQ exchange, the most anticipated initial public offering (IPO) in its history. However, technology problems with the opening made a mess of the IPO. The offering attracted HFT traders and very large order flows were expected; before the IPO, NASDAQ was confident in its ability to deal with the high volume of orders.

But when the deluge of orders to buy, sell, and cancel trades came, NASDAQ’s trading software began to fail under the strain. This resulted in a 30-minute delay on NASDAQ’s side and a 17-second blackout for all stock trading at the exchange, causing further panic. Scrutiny of the problems immediately led to fines for the exchange and accusations that HFT traders bore some responsibility too. Problems persisted after the opening, with many customer orders from institutional and retail buyers unfilled for hours or never filled at all, while others ended up buying more shares than they had intended. This incredible gaffe, estimated to have cost traders USD 100 million, eclipsed NASDAQ’s achievement in winning Facebook’s IPO, the third largest in U.S. history.

Another instance occurred on May 6, 2010, when U.S. financial markets were surprised by what has been referred to ever since as the “Flash Crash.” Within less than 30 minutes, the main U.S. stock markets experienced their largest single-day price declines, with a drop of more than 5% for many U.S.-based equity products. In addition, the Dow Jones Industrial Average (DJIA), at its lowest point that day, fell by nearly 1,000 points, although this was followed by a rapid rebound. This brief period of extreme intraday volatility demonstrated the weakness of the structure and stability of U.S. financial markets, as well as the opportunities it offers to volatility-focused HFT traders. Although a subsequent investigation by the SEC cleared high-frequency traders of directly having caused the Flash Crash, they were still blamed for exaggerating market volatility and withdrawing liquidity for many U.S.-based equities (FLASH BOYS).

Since the mid-2000s, the average trade size in the U.S. stock market had plummeted, the markets had fragmented, and the gap in time between the public view of the markets and the view of high-frequency traders had widened. The rise of high-frequency trading had also been accompanied by a rise in stock market volatility – over and above the turmoil caused by the 2008 financial crisis. The price volatility within each trading day in the U.S. stock market between 2010 and 2013 was nearly 40 percent higher than the volatility between 2004 and 2006, for instance. There were days in 2011 in which volatility was higher than in the most volatile days of the dot-com bubble.

Although these different incidents have different causes, the effects were similar and some common conclusions can be drawn. The presence of algorithmic trading and HFT in the financial markets exacerbates the adverse impacts of trading-related mistakes. It may lead to extreme market volatility and surprises about suddenly diminished liquidity. This raises concerns among regulators about the stability and health of the financial markets. With the continuous and fast development of HFT, larger and larger shares of equity trades in the U.S. financial markets were executed this way. There was also mounting evidence that HFT-related errors disturbed market stability and caused significant financial losses. This led the regulators to increase their attention and effort to provide the exchanges and traders with guidance on HFT practices. They also expressed concerns about high-frequency traders extracting profit at the cost of traditional investors and even manipulating the market. For instance, high-frequency traders can generate a large number of orders within microseconds to exacerbate a trend. Other types of misconduct include ping orders, which use some orders to detect other hidden orders, and quote stuffing, which issues a large number of orders to create uncertainty in the market.

HFT creates room for these kinds of market abuses, and its blazing speed and huge trade volumes make their detection difficult for regulators. Regulators have taken steps to increase their regulatory authority over HFT activities. Some of the problems that arose in the mid-2000s led to regulatory hearings in the United States Senate on dark pools, flash orders, and HFT practices. Another example occurred after the Facebook IPO problem, which led the SEC to call for a limit up-limit down mechanism at the exchanges to prevent trades in individual securities from occurring outside of a specified price range, so that market volatility would be under better control. These regulatory actions put stricter requirements on HFT practices, aiming to minimize the market disturbance when many fast trading orders occur within a day.
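The limit up-limit down idea can be sketched as a simple price-band check. This is a deliberately simplified model: the actual SEC mechanism derives its reference price from a rolling average of recent trades and varies band widths by security tier, details omitted here:

```python
def within_luld_band(trade_price, reference_price, band_pct):
    """Simplified limit up-limit down check: a trade is allowed only if its
    price lies within +/- band_pct of a recent reference price."""
    lower = reference_price * (1 - band_pct)
    upper = reference_price * (1 + band_pct)
    return lower <= trade_price <= upper

print(within_luld_band(101.0, 100.0, 0.05))  # True: inside the 5% band
print(within_luld_band(93.0, 100.0, 0.05))   # False: trade would breach the band
```

An order that would execute outside the band is held or rejected rather than printed, which is how the mechanism blunts runaway feedback loops of the Flash Crash type.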


Fundamental Theorem of Asset Pricing: Tautological Meeting of Mathematical Martingale and Financial Arbitrage by the Measure of Probability.


The Fundamental Theorem of Asset Pricing (FTAP hereafter) has two broad tenets, viz.

1. A market admits no arbitrage, if and only if, the market has a martingale measure.

2. Every contingent claim can be hedged, if and only if, the martingale measure is unique.

The FTAP is a theorem of mathematics, and the use of the term ‘measure’ in its statement places the FTAP within the theory of probability formulated by Andrei Kolmogorov (Foundations of the Theory of Probability) in 1933. Kolmogorov’s work took place in a context captured by Bertrand Russell, who observed that

It is important to realise the fundamental position of probability in science. . . . As to what is meant by probability, opinions differ.

In the 1920s the idea of randomness, as distinct from a lack of information, was becoming substantive in the physical sciences because of the emergence of the Copenhagen Interpretation of quantum mechanics. In the social sciences, Frank Knight argued that uncertainty was the only source of profit and the concept was pervading John Maynard Keynes’ economics (Robert Skidelsky Keynes the return of the master).

Two mathematical theories of probability had become ascendant by the late 1920s. Richard von Mises (brother of the Austrian economist Ludwig) attempted to lay down the axioms of classical probability within a framework of Empiricism, the ‘frequentist’ or ‘objective’ approach. To counterbalance von Mises, the Italian actuary Bruno de Finetti presented a more Pragmatic approach, characterised by his claim that “Probability does not exist” because it was only an expression of the observer’s view of the world. This ‘subjectivist’ approach was closely related to the less well-known position taken by the Pragmatist Frank Ramsey, who developed an argument against Keynes’ Realist interpretation of probability presented in the Treatise on Probability.

Kolmogorov addressed the trichotomy of mathematical probability by generalising, so that Realist, Empiricist and Pragmatist probabilities were all examples of ‘measures’ satisfying certain axioms. In doing this, a random variable became a function while an expectation became an integral: probability became a branch of Analysis, not Statistics. Von Mises criticised Kolmogorov’s generalised framework as unnecessarily complex. About a decade and a half back, the physicist Edwin Jaynes (Probability Theory: The Logic of Science) championed Leonard Savage’s subjectivist Bayesianism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science”.

The objections of empirical scientists to measure-theoretic probability can be accounted for by its lack of physicality. Frequentist probability is based on the act of counting; subjectivist probability is based on a flow of information, which, following Claude Shannon, is now an observable entity in Empirical science. Measure-theoretic probability is based on abstract mathematical objects unrelated to sensible phenomena. However, the generality of Kolmogorov’s approach made it flexible enough to handle problems that emerged in physics and engineering during the Second World War, and his approach became widely accepted after 1950 because it was practically more useful.

In the context of the first statement of the FTAP, a ‘martingale measure’ is a probability measure, usually labelled Q, such that the (real, rather than nominal) price of an asset today, X0, is the expectation, using the martingale measure, of its (real) price in the future, XT. Formally,

X0 = EQ[XT]

The abstract probability distribution Q is defined so that this equality holds, not on any empirical information of historical prices or subjective judgement of future prices. The only condition placed on the relationship that the martingale measure has with the ‘natural’, or ‘physical’, probability measure, usually assigned the label P, is that they agree on what is possible.

The term ‘martingale’ in this context derives from doubling strategies in gambling, and it was introduced into mathematics by Jean Ville in a development of von Mises’ work. The idea that asset prices have the martingale property was first proposed by Benoit Mandelbrot in response to an early formulation of Eugene Fama’s Efficient Market Hypothesis (EMH), the two concepts being combined by Fama. For Mandelbrot and Fama the key consequence of prices being martingales was that the current price was independent of the future price and technical analysis would not prove profitable in the long run. In developing the EMH there was no discussion of the nature of the probability under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure. While the FTAP employs modern terminology in the context of value-neutrality, the idea of equating a current price with a future, uncertain, price has ethical ramifications.

The other technical term in the first statement of the FTAP, arbitrage, has long been used in financial mathematics. Fibonacci’s Liber Abaci (Laurence Sigler, Fibonacci’s Liber Abaci) discusses ‘Barter of Merchandise and Similar Things’: 20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. In this case there are three commodities – arms of cloth, rolls of cotton and Pisan pounds – and Fibonacci solves the problem by having Pisan pounds ‘arbitrate’, or ‘mediate’ as Aristotle might say, between the other two commodities.
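Fibonacci’s barter problem can be worked through in a few lines, with Pisan pounds mediating between the two commodities:

```python
# Fibonacci's barter problem, with Pisan pounds 'arbitrating' between goods.
cloth_arms, cloth_price = 20, 3        # 20 arms of cloth are worth 3 Pisan pounds
cotton_rolls, cotton_price = 42, 5     # 42 rolls of cotton are worth 5 Pisan pounds

pounds_per_arm = cloth_price / cloth_arms      # value of one arm of cloth
rolls_per_pound = cotton_rolls / cotton_price  # cotton bought by one pound

rolls_for_50_arms = 50 * pounds_per_arm * rolls_per_pound
print(rolls_for_50_arms)  # 63.0 rolls of cotton
```

50 arms of cloth are worth 7.5 Pisan pounds, and 7.5 pounds buy 63 rolls of cotton; any quoted exchange away from 63 rolls would let a merchant profit by routing through the mediating currency.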

Within neo-classical economics, the Law of One Price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gérard Debreu and Lionel MacKenzie in the context of general equilibrium, in particular through the introduction of the Arrow Security, which, employing the Law of One Price, could be used to price any asset. It was on this principle that Black and Scholes believed the value of their warrants could be deduced by employing a hedging portfolio; in introducing their work with the statement that “it should not be possible to make sure profits”, they were invoking the arbitrage argument, which had an eight-hundred-year history. In the context of the FTAP, ‘an arbitrage’ has developed into the ability to formulate a trading strategy such that the probability, under a natural or martingale measure, of a loss is zero, but the probability of a positive profit is not.

To understand the connection between the financial concept of arbitrage and the mathematical idea of a martingale measure, consider the most basic case of a single asset whose current price, X0, can take on one of two values, XTD < XTU, at time T > 0, in the future. In this case an arbitrage would exist if X0 ≤ XTD < XTU: buying the asset now, at a price that is less than or equal to both future pay-offs, would lead to a possible profit at the end of the period, with the guarantee of no loss. Similarly, if XTD < XTU ≤ X0, short selling the asset now, and buying it back in the future, would also lead to an arbitrage. So, for there to be no arbitrage opportunities we require that

XTD < X0 < XTU

This implies that there is a number, 0 < q < 1, such that

X0 = XTD + q(XTU − XTD)

= qXTU + (1−q)XTD

The price now, X0, lies between the future prices, XTU and XTD, in the ratio q : (1 − q) and represents some sort of ‘average’. The first statement of the FTAP can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (real) future price”.

If X0 < XTD ≤ XTU we have that q < 0, whereas if XTD ≤ XTU < X0 then q > 1; in both cases q does not represent a probability measure, which, by Kolmogorov’s axioms, must lie between 0 and 1. In either of these cases an arbitrage exists and a trader can make a riskless profit: the market involves ‘turpe lucrum’. This account gives an insight as to why James Bernoulli, in his moral approach to probability, considered situations where probabilities did not sum to 1: he was considering problems that were pathological not because they failed the rules of arithmetic but because they were unfair. It follows that if there are no arbitrage opportunities then the quantity q can be seen as representing the ‘probability’ that the XTU price will materialise in the future. Formally

X0 = qXTU + (1−q)XTD ≡ EQ[XT]
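The two-state argument can be checked numerically. The helper below solves for q from the prices and verifies that the expectation under that measure recovers X0; the prices are illustrative:

```python
def martingale_probability(x0, x_down, x_up):
    """Solve X0 = q*XU + (1-q)*XD for q; no arbitrage iff 0 < q < 1."""
    return (x0 - x_down) / (x_up - x_down)

x_down, x0, x_up = 90.0, 100.0, 120.0
q = martingale_probability(x0, x_down, x_up)
expectation = q * x_up + (1 - q) * x_down
print(round(q, 4), round(expectation, 6))  # q = 1/3; expectation recovers X0 = 100

# If the current price sits below both future pay-offs, q leaves [0, 1]:
print(martingale_probability(85.0, 90.0, 120.0))  # negative q: an arbitrage exists
```

The sign of q is exactly the arbitrage diagnostic of the text: q outside the unit interval means the price does not lie between the future minimum and maximum, so riskless profit is available.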

The connection between the financial concept of arbitrage and the mathematical object of a martingale is essentially a tautology: both statements mean that the price today of an asset must lie between its future minimum and maximum possible value. This first statement of the FTAP was anticipated by Frank Ramsey when he defined ‘probability’ in the Pragmatic sense of ‘a degree of belief’ and argued that ‘degrees of belief’ can be measured through betting odds. On this basis he formulated some axioms of probability, including that a probability must lie between 0 and 1. He then went on to say that

These are the laws of probability, …If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event.

This is a Pragmatic argument that identifies the absence of the martingale measure with the existence of arbitrage, and today this forms the basis of the standard argument as to why arbitrages do not exist: if they did, other market participants would bankrupt the agent who was mis-pricing the asset. This has become known in philosophy as the ‘Dutch Book’ argument and, as a consequence of the fact/value dichotomy, is often presented as a ‘matter of fact’. However, setting the fact/value dichotomy aside, the Dutch Book argument is a variant of the ‘Golden Rule’ – “Do to others as you would have them do to you.” – it is infused with the moral concepts of fairness and reciprocity (Jeffrey Wattles, The Golden Rule).

Embedded in the FTAP, then, is the ethical concept of Justice, capturing the social norms of reciprocity and fairness. This is significant in the context of Granovetter’s discussion of embeddedness in economics. It is conventional to assume that mainstream economic theory is ‘undersocialised’: agents are rational calculators seeking to maximise an objective function. The argument presented here is that a central theorem in contemporary economics, the FTAP, is deeply embedded in social norms, despite being presented as an undersocialised mathematical object. This embeddedness is a consequence of the origins of mathematical probability lying in the ethical analysis of commercial contracts: the feudal shackles are still binding this most modern of economic theories.

Ramsey goes on to make an important point

Having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you.

Ramsey is arguing that an agent needs to employ the same measure in pricing all assets in a market, and this is the key result in contemporary derivative pricing. Having identified the martingale measure on the basis of a ‘primal’ asset, it is then applied across the market, in particular to derivatives on the primal asset, on the basis of the well-known result that if two assets offer different ‘market prices of risk’, an arbitrage exists. This explains why the market price of risk appears in the Radon-Nikodym derivative and the Capital Market Line: it enforces Ramsey’s consistency in pricing.

The second statement of the FTAP is concerned with incomplete markets, which appear in relation to Arrow-Debreu prices. In mathematics, in the special case that there are as many, or more, assets in a market as there are possible future, uncertain, states, a unique pricing vector can be deduced for the market by Cramer’s Rule. If the elements of the pricing vector satisfy the axioms of probability – specifically, each element is positive and they all sum to one – then the market precludes arbitrage opportunities. This is the case covered by the first statement of the FTAP. In the more realistic situation that there are more possible future states than assets, the market can still be arbitrage free, but the pricing vector, the martingale measure, might not be unique. An agent can still be consistent in selecting which particular martingale measure to use, but another agent might choose a different measure, such that the two do not agree on a price. In the context of the Law of One Price, this means that we cannot hedge, replicate or cover a position in the market such that the portfolio is riskless. The significance of the second statement of the FTAP is that it tells us that in the sensible world of imperfect knowledge and transaction costs, a model within the framework of the FTAP cannot give a precise price. When faced with incompleteness in markets, agents need alternative ways to price assets, and behavioural techniques have come to dominate financial theory. This feature was already recognised in The Port Royal Logic, which noted the role of transaction costs in lotteries.

Conjuncted: Banking – The Collu(i)sion of Housing and Stock Markets


There are two main aspects we are to look at here as regards banking. The first aspect is the link between banking and houses. In most countries, lending of money is done on the basis of property, especially houses; houses are typically used as collateral for the mortgage. If the value of the house increases, more money can be borrowed from the banks and more money can be injected into society. More investment is generally good for a country. It is therefore of prime importance for a country to keep house prices high.

The way this is done is by facilitating the borrowing of money, for instance by fiscal stimulation. Most countries have a tax break on mortgages, even though the effect of these tax breaks for the house buyers is absolutely zero. That is because the price of a house is determined on the market by supply and demand. If neither the supply nor the demand is changing, the price will be fixed by ‘what people can afford’. Imagine there are 100 houses for sale and 100 buyers. Imagine the market price winds up being 100 thousand Rupees, with a mortgage payment (at a 3% interest rate) of 3 thousand Rupees per year, exactly what people can afford. Now imagine that the government creates a tax break for buyers stipulating that they get 50% of the mortgage payment back from the state by way of a fiscal refund. Suddenly, the buyers can afford 6 thousand Rupees per year, and the market price of the house will rise to 200 thousand Rupees. The net effect for the buyer is zero. Yet the price of the house has doubled, and this is a very good incentive for the economy. This is the reason why nearly all governments have tax breaks for home owners.
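The worked example can be reproduced with a toy affordability model, in which buyers bid up to the price whose annual interest cost equals what they can afford; the `market_price` helper and its parameters are illustrative simplifications (interest-only payments, no amortisation):

```python
def market_price(annual_budget, interest_rate, tax_refund=0.0):
    """Price buyers bid when they can afford `annual_budget` per year in
    mortgage interest: a refund of part of the payment simply scales the
    affordable payment -- and hence the market price -- up."""
    affordable_payment = annual_budget / (1.0 - tax_refund)
    return affordable_payment / interest_rate

print(round(market_price(3000, 0.03)))                  # 100000: no tax break
print(round(market_price(3000, 0.03, tax_refund=0.5)))  # 200000: 50% refund
```

With the same 3-thousand-Rupee budget, the 50% refund exactly doubles the clearing price, leaving the buyer's net annual outlay unchanged, which is the point of the example.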

Yet another way of driving the price of houses up is by reducing the supply. Socialist countries made it a strong point on their agenda that having a home is a human right, and they try to build houses for everybody. And this causes the destruction of the economy. If the supply of houses is so high that their value drops too much, the possibility of investment based on borrowing money with the house as collateral is severely reduced, and a collapse of the economy is unavoidable. Technically speaking, it is of extreme simplicity to build a house for everybody – even a villa or a palace. Yet implementing this idea would imply a recession in the economy, since modern economies are based on house prices. It is better to cut off the supply (destroy houses) to help the economy.

The second aspect of banking is the stockholders. It is often said that the stock market is the axis-of-evil of a capitalist society. Indeed, the stock owners get the profit of the capital, and money eventually piles up with the stock owners. However, it is not the stock owners who are the evil people that care only about money. It is principally the managers who are the culprits. Mostly bank managers.

To give an example: imagine I own 2% of each of three banks – State Bank, Best Bank and Credit Bank. Now imagine that the other 98% of the stock of each bank is held by the other two banks: State Bank is a 49% owner of Best Bank and a 49% owner of Credit Bank, and in turn State Bank is owned 49% by Best Bank and 49% by Credit Bank. The thing is that I am the full 100% owner of all three banks. For instance, I own directly 2% of State Bank. But I also own 2% of two banks that each own 49% of State Bank. And I own 2% of banks that own 49% of banks that own 49% of State Bank, and so on. This series adds up to 100%: I am the full 100% owner of State Bank, and the same applies to Best Bank and Credit Bank. This is easy to see, since there exist no other stock owners of the three banks. These banks are fully mine. However, if I go to a stockholders meeting, I will be outvoted on all subjects – especially on the subject of financial reward for the manager. If today the 10-million-Rupee salary of Arundhati Bhatti of State Bank is discussed, it will get 98% of the votes, namely those of Gautum Ambani representing Best Bank and Mukesh Adani representing Credit Bank. They vote in favor, because next week come the stockholders meetings of their own banks. This game only ends when Mukesh Adani gets angry with Arundhati Bhatti.
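The geometric series in this example can be summed by simply iterating the ownership chains (figures taken from the example above):

```python
# Cross-holdings: cross[i][j] = fraction of bank j's stock held by bank i.
banks = ["State Bank", "Best Bank", "Credit Bank"]
direct = [0.02, 0.02, 0.02]          # my direct stakes
cross = [[0.00, 0.49, 0.49],
         [0.49, 0.00, 0.49],
         [0.49, 0.49, 0.00]]

# Effective stake in bank j: my direct holding plus what I own of it
# through every bank I (effectively) own. Iterate to convergence.
effective = direct[:]
for _ in range(2000):
    effective = [direct[j] + sum(effective[i] * cross[i][j] for i in range(3))
                 for j in range(3)]

for name, stake in zip(banks, effective):
    print(f"{name}: {stake:.4f}")   # each converges to 1.0000 -- full ownership
```

Each pass adds one more link to the ownership chains (2% directly, plus 2% of 49%, plus 2% of 49% of 49%, and so on); the fixed point confirms the 100% claim in the text.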

This structure of placing stock with each other’s companies is a form of bypassing the stockholders – the owners – and allows for the plundering of a company.

There is a side effect which is as beneficial as the one above. Often, the general manager’s salary is based on a bonus system: the better a bank performs, the higher the salary of the manager. This high performance can easily be bogus. Imagine the above three banks. The profit is distributed to the shareholders in the form of dividend. Imagine now that each bank makes 2 million profit on normal business operations. Each bank can easily pay out 100 million in dividend without loss! For example, State Bank distributes 100 million: 2 million to me, 49 million to Best Bank and 49 million to Credit Bank. From those two banks it also receives 49 million Rupees each. Thus, the total net outflow of money is only 2 million Rupees.
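The circular dividend flow can be tallied explicitly (figures from the example, in millions of Rupees):

```python
# Each bank declares 100 million in dividends; cross-holdings route most of it back.
dividend_declared = 100.0
real_profit = 2.0

for bank in ("State Bank", "Best Bank", "Credit Bank"):
    paid_out = dividend_declared
    received = 0.49 * dividend_declared * 2   # 49% of each of the other two dividends
    my_share = 0.02 * dividend_declared       # my 2% stake
    net_outflow = paid_out - received
    print(f"{bank}: pays {paid_out:.0f}, receives {received:.0f}, "
          f"net outflow {net_outflow:.0f} (covered by {real_profit:.0f} real profit)")
```

Each bank reports a 100-million dividend while only 2 million of new money leaves it, so headline payouts of 300 million rest on just 6 million of genuine profit.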

Shareholders often use as a rule of thumb a target share price of 20 times the dividend, because that implies a 5% return on investment, slightly better than putting the money in a bank (which anyway invests it in that company, gets 5%, and gives you 3%). However, the dividend can be highly misleading: 2 million profit is made, yet 100 million dividend is paid. Each bank uses this trick. The general managers can present beautiful data and get a fat bonus.

The only thing stopping this game is taxation. What if the government decides to put a 25% tax on dividends? Suddenly each bank has to pay 25 million in tax where it made only 2 million real profit. The three banks claimed to have made 300 million profit in total, while they factually only made 6 million; the rest came from passing money around to each other. They would have to pay 75 million in dividend tax. How would they manage?! That is why governments normally give banks a tax break on dividends (except for small stockholders like me). Governments like to see high profits, since high profits also inflate GDP and thus guarantee low interest rates on their state loans.

Actually, even without taxing, how will they manage to continue presenting nice data in a year where no profit is made on banking activity?

Malthusian Catastrophe.


As long as wealth is growing exponentially, it does not matter that some of the surplus labor is skimmed. If the production of the laborers is growing by x% and their wealth grows by y% – even if y% < x%, and the wealth of the capital grows faster, by z%, with z% > x% – everybody is happy. The workers minimally increased their wealth, even if their productivity has increased tremendously; nearly all increased labor production has been confiscated by the capital, the exorbitant bonuses of bank managers being an example. (Managers, by the way, by definition do not ‘produce’ anything, but only help skim the production of others; it is ‘work’, but not ‘production’. As long as the skimming [money in] is larger than the cost of their work [money out], they will be hired by the capital – for instance, if they can move the workers into producing more for equal pay. If not, out they go.)

If the economy grows at a steady pace of x%, resulting in exponential growth (1 + x/100)^n, effectively today’s life can be paid with (promises of) tomorrow’s earnings – ‘borrowing from the future’. (In a shrinking economy the opposite occurs: tomorrow’s life is paid with today’s earnings, leaving nothing to live on today.)

Let’s put that in an equation. The economy of today, Ei, is defined in terms of the growth of the economy itself, the difference between tomorrow’s economy and today’s economy, Ei+1 − Ei,

Ei = α(Ei+1 − Ei) —– (1)

with α related to the growth rate, GR ≡ (Ei+1 − Ei)/Ei = 1/α. As a time-differential equation:

E(t) = αdE(t)/dt —– (2)

which has as solution

E(t) = E0 e^(t/α) —– (3)

exponential growth.
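The discrete recursion (1) and the continuous solution (3) can be checked numerically. A minimal sketch, with an illustrative growth rate of 5% (α = 20) that is an assumption, not a figure from the text:

```python
import math

E0 = 100.0     # today's economy (arbitrary units)
alpha = 20.0   # borrowing factor; growth rate GR = 1/alpha = 5% per period

# Discrete recursion E_i = alpha * (E_{i+1} - E_i)  =>  E_{i+1} = E_i * (1 + 1/alpha)
E = [E0]
for _ in range(10):
    E.append(E[-1] * (1 + 1 / alpha))

# Continuous solution of E(t) = alpha * dE/dt:  E(t) = E0 * exp(t / alpha)
E_cont = [E0 * math.exp(t / alpha) for t in range(11)]

# The discrete path grows like (1 + 1/alpha)^n, the continuous one like
# e^(n/alpha); for small growth rates the two nearly coincide.
print(E[10], E_cont[10])
```

Both paths are exponential; the compounding form and the e^(t/α) form differ only in the discretization.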

The problem is that eternal growth of x% is not possible. Our entire society depends on continuous growth; it is the fiber of our system. When it stops, everything collapses: if the derivative dE(t)/dt becomes negative, then by equation (2) the economy itself becomes negative, and we start destroying things (E < 0) instead of producing them. If the growth gets relatively smaller, E itself gets smaller, assuming a steady borrowing-from-tomorrow factor α (equation (2) above). But that is a contradiction: if E gets smaller, the derivative must be negative, and then E itself must be negative. The only consistent conclusion is that if E shrinks, E immediately becomes negative! This is what is called a Malthusian Catastrophe.

Now we seem to be saturating in our production; we no longer have x% growth, but something closer to 0. The capital, however, has inertia (viz. the continuing culture in the financial world of huge bonuses, often justified as “well, that is the market, what can we do?!”). The capital continues to skim the surplus labor at the same z%. The laborers, therefore, now suffer a decrease of wealth close to z%. (Note that the capital cannot have a decline, a negative z%, because it would simply refuse to do anything that does not make a profit.)

Many things that we took for granted before – free health care for all, early pension, free education, cheap or free transport (no road tolls, etc.) – are more and more under discussion, with the argument that they are “becoming unaffordable”. This label is utter nonsense when you think of it, since

1) Before, apparently, they were affordable.

2) We have increased productivity of our workers.

1 + 2 = 3) Things are becoming more and more affordable. Unless, that is, they are becoming unaffordable for some (the workers) and not for others (the capitalists).

It might well be that soon we discover that living itself is unaffordable. The new money M’ in Marx’s equation is used as the starting point of a new cycle M → M’. This eternal cycle causes a condensation of wealth toward the capital, away from the labor power. M keeps growing and growing. Anything that does not accumulate capital, M’ – M < 0, goes bankrupt. Anything that does not grow fast enough, M’ – M ≈ 0, is bought by something that does and reconfigured to make M’ – M large again. Note that these reconfigurations – optimizations of skimming (the laborers never profit from the reconfigurations; they are rather sacked as a result of them) – are presented by the media as something good, where words such as ‘increased synergy’ are used to defend mergers, etc. That says something about the sponsors of the messages coming to us. Next time you read the word ‘synergy’ in these communications, just replace it with ‘fleecing’.

The capital actually ‘refuses’ to do something if it does not make a profit. If M’ is not bigger than M in a step, the step is simply not taken, which also implies no Labour Power used and no payment for Labour Power. Ignoring philanthropists for the moment, in the capitalist Utopia capital cannot but grow. If the economy is not growing, it is therefore always at the cost of labor! Humans, namely, do not have this option of not doing things, because it is “better to get 99 paise while living costs 1 rupee – i.e., a ‘loss’ – than to get no paisa at all while living still costs one rupee” (excuse the folly of quixotic living!). Death by slow starvation is chosen over rapid death.

In an exponentially growing system, everything is OK: capital grows, and the reward on labor grows as well. When the economy stagnates, only the labor power (humans) pays the price. A point of revolution is reached when the skimming of the Labour Power is so great that this Labour Power (humans) cannot keep itself alive. Famous is the situation of Marie-Antoinette (representing the capital), wife of King Louis XVI of France, who responded to the outcry of the public (Labour Power) demanding bread (sic!) with “They have no bread? Let them eat cake!” A revolution of the labor power is unavoidable in a capitalist system once it reaches saturation, because the unavoidable increment of the capital is paid for by a reduction of the wealth of the labor power. That is a mathematical certainty.
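The saturation argument can be illustrated with a toy simulation. The numbers below are purely illustrative assumptions, not figures from the text: total wealth grows at x ≈ 0 while capital insists on z% growth, so labor’s residual share must shrink each period:

```python
# A minimal sketch of the saturation argument: total wealth grows at x%
# while capital keeps growing at z% > x%. Labor receives the remainder,
# so labor's wealth must shrink. All numbers are illustrative.
x = 0.00   # stagnating economy: total growth ~0%
z = 0.05   # capital keeps skimming at 5%

total, capital = 100.0, 60.0
labor = total - capital
labor_path = [labor]
for _ in range(5):
    total *= (1 + x)
    capital *= (1 + z)
    labor = total - capital   # labor gets whatever is left
    labor_path.append(labor)

# With x = 0 and z > 0, labor's share falls every period.
print(labor_path)
```

The monotone decline of `labor_path` is the “mathematical certainty” the paragraph asserts: as long as z exceeds x, the gap is paid out of labor’s wealth.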

Conjuncted: Speculatively Accelerated Capital – Trading Outside the Pit.


High Frequency Traders (HFTs hereafter) may anticipate the trades of a mutual fund, for instance, if the mutual fund splits large orders into a series of smaller ones and the initial trades reveal information about the mutual fund’s future trading intentions. HFTs might also forecast order flow if traditional asset managers with similar trading demands do not all trade at the same time, allowing the possibility that the initiation of a trade by one mutual fund could forecast similar future trades by other mutual funds. If an HFT were able to forecast a traditional asset manager’s order flow by either of these or some other means, then the HFT could potentially trade ahead of them and profit from the traditional asset manager’s subsequent price impact.

There are two main empirical implications of HFTs engaging in such a trading strategy. The first implication is that HFT trading should lead non-HFT trading – if an HFT buys a stock, non-HFTs should subsequently come into the market and buy the same stock. Second, since the HFT’s objective would be to profit from non-HFTs’ subsequent price impact, it should be the case that the prices of the stocks they buy rise and those of the stocks they sell fall. These two patterns, together, are consistent with HFTs trading stocks in order to profit from non-HFTs’ future buying and selling pressure.

While HFTs may in aggregate anticipate non-HFT order flow, it is also possible that among HFTs, some firms’ trades are strongly correlated with future non-HFT order flow, while other firms’ trades have little or no correlation with non-HFT order flow. This may be the case if certain HFTs focus more on strategies that anticipate order flow or if some HFTs are more skilled than other firms. If certain HFTs are better at forecasting order flow or if they focus more on such a strategy, then these HFTs’ trades should be consistently more strongly correlated with future non-HFT trades than are trades from other HFTs. Additionally, if these HFTs are more skilled, then one might expect these HFTs’ trades to be more strongly correlated with future returns. 

Another implication of the anticipatory trading hypothesis is that the correlation between HFT trades and future non-HFT trades should be stronger at times when non-HFTs are impatient. The reason is that anticipating buying and selling pressure requires forecasting future trades based on patterns in past trades and orders. To make anticipating their order flow difficult, non-HFTs typically use execution algorithms to disguise their trading intentions. But there is a trade-off between disguising order flow and trading a large position quickly. When non-HFTs are impatient and focused on trading a position quickly, they may not hide their order flow as well, making it easier for HFTs to anticipate their trades. At such times, the correlation between HFT trades and future non-HFT trades should be stronger.
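The lead-lag relation described above can be sketched as a simple correlation test. Everything below is synthetic: the series, the anticipation coefficient, and the one-second lead are assumptions made up for illustration, not estimates from any dataset:

```python
# Sketch of the lead-lag test implied above: correlate HFT net order flow
# in second t with non-HFT net order flow in second t+1. Synthetic data;
# a positive correlation at a positive lead would be consistent with
# anticipatory trading.
import random

random.seed(0)
n = 1000
# Hypothetical non-HFT flow, and HFT flow that partially anticipates it
non_hft = [random.gauss(0, 1) for _ in range(n)]
hft = [0.8 * non_hft[t + 1] + random.gauss(0, 1) for t in range(n - 1)]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# HFT flow at t vs non-HFT flow at t+1: positive here by construction
lead_corr = corr(hft, non_hft[1:])
print(round(lead_corr, 2))
```

In the empirical setting, the interesting question is whether this lead correlation rises when non-HFTs trade impatiently, as the hypothesis predicts.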

High Frequency Traders: A Case in Point.

Events on 6th May 2010:

At 2:32 p.m., against [a] backdrop of unusually high volatility and thinning liquidity, a large fundamental trader (a mutual fund complex) initiated a sell program to sell a total of 75,000 E-Mini [S&P 500 futures] contracts (valued at approximately $4.1 billion) as a hedge to an existing equity position. […] This large fundamental trader chose to execute this sell program via an automated execution algorithm (“Sell Algorithm”) that was programmed to feed orders into the June 2010 E-Mini market to target an execution rate set to 9% of the trading volume calculated over the previous minute, but without regard to price or time. The execution of this sell program resulted in the largest net change in daily position of any trader in the E-Mini since the beginning of the year (from January 1, 2010 through May 6, 2010). [. . . ] This sell pressure was initially absorbed by: high frequency traders (“HFTs”) and other intermediaries in the futures market; fundamental buyers in the futures market; and cross-market arbitrageurs who transferred this sell pressure to the equities markets by opportunistically buying E-Mini contracts and simultaneously selling products like SPY [(S&P 500 exchange-traded fund (“ETF”))], or selling individual equities in the S&P 500 Index. […] Between 2:32 p.m. and 2:45 p.m., as prices of the E-Mini rapidly declined, the Sell Algorithm sold about 35,000 E-Mini contracts (valued at approximately $1.9 billion) of the 75,000 intended. [. . . ] By 2:45:28 there were less than 1,050 contracts of buy-side resting orders in the E-Mini, representing less than 1% of buy-side market depth observed at the beginning of the day. [. . . ] At 2:45:28 p.m., trading on the E-Mini was paused for five seconds when the Chicago Mercantile Exchange (“CME”) Stop Logic Functionality was triggered in order to prevent a cascade of further price declines. 
[…] When trading resumed at 2:45:33 p.m., prices stabilized and shortly thereafter, the E-Mini began to recover, followed by the SPY. [. . . ] Even though after 2:45 p.m. prices in the E-Mini and SPY were recovering from their severe declines, sell orders placed for some individual securities and Exchange Traded Funds (ETFs) (including many retail stop-loss orders, triggered by declines in prices of those securities) found reduced buying interest, which led to further price declines in those securities. […] [B]etween 2:40 p.m. and 3:00 p.m., over 20,000 trades (many based on retail-customer orders) across more than 300 separate securities, including many ETFs, were executed at prices 60% or more away from their 2:40 p.m. prices. [. . . ] By 3:08 p.m., [. . . ] the E-Mini prices [were] back to nearly their pre-drop level [. . . and] most securities had reverted back to trading at prices reflecting true consensus values.

In the ordinary course of business, HFTs use their technological advantage to profit from aggressively removing the last few contracts at the best bid and ask levels and then establishing new best bids and asks at adjacent price levels ahead of an immediacy-demanding customer. As an illustration of this “immediacy absorption” activity, consider the following stylized example, presented in Figure and described below.

[Figure: stylized example of the “immediacy absorption” activity]

Suppose that we observe the central limit order book for a stock index futures contract, whose value is $50 per index point (so the minimum tick of 0.25 points is worth $12.50 per contract). The market is very liquid – on average there are hundreds of resting limit orders to buy or sell multiple contracts at either the best bid or the best offer. At some point during the day, due to temporary selling pressure, there is a total of just 100 contracts left at the best bid price of 1000.00. Recognizing that the queue at the best bid is about to be depleted, HFTs submit executable limit orders to aggressively sell a total of 100 contracts, thus completely depleting the queue at the best bid, and very quickly submit sequences of new limit orders to buy a total of 100 contracts at the new best bid price of 999.75, as well as to sell 100 contracts at the new best offer of 1000.00. If the selling pressure continues, then HFTs are able to buy 100 contracts at 999.75 and make a profit of $1,250 among them. If, however, the selling pressure stops and the new best offer price of 1000.00 attracts buyers, then HFTs would very quickly sell 100 contracts (which are at the very front of the new best offer queue), “scratching” the trade at the same price as they bought, and getting rid of the risky inventory in a few milliseconds.
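The arithmetic of this stylized example can be written out directly, assuming the $12.50-per-0.25-point tick value quoted elsewhere in the text (i.e., $50 per index point per contract):

```python
# The stylized example above, in numbers: HFTs deplete 100 contracts at the
# best bid of 1000.00, then buy 100 back at the new best bid of 999.75.
multiplier = 50.0       # $ per index point per contract
contracts = 100
sell_price = 1000.00    # price at which the old best bid is hit
buy_price = 999.75      # new best bid after the queue is depleted

profit = contracts * (sell_price - buy_price) * multiplier
print(profit)  # 1250.0 -- the $1,250 shared among the HFTs in the example

# If instead the market turns, the trade is "scratched": sold and bought at
# the same price, for zero profit but also zero inventory risk.
scratch = contracts * (1000.00 - 1000.00) * multiplier
```

One tick of 0.25 points across 100 contracts is exactly the $1,250 the example cites; the scratch branch shows why the strategy’s downside is limited to a flat trade.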

This type of trading activity reduces, albeit for only a few milliseconds, the latency of a price move. Under normal market conditions, this trading activity somewhat accelerates price changes and adds to the trading volume, but does not result in a significant directional price move. In effect, this activity imposes a small “immediacy absorption” cost on all traders, including the market makers, who are not fast enough to cancel the last remaining orders before an imminent price move.

This activity, however, makes it both costlier and riskier for the slower market makers to maintain continuous market presence. In response to the additional cost and risk, market makers lower their acceptable inventory bounds to levels that are too small to offset temporary liquidity imbalances of any significant size. When the diminished liquidity buffer of the market makers is pierced by a sudden order flow imbalance, they begin to demand a progressively greater compensation for maintaining continuous market presence, and prices start to move directionally. Just as the prices are moving directionally and volatility is elevated, immediacy absorption activity of HFTs can exacerbate a directional price move and amplify volatility. Higher volatility further increases the speed at which the best bid and offer queues are being depleted, inducing HFT algorithms to demand immediacy even more, fueling a spike in trading volume, and making it more costly for the market makers to maintain continuous market presence. This forces more risk averse market makers to withdraw from the market, which results in a full-blown market crash.

Empirically, immediacy absorption activity of the HFTs should manifest itself in the data very differently from the liquidity provision activity of the Market Makers. To establish the presence of these differences in the data, we test the following hypotheses:

Hypothesis H1: HFTs are more likely than Market Makers to aggressively execute the last 100 contracts before a price move in the direction of the trade. Market Makers are more likely than HFTs to have the last 100 resting contracts against which aggressive orders are executed.

Hypothesis H2: HFTs trade aggressively in the direction of the price move. Market Makers get run over by a price move.

Hypothesis H3: Both HFTs and Market Makers scratch trades, but HFTs scratch more.

To statistically test our “immediacy absorption” hypotheses against the “liquidity provision” hypotheses, we divide all of the trades during the 405-minute trading day into two subsets: Aggressive Buy trades and Aggressive Sell trades. Within each subset, we further aggregate multiple aggressive buy or sell transactions resulting from the execution of the same order into Aggressive Buy or Aggressive Sell sequences. The intuition is as follows. Often a specific trade is not a stand-alone event, but part of a sequence of transactions associated with the execution of the same order. For example, an order to aggressively sell 10 contracts may result in four Aggressive Sell transactions: for 2 contracts, 1 contract, 4 contracts, and 3 contracts, respectively, due to the specific sequence of resting bids against which this aggressive sell order was executed. Using the order ID number, we are able to aggregate these four transactions into one Aggressive Sell sequence for 10 contracts.
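This sequence aggregation can be sketched in a few lines. The order IDs below are hypothetical; the fills are the 10-contract example from the text:

```python
# Merge consecutive fills that share the same (hypothetical) order ID into
# one Aggressive sequence, as described above.
from itertools import groupby

# (order_id, side, contracts) -- the four fills of the 10-contract example
trades = [
    (42, "sell", 2),
    (42, "sell", 1),
    (42, "sell", 4),
    (42, "sell", 3),
    (43, "buy", 5),
]

sequences = [
    (oid, fills[0][1], sum(qty for _, _, qty in fills))
    for oid, g in groupby(trades, key=lambda t: t[0])
    for fills in [list(g)]
]
print(sequences)  # [(42, 'sell', 10), (43, 'buy', 5)]
```

The four sell fills collapse into a single 10-contract Aggressive Sell sequence, which is the unit of analysis used in the event definitions that follow.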

Testing Hypothesis H1. Aggressive removal of the last 100 contracts by HFTs; passive provision of the last 100 resting contracts by the Market Makers. Using the Aggressive Buy sequences, we label as a “price increase event” all occurrences of trading sequences in which at least 100 contracts consecutively executed at the same price are followed by some number of contracts at a higher price. To examine indications of low latency, we focus on the last 100 contracts traded before the price increase and the first 100 contracts at the next higher price (or fewer if the price changes again before 100 contracts are executed). Although we do not look directly at the limit order book data, price increase events are defined to capture occasions where traders use executable buy orders to lift the last remaining offers in the limit order book. Using Aggressive Sell trades, we define “price decrease events” symmetrically as occurrences of sequences of trades in which 100 contracts executed at the same price are followed by executions at lower prices. These events are intended to capture occasions where traders use executable sell orders to hit the last few best bids in the limit order book. The results are presented in the Table below.

[Table: trader categories’ shares of Aggressive and Passive volume around price increase and price decrease events]

For price increase and price decrease events, we calculate each of the six trader categories’ shares of Aggressive and Passive trading volume for the last 100 contracts traded at the “old” price level before the price increase or decrease and the first 100 contracts traded at the “new” price level (or fewer if the number of contracts is less than 100) after the price increase or decrease event.
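The event definition can be sketched on a made-up tape of (price, contracts, category) trades; the categories, the tape, and the simplified run-length handling are all assumptions for illustration:

```python
# Scan a trade tape for "price increase events": at least 100 contracts
# executed at one price, followed by a trade at a higher price.
def price_increase_events(tape, run=100):
    """Return (start, end) index pairs: trades [start, end) executed at the
    'old' price with at least `run` contracts before the price ticked up."""
    events = []
    i = 0
    while i < len(tape):
        j = i
        at_price = 0
        while j < len(tape) and tape[j][0] == tape[i][0]:
            at_price += tape[j][1]
            j += 1
        # an event: >= `run` contracts at one price, then a higher price
        if at_price >= run and j < len(tape) and tape[j][0] > tape[i][0]:
            events.append((i, j))
        i = j
    return events

tape = [(1000.00, 60, "HFT"), (1000.00, 50, "FUND"), (1000.25, 30, "HFT")]
print(price_increase_events(tape))  # [(0, 2)]
```

Given the event indices, each category’s share of the last 100 contracts at the old price and the first 100 at the new price can then be tallied as the table describes; price decrease events are detected symmetrically.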

Table above presents, for the six trader categories, volume shares for the last 100 contracts at the old price and the first 100 contracts at the new price. For comparison, the unconditional shares of aggressive and passive trading volume of each trader category are also reported. Table has four panels covering (A) price increase events on May 3-5, (B) price decrease events on May 3-5, (C) price increase events on May 6, and (D) price decrease events on May 6. In each panel there are six rows of data, one row for each trader category. Relative to panels A and C, the rows for Fundamental Buyers (BUYER) and Fundamental Sellers (SELLER) are reversed in panels B and D to emphasize the symmetry between buying during price increase events and selling during price decrease events. The first two columns report the shares of Aggressive and Passive contract volume for the last 100 contracts before the price change; the next two columns report the shares of Aggressive and Passive volume for up to the next 100 contracts after the price change; and the last two columns report the “unconditional” market shares of Aggressive and Passive sides of all Aggressive buy volume or sell volume. For May 3-5, the data are based on volume pooled across the three days.

Consider panel A, which describes price increase events associated with Aggressive buy trades on May 3-5, 2010. High Frequency Traders participated on the Aggressive side of 34.04% of all Aggressive buy volume. Strongly consistent with the immediacy absorption hypothesis, the participation rate rises to 57.70% of the Aggressive side of trades on the last 100 contracts of Aggressive buy volume before price increase events and falls to 14.84% of the Aggressive side of trades on the first 100 contracts of Aggressive buy volume after price increase events.

High Frequency Traders participated on the Passive side of 34.33% of all Aggressive buy volume. Consistent with the hypothesis, the participation rate on the Passive side of Aggressive buy volume falls to 28.72% of the last 100 contracts before a price increase event. It rises to 37.93% of the first 100 contracts after a price increase event.

These results are inconsistent with the idea that high frequency traders behave like textbook market makers, suffering adverse selection losses associated with being picked off by informed traders. Instead, when the price is about to move to a new level, high frequency traders tend to avoid being run over and take the price to the new level with Aggressive trades of their own.

Market Makers follow a noticeably more passive trading strategy than High Frequency Traders. According to panel A, Market Makers are 13.48% of the Passive side of all Aggressive trades, but they are only 7.27% of the Aggressive side of all Aggressive trades. On the last 100 contracts at the old price, Market Makers’ share of volume increases only modestly, from 7.27% to 8.78% of trades. Their share of Passive volume at the old price increases, from 13.48% to 15.80%. These facts are consistent with the interpretation that Market Makers, unlike High Frequency Traders, do engage in a strategy similar to traditional passive market making, buying at the bid price, selling at the offer price, and suffering losses when the price moves against them. These facts are also consistent with the hypothesis that High Frequency Traders have lower latency than Market Makers.

Intuition might suggest that Fundamental Buyers would tend to place the Aggressive trades which move prices up from one tick level to the next. This intuition does not seem to be corroborated by the data. According to panel A, Fundamental Buyers are 21.53% of all Aggressive trades but only 11.61% of the last 100 Aggressive contracts traded at the old price. Instead, Fundamental Buyers increase their share of Aggressive buy volume to 26.17% of the first 100 contracts at the new price.

Taking into account the symmetry between buying and selling, panel B shows that the results for Aggressive sell trades during May 3-5, 2010, are almost the same as the results for Aggressive buy trades. High Frequency Traders are 34.17% of all Aggressive sell volume, increase their share to 55.20% of the last 100 Aggressive sell contracts at the old price, and decrease their share to 15.04% of the first 100 Aggressive sell contracts at the new price. Market Makers are 7.45% of all Aggressive sell contracts, increase their share to only 8.57% of the last 100 Aggressive sell trades at the old price, and decrease their share to 6.58% of the first 100 Aggressive sell contracts at the new price. Fundamental Sellers’ shares of Aggressive sell trades behave similarly to Fundamental Buyers’ shares of Aggressive Buy trades. Fundamental Sellers are 20.91% of all Aggressive sell contracts, decrease their share to 11.96% of the last 100 Aggressive sell contracts at the old price, and increase their share to 24.87% of the first 100 Aggressive sell contracts at the new price.

Panels C and D report results for Aggressive Buy trades and Aggressive Sell trades for May 6, 2010. Taking into account symmetry between buying and selling, the results for Aggressive buy trades in panel C are very similar to the results for Aggressive sell trades in panel D. For example, Aggressive sell trades by Fundamental Sellers were 17.55% of Aggressive sell volume on May 6, while Aggressive buy trades by Fundamental Buyers were 20.12% of Aggressive buy volume on May 6. In comparison with the share of Fundamental Buyers and in comparison with May 3-5, the Flash Crash of May 6 is associated with a slightly lower – not higher – share of Aggressive sell trades by Fundamental Sellers.

The number of price increase and price decrease events increased dramatically on May 6, consistent with the increased volatility of the market on that day. On May 3-5, there were 4100 price increase events and 4062 price decrease events. On May 6 alone, there were 4101 price increase events and 4377 price decrease events. There were therefore approximately three times as many price increase events per day on May 6 as on the three preceding days.

A comparison of May 6 with May 3-5 reveals significant changes in the trading patterns of High Frequency Traders. Compared with May 3-5 in panels A and B, the share of Aggressive trades by High Frequency Traders drops from 34.04% of Aggressive buys and 34.17% of Aggressive sells on May 3-5 to 26.98% of Aggressive buy trades and 26.29% of Aggressive sell trades on May 6. The share of Aggressive trades for the last 100 contracts at the old price declines by even more. High Frequency Traders’ participation rate on the Aggressive side of Aggressive buy trades drops from 57.70% on May 3-5 to only 38.86% on May 6. Similarly, the participation rate on the Aggressive side of Aggressive sell trades drops from 55.20% to 38.67%. These declines are largely offset by increases in the participation rate by Opportunistic Traders on the Aggressive side of trades. For example, Opportunistic Traders’ share of the Aggressive side of the last 100 contracts traded at the old price rises from 19.21% to 34.26% for Aggressive buys and from 20.99% to 33.86% for Aggressive sells. These results suggest that some Opportunistic Traders follow trading strategies for which low latency is important, such as index arbitrage, cross-market arbitrage, or opportunistic strategies mimicking market making.

Testing Hypothesis H2. HFTs trade aggressively in the direction of the price move; Market Makers get run over by a price move. To examine this hypothesis, we analyze whether High Frequency Traders use Aggressive trades to trade in the direction of contemporaneous price changes, while Market Makers use Passive trades to trade in the opposite direction from price changes. To this end, we estimate the regression equation

Δy_t = α + φ · Δy_{t−1} + δ · y_{t−1} + Σ_{i=0..20} β_i · [Δp_{t−i}/0.25] + ε_t

(where y_t and Δy_t denote the inventories and the change in inventories of High Frequency Traders for each second of a trading day; t = 0 corresponds to the opening of stock trading on the NYSE at 8:30:00 a.m. CT (9:30:00 ET) and t = 24,300 denotes the close of Globex at 15:15:00 CT (4:15 p.m. ET); and Δp_t denotes the price change in index point units between the high-low midpoint of second t−1 and the high-low midpoint of second t. The regression is thus of second-by-second changes in the inventory levels of High Frequency Traders on the level of their inventories in the previous second, the change in their inventory levels in the previous second, the price change during the current second, and lagged price changes for each of the previous 20 seconds.)

for Passive and Aggressive inventory changes separately.
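The regression can be sketched as an ordinary least squares estimation. The data below are synthetic placeholders; only the design matrix (a constant, Δy_{t−1}, y_{t−1}, and 21 tick-scaled price-change lags) follows the specification above:

```python
# OLS sketch of the inventory regression: regress dy_t on a constant,
# dy_{t-1}, y_{t-1}, and dp_{t-i}/0.25 for i = 0..20. Data are synthetic;
# the real estimates would use the audit-trail data described in the text.
import numpy as np

rng = np.random.default_rng(0)
T, L = 5000, 21
dp = rng.normal(0, 0.25, T)          # per-second price changes (index points)
y = np.cumsum(rng.normal(0, 5, T))   # synthetic inventory path
dy = np.diff(y, prepend=0.0)

rows, target = [], []
for t in range(L, T):
    lags = [dp[t - i] / 0.25 for i in range(L)]  # i = 0 is the current second
    rows.append([1.0, dy[t - 1], y[t - 1]] + lags)
    target.append(dy[t])

X = np.asarray(rows)
coef, *_ = np.linalg.lstsq(X, np.asarray(target), rcond=None)
alpha_hat, phi_hat, delta_hat, betas = coef[0], coef[1], coef[2], coef[3:]
print(betas.shape)  # (21,) -- one coefficient per price-change lag
```

Estimating this separately with net Aggressive and net Passive volume as the dependent variable yields the four columns of coefficients the table reports.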

[Table: regressions of Aggressive and Passive inventory changes for High Frequency Traders and Market Makers]

Table above presents the regression results of the two components of change in holdings on lagged inventory, lagged change in holdings and lagged price changes over one second intervals. Panel A and Panel B report the results for May 3-5 and May 6, respectively. Each panel has four columns, reporting estimated coefficients where the dependent variables are net Aggressive volume (Aggressive buys minus Aggressive sells) by High Frequency Traders (∆AHFT), net Passive volume by High Frequency Traders (∆PHFT), net Aggressive volume by Market Makers (∆AMM), and net Passive volume by Market Makers (∆PMM).

We observe that for lagged inventories (NPHFT_{t−1}), the estimated coefficients for Aggressive and Passive trades by High Frequency Traders are δ_AHFT = −0.005 (t = −9.55) and δ_PHFT = −0.001 (t = −3.13), respectively. These coefficient estimates have the interpretation that High Frequency Traders use Aggressive trades more intensively than Passive trades to liquidate inventories. In contrast, the results for Market Makers are very different. For lagged inventories (NPMM_{t−1}), the estimated coefficients for Aggressive and Passive volume by Market Makers are δ_AMM = −0.002 (t = −6.73) and δ_PMM = −0.002 (t = −5.26), respectively. The similarity of these coefficient estimates has the interpretation that Market Makers favor neither Aggressive nor Passive trades when liquidating inventories.

For contemporaneous price changes (in the current second, Δp_t), the estimated coefficients for Aggressive and Passive volume by High Frequency Traders are β_0 = 57.78 (t = 31.94) and β_0 = −25.69 (t = −28.61), respectively. For Market Makers, the estimated coefficients for Aggressive and Passive trades are β_0 = 6.38 (t = 18.51) and β_0 = −19.92 (t = −37.68). These estimated coefficients have the interpretation that in seconds in which prices move up one tick, High Frequency Traders are net buyers of about 58 contracts with Aggressive trades and net sellers of about 26 contracts with Passive trades in that same second, while Market Makers are net buyers of about 6 contracts with Aggressive trades and net sellers of about 20 contracts with Passive trades. High Frequency Traders and Market Makers are similar in that both use Aggressive trades to trade in the direction of price changes, and both use Passive trades to trade against the direction of price changes. They are different in that Aggressive net purchases by High Frequency Traders are greater in magnitude than their Passive net sales, while the reverse is true for Market Makers.

For lagged price changes, the coefficient estimates for Aggressive trades by High Frequency Traders and Market Makers are positive and statistically significant at lags 1-4 and lags 1-10, respectively. These results have the interpretation that both High Frequency Traders and Market Makers trade on recent price momentum, but the trading is compressed into a shorter time frame for High Frequency Traders than for Market Makers.

For lagged price changes, the coefficient estimates for Passive volume by High Frequency Traders and Market Makers are negative and statistically significant at lag 1 and lags 1-3, respectively. Panel B of the Table presents results for May 6. Similar to May 3-5, High Frequency Traders tend to use Aggressive trades more intensely than Passive trades to liquidate inventories, while Market Makers do not show this pattern. Also similar to May 3-5, High Frequency Traders and Market Makers use Aggressive trades to trade in the contemporaneous direction of price changes and use Passive trades to trade in the direction opposite to price changes, with Aggressive trading greater than Passive trading for High Frequency Traders and the reverse for Market Makers. In comparison with May 3-5, the coefficients are smaller in magnitude on May 6, indicating reduced liquidity at each tick. For lagged price changes, the coefficients associated with Aggressive trading by High Frequency Traders change from positive to negative at lags 1-4, and the coefficients associated with Aggressive trading by Market Makers change from being positive and statistically significant at lags 1-10 to being positive and statistically significant only at lags 1-3. These results illustrate the accelerated trading velocity in the volatile market conditions of May 6.

We further examine how high frequency trading activity is related to market prices. Figure below illustrates how prices change after HFT trading activity in a given second. For an “event” second in which High Frequency Traders are net buyers, net Aggressive buyers, or net Passive buyers, the value-weighted average price paid by the High Frequency Traders in that second is subtracted from the value-weighted average prices for all trades in the same second and in each of the following 20 seconds. The results are averaged across event seconds, weighted by the magnitude of High Frequency Traders’ net position change in the event second. The upper-left panel presents results for buy trades on May 3-5, the upper-right panel presents results for buy trades on May 6, and the lower-left and lower-right panels present the corresponding results for sell trades, calculated analogously. Price differences on the vertical axis are scaled so that one unit equals one tick ($12.50 per contract).

[Figure: average price paths in the 20 seconds following seconds in which High Frequency Traders are net buyers or net sellers]

When High Frequency Traders are net buyers on May 3-5, prices rise by 17% of a tick in the next second. When HFTs execute Aggressively or Passively, prices rise by 20% and 2% of a tick in the next second, respectively. In subsequent seconds, prices in all cases trend downward by about 5% of a tick over the subsequent 19 seconds. For May 3-5, the results are almost symmetric for selling.

When High Frequency Traders are buying on May 6, prices increase by 7% of a tick in the next second. When they are Aggressive buyers or Passive buyers, prices increase by 25% of a tick or decrease by 5% of a tick in the next second, respectively. In subsequent seconds, prices generally tend to drift downwards. The downward drift is especially pronounced after Passive buying, consistent with the interpretation that High Frequency Traders were “run over” when their resting limit buy orders were executed in the down phase of the Flash Crash. When High Frequency Traders are net sellers, the results after one second are analogous to buying. After Aggressive selling, prices continue to drift down for 20 seconds, consistent with the interpretation that High Frequency Traders made profits from Aggressive sales during the down phase of the Flash Crash.

Testing Hypothesis H3. Both HFTs and Market Makers scratch trades; HFTs scratch more. A textbook market maker will try to buy at the bid price, sell at the offer price, and capture the bid-ask spread as a profit. Sometimes, after buying at the bid price, market prices begin to fall before the market maker can make a one-tick profit by selling his inventory at the best offer price. To avoid taking losses in this situation, one component of a traditional market making strategy is to “scratch” trades in the presence of changing market conditions by quickly liquidating a position at the same price at which it was acquired. These scratched trades represent inventory management trades designed to lower the cost of adverse selection. Since many competing market makers may try to scratch trades at the same time, traders with the lowest latency will tend to be more successful in their attempts to scratch trades and thus more successful at avoiding losses when market conditions change.

To examine whether and to what extent traders engage in trade scratching, we sequence each trader’s trades for the day using audit trail sequence numbers, which not only sort trades by second but also sort trades chronologically within each second. We define an “immediately scratched trade” as a trade with the properties that the next trade in the sorted sequence (1) occurred in the same second, (2) was executed at the same price, and (3) was in the opposite direction, i.e., buy followed by sell or sell followed by buy. For each of the trading accounts in our sample, we calculate the number of immediately scratched trades, then compare the number of scratched trades across the six trader categories.
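The three-part definition above translates directly into a single scan over each account’s sequence-sorted trades. The record layout is illustrative (second, price, side):

```python
def count_immediately_scratched(trades):
    """Count immediately scratched trades for a single account.
    trades: list of (second, price, side) already sorted by audit-trail
    sequence number; side is 'B' (buy) or 'S' (sell)."""
    count = 0
    for cur, nxt in zip(trades, trades[1:]):
        if (cur[0] == nxt[0]          # (1) same second
                and cur[1] == nxt[1]  # (2) same price
                and cur[2] != nxt[2]):  # (3) opposite direction
            count += 1
    return count
```

Running this per account and grouping by trader category yields the counts compared in the table below.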

The results of this analysis are presented in the table below. Panel A provides results for May 3-5 and Panel B for May 6. In each panel, there are five rows of data, one for each trader category. The first three columns report the total number of trades, the total number of immediately scratched trades, and the percentage of trades that are immediately scratched for traders in each category. For May 3-5, the reported numbers are from the pooled data.

[Table: immediately scratched trades by trader category, May 3-5 and May 6]

This table presents statistics for immediate trade scratching, which measures how many times a trader changes his or her direction of trading within a second, aggregated over a day. We define a trade direction change as a buy trade immediately after a sell trade, or vice versa, at the same price level in the same second.

This table shows that High Frequency Traders scratched 2.84% of trades on May 3-5 and 4.26% on May 6; Market Makers scratched 2.49% of trades on May 3-5 and 5.53% of trades on May 6. While the percentage of immediately scratched trades by Market Makers is slightly higher than that for High Frequency Traders on May 6, the percentages for both groups are very similar. The fourth, fifth, and sixth columns of the table report the mean, standard deviation, and median of the number of scratched trades for the traders in each category.

Although the percentages of scratched trades are similar, the mean number of immediately scratched trades by High Frequency Traders is much greater than for Market Makers: 540.56 per day on May 3-5 and 1610.75 on May 6 for High Frequency Traders, versus 13.35 and 72.92 for Market Makers. The differences between High Frequency Traders and Market Makers reflect differences in volume traded. The table also shows that High Frequency Traders and Market Makers scratch a significantly larger percentage of their trades than the other trader categories.
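The contrast between similar percentages and very different means is purely a matter of denominators: total trades versus number of accounts. A toy aggregation (category labels and figures invented for illustration, not taken from the table) makes the point that a few high-volume accounts can scratch the same share of trades as many small ones while scratching far more trades each:

```python
from collections import defaultdict

def scratch_stats(accounts):
    """Aggregate per-category scratch statistics.
    accounts: iterable of (category, n_trades, n_scratched)."""
    agg = defaultdict(lambda: [0, 0, 0])  # [trades, scratched, accounts]
    for cat, n_trades, n_scratched in accounts:
        agg[cat][0] += n_trades
        agg[cat][1] += n_scratched
        agg[cat][2] += 1
    return {cat: {"pct": 100.0 * s / n,   # scratched share of all trades
                  "mean": s / k}          # scratched trades per account
            for cat, (n, s, k) in agg.items()}

# Hypothetical example: one very active HFT account versus two small
# market-making accounts with a comparable scratch percentage.
stats = scratch_stats([
    ("HFT", 100_000, 2_840),
    ("MM", 1_000, 25),
    ("MM", 1_000, 30),
])
```

Here both categories scratch roughly 2.8% of their trades, yet the per-account mean differs by two orders of magnitude, as in the comparison above.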