Philosophical Isomorphism of Category Theory. Note Quote.

One philosophical reason for categorification is that it refines our concept of ‘sameness’ by allowing us to distinguish between isomorphism and equality. In a set, two elements are either the same or different. In a category, two objects can be ‘the same in a way’ while still being different. In other words, they can be isomorphic but not equal. Even more importantly, two objects can be the same in more than one way, since there can be different isomorphisms between them. This gives rise to the notion of the ‘symmetry group’ of an object: its group of automorphisms.

Consider, for example, the fundamental groupoid Π1(X) of a topological space X: the category with points of X as objects and homotopy classes of paths with fixed endpoints as morphisms. This category captures all the homotopy-theoretic information about X in dimensions ≤ 1. The group of automorphisms of an object x in this category is just the fundamental group π1(X,x). If we decategorify the fundamental groupoid of X, we forget how points in X are connected by paths, remembering only whether they are, and we obtain the set of components of X. This captures only the homotopy 0-type of X.
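As a toy illustration (mine, not the text's), decategorification of a combinatorial model of Π1(X) can be carried out directly: approximate the space by a finite graph whose edges generate the paths, then forget *how* points are connected and remember only *whether* they are. What remains is the set of components, i.e. π0.

```python
# A minimal sketch: decategorifying a discrete model of the fundamental
# groupoid. Union-find collapses path-connected ("isomorphic") points,
# leaving only the set of components of the space.

def components(points, edges):
    """Return the set of path-components as frozensets of points."""
    parent = {p: p for p in points}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path compression
            p = parent[p]
        return p

    for a, b in edges:                     # each edge is a generating path
        parent[find(a)] = find(b)

    roots = {find(p) for p in points}
    return {frozenset(p for p in points if find(p) == r) for r in roots}

# Two loops joined at a point, plus an isolated point w with no paths to it:
pts = ["x", "y", "z", "w"]
edgs = [("x", "y"), ("y", "x"), ("y", "z"), ("z", "y")]
print(components(pts, edgs))  # two components: {x, y, z} and {w}
```

All the information about *which* paths connect x, y and z is discarded; only the homotopy 0-type survives, exactly as the paragraph above describes.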

This example shows how decategorification eliminates ‘higher-dimensional information’ about a situation. Categorification is an attempt to recover this information. This example also suggests that we can keep track of the homotopy 2-type of X if we categorify further and distinguish between paths that are equal and paths that are merely isomorphic (i.e., homotopic). For this we should work with a ‘2-category’ having points of X as objects, paths as morphisms, and certain equivalence classes of homotopies between paths as 2-morphisms. In a marvelous self-referential twist, the definition of ‘2-category’ is simply the categorification of the definition of ‘category’. Like a category, a 2-category C has a class of objects, but now for any pair x,y of objects there is no longer a set hom(x,y); instead, there is a category hom(x,y). Objects of hom(x,y) are called morphisms of C, and morphisms between them are called 2-morphisms of C. Composition is no longer a function, but rather a functor:

◦: hom(x, y) × hom(y, z) → hom(x, z)

For any object x there is an identity 1x ∈ hom(x,x). And now we have a choice. On the one hand, we can impose associativity and the left and right unit laws strictly, as equational laws. If we do this, we obtain the definition of ‘strict 2-category’. On the other hand, we can impose them only up to natural isomorphism, with these natural isomorphisms satisfying certain coherence laws. This is clearly more compatible with the spirit of categorification. If we do this, we obtain the definition of ‘weak 2-category’. (Strict 2-categories are traditionally known as ‘2-categories’, while weak 2-categories are known as ‘bicategories’.)

The classic example of a 2-category is Cat, which has categories as objects, functors as morphisms, and natural transformations as 2-morphisms. The presence of 2-morphisms gives Cat much of its distinctive flavor, which we would miss if we treated it as a mere category. Indeed, Mac Lane has said that categories were originally invented, not to study functors, but to study natural transformations! A good example of two functors that are not equal, but only naturally isomorphic, is the pair consisting of the identity functor and the ‘double dual’ functor on the category of finite-dimensional vector spaces. Given a topological space X, we can form a 2-category Π2(X) called the ‘fundamental 2-groupoid’ of X. The objects of this 2-category are the points of X. Given x, y ∈ X, the morphisms from x to y are the paths f: [0,1] → X starting at x and ending at y. Finally, given f, g ∈ hom(x, y), the 2-morphisms from f to g are the homotopy classes of paths in hom(x, y) starting at f and ending at g. Since the associative law for composition of paths holds only up to homotopy, this 2-category is a weak 2-category. If we decategorify the fundamental 2-groupoid of X, we obtain its fundamental groupoid.
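The failure of strict associativity for path composition, which forces the fundamental 2-groupoid to be weak, can be seen concretely in a small sketch of my own: concatenating paths by spending half the parameter interval on each makes (f·g)·h and f·(g·h) different maps, even though they trace the same route and are homotopic by reparametrization.

```python
# Paths are maps [0,1] -> R; concatenation spends half the parameter
# interval on each factor, so bracketing changes traversal speeds.

def concat(f, g):
    return lambda t: f(2 * t) if t <= 0.5 else g(2 * t - 1)

f = lambda t: t            # path from 0 to 1
g = lambda t: 1 + t        # path from 1 to 2
h = lambda t: 2 + t        # path from 2 to 3

left  = concat(concat(f, g), h)   # (f.g).h
right = concat(f, concat(g, h))   # f.(g.h)

# Same endpoints, so they are homotopic rel endpoints...
assert left(0) == right(0) == 0 and left(1) == right(1) == 3
# ...but not equal as maps: at t = 0.25 they are at different points.
print(left(0.25), right(0.25))   # 1.0 0.5
```

Associativity here holds only up to a homotopy (a reparametrization of [0,1]), which is exactly the 2-morphism the weak structure records.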

From 2-categories it is a short step to dreaming of n-categories and even ω-categories — but it is not so easy to make these dreams into smoothly functioning mathematical tools. Roughly speaking, an n-category should be some sort of algebraic structure having objects, 1-morphisms between objects, 2-morphisms between 1-morphisms, and so on up to n-morphisms. There should be various ways of composing j-morphisms for 1 ≤ j ≤ n, and these should satisfy various laws. As with 2-categories, we can try to impose these laws either strictly or weakly.

Other approaches to n-categories use j-morphisms with other shapes, such as simplices or opetopes. We believe that there is basically a single notion of weak n-category lurking behind these different approaches. If this is true, they will eventually be shown to be equivalent, and choosing among them will be merely a matter of convenience. However, the precise meaning of ‘equivalence’ here is itself rather subtle and n-categorical in flavor.

The first challenge to any theory of n-categories is to give an adequate treatment of coherence laws. Composition in an n-category should satisfy equational laws only at the top level, between n-morphisms. Any law concerning j-morphisms for j < n should hold only ‘up to equivalence’. Here an n-morphism is defined to be an ‘equivalence’ if it is invertible, while for j < n a j-morphism is recursively defined to be an equivalence if it is invertible up to equivalence. Equivalence is generally the correct substitute for the notion of equality in n-categorical mathematics. When laws are formulated as equivalences, these equivalences should in turn satisfy coherence laws of their own, but again only up to equivalence, and so on. This becomes ever more complicated and unmanageable with increasing n unless one takes a systematic approach to coherence laws.

The second challenge to any theory of n-categories is to handle certain key examples. First, for any n, there should be an (n + 1)-category nCat, whose objects are (small) n-categories, whose morphisms are suitably weakened functors between these, whose 2-morphisms are suitably weakened natural transformations, and so on. Here by ‘suitably weakened’ we refer to the fact that all laws should hold only up to equivalence. Second, for any topological space X, there should be an n-category Πn(X) whose objects are points of X, whose morphisms are paths, whose 2-morphisms are paths of paths, and so on, where we take homotopy classes only at the top level. Πn(X) should be an ‘n-groupoid’, meaning that all its j-morphisms are equivalences for 0 ≤ j ≤ n. We call Πn(X) the ‘fundamental n-groupoid of X’. Conversely, any n-groupoid should determine a topological space, its ‘geometric realization’.

In fact, these constructions should render the study of n-groupoids equivalent to that of homotopy n-types. A bit of the richness inherent in the concept of n-category becomes apparent when we make the following observation: an (n + 1)-category with only one object can be regarded as a special sort of n-category. Suppose that C is an (n+1)-category with one object x. Then we can form the n-category C̃ by re-indexing: the objects of C̃ are the morphisms of C, the morphisms of C̃ are the 2-morphisms of C, and so on. The n-categories we obtain this way have extra structure. In particular, since the objects of C̃ are really morphisms in C from x to itself, we can ‘multiply’ (that is, compose) them.

The simplest example is this: if C is a category with a single object x, C̃ is the set of endomorphisms of x. This set is actually a monoid. Conversely, any monoid can be regarded as the monoid of endomorphisms of x for some category with one object x. We summarize this situation by saying that ‘a one-object category is a monoid’. Similarly, a one-object 2-category is a monoidal category. It is natural to expect this pattern to continue in all higher dimensions; in fact, it is probably easiest to cheat and define a monoidal n-category to be an (n + 1)-category with one object.
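The slogan ‘a one-object category is a monoid’ can be checked mechanically in a small sketch of my own: take the object to be the set x = {0, 1}, let hom(x, x) be all maps x → x, and verify that composition and the identity satisfy the monoid laws, which are just the category axioms read off at the single object.

```python
# Endomorphisms of x = {0, 1} under composition form a monoid:
# associativity and the unit laws are inherited from the category axioms.
from itertools import product

endos = list(product((0, 1), repeat=2))     # f encoded as the tuple (f(0), f(1))
identity = (0, 1)                           # 1_x

def compose(f, g):                          # (f o g)(i) = f(g(i))
    return tuple(f[g[i]] for i in (0, 1))

for f in endos:                             # unit laws
    assert compose(f, identity) == compose(identity, f) == f
for f, g, h in product(endos, repeat=3):    # associativity
    assert compose(compose(f, g), h) == compose(f, compose(g, h))

print("hom(x, x) is a monoid with", len(endos), "elements")
```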

Things get even more interesting when we iterate this process. Given an (n + k)-category C with only one object, one morphism, and so on up to one (k − 1)-morphism, we can form an n-category whose j-morphisms are the (j + k)-morphisms of C. In doing so we obtain a particular sort of n-category with extra structure and properties, which we call a ‘k-tuply monoidal’ n-category. Table below shows what we expect these to be like for low values of n and k. For example, the Eckmann-Hilton argument shows that a 2-category with one object and one morphism is a commutative monoid. Categorifying this argument, one can show that a 3-category with one object and one morphism is a braided monoidal category. Similarly, we expect that a 4-category with one object, one morphism and one 2-morphism is a symmetric monoidal category, though this has not been worked out in full detail, because of our poor understanding of 4-categories. The fact that both braided and symmetric monoidal categories appear in this table seems to explain why both are natural concepts.

[Table: expected k-tuply monoidal n-categories for low values of n and k]
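The Eckmann-Hilton argument invoked above fits in a few lines. Given two unital binary operations · and ∘ on the same set satisfying the interchange law (a · b) ∘ (c · d) = (a ∘ c) · (b ∘ d), one first checks that the two units coincide, and then:

```latex
a \circ b = (a \cdot 1)\circ(1 \cdot b) = (a \circ 1)\cdot(1 \circ b) = a \cdot b,
\qquad
a \circ b = (1 \cdot a)\circ(b \cdot 1) = (1 \circ b)\cdot(a \circ 1) = b \cdot a.
```

So the two operations agree and are commutative, which is why a 2-category with one object and one morphism is a commutative monoid: its two composition operations on 2-morphisms satisfy exactly these hypotheses.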

In any reasonable approach to n-categories there should be an (n + 1)-category nCatk whose objects are k-tuply monoidal weak n-categories. One should also be able to treat nCatk as a full sub-(n + k)-category of (n + k)Cat, though even for low n, k this is perhaps not as well known as it should be. Consider for example n = 0, k = 1. The objects of 0Cat1 are one-object categories, or monoids. The morphisms of 0Cat1 are functors between one-object categories, or monoid homomorphisms. But 0Cat1 also has 2-morphisms corresponding to natural transformations.

• Decategorification: (n, k) → (n − 1, k). Let C be a k-tuply monoidal n-category. Then there should be a k-tuply monoidal (n − 1)-category DecatC whose j-morphisms are the same as those of C for j < n − 1, but whose (n − 1)-morphisms are isomorphism classes of (n − 1)-morphisms of C.

• Discrete categorification: (n, k) → (n + 1, k). There should be a ‘discrete’ k-tuply monoidal (n + 1)-category DiscC having the j-morphisms of C as its j-morphisms for j ≤ n, and only identity (n + 1)-morphisms. The decategorification of DiscC should be C.

• Delooping: (n, k) → (n + 1, k − 1). There should be a (k − 1)-tuply monoidal (n + 1)-category BC with one object obtained by reindexing, the j-morphisms of BC being the (j + 1)-morphisms of C. We use the notation ‘B’ and call BC the ‘delooping’ of C because of its relation to the classifying space construction in topology.

• Looping: (n, k) → (n − 1, k + 1). Given objects x, y in an n-category, there should be an (n − 1)-category hom(x, y). If x = y this should be a monoidal (n−1)-category, and we denote it as end(x). For k > 0, if 1 denotes the unit object of the k-tuply monoidal n-category C, end(1) should be a (k + 1)-tuply monoidal (n − 1)-category. We call this process ‘looping’, and denote the result as ΩC, because of its relation to the loop space construction in topology. For k > 0, looping should extend to an (n + k)-functor Ω: nCatk → (n − 1)Catk+1. The case k = 0 is a bit different: we should be able to loop a ‘pointed’ n-category, one having a distinguished object x, by letting ΩC = end(x). In either case, the j-morphisms of ΩC correspond to certain (j − 1)-morphisms of C.

• Forgetting monoidal structure: (n, k) → (n, k−1). By forgetting the kth level of monoidal structure, we should be able to think of C as a (k−1)-tuply monoidal n-category FC. This should extend to an n-functor F: nCatk → nCatk−1.

• Stabilization: (n, k) → (n, k + 1). Though adjoint n-functors are still poorly understood, there should be a left adjoint to forgetting monoidal structure, which is called ‘stabilization’ and denoted by S: nCatk → nCatk+1.

• Forming the generalized center: (n,k) → (n,k+1). Thinking of C as an object of the (n+k)-category nCatk, there should be a (k+1)-tuply monoidal n-category ZC, the ‘generalized center’ of C, given by Ωk(end(C)). In other words, ZC is the largest sub-(n + k + 1)-category of (n + k)Cat having C as its only object, 1C as its only morphism, 11C as its only 2-morphism, and so on up to dimension k. This construction gets its name from the case n = 0, k = 1, where ZC is the usual center of the monoid C. Categorifying leads to the case n = 1, k = 1, which gives a very important construction of braided monoidal categories from monoidal categories. In particular, when C is the monoidal category of representations of a Hopf algebra H, ZC is the braided monoidal category of representations of the quantum double D(H).
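Since the case n = 0, k = 1 of the generalized center recovers the ordinary center of a monoid, that case can be made concrete in a small sketch of my own: compute the central elements of a monoid directly from its multiplication.

```python
# The center of a monoid: elements commuting with everything.
from itertools import product

def center(elements, mul):
    return {z for z in elements
            if all(mul(z, a) == mul(a, z) for a in elements)}

# The monoid of all maps {0,1} -> {0,1} under composition:
maps = list(product((0, 1), repeat=2))      # f encoded as (f(0), f(1))
comp = lambda f, g: tuple(f[g[i]] for i in (0, 1))

print(center(maps, comp))   # only the identity (0, 1) is central
```

Categorifying this computation once, as the paragraph above notes, yields the braided monoidal ‘Drinfeld center’ of a monoidal category.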

Yantra + Yi-Globe = Yi-Yantra. Note Quote.

The lower and the upper semicircles of the Yi-globe,

[Figures: the lower and the upper semicircles of the Yi-globe]

where the hexagrams are shown in the plane, best serve for direct comparison. There, the structural features common with the yantras are clearly visible: the arrangement of the hexagrams around the center, the concentric circles embedded into one another, and the perfect balance and symmetry.

The analogy between the Yi-globe and the yantras can be recognized in almost every formal detail, if the Chamunda-yantra is taken as an example. (Yantra literally means “support” and “instrument”: a yantra is a geometric design acting as a highly efficient tool for contemplation, concentration and meditation, carrying spiritual significance.)

[Figure: the Chamunda-yantra]

The similarity between the two symbols is still more complete with respect to the metaphysical contents. Yantras are the symbols of deities, whereby one part represents a god (generally, a goddess) itself, while the other part stands for the cosmic activity (function) attributed to the deity and the power manifested in the latter; thus actually, a yantra symbolizes the whole universe as well. The power of the yantras lies in the concentrated visualization – completed with the vibration of the associated mantras – capable even in itself of raising and directing cosmic energies into the human psyche, whereby man merges into the deity in his mind and, at last, becomes one with the universe, the cosmic wholeness.

When the properties of the two symbols are analyzed, the following cosmological analogies between the Yi-globe and the yantras are found:

[Table: cosmological analogies between the Yi-globe and the yantras]

The comparison clearly reveals that the Yi-globe and the yantras represent the same spiritual content and that most of their formal elements are identical as well. Accordingly, it is fully justified to take the Yi-globe as a special yantra.

Figure below demonstrates how easily the Yi-globe transforms into the form of a yantra. Since this yantra perfectly reflects all the connotations of the Yi-globe, its name is Yi-yantra.

[Figure: the Yi-yantra]

On the petals (or other geometrical elements) of the yantras, mantras are written. On the Yi-yantra, the hexagrams replace the mantras at the corresponding places. (This replacement is merely formal here, since the function of the mantras manifests only when they are expressed in words.)

Based on the preceding analysis, the connotations of the individual geometrical elements in the Yi-yantra are as follows:

  • The two circlets in the center stand for the two signs of Completion, representing the Center of the World, the starting point of creation, and at the same time the place of final dissolution.
  • The creative forces, which are to give birth to the macrocosm and microcosm, emanate from the center. This process is represented by the hexagon.
  • The eight double trigrams surrounding the hexagon represent the differentiated primal powers arranged according to the Earlier Heaven. The two squares show that they already embrace the created world, but only in inherent (i.e., not manifested) form.
  • The red circle around the squares unites the ten hexagrams on the axis of the Yi-globe. The parallel blue circle is level I of the Yi-globe, whereto the powers of the Receptive extend, and wherefrom changes (forces) direct outwards in the direction of level II. The six orange petals of the lotus (the six hexagrams) show these directions.
  • The next pair of the orange and blue circles, and the twelve orange petals with the twelve hexagrams stand for level II.
  • The next circle contains eighteen orange petals, representing level III. At its outer circle, the development (evolution) ends. On level III, the golden petals show the opposite direction of the movement.
  • From here, the development is directed inwards (involution). The way goes through levels IV and V, to the final dissolution in the Creative in the Center.
  • The square surrounding the Yi-globe represents the external existence; its gates provide access towards the inward world. The square area stands for the created world, shown by the trigrams indicated therein and arranged according to the Later Heaven.

High Frequency Traders: A Case in Point.

Events on 6th May 2010:

At 2:32 p.m., against [a] backdrop of unusually high volatility and thinning liquidity, a large fundamental trader (a mutual fund complex) initiated a sell program to sell a total of 75,000 E-Mini [S&P 500 futures] contracts (valued at approximately $4.1 billion) as a hedge to an existing equity position. […] This large fundamental trader chose to execute this sell program via an automated execution algorithm (“Sell Algorithm”) that was programmed to feed orders into the June 2010 E-Mini market to target an execution rate set to 9% of the trading volume calculated over the previous minute, but without regard to price or time. The execution of this sell program resulted in the largest net change in daily position of any trader in the E-Mini since the beginning of the year (from January 1, 2010 through May 6, 2010). [. . . ] This sell pressure was initially absorbed by: high frequency traders (“HFTs”) and other intermediaries in the futures market; fundamental buyers in the futures market; and cross-market arbitrageurs who transferred this sell pressure to the equities markets by opportunistically buying E-Mini contracts and simultaneously selling products like SPY [(S&P 500 exchange-traded fund (“ETF”))], or selling individual equities in the S&P 500 Index. […] Between 2:32 p.m. and 2:45 p.m., as prices of the E-Mini rapidly declined, the Sell Algorithm sold about 35,000 E-Mini contracts (valued at approximately $1.9 billion) of the 75,000 intended. [. . . ] By 2:45:28 there were less than 1,050 contracts of buy-side resting orders in the E-Mini, representing less than 1% of buy-side market depth observed at the beginning of the day. [. . . ] At 2:45:28 p.m., trading on the E-Mini was paused for five seconds when the Chicago Mercantile Exchange (“CME”) Stop Logic Functionality was triggered in order to prevent a cascade of further price declines. 
[…] When trading resumed at 2:45:33 p.m., prices stabilized and shortly thereafter, the E-Mini began to recover, followed by the SPY. [. . . ] Even though after 2:45 p.m. prices in the E-Mini and SPY were recovering from their severe declines, sell orders placed for some individual securities and Exchange Traded Funds (ETFs) (including many retail stop-loss orders, triggered by declines in prices of those securities) found reduced buying interest, which led to further price declines in those securities. […] [B]etween 2:40 p.m. and 3:00 p.m., over 20,000 trades (many based on retail-customer orders) across more than 300 separate securities, including many ETFs, were executed at prices 60% or more away from their 2:40 p.m. prices. [. . . ] By 3:08 p.m., [. . . ] the E-Mini prices [were] back to nearly their pre-drop level [. . . and] most securities had reverted back to trading at prices reflecting true consensus values.

In the ordinary course of business, HFTs use their technological advantage to profit from aggressively removing the last few contracts at the best bid and ask levels and then establishing new best bids and asks at adjacent price levels ahead of an immediacy-demanding customer. As an illustration of this “immediacy absorption” activity, consider the following stylized example, presented in the figure below.

[Figure: stylized example of immediacy absorption]

Suppose that we observe the central limit order book for a stock index futures contract. The notional value of one stock index futures contract is $50. The market is very liquid – on average there are hundreds of resting limit orders to buy or sell multiple contracts at either the best bid or the best offer. At some point during the day, due to temporary selling pressure, there is a total of just 100 contracts left at the best bid price of 1000.00. Recognizing that the queue at the best bid is about to be depleted, HFTs submit executable limit orders to aggressively sell a total of 100 contracts, thus completely depleting the queue at the best bid, and very quickly submit sequences of new limit orders to buy a total of 100 contracts at the new best bid price of 999.75, as well as to sell 100 contracts at the new best offer of 1000.00. If the selling pressure continues, then HFTs are able to buy 100 contracts at 999.75 and make a profit of $1,250 among them. If, however, the selling pressure stops and the new best offer price of 1000.00 attracts buyers, then HFTs would very quickly sell 100 contracts (which are at the very front of the new best offer queue), “scratching” the trade at the same price as they bought, and getting rid of the risky inventory in a few milliseconds.
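The arithmetic of the stylized example can be checked directly. A hedged sketch of mine: I read the quoted $50 figure as the contract multiplier per index point (as for the E-Mini), so a 0.25-point scalp on 100 contracts yields:

```python
# Back-of-the-envelope check of the stylized example.
# Assumption (mine): $50 is the dollar value of one index point per contract.

multiplier = 50.0          # dollars per index point per contract
contracts  = 100
buy_price  = 999.75        # new best bid where HFTs buy the contracts back
sell_price = 1000.00       # price at which HFTs sold into the depleted bid queue

profit = contracts * (sell_price - buy_price) * multiplier
print(f"${profit:,.2f}")   # $1,250.00
```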

This type of trading activity reduces, albeit for only a few milliseconds, the latency of a price move. Under normal market conditions, this trading activity somewhat accelerates price changes and adds to the trading volume, but does not result in a significant directional price move. In effect, this activity imposes a small “immediacy absorption” cost on all traders, including the market makers, who are not fast enough to cancel the last remaining orders before an imminent price move.

This activity, however, makes it both costlier and riskier for the slower market makers to maintain continuous market presence. In response to the additional cost and risk, market makers lower their acceptable inventory bounds to levels that are too small to offset temporary liquidity imbalances of any significant size. When the diminished liquidity buffer of the market makers is pierced by a sudden order flow imbalance, they begin to demand a progressively greater compensation for maintaining continuous market presence, and prices start to move directionally. Just as the prices are moving directionally and volatility is elevated, immediacy absorption activity of HFTs can exacerbate a directional price move and amplify volatility. Higher volatility further increases the speed at which the best bid and offer queues are being depleted, inducing HFT algorithms to demand immediacy even more, fueling a spike in trading volume, and making it more costly for the market makers to maintain continuous market presence. This forces more risk-averse market makers to withdraw from the market, which results in a full-blown market crash.

Empirically, immediacy absorption activity of the HFTs should manifest itself in the data very differently from the liquidity provision activity of the Market Makers. To establish the presence of these differences in the data, we test the following hypotheses:

Hypothesis H1: HFTs are more likely than Market Makers to aggressively execute the last 100 contracts before a price move in the direction of the trade. Market Makers are more likely than HFTs to have the last 100 resting contracts against which aggressive orders are executed.

Hypothesis H2: HFTs trade aggressively in the direction of the price move. Market Makers get run over by a price move.

Hypothesis H3: Both HFTs and Market Makers scratch trades, but HFTs scratch more.

To statistically test our “immediacy absorption” hypotheses against the “liquidity provision” hypotheses, we divide all of the trades during the 405-minute trading day into two subsets: Aggressive Buy trades and Aggressive Sell trades. Within each subset, we further aggregate multiple aggressive buy or sell transactions resulting from the execution of the same order into Aggressive Buy or Aggressive Sell sequences. The intuition is as follows. Often a specific trade is not a stand-alone event, but a part of a sequence of transactions associated with the execution of the same order. For example, an order to aggressively sell 10 contracts may result in four Aggressive Sell transactions: for 2 contracts, 1 contract, 4 contracts, and 3 contracts, respectively, due to the specific sequence of resting bids against which this aggressive sell order was executed. Using the order ID number, we are able to aggregate these four transactions into one Aggressive Sell sequence for 10 contracts.
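The aggregation step can be sketched as follows; the order IDs and the (order_id, side, contracts) layout are my own illustrative assumptions, not the study's actual data schema:

```python
# Aggregate consecutive transactions from the same order into one sequence,
# keyed by order ID and side, as described in the text's 10-contract example.
from itertools import groupby

trades = [
    ("A7", "sell", 2), ("A7", "sell", 1), ("A7", "sell", 4), ("A7", "sell", 3),
    ("B2", "buy", 5),
]

sequences = [
    (oid, side, sum(qty for _, _, qty in group))
    for (oid, side), group in groupby(trades, key=lambda t: (t[0], t[1]))
]
print(sequences)   # [('A7', 'sell', 10), ('B2', 'buy', 5)]
```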

Testing Hypothesis H1. Aggressive removal of the last 100 contracts by HFTs; passive provision of the last 100 resting contracts by the Market Makers. Using the Aggressive Buy sequences, we label as a “price increase event” all occurrences of trading sequences in which at least 100 contracts consecutively executed at the same price are followed by some number of contracts at a higher price. To examine indications of low latency, we focus on the last 100 contracts traded before the price increase and the first 100 contracts at the next higher price (or fewer if the price changes again before 100 contracts are executed). Although we do not look directly at the limit order book data, price increase events are defined to capture occasions where traders use executable buy orders to lift the last remaining offers in the limit order book. Using the Aggressive Sell sequences, we define “price decrease events” symmetrically as occurrences of sequences of trades in which 100 contracts executed at the same price are followed by executions at lower prices. These events are intended to capture occasions where traders use executable sell orders to hit the last few best bids in the limit order book. The results are presented in the Table below.

[Table: trader category shares of Aggressive and Passive volume around price change events]
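The event definition above can be sketched in a few lines; the data layout (a time-ordered list of price/quantity fills) is an assumption of mine, not the paper's:

```python
# A "price increase event": at least 100 contracts execute consecutively at
# one price, and the next trade prints at a higher price.

def price_increase_events(trades):
    """trades: list of (price, contracts) in time order; returns old prices."""
    events, run_price, run_qty = [], None, 0
    for price, qty in trades:
        if price == run_price:
            run_qty += qty
        else:
            if run_price is not None and price > run_price and run_qty >= 100:
                events.append(run_price)       # event at the 'old' price level
            run_price, run_qty = price, qty
    return events

ticks = [(1000.00, 60), (1000.00, 50), (1000.25, 30),   # event at 1000.00
         (1000.25, 40), (1000.00, 90), (1000.25, 20)]   # no event: run of 90
print(price_increase_events(ticks))   # [1000.0]
```

Price decrease events would be detected symmetrically, with `price < run_price`.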

For price increase and price decrease events, we calculate each of the six trader categories’ shares of Aggressive and Passive trading volume for the last 100 contracts traded at the “old” price level before the price increase or decrease and the first 100 contracts traded at the “new” price level (or fewer if the number of contracts is less than 100) after the price increase or decrease event.

Table above presents, for the six trader categories, volume shares for the last 100 contracts at the old price and the first 100 contracts at the new price. For comparison, the unconditional shares of aggressive and passive trading volume of each trader category are also reported. The table has four panels covering (A) price increase events on May 3-5, (B) price decrease events on May 3-5, (C) price increase events on May 6, and (D) price decrease events on May 6. In each panel there are six rows of data, one row for each trader category. Relative to panels A and C, the rows for Fundamental Buyers (BUYER) and Fundamental Sellers (SELLER) are reversed in panels B and D to emphasize the symmetry between buying during price increase events and selling during price decrease events. The first two columns report the shares of Aggressive and Passive contract volume for the last 100 contracts before the price change; the next two columns report the shares of Aggressive and Passive volume for up to the next 100 contracts after the price change; and the last two columns report the “unconditional” market shares of Aggressive and Passive sides of all Aggressive buy volume or sell volume. For May 3-5, the data are based on volume pooled across the three days.
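The share computation described above amounts to the following; the categories and volumes here are illustrative stand-ins of mine, not the actual data:

```python
# Each trader category's fraction of (say) Aggressive volume over a window
# of contracts, e.g. the last 100 contracts before a price increase event.

def shares(fills):
    """fills: list of (category, contracts); returns category -> volume share."""
    total = sum(q for _, q in fills)
    out = {}
    for cat, q in fills:
        out[cat] = out.get(cat, 0) + q
    return {cat: q / total for cat, q in out.items()}

last_100 = [("HFT", 58), ("MM", 9), ("BUYER", 12), ("OPP", 21)]
print(shares(last_100))   # e.g. HFT holds 0.58 of this window's volume
```

Comparing such windowed shares against the unconditional shares is exactly the conditional-versus-unconditional contrast reported in the table's panels.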

Consider panel A, which describes price increase events associated with Aggressive buy trades on May 3-5, 2010. High Frequency Traders participated on the Aggressive side of 34.04% of all aggressive buy volume. Strongly consistent with the immediacy absorption hypothesis, the participation rate rises to 57.70% of the Aggressive side of trades on the last 100 contracts of Aggressive buy volume before price increase events and falls to 14.84% of the Aggressive side of trades on the first 100 contracts of Aggressive buy volume after price increase events.

High Frequency Traders participated on the Passive side of 34.33% of all aggressive buy volume. Consistent with this hypothesis, the participation rate on the Passive side of Aggressive buy volume falls to 28.72% of the last 100 contracts before a price increase event. It rises to 37.93% of the first 100 contracts after a price increase event.

These results are inconsistent with the idea that high frequency traders behave like textbook market makers, suffering adverse selection losses associated with being picked off by informed traders. Instead, when the price is about to move to a new level, high frequency traders tend to avoid being run over and take the price to the new level with Aggressive trades of their own.

Market Makers follow a noticeably more passive trading strategy than High Frequency Traders. According to panel A, Market Makers are 13.48% of the Passive side of all Aggressive trades, but they are only 7.27% of the Aggressive side of all Aggressive trades. On the last 100 contracts at the old price, Market Makers’ share of volume increases only modestly, from 7.27% to 8.78% of trades. Their share of Passive volume at the old price increases, from 13.48% to 15.80%. These facts are consistent with the interpretation that Market Makers, unlike High Frequency Traders, do engage in a strategy similar to traditional passive market making, buying at the bid price, selling at the offer price, and suffering losses when the price moves against them. These facts are also consistent with the hypothesis that High Frequency Traders have lower latency than Market Makers.

Intuition might suggest that Fundamental Buyers would tend to place the Aggressive trades which move prices up from one tick level to the next. This intuition does not seem to be corroborated by the data. According to panel A, Fundamental Buyers are 21.53% of all Aggressive trades but only 11.61% of the last 100 Aggressive contracts traded at the old price. Instead, Fundamental Buyers increase their share of Aggressive buy volume to 26.17% of the first 100 contracts at the new price.

Taking into account symmetry between buying and selling, panel B shows the results for Aggressive sell trades during May 3-5, 2010, are almost the same as the results for Aggressive buy trades. High Frequency Traders are 34.17% of all Aggressive sell volume, increase their share to 55.20% of the last 100 Aggressive sell contracts at the old price, and decrease their share to 15.04% of the last 100 Aggressive sell contracts at the new price. Market Makers are 7.45% of all Aggressive sell contracts, increase their share to only 8.57% of the last 100 Aggressive sell trades at the old price, and decrease their share to 6.58% of the last 100 Aggressive sell contracts at the new price. Fundamental Sellers’ shares of Aggressive sell trades behave similarly to Fundamental Buyers’ shares of Aggressive Buy trades. Fundamental Sellers are 20.91% of all Aggressive sell contracts, decrease their share to 11.96% of the last 100 Aggressive sell contracts at the old price, and increase their share to 24.87% of the first 100 Aggressive sell contracts at the new price.

Panels C and D report results for Aggressive Buy trades and Aggressive Sell trades for May 6, 2010. Taking into account symmetry between buying and selling, the results for Aggressive buy trades in panel C are very similar to the results for Aggressive sell trades in panel D. For example, Aggressive sell trades by Fundamental Sellers were 17.55% of Aggressive sell volume on May 6, while Aggressive buy trades by Fundamental Buyers were 20.12% of Aggressive buy volume on May 6. In comparison with the share of Fundamental Buyers and in comparison with May 3-5, the Flash Crash of May 6 is associated with a slightly lower – not higher – share of Aggressive sell trades by Fundamental Sellers.

The number of price increase and price decrease events increased dramatically on May 6, consistent with the increased volatility of the market on that day. On May 3-5, there were 4100 price increase events and 4062 price decrease events. On May 6 alone, there were 4101 price increase events and 4377 price decrease events. There were therefore approximately three times as many price increase events per day on May 6 as on the three preceding days.
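A rough sketch of how such events might be tallied from a chronological sequence of trade prices is given below. The event definition used here (any move of the traded price to a different tick level) is an assumption for illustration, not necessarily the study's exact definition; the 0.25 tick size is that of the E-mini S&P 500 futures contract.

```python
# Illustrative sketch: count price increase and price decrease events from a
# chronological list of trade prices. The event definition (any move of the
# traded price to a new tick level) is an assumption, not the study's exact one.

TICK = 0.25  # E-mini S&P 500 tick size in index points

def count_price_events(prices):
    """Return (n_increases, n_decreases) for a chronological price sequence."""
    ups = downs = 0
    for prev, curr in zip(prices, prices[1:]):
        ticks = round((curr - prev) / TICK)
        if ticks > 0:
            ups += 1
        elif ticks < 0:
            downs += 1
    return ups, downs

print(count_price_events([1150.00, 1150.25, 1150.25, 1150.00, 1150.50]))  # → (2, 1)
```

Note that a multi-tick jump counts as a single event in this sketch; whether the study counts it that way is not specified above.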

A comparison of May 6 with May 3-5 reveals significant changes in the trading patterns of High Frequency Traders. Compared with May 3-5 in panels A and B, the share of Aggressive trades by High Frequency Traders drops from 34.04% of Aggressive buys and 34.17% of Aggressive sells on May 3-5 to 26.98% of Aggressive buy trades and 26.29% of Aggressive sell trades on May 6. The share of Aggressive trades for the last 100 contracts at the old price declines by even more. High Frequency Traders’ participation rate on the Aggressive side of Aggressive buy trades drops from 57.70% on May 3-5 to only 38.86% on May 6. Similarly, the participation rate on the Aggressive side of Aggressive sell trades drops from 55.20% to 38.67%. These declines are largely offset by increases in the participation rate by Opportunistic Traders on the Aggressive side of trades. For example, Opportunistic Traders’ share of the Aggressive side of the last 100 contracts traded at the old price rises from 19.21% to 34.26% for Aggressive buys and from 20.99% to 33.86% for Aggressive sells. These results suggest that some Opportunistic Traders follow trading strategies for which low latency is important, such as index arbitrage, cross-market arbitrage, or opportunistic strategies mimicking market making.

Testing Hypothesis H2. HFTs trade aggressively in the direction of the price move; Market Makers get run over by a price move. To examine this hypothesis, we analyze whether High Frequency Traders use Aggressive trades to trade in the direction of contemporaneous price changes, while Market Makers use Passive trades to trade in the opposite direction from price changes. To this end, we estimate the regression equation

Δy_t = α + Φ·Δy_{t−1} + δ·y_{t−1} + Σ_{i=0}^{20} β_i·[Δp_{t−i}/0.25] + ε_t

(where y_t and Δy_t denote the inventories and the change in inventories of High Frequency Traders for each second of a trading day; t = 0 corresponds to the opening of stock trading on the NYSE at 8:30:00 a.m. CT (9:30:00 a.m. ET) and t = 24,300 denotes the close of Globex at 15:15:00 CT (4:15 p.m. ET); and Δp_t denotes the price change in index point units between the high-low midpoint of second t−1 and the high-low midpoint of second t. The regression thus relates second-by-second changes in the inventory levels of High Frequency Traders to the level of their inventories in the previous second, the change in their inventory levels in the previous second, the change in prices during the current second, and lagged price changes for each of the 20 previous seconds.)

for Passive and Aggressive inventory changes separately.
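As an illustration, the regression above can be estimated by ordinary least squares. The sketch below runs on synthetic data (the actual audit-trail inventory series is not available here); the variable names dy, y, and dp and the 21 price-change terms mirror the specification, but the data-generating process and all numbers are invented for demonstration.

```python
# Minimal OLS sketch of the inventory-change regression on synthetic data.
# All inputs are simulated; only the design (constant, dy_{t-1}, y_{t-1},
# and dp_{t-i}/0.25 for i = 0..20) follows the specification in the text.
import numpy as np

rng = np.random.default_rng(0)
T, NLAGS, TICK = 2000, 20, 0.25

dp = rng.normal(0.0, TICK, T)           # second-by-second midpoint price changes
y = np.cumsum(rng.normal(0.0, 5.0, T))  # hypothetical inventory level per second
dy = np.diff(y, prepend=0.0)            # change in inventory per second

n = T - NLAGS - 1  # usable observations: t = NLAGS+1 .. T-1
X = np.column_stack(
    [np.ones(n),
     dy[NLAGS:T - 1],                   # dy_{t-1}
     y[NLAGS:T - 1]]                    # y_{t-1}
    + [dp[NLAGS + 1 - i:T - i] / TICK for i in range(NLAGS + 1)]  # dp_{t-i}/0.25
)
target = dy[NLAGS + 1:T]

coef, *_ = np.linalg.lstsq(X, target, rcond=None)
alpha, phi, delta, betas = coef[0], coef[1], coef[2], coef[3:]
print(betas.shape)  # → (21,): beta_0 (contemporaneous) through beta_20
```

In the study the same design is fit four times, with net Aggressive and net Passive volume of each trader group as the dependent variable.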

[Table: regression results for Aggressive and Passive inventory changes of High Frequency Traders and Market Makers]

The table above presents the regression results of the two components of change in holdings on lagged inventory, lagged change in holdings, and lagged price changes over one-second intervals. Panel A and panel B report the results for May 3-5 and May 6, respectively. Each panel has four columns, reporting estimated coefficients where the dependent variables are net Aggressive volume (Aggressive buys minus Aggressive sells) by High Frequency Traders (∆AHFT), net Passive volume by High Frequency Traders (∆PHFT), net Aggressive volume by Market Makers (∆AMM), and net Passive volume by Market Makers (∆PMM).

We observe that for lagged inventories (NPHFTt−1), the estimated coefficients for Aggressive and Passive trades by High Frequency Traders are δAHFT = −0.005 (t = −9.55) and δPHFT = −0.001 (t = −3.13), respectively. These coefficient estimates have the interpretation that High Frequency Traders use Aggressive trades to liquidate inventories more intensively than Passive trades. In contrast, the results for Market Makers are very different. For lagged inventories (NPMMt−1), the estimated coefficients for Aggressive and Passive volume by Market Makers are δAMM = −0.002 (t = −6.73) and δPMM = −0.002 (t = −5.26), respectively. The similarity of these coefficient estimates has the interpretation that Market Makers favor neither Aggressive trades nor Passive trades when liquidating inventories.

For contemporaneous price changes (Δp_t), the estimated coefficients for Aggressive and Passive volume by High Frequency Traders are β0 = 57.78 (t = 31.94) and β0 = −25.69 (t = −28.61), respectively. For Market Makers, the estimated coefficients for Aggressive and Passive trades are β0 = 6.38 (t = 18.51) and β0 = −19.92 (t = −37.68). These estimated coefficients have the interpretation that in seconds in which prices move up one tick, High Frequency Traders are net buyers of about 58 contracts with Aggressive trades and net sellers of about 26 contracts with Passive trades in that same second, while Market Makers are net buyers of about 6 contracts with Aggressive trades and net sellers of about 20 contracts with Passive trades. High Frequency Traders and Market Makers are similar in that both use Aggressive trades to trade in the direction of price changes, and both use Passive trades to trade against the direction of price changes. They differ in that Aggressive net purchases by High Frequency Traders are greater in magnitude than their Passive net sales, while the reverse is true for Market Makers.

For lagged price changes, coefficient estimates for Aggressive trades by High Frequency Traders and Market Makers are positive and statistically significant at lags 1-4 and lags 1-10, respectively. These results have the interpretation that both High Frequency Traders and Market Makers trade on recent price momentum, but the trading is compressed into a shorter time frame for High Frequency Traders than for Market Makers.

For lagged price changes, coefficient estimates for Passive volume by High Frequency Traders and Market Makers are negative and statistically significant at lag 1 and lags 1-3, respectively. Panel B of the table presents results for May 6. Similar to May 3-5, High Frequency Traders tend to use Aggressive trades more intensely than Passive trades to liquidate inventories, while Market Makers do not show this pattern. Also similar to May 3-5, High Frequency Traders and Market Makers use Aggressive trades to trade in the contemporaneous direction of price changes and Passive trades to trade in the direction opposite to price changes, with Aggressive trading greater than Passive trading for High Frequency Traders and the reverse for Market Makers. In comparison with May 3-5, the coefficients are smaller in magnitude on May 6, indicating reduced liquidity at each tick. For lagged price changes, the coefficients associated with Aggressive trading by High Frequency Traders change from positive to negative at lags 1-4, and the coefficients associated with Aggressive trading by Market Makers change from being positive and statistically significant at lags 1-10 to being positive and statistically significant only at lags 1-3. These results illustrate accelerated trading velocity in the volatile market conditions of May 6.

We further examine how high frequency trading activity is related to market prices. The figure below illustrates how prices change after HFT trading activity in a given second. For an “event” second in which High Frequency Traders are net buyers, net Aggressive buyers, or net Passive buyers, the value-weighted average price paid by the High Frequency Traders in that second is subtracted from the value-weighted average price for all trades in the same second and in each of the following 20 seconds. The results are averaged across event seconds, weighted by the magnitude of High Frequency Traders’ net position change in the event second. The upper-left panel presents results for buy trades on May 3-5, the upper-right panel presents results for buy trades on May 6, and the lower-left and lower-right panels present results for sell trades calculated analogously. Price differences on the vertical axis are scaled so that one unit equals one tick ($12.50 per contract).
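The construction just described can be sketched in code. Everything below is hypothetical: the function, its signature, and the toy inputs illustrate the weighted event-study averaging, not the authors' implementation.

```python
# Hedged sketch of the event-study construction: for each "event" second in
# which a trader group is a net buyer, subtract the group's volume-weighted
# purchase price from the market VWAP at each horizon, then average across
# events weighted by the net position change. All inputs are toy values.

def event_study(events, market_vwap, horizon=20):
    """events: list of (second, trader_vwap, net_qty) with net_qty > 0.
    market_vwap: dict mapping second -> market VWAP for that second.
    Returns weighted average price differences at horizons 0..horizon
    (in index points)."""
    diffs = []
    for h in range(horizon + 1):
        num = den = 0.0
        for sec, tvwap, qty in events:
            if sec + h in market_vwap:
                num += qty * (market_vwap[sec + h] - tvwap)
                den += qty
        diffs.append(num / den if den else float("nan"))
    return diffs

vwap = {t: 1150.0 + 0.05 * t for t in range(30)}   # toy rising market
events = [(5, 1150.20, 10), (8, 1150.35, 30)]      # hypothetical net-buy seconds
res = event_study(events, vwap, horizon=3)
print([round(x, 4) for x in res])
```

Dividing each difference by the 0.25 tick size would put the result in the tick units used on the figure's vertical axis.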

[Figure: average price changes in the 20 seconds following seconds in which High Frequency Traders are net buyers or net sellers, May 3-5 and May 6]

When High Frequency Traders are net buyers on May 3-5, prices rise by 17% of a tick in the next second. When HFTs execute Aggressively or Passively, prices rise by 20% and 2% of a tick in the next second, respectively. In subsequent seconds, prices in all cases trend downward by about 5% of a tick over the subsequent 19 seconds. For May 3-5, the results are almost symmetric for selling.

When High Frequency Traders are buying on May 6, prices increase by 7% of a tick in the next second. When they are Aggressive buyers or Passive buyers, prices respectively increase by 25% of a tick or decrease by 5% of a tick in the next second. In subsequent seconds, prices generally tend to drift downwards. The downward drift is especially pronounced after Passive buying, consistent with the interpretation that High Frequency Traders were “run over” when their resting limit buy orders were executed in the down phase of the Flash Crash. When High Frequency Traders are net sellers, the results after one second are analogous to buying. After Aggressive selling, prices continue to drift down for 20 seconds, consistent with the interpretation that High Frequency Traders made profits from Aggressive sales during the down phase of the Flash Crash.

Testing Hypothesis H3. Both HFTs and Market Makers scratch trades; HFTs scratch more. A textbook market maker will try to buy at the bid price, sell at the offer price, and capture the bid-ask spread as a profit. Sometimes, after buying at the bid price, market prices begin to fall before the market maker can make a one-tick profit by selling his inventory at the best offer price. To avoid taking losses in this situation, one component of a traditional market making strategy is to “scratch” trades in the presence of changing market conditions by quickly liquidating a position at the same price at which it was acquired. These scratched trades represent inventory management trades designed to lower the cost of adverse selection. Since many competing market makers may try to scratch trades at the same time, traders with the lowest latency will tend to be more successful in their attempts to scratch trades and thus more successful in their ability to avoid losses when market conditions change.

To examine whether and to what extent traders engage in trade scratching, we sequence each trader’s trades for the day using audit trail sequence numbers, which not only sort trades by second but also sort trades chronologically within each second. We define an “immediately scratched trade” as a trade with the properties that the next trade in the sorted sequence (1) occurred in the same second, (2) was executed at the same price, and (3) was in the opposite direction, i.e., a buy followed by a sell or a sell followed by a buy. For each of the trading accounts in our sample, we calculate the number of immediately scratched trades, then compare the number of scratched trades across the six trader categories.
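A minimal sketch of this definition, assuming trades are already sorted in audit-trail order; the tuple layout (second, price, side) is an assumption for illustration.

```python
# Sketch of the "immediately scratched trade" test: a trade is immediately
# scratched if the trader's next trade in audit-trail order is in the same
# second, at the same price, and in the opposite direction.

def count_scratched(trades):
    """trades: chronological list of (second, price, side), side 'B' or 'S'.
    Returns the number of immediately scratched trades."""
    n = 0
    for cur, nxt in zip(trades, trades[1:]):
        same_second = cur[0] == nxt[0]
        same_price = cur[1] == nxt[1]
        opposite = cur[2] != nxt[2]
        if same_second and same_price and opposite:
            n += 1
    return n

trades = [(1, 1150.25, 'B'), (1, 1150.25, 'S'),   # scratched
          (2, 1150.50, 'S'), (3, 1150.50, 'B'),   # different second: not scratched
          (4, 1150.00, 'B'), (4, 1150.25, 'S')]   # different price: not scratched
print(count_scratched(trades))  # → 1
```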

The results of this analysis are presented in the table below. Panel A provides results for May 3-5 and panel B for May 6. In each panel, there are five rows of data, one for each trader category. The first three columns report the total number of trades, the total number of immediately scratched trades, and the percentage of trades that are immediately scratched by traders in the five categories. For May 3-5, the reported numbers are from the pooled data.

[Table: immediately scratched trades by trader category, May 3-5 and May 6]

This table presents statistics for immediate trade scratching, which measures how many times a trader changes his/her direction of trading at the same price within the same second, aggregated over a day. We define a trade direction change as a buy trade immediately after a sell trade, or vice versa, at the same price level in the same second.

This table shows that High Frequency Traders scratched 2.84% of trades on May 3-5 and 4.26% on May 6; Market Makers scratched 2.49% of trades on May 3-5 and 5.53% of trades on May 6. While the percentage of immediately scratched trades by Market Makers is slightly higher than that for High Frequency Traders on May 6, the percentages for both groups are very similar. The fourth, fifth, and sixth columns of the table report the mean, standard deviation, and median of the number of scratched trades for the traders in each category.

Although the percentages of scratched trades are similar, the mean number of immediately scratched trades by High Frequency Traders is much greater than for Market Makers: 540.56 per day on May 3-5 and 1610.75 on May 6 for High Frequency Traders versus 13.35 and 72.92 for Market Makers. The differences between High Frequency Traders and Market Makers reflect differences in volume traded. The Table shows that High Frequency Traders and Market Makers scratch a significantly larger percentage of their trades than other trader categories.

Abstract Expressions of Time’s Modalities. Thought of the Day 21.0


According to Gregory Bateson,

What we mean by information — the elementary unit of information — is a difference which makes a difference, and it is able to make a difference because the neural pathways along which it travels and is continually transformed are themselves provided with energy. The pathways are ready to be triggered. We may even say that the question is already implicit in them.

In other words, we always need to know some second order logic, and presuppose a second order of “order” (cybernetics) usually shared within a distinct community, to realize what a certain claim, hypothesis or theory means. In Koichiro Matsuno’s opinion Bateson’s phrase

must be a prototypical example of second-order logic in that the difference appearing both in the subject and predicate can accept quantification. Most statements framed in second-order logic are not decidable. In order to make them decidable or meaningful, some qualifier needs to be used. A popular example of such a qualifier is a subjective observer. However, the point is that the subjective observer is not limited to Alice or Bob in the QBist parlance.

This is what is necessitated in order to understand the different viewpoints in logic of mathematicians, physicists, and philosophers in the dispute about the existence of time. An essential aspect of David Bohm‘s “implicate order” can be seen in the grammatical formulation of theses such as the law of motion:

While it is legitimate in its own light, the physical law of motion alone framed in eternal time referable in the present tense, whether in classical or quantum mechanics, is not competent enough to address how the now could be experienced. … Measurement differs from the physical law of motion as much as the now in experience differs from the present tense in description. The watershed separating between measurement and the law of motion is in the distinction between the now and the present tense. Measurement is thus subjective and agential in making a punctuation at the moment of now. (Matsuno)

The distinction between experiencing time and capturing the experience of time in terms of language is made explicit in Heidegger’s Being and Time:

… by passing away constantly, time remains as time. To remain means: not to disappear, thus, to presence. Thus time is determined by a kind of Being. How, then, is Being supposed to be determined by time?

Koichiro Matsuno’s comment on this is:

Time passing away is an abstraction from accepting the distinction of the grammatical tenses, while time remaining as time refers to the temporality of the durable now prior to the abstraction of the tenses.

Therefore, when trying to understand the “local logics/phenomenologies” of the individual disciplines (mathematics, physics, philosophy, etc., including their fields), one should be aware of the fact that the capabilities of our scientific language are not limitless:

…the now of the present moment is movable and dynamic in updating the present perfect tense in the present progressive tense. That is to say, the now is prior and all of the grammatical tenses including the ubiquitous present tense are the abstract derivatives from the durable now. (Matsuno)

This presupposes the adequacy of mathematical abstractions specifically invented or adopted and elaborated for the expression of more sophisticated modalities of time’s now than those currently used in such formalisms as temporal logic.

Knowledge Within and Without: The Upanishadic Tradition (1)


All perceptible matter comes from a primary substance, or tenuity beyond conception, filling all space, the akasha or luminiferous ether, which is acted upon by the life giving Prana or creative force, calling into existence, in never-ending cycles all things and phenomena – Nikola Tesla

Teilhard de Chardin:

In the eyes of the physicist, nothing exists legitimately, at least up to now, except the without of things. The same intellectual attitude is still permissible in the bacteriologist, whose cultures (apart from substantial difficulties) are treated as laboratory reagents. But it is still more difficult in the realm of plants. It tends to become a gamble in the case of a biologist studying the behavior of insects or coelenterates. It seems merely futile with regard to the vertebrates. Finally, it breaks down completely with man, in whom the existence of a within can no longer be evaded, because it is a subject of a direct intuition and the substance of all knowledge. It is impossible to deny that, deep within ourselves, “an interior” appears at the heart of beings, as it were seen through a rent. This is enough to ensure that, in one degree or another, this “interior” should obtrude itself as existing everywhere in nature from all time. Since the stuff of the universe has an inner aspect at one point of itself, there is necessarily a double to its structure, that is to say in every region of space and time-in the same way for instance, as it is granular: co-extensive with their Without, there is a Within to things.

Both Indian thought and modern scientific thought accept a fundamental unity behind the world of variety. That basic unitary reality evolves into all that we see around us in the world. This view is a few thousand years old in India; we find it in the Samkhyan and Vedantic schools of Indian thought, and they expound it very much on the lines followed by modern thought. In his address to the Chicago Parliament of Religions in 1893, Vivekananda said:

All science is bound to come to this conclusion in the long run. Manifestation, and not creation, is the word of science today, and the Hindu is only glad that what he has been cherishing in his bosom for ages is going to be taught in more forcible language, and with further light from the latest conclusions of science.

The Samkhyan school uses two terms to represent Nature or Pradhana: Prakrti denoting Nature in its unmodified state, and Vikrti denoting nature in its modified state. The Vedanta similarly speaks of Brahman as the inactive state, and Maya or Shakti as the active state of one and the same primordial non-dual reality. But the Brahman of the Vedanta is the unity of both the spiritual and the non-spiritual, the non-physical and the physical aspects of the universe.

So as the first answer to the question, ‘What is the world?’, we get the child’s answer in his growing knowledge of the discrete entities and events of the outer world and their inter-connections. The second answer is the product of scientific thought, which gives us the knowledge of the one behind the many. All the entities and events of the world are but the modifications or evolutions of one primordial basic reality, be it nature, space-time, or cosmic dust.

Although modern scientific thought does not yet have a place for any spiritual reality or principle, scientists like Chardin and Julian Huxley are trying to find a proper place for the experience of the spiritual in the scientific picture of the universe. When this is achieved, the scientific picture, which is close to Vedanta already, will become closer still, and the synthesis of the knowledge of the ‘without’ and the ‘within’ of things will give us the total view of the universe. This is wisdom according to Vedanta, whereas all partial views are just pieces of knowledge or information only.

The Upanishads deal with this ‘within’ of things. Theirs in fact, is the most outstanding contribution on this subject in the human cultural legacy. They term this aspect of reality of things pratyak chaitanya or pratyak atman or pratyak tattva; and they contain the fascinating account of the stages by which the human mind rose from crude beginnings to clear, wholly spiritual heights in the realization of this reality.

How does the world look when we view it from the outside? We seek an answer from the physical sciences. How does it look when we view it from the inside? We seek an answer from the non-physical sciences, including the science of religion. And philosophy, as understood in the Upanishadic tradition, is the synthesis of these two answers: Brahmavidyā is Sarvavidyāpratishthā, as the Mundaka Upanishad puts it.

क्षेत्रक्षेत्रज्ञयोर्ज्ञानं यत्तज्ज्ञानम् मतं मम

kṣetrakṣetrajñayorjñānaṃ yattajjñānam mataṃ mama

“The unified knowledge of the ‘without’ and the ‘within’ of things is true knowledge, according to Me,” as Krishna says in the Gita (Bhagavad-Gita, chapter 13, shloka 2).

From this total viewpoint there is neither inside nor outside; they are relative concepts depending upon some sort of reference point, e.g. the body; as such, they move within the framework of relativity. Reality knows neither ‘inside’ nor ‘outside’; it is ever full. But these relative concepts are helpful in our approach to the understanding of the total reality.

Thus we find that our knowledge of the manifold of experience, the idam, also involves something else, namely, the unity behind the manifold. This unity behind the manifold, which is not perceptible to the senses, is indicated by the term adah, meaning ‘that’, indicating something far away, unlike the ‘this’ of sense experience. ‘This’ is the correlative of ‘that’; ‘this’ is the changeable aspect of reality; ‘that’ is its unchangeable aspect. If ‘this’ refers to something given in sense experience, ‘that’ refers to something transcendental, beyond the experience of the senses. To say ‘this’ therefore also implies at the same time something that is beyond ‘this’. ‘This’ is an effect; as such, it is visible and palpable; and behind it lies the cause, the invisible and the impalpable. Adah, ‘that’, represents the invisible behind the visible, the transcendental behind the empirical, a something that is beyond time and space. In religion this something is called ‘God’. In philosophy it is called tat or adah, That, Brahman, the ultimate Reality, the cause, the ground, and the goal of the universe.

So this verse first tells us that beyond and behind the manifested universe is the reality of Brahman, which is the fullness of pure Being; it then tells us about this world of becoming which, being nothing but Brahman, is also the ‘Full’. From the view of total Reality, it is all ‘fullness’ everywhere, in space-time as well as beyond space-time. Then the verse adds:

पूर्णस्य पूर्णमादाय पूर्णमेवाशिष्यते

pūrṇasya pūrṇamādāya pūrṇamevāśiṣyate

‘From the Fullness of Brahman has come the fullness of the universe, leaving alone Fullness as the remainder.’

What, then, is the point of view or level from which the sentiments of this verse proceed? It is that of the total Reality, the Absolute and the Infinite, in which, as we have read earlier, the ‘within’ and the ‘without’ of things merge. The Upanishads call it an ocean of Sachchidānanda, the unity of absolute existence, absolute awareness, and absolute bliss. Itself beyond all distinctions of time and space, it yet manifests itself through all such distinctions. To the purified vision of the Upanishadic sages, this whole universe appeared as the fullness of Being, which was, which is, and which shall ever be. In the Bhagavad-Gita (VII.26) Krishna says:

वेदाहं समतीतानि वर्तमानानि चार्जुन ।
भविष्याणि च भूतानि मां तु वेद न कश्चन ॥

vedāhaṃ samatītāni vartamānāni cārjuna |
bhaviṣyāṇi ca bhūtāni māṃ tu veda na kaścana ||

‘I, O Arjuna, know the beings that are of the past, that are of the present, and that are to come in future; but Me no one knows.’

That fullness of the true Me, says Krishna, is beyond all these limited categories, such as space and time, cause and effect, and substance and attribute.