Dynamics of Point Particles: Orthogonality and Proportionality

Let γ be a smooth, future-directed, timelike curve with unit tangent field ξ^a in our background spacetime (M, g_{ab}). We suppose that some massive point particle O has (the image of) this curve as its worldline. Further, let p be a point on the image of γ and let λ^a be a vector at p. Then there is a natural decomposition of λ^a into components proportional to, and orthogonal to, ξ^a:

λ^a = (λ^b ξ_b) ξ^a + (λ^a − (λ^b ξ_b) ξ^a) —– (1)

Here, the first part of the sum is proportional to ξ^a, whereas the second one is orthogonal to ξ^a.

These are standardly interpreted, respectively, as the “temporal” and “spatial” components of λ^a relative to ξ^a (or relative to O). In particular, the three-dimensional vector space of vectors at p orthogonal to ξ^a is interpreted as the “infinitesimal” simultaneity slice of O at p. If we introduce the tangent and orthogonal projection operators

k_{ab} = ξ_a ξ_b —– (2)

h_{ab} = g_{ab} − ξ_a ξ_b —– (3)

then the decomposition can be expressed in the form

λ^a = k^a_b λ^b + h^a_b λ^b —– (4)

We can think of k_{ab} and h_{ab} as the relative temporal and spatial metrics determined by ξ^a. They are symmetric and satisfy

k^a_b k^b_c = k^a_c —– (5)

h^a_b h^b_c = h^a_c —– (6)
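These projection properties are easy to verify numerically. The sketch below (assuming signature (+, −, −, −), units c = 1, and an arbitrarily chosen unit timelike ξ^a) checks equations (4)-(6):

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -), units c = 1.
g = np.diag([1.0, -1.0, -1.0, -1.0])

# A sample unit timelike four-velocity xi^a (hypothetical values, normalized so xi^a xi_a = 1).
u = np.array([1.0, 0.3, 0.2, 0.1])
xi = u / np.sqrt(u @ g @ u)           # xi^a
xi_lower = g @ xi                     # xi_a

# Mixed-index projectors k^a_b and h^a_b built from equations (2)-(3).
k = np.outer(xi, xi_lower)            # k^a_b = xi^a xi_b
h = np.eye(4) - k                     # h^a_b = delta^a_b - xi^a xi_b

# Idempotency, equations (5)-(6), and complementarity.
assert np.allclose(k @ k, k)
assert np.allclose(h @ h, h)
assert np.allclose(k @ h, np.zeros((4, 4)))

# Decomposition (4): any vector splits into temporal + spatial parts.
lam = np.array([2.0, -1.0, 0.5, 0.7])
assert np.allclose(k @ lam + h @ lam, lam)
```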

Many standard textbook assertions concerning the kinematics and dynamics of point particles can be recovered using these decomposition formulas. For example, suppose that the worldline of a second particle O′ also passes through p and that its four-velocity at p is ξ′^a. (Since ξ^a and ξ′^a are both future-directed, they are co-oriented; i.e., ξ^a ξ′_a > 0.) We compute the speed of O′ as determined by O. To do so, we take the spatial magnitude of ξ′^a relative to O and divide by its temporal magnitude relative to O:

v = speed of O′ relative to O = ∥h^a_b ξ′^b∥ / ∥k^a_b ξ′^b∥ —– (7)

For any vector μ^a, ∥μ^a∥ is (μ^a μ_a)^{1/2} if μ^a is causal, and it is (−μ^a μ_a)^{1/2} otherwise.

We have, from equations (2), (3), (5), and (6),

∥k^a_b ξ′^b∥ = (k_{ab} ξ′^b k^a_c ξ′^c)^{1/2} = (k_{bc} ξ′^b ξ′^c)^{1/2} = ξ′^b ξ_b

and

∥h^a_b ξ′^b∥ = (−h_{ab} ξ′^b h^a_c ξ′^c)^{1/2} = (−h_{bc} ξ′^b ξ′^c)^{1/2} = ((ξ′^b ξ_b)^2 − 1)^{1/2}

so

v = ((ξ′^b ξ_b)^2 − 1)^{1/2} / (ξ′^b ξ_b) < 1 —– (8)

Thus, as measured by O, no massive particle can ever attain the maximal speed 1. We note that equation (8) implies that

(ξ′^b ξ_b) = 1/√(1 − v^2) —– (9)
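Equations (7), (8), and (9) can be checked against each other numerically. The sketch below puts O at rest in the chosen coordinates and gives O′ a hypothetical coordinate speed of 0.6:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])    # signature (+, -, -, -), units c = 1

def norm(mu):
    # ||mu|| as defined in the text: (mu^a mu_a)^{1/2} for causal mu, (-mu^a mu_a)^{1/2} otherwise.
    s = mu @ g @ mu
    return np.sqrt(s) if s >= 0 else np.sqrt(-s)

xi = np.array([1.0, 0.0, 0.0, 0.0])      # observer O at rest in these coordinates
w = 0.6                                   # hypothetical coordinate speed of O'
gamma = 1.0 / np.sqrt(1.0 - w**2)
xi_p = gamma * np.array([1.0, w, 0.0, 0.0])   # unit four-velocity of O'

k = np.outer(xi, g @ xi)                  # projector k^a_b
h = np.eye(4) - k                         # projector h^a_b

v7 = norm(h @ xi_p) / norm(k @ xi_p)      # equation (7)
dot = xi_p @ g @ xi                       # the invariant xi'^b xi_b
v8 = np.sqrt(dot**2 - 1.0) / dot          # equation (8)

assert np.isclose(v7, w) and np.isclose(v8, w) and v8 < 1
assert np.isclose(dot, 1.0 / np.sqrt(1.0 - v8**2))   # equation (9)
```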

It is a basic fact of relativistic life that there is associated with every point particle, at every event on its worldline, a four-momentum (or energy-momentum) vector P^a that is tangent to its worldline there. The length ∥P^a∥ of this vector is what we would otherwise call the mass (or inertial mass, or rest mass) of the particle. So, in particular, if P^a is timelike, we can write it in the form P^a = m ξ^a, where m = ∥P^a∥ > 0 and ξ^a is the four-velocity of the particle. No such decomposition is possible when P^a is null and m = ∥P^a∥ = 0.

Suppose a particle O with positive mass has four-velocity ξ^a at a point, and another particle O′ has four-momentum P^a there. The latter may have positive mass or mass 0. We can recover the usual expressions for the energy and three-momentum of the second particle relative to O if we decompose P^a in terms of ξ^a. By equations (4) and (2), we have

P^a = (P^b ξ_b) ξ^a + h^a_b P^b —– (10)

The first part of the sum is the energy component, while the second is the three-momentum. The energy relative to O is the coefficient in the first term: E = P^b ξ_b. If O′ has positive mass and P^a = m ξ′^a, this yields, by equation (9),

E = m (ξ′^b ξ_b) = m/√(1 − v^2) —– (11)

(If we had not chosen units in which c = 1, the numerator in the final expression would have been mc^2 and the denominator √(1 − (v^2/c^2)).) The three-momentum relative to O is the second term h^a_b P^b in the decomposition of P^a, i.e., the component of P^a orthogonal to ξ^a. It follows from equations (8) and (9) that it has magnitude

p = ∥h^a_b m ξ′^b∥ = m((ξ′^b ξ_b)^2 − 1)^{1/2} = mv/√(1 − v^2) —– (12)
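Equations (11) and (12) together imply the familiar invariant relation E^2 − p^2 = m^2, which a two-line check confirms (mass and speed values below are illustrative):

```python
import math

m, v = 2.0, 0.6                       # hypothetical mass and speed, units c = 1
gamma = 1.0 / math.sqrt(1.0 - v**2)

E = m * gamma                         # equation (11)
p = m * gamma * v                     # equation (12)

# The invariant mass relation follows immediately from (11) and (12).
assert math.isclose(E**2 - p**2, m**2)
```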

An interpretive principle asserts that the worldlines of free particles with positive mass are the images of timelike geodesics. It can be thought of as a relativistic version of Newton’s first law of motion. Now we consider acceleration and a relativistic version of the second law. Once again, let γ : I → M be a smooth, future-directed, timelike curve with unit tangent field ξ^a. Just as we understand ξ^a to be the four-velocity field of a massive point particle (that has the image of γ as its worldline), so we understand ξ^n ∇_n ξ^a – the directional derivative of ξ^a in the direction ξ^a – to be its four-acceleration field (or just acceleration field). The four-acceleration vector at any point is orthogonal to ξ^a. (This is so, since ξ_a (ξ^n ∇_n ξ^a) = (1/2) ξ^n ∇_n (ξ^a ξ_a) = (1/2) ξ^n ∇_n (1) = 0.) The magnitude ∥ξ^n ∇_n ξ^a∥ of the four-acceleration vector at a point is just what we would otherwise describe as the curvature of γ there. It is a measure of the rate at which γ “changes direction.” (And γ is a geodesic precisely if its curvature vanishes everywhere.)

The notion of spacetime acceleration requires attention. Consider an example. Suppose you decide to end it all and jump off the tower. What would your acceleration history be like during your final moments? One is accustomed in such cases to think in terms of acceleration relative to the earth. So one would say that you undergo acceleration between the time of your jump and your calamitous arrival. But on the present account, that description has things backwards. Between jump and arrival, you are not accelerating. You are in a state of free fall and moving (approximately) along a spacetime geodesic. But before the jump, and after the arrival, you are accelerating. The floor of the observation deck, and then later the sidewalk, push you away from a geodesic path. The all-important idea here is that we are incorporating the “gravitational field” into the geometric structure of spacetime, and particles traverse geodesics iff they are acted on by no forces “except gravity.”

The acceleration of our massive point particle – i.e., its deviation from a geodesic trajectory – is determined by the forces acting on it (other than “gravity”). If it has mass m, and if the vector field F^a on I represents the vector sum of the various (non-gravitational) forces acting on it, then the particle’s four-acceleration ξ^n ∇_n ξ^a satisfies

F^a = m ξ^n ∇_n ξ^a —– (13)

This is Newton’s second law of motion. Consider an example. Electromagnetic fields are represented by smooth, anti-symmetric fields F_{ab}. If a particle with mass m > 0, charge q, and four-velocity field ξ^a is present, the force exerted by the field on the particle at a point is given by q F^a_b ξ^b. If we use this expression for the left side of equation (13), we arrive at the Lorentz law of motion for charged particles in the presence of an electromagnetic field:

q F^a_b ξ^b = m ξ^n ∇_n ξ^a —– (14)

This equation makes geometric sense. The acceleration field on the right is orthogonal to ξ^a. But so is the force field on the left, since ξ_a (F^a_b ξ^b) = ξ^a ξ^b F_{ab} = ξ^a ξ^b F_{(ab)}, and F_{(ab)} = 0 by the anti-symmetry of F_{ab}.
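The orthogonality argument can be verified for an arbitrary anti-symmetric field strength; the values below are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (+, -, -, -)

# A random anti-symmetric field-strength tensor F_ab (illustrative values).
A = rng.normal(size=(4, 4))
F_low = A - A.T                         # F_ab = -F_ba

u = np.array([1.0, 0.2, -0.1, 0.4])
xi = u / np.sqrt(u @ g @ u)             # unit timelike xi^a

# Lorentz force per unit charge, with the first index raised: F^a_b xi^b.
force = np.linalg.inv(g) @ F_low @ xi

# Orthogonality: xi_a F^a_b xi^b = xi^a xi^b F_ab = 0 by anti-symmetry.
assert np.isclose((g @ xi) @ force, 0.0)
```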

Black Hole Complementarity: The Case of the Infalling Observer

The four postulates of black hole complementarity are:

Postulate 1: The process of formation and evaporation of a black hole, as viewed by a distant observer, can be described entirely within the context of standard quantum theory. In particular, there exists a unitary S-matrix which describes the evolution from infalling matter to outgoing Hawking-like radiation.

Postulate 2: Outside the stretched horizon of a massive black hole, physics can be described to good approximation by a set of semi-classical field equations.

Postulate 3: To a distant observer, a black hole appears to be a quantum system with discrete energy levels. The dimension of the subspace of states describing a black hole of mass M is the exponential of the Bekenstein entropy S(M).

We take as implicit in postulate 2 that the semi-classical field equations are those of a low energy effective field theory with local Lorentz invariance. These postulates do not refer to the experience of an infalling observer, but state a ‘certainty,’ which for uniformity we label as a further postulate:

Postulate 4: A freely falling observer experiences nothing out of the ordinary when crossing the horizon.

To be more specific, we will assume that postulate 4 means both that any low-energy dynamics this observer can probe near his worldline is well-described by familiar Lorentz-invariant effective field theory and also that the probability for an infalling observer to encounter a quantum with energy E ≫ 1/r_s (measured in the infalling frame) is suppressed by an exponentially decreasing adiabatic factor, as predicted by quantum field theory in curved spacetime. We will argue that postulates 1, 2, and 4 are not consistent with one another for a sufficiently old black hole.

Consider a black hole that forms from collapse of some pure state and subsequently decays. Dividing the Hawking radiation into an early part and a late part, postulate 1 implies that the state of the Hawking radiation is pure,

|Ψ⟩ = ∑_i |i⟩_E ⊗ |i⟩_L —– (1)

Here we have taken an arbitrary complete basis |i⟩L for the late radiation. We use postulates 1, 2, and 3 to make the division after the Page time when the black hole has emitted half of its initial Bekenstein-Hawking entropy; we will refer to this as an ‘old’ black hole. The number of states in the early subspace will then be much larger than that in the late subspace and, as a result, for typical states |Ψ⟩ the reduced density matrix describing the late-time radiation is close to the identity. We can therefore construct operators acting on the early radiation, whose action on |Ψ⟩ is equal to that of a projection operator onto any given subspace of the late radiation.
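The claim that the reduced density matrix of the late radiation is close to maximally mixed (the identity, up to normalization) for typical states can be illustrated numerically; the dimensions below are small illustrative stand-ins for the early and late Hilbert spaces:

```python
import numpy as np

rng = np.random.default_rng(1)
dE, dL = 4096, 4                       # early subspace much larger than late subspace

# A Haar-random pure state on E (x) L, a stand-in for a 'typical' |Psi>.
psi = rng.normal(size=(dE, dL)) + 1j * rng.normal(size=(dE, dL))
psi /= np.linalg.norm(psi)

# Reduced density matrix of the late radiation: trace out E.
rho_L = psi.conj().T @ psi

# For dE >> dL, rho_L is close to the maximally mixed state I/dL.
assert np.isclose(np.trace(rho_L).real, 1.0)
assert np.linalg.norm(rho_L - np.eye(dL) / dL) < 0.1
```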

To simplify the discussion, we treat gray-body factors by taking the transmission coefficients T to have unit magnitude for a few low partial waves and to vanish for higher partial waves. Since the total radiated energy is finite, this allows us to think of the Hawking radiation as defining a finite-dimensional Hilbert space.

Now, consider an outgoing Hawking mode in the later part of the radiation. We take this mode to be a localized packet with width of order r_s, corresponding to a superposition of frequencies O(1/r_s). Note that postulate 2 allows us to assign a unique observer-independent lowering operator b to this mode. We can project onto eigenspaces of the number operator b†b. In other words, an observer making measurements on the early radiation can know the number of photons that will be present in a given mode of the late radiation.

Following postulate 2, we can now relate this Hawking mode to one at earlier times, as long as we stay outside the stretched horizon. The earlier mode is blue-shifted, and so may have frequency ω_* much larger than O(1/r_s), though still sub-Planckian.

Next consider an infalling observer and the associated set of infalling modes with lowering operators a. Hawking radiation arises precisely because

b = ∫_0^∞ dω (B(ω) a_ω + C(ω) a_ω†) —– (2)

so that the full state cannot be both an a-vacuum (a|Ψ⟩ = 0) and a b†b eigenstate. Here again we have used our simplified gray-body factors.
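A single-mode, truncated-Fock-space caricature of equation (2) shows concretely that an a-vacuum cannot be a b†b eigenstate when the Bogoliubov coefficient C is nonzero (the value of C below is illustrative):

```python
import numpy as np

N = 40                                 # Fock-space truncation (illustrative)
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)           # annihilation operator in the number basis
adag = a.conj().T

# Single-mode caricature of equation (2): b = B a + C a^dagger, with |B|^2 - |C|^2 = 1.
C = 0.5
B = np.sqrt(1 + C**2)
b = B * a + C * adag
Nb = b.conj().T @ b                    # the number operator b^dagger b

vac = np.zeros(N)
vac[0] = 1.0                           # the a-vacuum: a|0> = 0

# <0|Nb|0> = |C|^2 > 0, and |0> has nonzero Nb-variance, so it is not an Nb eigenstate.
mean = vac @ Nb @ vac
var = vac @ Nb @ Nb @ vac - mean**2
assert np.isclose(mean, C**2)
assert var > 0
```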

The application of postulates 1 and 2 has thus led to the conclusion that the infalling observer will encounter high-energy modes. Note that the infalling observer need not have actually made the measurement on the early radiation: to guarantee the presence of the high energy quanta it is enough that it is possible, just as shining light on a two-slit experiment destroys the fringes even if we do not observe the scattered light. Here we make the implicit assumption that the measurements of the infalling observer can be described in terms of an effective quantum field theory. Instead we could simply suppose that if he chooses to measure b†b he finds the expected eigenvalue, while if he measures the noncommuting operator a†a instead he finds the expected vanishing value. But this would be an extreme modification of the quantum mechanics of the observer, and does not seem plausible.

The figure below gives a pictorial summary of our argument, using ingoing Eddington-Finkelstein coordinates. The support of the mode b is shaded. At large distance it is a well-defined Hawking photon, in a predicted eigenstate of b†b by postulate 1. The observer encounters it when its wavelength is much shorter: the field must be in the infalling ground state (a_ω|Ψ⟩ = 0) by postulate 4, and so cannot be in an eigenstate of b†b. But by postulate 2, the evolution of the mode outside the horizon is essentially free, so this is a contradiction.

Figure: Eddington-Finkelstein coordinates, showing the infalling observer encountering the outgoing Hawking mode (shaded) at a time when its size is ω_*^{-1} ≪ r_s. If the observer’s measurements are given by an eigenstate of a†a, postulate 1 is violated; if they are given by an eigenstate of b†b, postulate 4 is violated; if the result depends on when the observer falls in, postulate 2 is violated.

To restate our paradox in brief, the purity of the Hawking radiation implies that the late radiation is fully entangled with the early radiation, and the absence of drama for the infalling observer implies that it is fully entangled with the modes behind the horizon. This is tantamount to cloning. For example, it violates strong subadditivity of the entropy,

S_{AB} + S_{BC} ≥ S_B + S_{ABC} —– (3)

Let A be the early Hawking modes, B be the outgoing Hawking mode, and C be its interior partner mode. For an old black hole, the entropy is decreasing, and so S_{AB} < S_A. The absence of infalling drama means that S_{BC} = 0 and so S_{ABC} = S_A. Strong subadditivity then gives S_A > S_{AB} ≥ S_B + S_A, which fails substantially, since the density matrix for system B by itself is thermal.

Actually, assuming the Page argument, the inequality is violated even more strongly: for an old black hole the entropy decrease is maximal, S_{AB} = S_A − S_B, so that we get from subadditivity that S_A ≥ 2S_B + S_A.
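What makes this a genuine contradiction is that inequality (3) holds for every legitimate density matrix; the sketch below spot-checks it on a random three-qubit mixed state (dimensions and seed are illustrative):

```python
import numpy as np

def vn_entropy(rho):
    # von Neumann entropy in bits, ignoring numerically zero eigenvalues.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

rng = np.random.default_rng(2)
dA, dB, dC = 2, 2, 2
d = dA * dB * dC

# A random mixed state on A (x) B (x) C.
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# Partial traces via einsum: repeated labels are traced out.
t = rho.reshape(dA, dB, dC, dA, dB, dC)
rho_AB = np.einsum('abcxyc->abxy', t).reshape(dA * dB, dA * dB)
rho_BC = np.einsum('abcayz->bcyz', t).reshape(dB * dC, dB * dC)
rho_B = np.einsum('abcayc->by', t)

# Strong subadditivity, equation (3): S_AB + S_BC - S_B - S_ABC >= 0.
ssa = vn_entropy(rho_AB) + vn_entropy(rho_BC) - vn_entropy(rho_B) - vn_entropy(rho)
assert ssa >= -1e-9
```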

Note that the measurement of N_b takes place entirely outside the horizon, while the measurement of N_a (real excitations above the infalling vacuum) must involve a region that extends over both sides of the horizon. These are noncommuting measurements, but by measuring N_b the observer can infer something about what would have happened if N_a had been measured instead. For an analogy, consider a set of identically prepared spins. If each is measured along the x-axis and found to be +1/2, we can infer that a measurement along the z-axis would have had equal probability to return +1/2 and −1/2. The multiple spins are needed to reduce statistical variance; similarly, in our case the observer would need to measure several modes N_b to have confidence that he was actually entangled with the early radiation. One might ask if there could be a possible loophole in the argument: a physical observer will have a nonzero mass, and so the mass and entropy of the black hole will increase after he falls in. However, we may choose to consider a particular Hawking wavepacket which is already separated from the stretched horizon by a finite amount when it is encountered by the infalling observer. Thus by postulate 2 the further evolution of this mode is semiclassical and not affected by the subsequent merging of the observer with the black hole. In making this argument we are also assuming that the dynamics of the stretched horizon is causal.

Thus far, the discussion applies to an asymptotically flat black hole that is older than the Page time; we needed this in order to frame a sharp paradox using the entanglement with the Hawking radiation. However, we are discussing what should be intrinsic properties of the black hole, not dependent on its entanglement with some external system. After the black hole scrambling time, almost every small subsystem of the black hole is in an almost maximally mixed state. So if the degrees of freedom sampled by the infalling observer can be considered typical, then they are ‘old’ in an intrinsic sense. Our conclusions should then hold. If the black hole is a fast scrambler, the scrambling time is r_s ln(r_s/l_P), after which we have to expect either drama for the infalling observer or novel physics outside the black hole.

We note that the three postulates that are in conflict – purity of the Hawking radiation, absence of infalling drama, and semiclassical behavior outside the horizon — are widely held even by those who do not explicitly label them as ‘black hole complementarity.’ For example, one might imagine that if some tunneling process were to cause a shell of branes to appear at the horizon, an infalling observer would just go ‘splat,’ and of course Postulate 4 would not hold.

Diagrammatic Politics via Exaptive Processes

The principle of individuation is the operation that in the matter of taking form, by means of topological conditions […] carries out an energy exchange between the matter and the form until the unity leads to a state – the energy conditions express the whole system. Internal resonance is a state of equilibrium. One could say that the principle of individuation is the common allagmatic system which requires this realization of the energy conditions together with the topological conditions […] it can produce the effects in all the points of the system in an enclosure […]

This operation rests on the singularity or starting from a singularity of average magnitude, topologically definite.

If we throw in a pinch of Gilbert Simondon’s concept of transduction, there is a basic recipe, or toolkit, for exploring the relational intensities between the three informal (theoretical) dimensions of knowledge, power and subjectification pursued by Foucault with respect to formal practice. Supplanting Foucault’s process of subjectification with Simondon’s more eloquent process of individuation marks an entry for imagining the continuous, always partial, phase-shifting resolutions of the individual. This is not identity as fixed and positionable; it’s a preindividual dynamic that affects an always becoming-individual. It’s the pre-formative as performative. Transduction is a process of individuation. It leads to individuated beings, such as things, gadgets, organisms, machines, self and society, which could be the object of knowledge. It is an ontogenetic operation which provisionally resolves incompatibilities between different orders or different zones of a domain.

What is at stake in the bigger picture, in a diagrammatic politics, is double-sided. Just as there is matter in expression and expression in matter, there is event-value in an exchange-value paradigm, which in fact amplifies the force of its power relations. The economic engine of our time feeds on event potential becoming-commodity. It grows and flourishes on the mass production of affective intensities. Reciprocally, there are degrees of exchange-value in eventness. It’s the recursive loopiness of our current Creative Industries diagram, in which the social networking praxis of Web 2.0 is emblematic and has much to learn.

High Frequency Traders: A Case in Point.

Events on 6th May 2010:

At 2:32 p.m., against [a] backdrop of unusually high volatility and thinning liquidity, a large fundamental trader (a mutual fund complex) initiated a sell program to sell a total of 75,000 E-Mini [S&P 500 futures] contracts (valued at approximately $4.1 billion) as a hedge to an existing equity position. […] This large fundamental trader chose to execute this sell program via an automated execution algorithm (“Sell Algorithm”) that was programmed to feed orders into the June 2010 E-Mini market to target an execution rate set to 9% of the trading volume calculated over the previous minute, but without regard to price or time. The execution of this sell program resulted in the largest net change in daily position of any trader in the E-Mini since the beginning of the year (from January 1, 2010 through May 6, 2010). [. . . ] This sell pressure was initially absorbed by: high frequency traders (“HFTs”) and other intermediaries in the futures market; fundamental buyers in the futures market; and cross-market arbitrageurs who transferred this sell pressure to the equities markets by opportunistically buying E-Mini contracts and simultaneously selling products like SPY [(S&P 500 exchange-traded fund (“ETF”))], or selling individual equities in the S&P 500 Index. […] Between 2:32 p.m. and 2:45 p.m., as prices of the E-Mini rapidly declined, the Sell Algorithm sold about 35,000 E-Mini contracts (valued at approximately $1.9 billion) of the 75,000 intended. [. . . ] By 2:45:28 there were less than 1,050 contracts of buy-side resting orders in the E-Mini, representing less than 1% of buy-side market depth observed at the beginning of the day. [. . . ] At 2:45:28 p.m., trading on the E-Mini was paused for five seconds when the Chicago Mercantile Exchange (“CME”) Stop Logic Functionality was triggered in order to prevent a cascade of further price declines. 
[…] When trading resumed at 2:45:33 p.m., prices stabilized and shortly thereafter, the E-Mini began to recover, followed by the SPY. [. . . ] Even though after 2:45 p.m. prices in the E-Mini and SPY were recovering from their severe declines, sell orders placed for some individual securities and Exchange Traded Funds (ETFs) (including many retail stop-loss orders, triggered by declines in prices of those securities) found reduced buying interest, which led to further price declines in those securities. […] [B]etween 2:40 p.m. and 3:00 p.m., over 20,000 trades (many based on retail-customer orders) across more than 300 separate securities, including many ETFs, were executed at prices 60% or more away from their 2:40 p.m. prices. [. . . ] By 3:08 p.m., [. . . ] the E-Mini prices [were] back to nearly their pre-drop level [. . . and] most securities had reverted back to trading at prices reflecting true consensus values.

In the ordinary course of business, HFTs use their technological advantage to profit from aggressively removing the last few contracts at the best bid and ask levels and then establishing new best bids and asks at adjacent price levels ahead of an immediacy-demanding customer. As an illustration of this “immediacy absorption” activity, consider the following stylized example, presented in the figure and described below.

Suppose that we observe the central limit order book for a stock index futures contract. The notional value of one stock index futures contract is $50 per index point. The market is very liquid – on average there are hundreds of resting limit orders to buy or sell multiple contracts at either the best bid or the best offer. At some point during the day, due to temporary selling pressure, there is a total of just 100 contracts left at the best bid price of 1000.00. Recognizing that the queue at the best bid is about to be depleted, HFTs submit executable limit orders to aggressively sell a total of 100 contracts, thus completely depleting the queue at the best bid, and very quickly submit sequences of new limit orders to buy a total of 100 contracts at the new best bid price of 999.75, as well as to sell 100 contracts at the new best offer of 1000.00. If the selling pressure continues, then HFTs are able to buy 100 contracts at 999.75 and make a profit of $1,250 among them. If, however, the selling pressure stops and the new best offer price of 1000.00 attracts buyers, then HFTs would very quickly sell 100 contracts (which are at the very front of the new best offer queue), “scratching” the trade at the same price as they bought, and getting rid of the risky inventory in a few milliseconds.
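The arithmetic of the stylized round trip is simple; the sketch below assumes the $50-per-index-point contract multiplier used in the example:

```python
# Stylized "immediacy absorption" round trip from the example above.
POINT_VALUE = 50.0                     # dollars per index point per contract (assumed multiplier)
contracts = 100

sell_px = 1000.00                      # HFTs deplete the last 100 contracts at the best bid
rebid_px = 999.75                      # then rebid one tick lower

# If selling pressure continues, HFTs buy back 100 contracts at the new, lower best bid:
profit = contracts * (sell_px - rebid_px) * POINT_VALUE
assert profit == 1250.0

# If the pressure stops, they "scratch" the trade at the same price for zero profit:
scratch = contracts * (sell_px - sell_px) * POINT_VALUE
assert scratch == 0.0
```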

This type of trading activity reduces, albeit for only a few milliseconds, the latency of a price move. Under normal market conditions, this trading activity somewhat accelerates price changes and adds to the trading volume, but does not result in a significant directional price move. In effect, this activity imparts a small “immediacy absorption” cost on all traders, including the market makers, who are not fast enough to cancel the last remaining orders before an imminent price move.

This activity, however, makes it both costlier and riskier for the slower market makers to maintain continuous market presence. In response to the additional cost and risk, market makers lower their acceptable inventory bounds to levels that are too small to offset temporary liquidity imbalances of any significant size. When the diminished liquidity buffer of the market makers is pierced by a sudden order flow imbalance, they begin to demand a progressively greater compensation for maintaining continuous market presence, and prices start to move directionally. Just as the prices are moving directionally and volatility is elevated, immediacy absorption activity of HFTs can exacerbate a directional price move and amplify volatility. Higher volatility further increases the speed at which the best bid and offer queues are being depleted, inducing HFT algorithms to demand immediacy even more, fueling a spike in trading volume, and making it more costly for the market makers to maintain continuous market presence. This forces more risk averse market makers to withdraw from the market, which results in a full-blown market crash.

Empirically, immediacy absorption activity of the HFTs should manifest itself in the data very differently from the liquidity provision activity of the Market Makers. To establish the presence of these differences in the data, we test the following hypotheses:

Hypothesis H1: HFTs are more likely than Market Makers to aggressively execute the last 100 contracts before a price move in the direction of the trade. Market Makers are more likely than HFTs to have the last 100 resting contracts against which aggressive orders are executed.

Hypothesis H2: HFTs trade aggressively in the direction of the price move. Market Makers get run over by a price move.

Hypothesis H3: Both HFTs and Market Makers scratch trades, but HFTs scratch more.

To statistically test our “immediacy absorption” hypotheses against the “liquidity provision” hypotheses, we divide all of the trades during the 405-minute trading day into two subsets: Aggressive Buy trades and Aggressive Sell trades. Within each subset, we further aggregate multiple aggressive buy or sell transactions resulting from the execution of the same order into Aggressive Buy or Aggressive Sell sequences. The intuition is as follows. Often a specific trade is not a stand-alone event, but a part of a sequence of transactions associated with the execution of the same order. For example, an order to aggressively sell 10 contracts may result in four Aggressive Sell transactions: for 2 contracts, 1 contract, 4 contracts, and 3 contracts, respectively, due to the specific sequence of resting bids against which this aggressive sell order was executed. Using the order ID number, we are able to aggregate these four transactions into one Aggressive Sell sequence for 10 contracts.
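The aggregation step can be sketched as follows (the record layout and values are hypothetical, modeled on the four-transaction example above):

```python
from collections import defaultdict

# Illustrative fills: (order_id, side, contracts); field names are assumptions.
fills = [
    ("A1", "sell", 2), ("A1", "sell", 1), ("A1", "sell", 4), ("A1", "sell", 3),
    ("B7", "buy", 5), ("B7", "buy", 5),
]

# Aggregate all fills sharing an order ID into one aggressive sequence.
totals = defaultdict(int)
for order_id, side, qty in fills:
    totals[(order_id, side)] += qty

# The four-transaction example collapses into a single 10-contract Aggressive Sell sequence.
assert totals[("A1", "sell")] == 10
assert totals[("B7", "buy")] == 10
```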

Testing Hypothesis H1. Aggressive removal of the last 100 contracts by HFTs; passive provision of the last 100 resting contracts by the Market Makers. Using the Aggressive Buy sequences, we label as a “price increase event” all occurrences of trading sequences in which at least 100 contracts consecutively executed at the same price are followed by some number of contracts at a higher price. To examine indications of low latency, we focus on the last 100 contracts traded before the price increase and the first 100 contracts at the next higher price (or fewer if the price changes again before 100 contracts are executed). Although we do not look directly at the limit order book data, price increase events are defined to capture occasions where traders use executable buy orders to lift the last remaining offers in the limit order book. Using Aggressive sell trades, we define “price decrease events” symmetrically as occurrences of sequences of trades in which 100 contracts executed at the same price are followed by executions at lower prices. These events are intended to capture occasions where traders use executable sell orders to hit the last few best bids in the limit order book. The results are presented in the table below.
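The event definition can be sketched as a single pass over time-ordered trades (a simplification of the definition in the text; prices and quantities below are illustrative):

```python
def price_increase_events(trades, run=100):
    """Return the prices at which a 'price increase event' ends: at least `run`
    contracts execute consecutively at one price, and the next trade prints at
    a strictly higher price. `trades` is a time-ordered list of (price, qty)."""
    events = []
    acc_px, acc_qty = None, 0
    for px, qty in trades:
        if px == acc_px:
            acc_qty += qty              # extend the run at the current price
        else:
            if acc_px is not None and px > acc_px and acc_qty >= run:
                events.append(acc_px)   # an uptick after a long-enough run
            acc_px, acc_qty = px, qty   # start a new run
    return events

# 105 contracts at 1000.00, then an uptick: one price increase event.
trades = [(1000.00, 60), (1000.00, 45), (1000.25, 30)]
assert price_increase_events(trades) == [1000.00]

# A downtick after a long run is not a price increase event.
assert price_increase_events([(1000.00, 120), (999.75, 10)]) == []
```

Price decrease events would be defined symmetrically, with `px < acc_px` in place of the uptick test.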

For price increase and price decrease events, we calculate each of the six trader categories’ shares of Aggressive and Passive trading volume for the last 100 contracts traded at the “old” price level before the price increase or decrease and the first 100 contracts traded at the “new” price level (or fewer if the number of contracts is less than 100) after the price increase or decrease event.

Table above presents, for the six trader categories, volume shares for the last 100 contracts at the old price and the first 100 contracts at the new price. For comparison, the unconditional shares of aggressive and passive trading volume of each trader category are also reported. Table has four panels covering (A) price increase events on May 3-5, (B) price decrease events on May 3-5, (C) price increase events on May 6, and (D) price decrease events on May 6. In each panel there are six rows of data, one row for each trader category. Relative to panels A and C, the rows for Fundamental Buyers (BUYER) and Fundamental Sellers (SELLER) are reversed in panels B and D to emphasize the symmetry between buying during price increase events and selling during price decrease events. The first two columns report the shares of Aggressive and Passive contract volume for the last 100 contracts before the price change; the next two columns report the shares of Aggressive and Passive volume for up to the next 100 contracts after the price change; and the last two columns report the “unconditional” market shares of Aggressive and Passive sides of all Aggressive buy volume or sell volume. For May 3-5, the data are based on volume pooled across the three days.

Consider panel A, which describes price increase events associated with Aggressive buy trades on May 3-5, 2010. High Frequency Traders participated on the Aggressive side of 34.04% of all aggressive buy volume. Strongly consistent with the immediacy absorption hypothesis, the participation rate rises to 57.70% of the Aggressive side of trades on the last 100 contracts of Aggressive buy volume before price increase events and falls to 14.84% of the Aggressive side of trades on the first 100 contracts of Aggressive buy volume after price increase events.

High Frequency Traders participated on the Passive side of 34.33% of all aggressive buy volume. Consistent with the hypothesis, the participation rate on the Passive side of Aggressive buy volume falls to 28.72% of the last 100 contracts before a price increase event. It rises to 37.93% of the first 100 contracts after a price increase event.

These results are inconsistent with the idea that high frequency traders behave like textbook market makers, suffering adverse selection losses associated with being picked off by informed traders. Instead, when the price is about to move to a new level, high frequency traders tend to avoid being run over and take the price to the new level with Aggressive trades of their own.

Market Makers follow a noticeably more passive trading strategy than High Frequency Traders. According to panel A, Market Makers are 13.48% of the Passive side of all Aggressive trades, but they are only 7.27% of the Aggressive side of all Aggressive trades. On the last 100 contracts at the old price, Market Makers’ share of volume increases only modestly, from 7.27% to 8.78% of trades. Their share of Passive volume at the old price increases, from 13.48% to 15.80%. These facts are consistent with the interpretation that Market Makers, unlike High Frequency Traders, do engage in a strategy similar to traditional passive market making, buying at the bid price, selling at the offer price, and suffering losses when the price moves against them. These facts are also consistent with the hypothesis that High Frequency Traders have lower latency than Market Makers.

Intuition might suggest that Fundamental Buyers would tend to place the Aggressive trades which move prices up from one tick level to the next. This intuition does not seem to be corroborated by the data. According to panel A, Fundamental Buyers are 21.53% of all Aggressive trades but only 11.61% of the last 100 Aggressive contracts traded at the old price. Instead, Fundamental Buyers increase their share of Aggressive buy volume to 26.17% of the first 100 contracts at the new price.

Taking into account symmetry between buying and selling, panel B shows that the results for Aggressive sell trades during May 3-5, 2010, are almost the same as the results for Aggressive buy trades. High Frequency Traders are 34.17% of all Aggressive sell volume, increase their share to 55.20% of the last 100 Aggressive sell contracts at the old price, and decrease their share to 15.04% of the first 100 Aggressive sell contracts at the new price. Market Makers are 7.45% of all Aggressive sell contracts, increase their share to only 8.57% of the last 100 Aggressive sell contracts at the old price, and decrease their share to 6.58% of the first 100 Aggressive sell contracts at the new price. Fundamental Sellers’ shares of Aggressive sell trades behave similarly to Fundamental Buyers’ shares of Aggressive buy trades. Fundamental Sellers are 20.91% of all Aggressive sell contracts, decrease their share to 11.96% of the last 100 Aggressive sell contracts at the old price, and increase their share to 24.87% of the first 100 Aggressive sell contracts at the new price.

Panels C and D report results for Aggressive Buy trades and Aggressive Sell trades for May 6, 2010. Taking into account symmetry between buying and selling, the results for Aggressive buy trades in panel C are very similar to the results for Aggressive sell trades in panel D. For example, Aggressive sell trades by Fundamental Sellers were 17.55% of Aggressive sell volume on May 6, while Aggressive buy trades by Fundamental Buyers were 20.12% of Aggressive buy volume on May 6. In comparison with the share of Fundamental Buyers and in comparison with May 3-5, the Flash Crash of May 6 is associated with a slightly lower – not higher – share of Aggressive sell trades by Fundamental Sellers.

The number of price increase and price decrease events increased dramatically on May 6, consistent with the increased volatility of the market on that day. On May 3-5, there were 4,100 price increase events and 4,062 price decrease events; on May 6 alone, there were 4,101 price increase events and 4,377 price decrease events. There were therefore approximately three times as many price change events per day on May 6 as on each of the three preceding days.

A comparison of May 6 with May 3-5 reveals significant changes in the trading patterns of High Frequency Traders. Compared with May 3-5 in panels A and B, the share of Aggressive trades by High Frequency Traders drops from 34.04% of Aggressive buys and 34.17% of Aggressive sells on May 3-5 to 26.98% of Aggressive buy trades and 26.29% of Aggressive sell trades on May 6. The share of Aggressive trades for the last 100 contracts at the old price declines by even more. High Frequency Traders’ participation rate on the Aggressive side of Aggressive buy trades drops from 57.70% on May 3-5 to only 38.86% on May 6. Similarly, their participation rate on the Aggressive side of Aggressive sell trades drops from 55.20% to 38.67%. These declines are largely offset by increases in the participation rate by Opportunistic Traders on the Aggressive side of trades. For example, Opportunistic Traders’ share of the Aggressive side of the last 100 contracts traded at the old price rises from 19.21% to 34.26% for Aggressive buys and from 20.99% to 33.86% for Aggressive sells. These results suggest that some Opportunistic Traders follow trading strategies for which low latency is important, such as index arbitrage, cross-market arbitrage, or opportunistic strategies mimicking market making.

Testing Hypothesis H2. HFTs trade aggressively in the direction of the price move; Market Makers get run over by a price move. To examine this hypothesis, we analyze whether High Frequency Traders use Aggressive trades to trade in the direction of contemporaneous price changes, while Market Makers use Passive trades to trade in the opposite direction from price changes. To this end, we estimate the regression equation

Δyt = α + φ · Δyt-1 + δ · yt-1 + Σi=0..20 βi · [Δpt-i / 0.25] + εt

(where yt and Δyt denote inventories and changes in inventories of High Frequency Traders for each second of a trading day; t = 0 corresponds to the opening of stock trading on the NYSE at 8:30:00 a.m. CT (9:30:00 a.m. ET) and t = 24,300 denotes the close of Globex at 15:15:00 CT (4:15 p.m. ET); and Δpt denotes the price change in index point units between the high-low midpoint of second t-1 and the high-low midpoint of second t. We thus regress second-by-second changes in inventory levels of High Frequency Traders on the level of their inventories in the previous second, the change in their inventory levels in the previous second, the change in prices during the current second, and lagged price changes for each of the 20 previous seconds.)

for Passive and Aggressive inventory changes separately.
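As a rough illustration of how a regression of this form can be estimated, the numpy sketch below builds the design matrix (a constant, the lagged inventory change, the lagged inventory level, and the contemporaneous plus 20 lagged tick-scaled price changes) and fits it by least squares. The inventory and price series here are synthetic placeholders, not the paper's audit-trail data.

```python
import numpy as np

# A rough least-squares sketch of the regression above, on synthetic data.
rng = np.random.default_rng(0)
T, LAGS, TICK = 24_300, 20, 0.25      # seconds per day, price lags, tick size

y = rng.normal(0.0, 50.0, T)                  # hypothetical inventory levels
dp = rng.choice([-TICK, 0.0, TICK], T)        # hypothetical midpoint price changes
dy = np.diff(y, prepend=0.0)                  # second-by-second inventory changes

# Design matrix for:
#   dy_t = alpha + phi*dy_{t-1} + delta*y_{t-1}
#          + sum_{i=0..20} beta_i*(dp_{t-i}/0.25) + e_t
X = np.column_stack(
    [np.ones(T - LAGS - 1),
     dy[LAGS:T - 1],                          # dy_{t-1}
     y[LAGS:T - 1],                           # y_{t-1}
     *[dp[LAGS + 1 - i:T - i] / TICK for i in range(LAGS + 1)]]  # dp_{t-i}/0.25
)
coef, *_ = np.linalg.lstsq(X, dy[LAGS + 1:], rcond=None)
alpha, phi, delta, betas = coef[0], coef[1], coef[2], coef[3:]
print(X.shape, len(betas))  # 21 betas: contemporaneous plus 20 lags
```

In the paper's setting this estimation is run four times, once for each dependent variable (Aggressive and Passive net volume, for High Frequency Traders and for Market Makers).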

[Table: Regressions of Aggressive and Passive inventory changes on lagged inventories, lagged inventory changes, and lagged price changes]

The table above presents the regression results for the two components of the change in holdings regressed on lagged inventory, the lagged change in holdings, and lagged price changes over one-second intervals. Panel A and Panel B report the results for May 3-5 and May 6, respectively. Each panel has four columns, reporting estimated coefficients where the dependent variables are net Aggressive volume (Aggressive buys minus Aggressive sells) by High Frequency Traders (∆AHFT), net Passive volume by High Frequency Traders (∆PHFT), net Aggressive volume by Market Makers (∆AMM), and net Passive volume by Market Makers (∆PMM).

We observe that for lagged inventories (NPHFTt−1), the estimated coefficients for Aggressive and Passive trades by High Frequency Traders are δAHFT = −0.005 (t = −9.55) and δPHFT = −0.001 (t = −3.13), respectively. These coefficient estimates have the interpretation that High Frequency Traders use Aggressive trades more intensively than Passive trades to liquidate inventories. In contrast, the results for Market Makers are very different. For lagged inventories (NPMMt−1), the estimated coefficients for Aggressive and Passive volume by Market Makers are δAMM = −0.002 (t = −6.73) and δPMM = −0.002 (t = −5.26), respectively. The similarity of these coefficient estimates has the interpretation that Market Makers favor neither Aggressive trades nor Passive trades when liquidating inventories.

For contemporaneous price changes (in the current second) (∆Pt), the estimated coefficients for Aggressive and Passive volume by High Frequency Traders are β0 = 57.78 (t = 31.94) and β0 = −25.69 (t = −28.61), respectively. For Market Makers, the estimated coefficients for Aggressive and Passive trades are β0 = 6.38 (t = 18.51) and β0 = −19.92 (t = −37.68). These estimated coefficients have the interpretation that in seconds in which prices move up one tick, High Frequency Traders are net buyers of about 58 contracts with Aggressive trades and net sellers of about 26 contracts with Passive trades in that same second, while Market Makers are net buyers of about 6 contracts with Aggressive trades and net sellers of about 20 contracts with Passive trades. High Frequency Traders and Market Makers are similar in that they both use Aggressive trades to trade in the direction of price changes, and both use Passive trades to trade against the direction of price changes. High Frequency Traders and Market Makers are different in that Aggressive net purchases by High Frequency Traders are greater in magnitude than their Passive net purchases, while the reverse is true for Market Makers.
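The magnitudes above follow directly from the tick scaling in the regression: because the price-change regressors are divided by the 0.25 tick size, each β0 reads as net contracts traded per one-tick move. A quick check using the coefficient values quoted above:

```python
# Coefficient estimates quoted in the text (net contracts per one-tick move).
beta0_aggressive_hft, beta0_passive_hft = 57.78, -25.69
beta0_aggressive_mm, beta0_passive_mm = 6.38, -19.92

one_tick = 0.25 / 0.25  # regressors are already scaled to ticks
print(round(beta0_aggressive_hft * one_tick))  # about 58 contracts bought Aggressively
print(round(beta0_passive_hft * one_tick))     # about -26 contracts (sold Passively)
print(round(beta0_passive_mm * one_tick))      # about -20 contracts (sold Passively)
```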

For lagged price changes, coefficient estimates for Aggressive trades by High Frequency Traders and Market Makers are positive and statistically significant at lags 1-4 and lags 1-10, respectively. These results have the interpretation that both High Frequency Traders and Market Makers trade on recent price momentum, but the trading is compressed into a shorter time frame for High Frequency Traders than for Market Makers.

For lagged price changes, coefficient estimates for Passive volume by High Frequency Traders and Market Makers are negative and statistically significant at lag 1 and lags 1-3, respectively.

Panel B of the Table presents results for May 6. Similar to May 3-5, High Frequency Traders tend to use Aggressive trades more intensively than Passive trades to liquidate inventories, while Market Makers do not show this pattern. Also similar to May 3-5, High Frequency Traders and Market Makers use Aggressive trades to trade in the contemporaneous direction of price changes and use Passive trades to trade in the direction opposite to price changes, with Aggressive trading greater than Passive trading for High Frequency Traders and the reverse for Market Makers. In comparison with May 3-5, the coefficients are smaller in magnitude on May 6, indicating reduced liquidity at each tick. For lagged price changes, the coefficients associated with Aggressive trading by High Frequency Traders change from positive to negative at lags 1-4, and the coefficients associated with Aggressive trading by Market Makers change from being positive and statistically significant at lags 1-10 to being positive and statistically significant only at lags 1-3. These results illustrate the accelerated trading velocity in the volatile market conditions of May 6.

We further examine how high frequency trading activity is related to market prices. The figure below illustrates how prices change after HFT trading activity in a given second. For an “event” second in which High Frequency Traders are net buyers, net Aggressive buyers, or net Passive buyers, the value-weighted average price paid by the High Frequency Traders in that second is subtracted from the value-weighted average price for all trades in the same second and in each of the following 20 seconds. The results are averaged across event seconds, weighted by the magnitude of High Frequency Traders’ net position change in the event second. The upper-left panel presents results for buy trades on May 3-5, the upper-right panel presents results for buy trades on May 6, and the lower-left and lower-right panels present corresponding results for sell trades. Price differences on the vertical axis are scaled so that one unit equals one tick ($12.50 per contract).

[Figure: Price changes in the 20 seconds following seconds in which High Frequency Traders are net buyers or net sellers]

When High Frequency Traders are net buyers on May 3-5, prices rise by 17% of a tick in the next second. When HFTs execute Aggressively or Passively, prices rise by 20% and 2% of a tick in the next second, respectively. In all cases, prices then trend downward by about 5% of a tick over the subsequent 19 seconds. For May 3-5, the results for selling are almost symmetric.

When High Frequency Traders are buying on May 6, prices increase by 7% of a tick in the next second. When they are Aggressive buyers or Passive buyers, prices increase by 25% of a tick or decrease by 5% of a tick in the next second, respectively. In subsequent seconds, prices generally tend to drift downwards. The downward drift is especially pronounced after Passive buying, consistent with the interpretation that High Frequency Traders were “run over” when their resting limit buy orders were executed in the down phase of the Flash Crash. When High Frequency Traders are net sellers, the results after one second are analogous to buying. After Aggressive selling, prices continue to drift down for 20 seconds, consistent with the interpretation that High Frequency Traders made profits from Aggressive sales during the down phase of the Flash Crash.
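The event-study averaging behind these figures can be sketched as follows. All per-second series here (market value-weighted prices, HFT value-weighted prices, and HFT net position changes) are hypothetical synthetic placeholders, not the actual audit-trail data.

```python
import numpy as np

# Hypothetical per-second series standing in for the audit-trail data.
rng = np.random.default_rng(1)
T, H, TICK = 1_000, 20, 0.25
vwap = 1150.0 + TICK * np.cumsum(rng.choice([-1, 0, 1], T))  # market VWAP per second
hft_vwap = vwap + rng.normal(0.0, 0.05, T)                   # HFT VWAP per second
net = rng.integers(-100, 101, T)                             # HFT net position change

events = np.flatnonzero(net > 0)     # event seconds in which HFTs are net buyers
events = events[events < T - H]      # keep events with 20 following seconds
weights = net[events].astype(float)  # weight by size of the net position change

# Price path: market VWAP in seconds t..t+20 minus HFT VWAP in event second t,
# value-weighted across events and expressed in ticks.
path = np.array([
    np.average(vwap[events + h] - hft_vwap[events], weights=weights) / TICK
    for h in range(H + 1)
])
print(path.shape)  # one averaged price difference per horizon 0..20
```

The sell-side panels are computed analogously, conditioning on seconds in which HFTs are net sellers.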

Testing Hypothesis H3. Both HFTs and Market Makers scratch trades; HFTs scratch more. A textbook market maker will try to buy at the bid price, sell at the offer price, and capture the bid-ask spread as a profit. Sometimes, after buying at the bid price, market prices begin to fall before the market maker can make a one-tick profit by selling his inventory at the best offer price. To avoid taking losses in this situation, one component of a traditional market-making strategy is to “scratch” trades in the presence of changing market conditions by quickly liquidating a position at the same price at which it was acquired. These scratched trades represent inventory management trades designed to lower the cost of adverse selection. Since many competing market makers may try to scratch trades at the same time, traders with the lowest latency will tend to be more successful in their attempts to scratch trades and thus more successful in their ability to avoid losses when market conditions change.

To examine whether and to what extent traders engage in trade scratching, we sequence each trader’s trades for the day using audit trail sequence numbers, which not only sort trades by second but also sort trades chronologically within each second. We define an “immediately scratched trade” as a trade with the properties that the next trade in the sorted sequence (1) occurred in the same second, (2) was executed at the same price, and (3) was in the opposite direction, i.e., a buy followed by a sell or a sell followed by a buy. For each of the trading accounts in our sample, we calculate the number of immediately scratched trades, then compare the number of scratched trades across the six trader categories.
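The scratch-detection rule above is straightforward to implement. The trade representation below is an assumption for illustration: an audit-trail sequence number, a second timestamp, a price, and a signed side (+1 for buy, -1 for sell).

```python
from collections import namedtuple

# Hypothetical trade record: audit-trail sequence, second, price, signed side.
Trade = namedtuple("Trade", "seq second price side")

def count_immediately_scratched(trades):
    """Count trades whose next trade in audit-trail order is in the same
    second, at the same price, and in the opposite direction."""
    ordered = sorted(trades, key=lambda t: t.seq)  # chronological within second
    return sum(
        1
        for a, b in zip(ordered, ordered[1:])
        if b.second == a.second and b.price == a.price and b.side == -a.side
    )

# One account's trades: a buy scratched by an immediate same-price sell,
# then an unrelated buy in the next second.
tape = [
    Trade(1, 100, 1150.25, +1),
    Trade(2, 100, 1150.25, -1),  # scratches trade 1
    Trade(3, 101, 1150.50, +1),
]
print(count_immediately_scratched(tape))  # 1
```

Running this count per account and aggregating by trader category yields the comparison in the table below.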

The results of this analysis are presented in the table below. Panel A provides results for May 3-5 and panel B for May 6. In each panel, there is one row of data for each trader category. The first three columns report the total number of trades, the total number of immediately scratched trades, and the percentage of trades that are immediately scratched by traders in each category. For May 3-5, the reported numbers are pooled across the three days.

[Table: Immediately scratched trades by trader category]

This table presents statistics for immediate trade scratching, which measures how many times a trader changes his or her direction of trading at the same price level within a second, aggregated over a day. We define a trade direction change as a buy trade immediately following a sell trade, or vice versa, at the same price level in the same second.

This table shows that High Frequency Traders scratched 2.84% of trades on May 3-5 and 4.26% on May 6, while Market Makers scratched 2.49% of trades on May 3-5 and 5.53% of trades on May 6. Although the percentage of immediately scratched trades by Market Makers is slightly higher than that for High Frequency Traders on May 6, the percentages for both groups are very similar. The fourth, fifth, and sixth columns of the Table report the mean, standard deviation, and median of the number of scratched trades for the traders in each category.

Although the percentages of scratched trades are similar, the mean number of immediately scratched trades by High Frequency Traders is much greater than for Market Makers: 540.56 per day on May 3-5 and 1610.75 on May 6 for High Frequency Traders versus 13.35 and 72.92 for Market Makers. The differences between High Frequency Traders and Market Makers reflect differences in volume traded. The Table shows that High Frequency Traders and Market Makers scratch a significantly larger percentage of their trades than other trader categories.