Black Hole Complementarity: The Case of the Infalling Observer

The four postulates of black hole complementarity are:

Postulate 1: The process of formation and evaporation of a black hole, as viewed by a distant observer, can be described entirely within the context of standard quantum theory. In particular, there exists a unitary S-matrix which describes the evolution from infalling matter to outgoing Hawking-like radiation.

Postulate 2: Outside the stretched horizon of a massive black hole, physics can be described to good approximation by a set of semi-classical field equations.

Postulate 3: To a distant observer, a black hole appears to be a quantum system with discrete energy levels. The dimension of the subspace of states describing a black hole of mass M is the exponential of the Bekenstein entropy S(M).

We take as implicit in postulate 2 that the semi-classical field equations are those of a low energy effective field theory with local Lorentz invariance. These postulates do not refer to the experience of an infalling observer, but there is a stated ‘certainty,’ which for uniformity we label as a further postulate:

Postulate 4: A freely falling observer experiences nothing out of the ordinary when crossing the horizon.

To be more specific, we will assume that postulate 4 means both that any low-energy dynamics this observer can probe near his worldline is well-described by familiar Lorentz-invariant effective field theory, and that the probability for an infalling observer to encounter a quantum with energy E ≫ 1/r_s (measured in the infalling frame) is suppressed by an exponentially decreasing adiabatic factor, as predicted by quantum field theory in curved spacetime. We will argue that postulates 1, 2, and 4 are not consistent with one another for a sufficiently old black hole.

Consider a black hole that forms from collapse of some pure state and subsequently decays. Dividing the Hawking radiation into an early part and a late part, postulate 1 implies that the state of the Hawking radiation is pure,

|Ψ⟩ = ∑_i |i⟩_E ⊗ |i⟩_L —– (1)

Here we have taken an arbitrary complete basis |i⟩L for the late radiation. We use postulates 1, 2, and 3 to make the division after the Page time when the black hole has emitted half of its initial Bekenstein-Hawking entropy; we will refer to this as an ‘old’ black hole. The number of states in the early subspace will then be much larger than that in the late subspace and, as a result, for typical states |Ψ⟩ the reduced density matrix describing the late-time radiation is close to the identity. We can therefore construct operators acting on the early radiation, whose action on |Ψ⟩ is equal to that of a projection operator onto any given subspace of the late radiation.
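
As a quick numerical illustration of this claim (not part of the original argument; the dimensions, seed, and random state are arbitrary stand-ins), a random bipartite pure state with dim(E) ≫ dim(L) has a late-radiation reduced density matrix close to maximally mixed:

```python
import numpy as np

# Toy check of the Page-type statement above: for a random pure state on
# H_E (x) H_L with dim(E) >> dim(L), the reduced state of the late
# radiation is close to maximally mixed. Dimensions are illustrative.
rng = np.random.default_rng(0)
d_E, d_L = 512, 4

# Random pure state |Psi>, stored as a d_E x d_L matrix of amplitudes.
psi = rng.normal(size=(d_E, d_L)) + 1j * rng.normal(size=(d_E, d_L))
psi /= np.linalg.norm(psi)

# Reduced density matrix of the late radiation: rho_L = psi^dagger psi.
rho_L = psi.conj().T @ psi

# Frobenius distance from the maximally mixed state I / d_L.
dist = np.linalg.norm(rho_L - np.eye(d_L) / d_L)
print(f"||rho_L - I/d_L|| = {dist:.4f}")   # small, shrinking as d_E grows
```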

To simplify the discussion, we treat gray-body factors by taking the transmission coefficients T to have unit magnitude for a few low partial waves and to vanish for higher partial waves. Since the total radiated energy is finite, this allows us to think of the Hawking radiation as defining a finite-dimensional Hilbert space.

Now, consider an outgoing Hawking mode in the later part of the radiation. We take this mode to be a localized packet with width of order r_s, corresponding to a superposition of frequencies O(r_s^{-1}). Note that postulate 2 allows us to assign a unique observer-independent lowering operator b to this mode. We can project onto eigenspaces of the number operator b†b. In other words, an observer making measurements on the early radiation can know the number of photons that will be present in a given mode of the late radiation.

Following postulate 2, we can now relate this Hawking mode to one at earlier times, as long as we stay outside the stretched horizon. The earlier mode is blue-shifted, and so may have frequency ω_* much larger than O(r_s^{-1}), though still sub-Planckian.

Next consider an infalling observer and the associated set of infalling modes with lowering operators a. Hawking radiation arises precisely because

b = ∫_0^∞ dω (B(ω) a_ω + C(ω) a_ω†) —– (2)

so that the full state cannot be both an a-vacuum (a_ω|Ψ⟩ = 0) and a b†b eigenstate. Here again we have used our simplified gray-body factors.
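
To see this concretely: in the a-vacuum the expected occupation of the b mode is ⟨Ψ|b†b|Ψ⟩ = ∫_0^∞ dω |C(ω)|² > 0, which encodes Hawking's thermal occupancy. A state annihilated by every a_ω therefore cannot also have definite b†b occupation number zero.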

The application of postulates 1 and 2 has thus led to the conclusion that the infalling observer will encounter high-energy modes. Note that the infalling observer need not have actually made the measurement on the early radiation: to guarantee the presence of the high energy quanta it is enough that it is possible, just as shining light on a two-slit experiment destroys the fringes even if we do not observe the scattered light. Here we make the implicit assumption that the measurements of the infalling observer can be described in terms of an effective quantum field theory. Instead we could simply suppose that if he chooses to measure b†b he finds the expected eigenvalue, while if he measures the noncommuting operator a†a instead he finds the expected vanishing value. But this would be an extreme modification of the quantum mechanics of the observer, and does not seem plausible.

The figure below gives a pictorial summary of our argument, using ingoing Eddington-Finkelstein coordinates. The support of the mode b is shaded. At large distance it is a well-defined Hawking photon, in a predicted eigenstate of b†b by postulate 1. The observer encounters it when its wavelength is much shorter: the field must be in the ground state, a_ω†a_ω = 0, by postulate 4, and so cannot be in an eigenstate of b†b. But by postulate 2, the evolution of the mode outside the horizon is essentially free, so this is a contradiction.


Figure: Eddington-Finkelstein coordinates, showing the infalling observer encountering the outgoing Hawking mode (shaded) at a time when its size is ω_*^{-1} ≪ r_s. If the observer’s measurements are given by an eigenstate of a†a, postulate 1 is violated; if they are given by an eigenstate of b†b, postulate 4 is violated; if the result depends on when the observer falls in, postulate 2 is violated.

To restate our paradox in brief, the purity of the Hawking radiation implies that the late radiation is fully entangled with the early radiation, and the absence of drama for the infalling observer implies that it is fully entangled with the modes behind the horizon. This is tantamount to cloning. For example, it violates strong subadditivity of the entropy,

S_AB + S_BC ≥ S_B + S_ABC —– (3)

Let A be the early Hawking modes, B be the outgoing Hawking mode, and C be its interior partner mode. For an old black hole, the entropy is decreasing and so S_AB < S_A. The absence of infalling drama means that S_BC = 0 and so S_ABC = S_A. Strong subadditivity then gives S_AB ≥ S_B + S_A, and combining with S_AB < S_A yields S_A > S_B + S_A, which fails substantially since the density matrix for system B by itself is thermal.

Actually, assuming the Page argument, the inequality is violated even more strongly: for an old black hole the entropy decrease is maximal, S_AB = S_A − S_B, so that we get from strong subadditivity that S_A ≥ 2S_B + S_A.
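
As a sanity check on this bookkeeping, here is a minimal numerical sketch (qubits stand in for the modes; not from the original paper): a drama-free state in which B is maximally entangled with C satisfies strong subadditivity, as any actual state must, but then S_AB = S_A + S_B, so it cannot also satisfy the purity requirement S_AB < S_A.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S = -Tr(rho ln rho), natural log."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def ptrace(rho, dims, keep):
    """Partial trace of rho over all subsystems not listed in `keep`."""
    n = len(dims)
    t = rho.reshape(dims + dims)
    cur = n
    for k in sorted(set(range(n)) - set(keep), reverse=True):
        t = np.trace(t, axis1=k, axis2=k + cur)   # trace out subsystem k
        cur -= 1
    d = int(np.prod([dims[i] for i in keep]))
    return t.reshape(d, d)

# Drama-free toy state: B maximally entangled with its interior partner C,
# A (the early radiation) in an unrelated pure state; qubits for simplicity.
a_state = np.array([1.0, 1.0]) / np.sqrt(2)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # |00> + |11> on B, C
psi = np.kron(a_state, bell)                          # subsystem order: A, B, C
rho = np.outer(psi, psi.conj())
dims = [2, 2, 2]

S = lambda keep: vn_entropy(ptrace(rho, dims, keep))
S_A, S_B = S([0]), S([1])
S_AB, S_BC, S_ABC = S([0, 1]), S([1, 2]), S([0, 1, 2])

# Strong subadditivity holds, as it must for any actual state...
assert S_AB + S_BC >= S_B + S_ABC - 1e-9
# ...but S_AB = S_A + S_B: a B fully entangled with C carries no
# entanglement with A, so the purity requirement S_AB < S_A fails.
print(f"S_AB = {S_AB:.3f}, S_A + S_B = {S_A + S_B:.3f}")
```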

Note that the measurement of N_b takes place entirely outside the horizon, while the measurement of N_a (real excitations above the infalling vacuum) must involve a region that extends over both sides of the horizon. These are noncommuting measurements, but by measuring N_b the observer can infer something about what would have happened if N_a had been measured instead. For an analogy, consider a set of identically prepared spins. If each is measured along the x-axis and found to be +1/2, we can infer that a measurement along the z-axis would have had equal probability to return +1/2 and −1/2. The multiple spins are needed to reduce statistical variance; similarly, in our case the observer would need to measure several modes N_b to have confidence that he was actually entangled with the early radiation.

One might ask whether there is a possible loophole in the argument: a physical observer will have a nonzero mass, and so the mass and entropy of the black hole will increase after he falls in. However, we may choose to consider a particular Hawking wavepacket which is already separated from the stretched horizon by a finite amount when it is encountered by the infalling observer. Thus by postulate 2 the further evolution of this mode is semiclassical and not affected by the subsequent merging of the observer with the black hole. In making this argument we are also assuming that the dynamics of the stretched horizon is causal.
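
A minimal numerical rendering of the spin analogy above (illustrative only):

```python
import numpy as np

# The spin analogy in miniature: a spin prepared in the +x eigenstate.
# An x measurement always returns +1/2, and from that preparation alone
# we can infer the z statistics without ever measuring z.
plus_x = np.array([1.0, 1.0]) / np.sqrt(2)   # eigenstate of sigma_x, +1/2
p_z_up = abs(plus_x[0]) ** 2                 # Born rule for z = +1/2
p_z_down = abs(plus_x[1]) ** 2               # Born rule for z = -1/2
print(p_z_up, p_z_down)                      # 0.5 0.5
```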

Thus far the asymptotically flat discussion applies to a black hole that is older than the Page time; we needed this in order to frame a sharp paradox using the entanglement with the Hawking radiation. However, we are discussing what should be intrinsic properties of the black hole, not dependent on its entanglement with some external system. After the black hole scrambling time, almost every small subsystem of the black hole is in an almost maximally mixed state. So if the degrees of freedom sampled by the infalling observer can be considered typical, then they are ‘old’ in an intrinsic sense. Our conclusions should then hold. If the black hole is a fast scrambler, the scrambling time is r_s ln(r_s/l_P), after which we have to expect either drama for the infalling observer or novel physics outside the black hole.
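
For a sense of scale, a back-of-the-envelope evaluation of this scrambling time for a solar-mass black hole (constants rounded, restoring factors of c):

```python
import math

# Fast-scrambling time t ~ (r_s / c) * ln(r_s / l_P) for a solar-mass
# black hole. All constants are approximate, for orientation only.
c = 3.0e8         # speed of light, m/s
r_s = 2.95e3      # Schwarzschild radius of the sun, m
l_P = 1.6e-35     # Planck length, m

t_scr = (r_s / c) * math.log(r_s / l_P)
print(f"scrambling time ~ {t_scr:.1e} s")   # on the order of a millisecond
```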

We note that the three postulates that are in conflict (purity of the Hawking radiation, absence of infalling drama, and semiclassical behavior outside the horizon) are widely held even by those who do not explicitly label them as ‘black hole complementarity.’ For example, one might imagine that if some tunneling process were to cause a shell of branes to appear at the horizon, an infalling observer would just go ‘splat,’ and of course postulate 4 would not hold.

High Frequency Markets and Leverage


Leverage effect is a well-known stylized fact of financial data. It refers to the negative correlation between price returns and volatility increments: when the price of an asset is increasing, its volatility drops, while when it decreases, the volatility tends to become larger. The name “leverage” comes from the following interpretation of this phenomenon: when an asset price declines, the associated company automatically becomes more leveraged, since the ratio of its debt to its equity value becomes larger. Hence the risk of the asset, namely its volatility, should increase. Another economic interpretation of the leverage effect, inverting the causality, is that a forecast of increased volatility should be compensated by a higher rate of return, which can only be obtained through a decrease in the asset value.
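
A minimal sketch of how one would estimate this correlation from a price series (the synthetic `prices` array is a placeholder; on real equity data the correlation comes out negative):

```python
import numpy as np

# Minimal sketch of measuring leverage effect: the correlation between
# returns and subsequent volatility increments. `prices` is a synthetic
# placeholder series; real equity data would give a negative value.
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(2000)))

returns = np.diff(np.log(prices))

# Crude spot-volatility proxy: rolling standard deviation of returns.
window = 20
vol = np.array([returns[i:i + window].std()
                for i in range(len(returns) - window + 1)])
dvol = np.diff(vol)                          # volatility increments

# Pair the return ending each window with the next volatility increment.
lev = np.corrcoef(returns[window - 1:window - 1 + len(dvol)], dvol)[0, 1]
print(f"leverage correlation: {lev:+.3f}")   # ~0 here; < 0 on real data
```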

Statistical methods exploiting high frequency data have been developed to measure volatility. In financial engineering, it became clear in the late eighties that it is necessary to include leverage effect in derivatives pricing frameworks in order to reproduce accurately the behavior of the implied volatility surface. This led to the rise of the famous stochastic volatility models, in which the Brownian motion driving the volatility is (negatively) correlated with the one driving the price.
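
As a sketch of what such a model looks like, here is a bare-bones Euler scheme for a Heston-type stochastic volatility model with negatively correlated drivers (the model choice and all parameter values are illustrative, not taken from the text):

```python
import numpy as np

# Euler scheme for a Heston-type stochastic volatility model. The
# volatility Brownian is negatively correlated (rho < 0) with the price
# Brownian, which is what produces leverage effect in these models.
rng = np.random.default_rng(2)
T, n = 1.0, 252
dt = T / n
kappa, theta, xi, rho = 2.0, 0.04, 0.3, -0.7   # illustrative parameters
S, V = 100.0, 0.04                              # initial price and variance

for _ in range(n):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()
    S *= np.exp(-0.5 * V * dt + np.sqrt(V * dt) * z1)
    # Full truncation keeps the variance non-negative.
    V = max(V + kappa * (theta - V) * dt + xi * np.sqrt(V * dt) * z2, 0.0)

print(f"terminal price {S:.2f}, terminal variance {V:.4f}")
```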

Traditional explanations for leverage effect are based on “macroscopic” arguments from financial economics. Could microscopic interactions between agents naturally lead to leverage effect at larger time scales? We would like to know whether part of the foundations for leverage effect could be microstructural. To do so, our idea is to consider a very simple agent-based model, encoding well-documented and understood behaviors of market participants at the microscopic scale. We then aim to show that in the long run, this model leads to price dynamics exhibiting leverage effect. This would demonstrate that typical strategies of market participants at the high frequency level naturally induce leverage effect.

One could argue that, since transactions take place at the finest frequencies and prices are revealed through order book type mechanisms, it is obvious that leverage effect arises from high frequency properties. We do not claim, however, that leverage effect is fully explained by high frequency features; rather, under certain market conditions, typical high frequency behaviors, probably having no connection with the concepts of financial economics, can give rise to leverage effect at lower frequency scales.

Another important stylized fact of financial data is the rough nature of the volatility process. Indeed, for a very wide range of assets, historical volatility time-series exhibit a behavior which is much rougher than that of a Brownian motion. More precisely, the dynamics of the log-volatility are typically very well modeled by a fractional Brownian motion with Hurst parameter around 0.1, that is, a process with Hölder regularity of order 0.1. Furthermore, using a fractional Brownian motion with small Hurst index also makes it possible to reproduce very accurately the features of the volatility surface.
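
A minimal sketch of both points, assuming a Cholesky construction of fractional Gaussian noise (sizes kept small since Cholesky is O(n³)): simulate fBm with H = 0.1 and recover H from the scaling of second moments of increments:

```python
import numpy as np

# Simulate fractional Brownian motion with H = 0.1 via Cholesky of the
# fractional Gaussian noise covariance, then estimate H from the scaling
# E|X_{t+lag} - X_t|^2 ~ lag^{2H}. Sizes and seed are illustrative.
rng = np.random.default_rng(3)
H, n = 0.1, 1000

k = np.arange(n)
gamma = 0.5 * (np.abs(k + 1)**(2 * H) + np.abs(k - 1)**(2 * H)
               - 2 * np.abs(k)**(2 * H))
cov = gamma[np.abs(k[:, None] - k[None, :])]        # Toeplitz fGn covariance
fgn = np.linalg.cholesky(cov) @ rng.standard_normal(n)
fbm = np.cumsum(fgn)                                # 'log-volatility' proxy

lags = np.array([1, 2, 4, 8, 16, 32])
m2 = [np.mean((fbm[lag:] - fbm[:-lag])**2) for lag in lags]
slope = np.polyfit(np.log(lags), np.log(m2), 1)[0]  # slope = 2H in log-log
print(f"estimated H ~ {slope / 2:.2f}")             # roughly 0.1
```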


The fact that for basically all reasonably liquid assets, volatility is rough, with the same order of magnitude for the roughness parameter, is of course very intriguing. The tick-by-tick price model is based on a bi-dimensional Hawkes process, that is, a bivariate point process (N_t^+, N_t^−)_{t≥0} taking values in (ℝ^+)² and with intensity (λ_t^+, λ_t^−) of the form

λ_t^+ = μ^+ + ∫_0^t φ_1(t−s) dN_s^+ + ∫_0^t φ_2(t−s) dN_s^−
λ_t^− = μ^− + ∫_0^t φ_3(t−s) dN_s^+ + ∫_0^t φ_4(t−s) dN_s^−

Here μ^+ and μ^− are positive constants and the functions (φ_i)_{i=1,…,4} are non-negative; the 2×2 matrix built from them is called the kernel matrix. Hawkes processes are said to be self-exciting, in the sense that the instantaneous jump probability depends on the locations of past events. Hawkes processes are nowadays in standard use in finance, not only in the field of microstructure but also in risk management and contagion modeling. The Hawkes process generates behavior that mimics financial data in a pretty impressive way, and back-fitting yields correspondingly good results. Yet some key problems remain the same whether you use a simple Brownian motion model or this marvelous technical apparatus.

In short, back-fitting only goes so far.

  • The essentially random nature of living systems can lead to entirely different outcomes if that randomness had struck at some other point in time or with some other magnitude. Due to randomness, entirely different groups would likely succeed and fail every time the “clock” was turned back to time zero and the system allowed to unfold all over again. Goldman Sachs would not be the “vampire squid”. The London whale would never have been. This will boggle the mind if you let it.

  • Extraction of unvarying physical laws governing a living system from data is in many cases NP-hard. There are far too many varieties of actors and interactions for the exercise to be tractable.

  • Even granting the possibility of their extraction, the components of a living system are not fixed and not subject to unvarying physical laws – not even probability laws.

  • The conscious behavior of some actors in a financial market can change the rules of the game, some of those rules some of the time, or completely rewire the system from the bottom up. This is really just an extension of the previous point.

  • Natural mutations lead markets to rework their laws over time through an evolutionary process, with never a thought of doing so.


Thus, in this approach, N_t^+ corresponds to the number of upward jumps of the asset price in the time interval [0,t] and N_t^− to the number of downward jumps, so that, up to the tick size, the price at time t is N_t^+ − N_t^−. Hence, the instantaneous probability of an upward (downward) jump depends on the arrival times of the past upward and downward jumps. Furthermore, by construction, the price process lives on a discrete grid, which is obviously a crucial feature of high frequency prices in practice.
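
A minimal simulation sketch of this price model, using Ogata's thinning algorithm with exponential kernels (the kernel amplitudes, baselines, and horizon are illustrative choices, not fitted values):

```python
import numpy as np

# Bivariate Hawkes price model simulated by Ogata's thinning algorithm,
# with exponential kernels phi(t) = alpha * exp(-beta * t). Parameters
# are illustrative and chosen to satisfy the stability condition.
rng = np.random.default_rng(4)
mu = np.array([0.5, 0.5])                # baseline intensities (mu+, mu-)
alpha = np.array([[0.0, 0.6],            # kernel amplitudes: cross terms
                  [0.6, 0.0]])           # dominate, mimicking mean reversion
beta, T = 1.0, 200.0

def intensities(t, events):
    """lambda+(t), lambda-(t) given past event times (naive O(N) sums)."""
    lam = mu.copy()
    for j in (0, 1):
        for s in events[j]:
            lam += alpha[:, j] * np.exp(-beta * (t - s))
    return lam

events = ([], [])                        # up-jump times, down-jump times
t = 0.0
while t < T:
    lam_bar = intensities(t, events).sum()   # bound: intensity only decays
    t += rng.exponential(1.0 / lam_bar)      # candidate next event time
    lam = intensities(t, events)
    if rng.uniform() * lam_bar < lam.sum():  # thinning acceptance
        i = 0 if rng.uniform() * lam.sum() < lam[0] else 1
        events[i].append(t)

# The tick-level price: number of up jumps minus number of down jumps.
price = len(events[0]) - len(events[1])
print(f"{len(events[0])} up, {len(events[1])} down, P_T = {price} ticks")
```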

This simple tick-by-tick price model makes it easy to encode the following important stylized facts of modern electronic markets in the context of high frequency trading:

  1. Markets are highly endogenous, meaning that most of the orders have no real economic motivation but are rather sent by algorithms in reaction to other orders.
  2. Mechanisms preventing statistical arbitrage are at play on high frequency markets. Indeed, at the high frequency scale, building strategies that are on average profitable is hardly possible.
  3. There is some asymmetry in the liquidity on the bid and ask sides of the order book. This simply means that buying and selling are not symmetric actions. Consider for example a market maker whose inventory is typically positive. She is likely to raise the price by less following a buy order than she lowers it following a sell order of the same size. This is because her inventory becomes smaller after a buy order, which is a good thing for her, whereas it increases after a sell order.
  4. A significant proportion of transactions is due to large orders, called metaorders, which are not executed at once but split in time by trading algorithms.

In a Hawkes process framework, the first of these properties corresponds to the case of so-called nearly unstable Hawkes processes, that is, Hawkes processes for which the stability condition is almost saturated. This means that the spectral radius of the matrix of kernel integrals is smaller than, but close to, unity. The second and third properties impose a specific structure on the kernel matrix, and the fourth leads to functions φ_i with heavy tails.
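
For the exponential-kernel sketch above, the stability check is immediate; a nearly unstable regime corresponds to pushing the spectral radius toward one (the numbers below reuse the illustrative kernel from the simulation):

```python
import numpy as np

# Stability check for the exponential-kernel Hawkes sketch above: the
# integral of alpha_ij * exp(-beta * t) over [0, inf) is alpha_ij / beta,
# and stability requires the spectral radius of that matrix to be < 1.
alpha = np.array([[0.0, 0.6],
                  [0.6, 0.0]])
beta = 1.0

K = alpha / beta                               # matrix of kernel integrals
rho = max(abs(np.linalg.eigvals(K)))
print(f"spectral radius = {rho:.3f}")          # stable if < 1; 'nearly
                                               # unstable' means close to 1
```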