Regulating the Velocities of Dark Pools. Thought of the Day 72.0


On 22 September 2010, SEC chair Mary Schapiro signaled that US authorities were considering the introduction of regulations targeted at HFT:

…High frequency trading firms have a tremendous capacity to affect the stability and integrity of the equity markets. Currently, however, high frequency trading firms are subject to very little in the way of obligations either to protect that stability by promoting reasonable price continuity in tough times, or to refrain from exacerbating price volatility.

However, regulating an industry working towards moving as fast as the speed of light is no ordinary administrative task:

Modern finance is undergoing a fundamental transformation. Artificial intelligence, mathematical models, and supercomputers have replaced human intelligence, human deliberation, and human execution…. Modern finance is becoming cyborg finance – an industry that is faster, larger, more complex, more global, more interconnected, and less human.

C. W. Lin proposes a number of principles for regulating this cyborg finance industry:

  1. Update antiquated paradigms of reasonable investors and compartmentalised institutions, confront the emerging institutional realities, and recognise that the old paradigms of market governance may be ill-suited to the new finance industry;
  2. Enhance disclosure in ways that recognise the complexity and technological capacities of the new finance industry;
  3. Adopt regulations to moderate the velocities of finance, recognising that as these approach the speed of light they may carry more risks than rewards for the new finance industry;
  4. Introduce smarter coordination, harmonising financial regulation beyond traditional jurisdictional boundaries.

Electronic markets will require international coordination, surveillance and regulation. The high-frequency trading environment has the potential to generate errors and losses at a speed and magnitude far greater than that in a floor or screen-based trading environment… Moreover, issues related to risk management of these technology-dependent trading systems are numerous and complex and cannot be addressed in isolation within domestic financial markets. For example, placing limits on high-frequency algorithmic trading or restricting unfiltered sponsored access and co-location within one jurisdiction might only drive trading firms to another jurisdiction where controls are less stringent.

In these regulatory endeavours it will be vital to remember that not all innovation is intrinsically good, and that some may be inherently dangerous; the objective is to make a more efficient and equitable financial system, not simply a faster one: despite its fast computers and credit derivatives, the current financial system does not seem better at transferring funds from savers to borrowers than the financial system of 1910. Furthermore, as Thomas Piketty’s Capital in the Twenty-First Century amply demonstrates, any thought of a democratisation of finance induced by the huge expansion of superannuation funds, together with the increased access to finance afforded by credit cards and ATMs, is something of a fantasy, since levels of structural inequality have endured through these technological transformations. The tragedy is that under the guise of technological advance and sophistication we could be destroying the capacity of financial markets to fulfil their essential purpose, as Haldane eloquently states:

An efficient capital market transfers savings today into investment tomorrow and growth the day after. In that way, it boosts welfare. Short-termism in capital markets could interrupt this transfer. If promised returns the day after tomorrow fail to induce saving today, there will be no investment tomorrow. If so, long-term growth and welfare would be the casualty.


Causality


Causation is a form of event generation. To present an explicit definition of causation requires introducing some ontological concepts to formally characterize what is understood by ‘event’.

The concept of individual is the basic primitive concept of any ontological theory. Individuals associate with other individuals to yield new individuals. It follows that they satisfy a calculus, and that they are rigorously characterized only through the laws of such a calculus. These laws are set with the aim of reproducing the way real things associate. Specifically, it is postulated that every individual is an element of a set s such that the structure S = ⟨s, ◦, ◻⟩ is a commutative monoid of idempotents, i.e. a simple additive semigroup with a neutral element.

In the structure S, s is the set of all individuals, the element ◻ ∈ s is a fiction called the null individual, and the binary operation ◦ is the association of individuals. Although S is a mathematical entity, the elements of s are not, with the only exception of ◻, which is a fiction introduced to form a calculus. The association of any element of s with ◻ yields the same element. The following definitions characterize the composition of individuals.

1. x ∈ s is composed ⇔ (∃ y, z ∈ s)(x = y ◦ z)
2. x ∈ s is simple ⇔ ∼(∃ y, z ∈ s)(x = y ◦ z)
3. x ⊂ y ⇔ x ◦ y = y (x is part of y)
4. Comp(x) ≡ {y ∈ s | y ⊂ x} is the composition of x.
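One concrete model of this calculus (an illustrative assumption, not part of the ontology itself) takes individuals to be finite sets of simple constituents, with association as set union: union is commutative, associative and idempotent, and the empty set plays the role of the null individual ◻. A minimal sketch in Python:

```python
# Illustrative model: individuals as frozensets; association = set union.
# Union is commutative, associative and idempotent, and the empty set
# behaves as the null individual ◻ (a neutral element).

NULL = frozenset()            # the null individual ◻

def assoc(x, y):
    """Association of individuals: x ◦ y."""
    return x | y

def is_part_of(x, y):
    """x ⊂ y  ⇔  x ◦ y = y."""
    return assoc(x, y) == y

def composition(x, universe):
    """Comp(x) = {y ∈ universe | y ⊂ x}."""
    return {y for y in universe if is_part_of(y, x)}

a, b = frozenset({"a"}), frozenset({"b"})
ab = assoc(a, b)                        # a composed individual

assert assoc(ab, NULL) == ab            # ◻ is neutral
assert assoc(ab, ab) == ab              # idempotency
assert is_part_of(a, ab) and is_part_of(b, ab)
```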

Real things are distinguished from abstract individuals because they have a number of properties in addition to their capability of association. These properties can be intrinsic (Pi) or relational (Pr). The intrinsic properties are inherent and are represented by unary predicates, whereas relational properties depend upon more than a single thing and are represented by n-ary predicates with n ≥ 2. Examples of intrinsic properties are electric charge and rest mass, whereas the velocity of macroscopic bodies and volume are relational properties.

An individual together with its properties makes up a thing X: X = ⟨x, P(x)⟩

Here P(x) is the collection of properties of the individual x. A material thing is an individual with concrete properties, i.e. properties that can change in some respect.

The state of a thing X is a set of functions S(X) from a domain of reference M (a set that can be denumerable or nondenumerable) to the set of properties P(X). Every function in S(X) represents a property in P(X). The set of the physically accessible states of a thing X is the lawful state space of X: SL(X). The state of a thing is represented by a point in SL(X). A change of a thing is an ordered pair of states. Only changing things can be material. Abstract things cannot change, since they have only one state (their properties are fixed by definition).

A legal statement is a restriction upon the state functions of a given class of things. A natural law is a property of a class of material things represented by an empirically corroborated legal statement.

The ontological history h(X) of a thing X is the subset of M × SL(X) defined by h(X) = {⟨t, F(t)⟩ | t ∈ M}

where t is an element of some auxiliary set M, and F are the functions that represent the properties of X.

If a thing is affected by other things we can introduce the following definition:

h(Y/X): “history of the thing Y in the presence of the thing X”.

Let h(X) and h(Y) be the histories of the things X and Y, respectively. Then

h(Y/X) = {⟨t,H(t)⟩|t ∈ M},

where H ≠ F is the total state function of Y as affected by the existence of X, and F is the total state function of Y in the absence of X. The history of Y in the presence of X is thus different from the history of Y alone.

We can now introduce the notion of action:

X ▷ Y : “X acts on Y”

X ▷ Y =def h(Y/X) ≠ h(Y)

An event is a change of a thing X, i.e. an ordered pair of states:

(s1, s2) ∈ EL(X) = SL(X) × SL(X)

The space EL(X) is called the event space of X.

Causality is a relation between events, i.e. a relation between changes of states of concrete things. It is not a relation between things. Only the related concept of ‘action’ is a relation between things. Specifically,

C′(x): “an event in a thing x is caused by some unspecified event e^x_{xi}”.

C′(x) =def (∃ e^x_{xi}) [e^x_{xi} ∈ EL(x) ∧ xi ▷ x]

C(x, y): “an event in a thing x is caused by an event in a thing y”.

C(x, y) =def (∃ e^x_y) [e^x_y ∈ EL(x) ∧ y ▷ x]

In the above definitions, the notation e^x_y indicates in the superscript the thing x to whose event space the event e belongs, whereas the subscript denotes the thing whose action triggered the event. The implicit arguments of both C′ and C are events, not things. Causation is a form of event generation. The crucial point is that a given event in the lawful event space EL(x) is caused by an action of a thing y iff the event happens only conditionally on the action, i.e., e^x_y would not occur without an action of y upon x. Time does not appear in this definition, which permits causal relations in spacetimes without global time orientability, and even instantaneous, non-local causation. If causation can be non-local under some circumstances, e.g. when a quantum system is prepared in a specific state of polarization or spin, then quantum entanglement poses no problem for realism and determinism. Quantum theory would then describe an aspect of reality that is ontologically determined and non-locally related. Under no circumstances are the postulates of Special Relativity violated, since no physical system ever crosses the barrier of the speed of light.
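These definitions are simple enough to be played out computationally. The sketch below is an illustrative encoding, not part of the ontology: the reference domain M is taken to be a small finite set, states are plain values, a history is a map t ↦ F(t), and the action X ▷ Y is read off by comparing h(Y/X) with h(Y).

```python
# Illustrative encoding: histories as dicts from the reference set M to states.
# h(Y) is the history of Y alone; h(Y/X) the history of Y in the presence of X.
# X ▷ Y  =def  h(Y/X) ≠ h(Y)

M = range(5)                     # a finite reference domain (an assumption)

def history(state_fn):
    """h = {⟨t, F(t)⟩ | t ∈ M}, encoded as a dict t -> F(t)."""
    return {t: state_fn(t) for t in M}

def acts_on(h_y_given_x, h_y):
    """X ▷ Y  ⇔  h(Y/X) ≠ h(Y)."""
    return h_y_given_x != h_y

def events(h):
    """Events: ordered pairs of distinct successive states in a history."""
    ts = sorted(h)
    return [(h[a], h[b]) for a, b in zip(ts, ts[1:]) if h[a] != h[b]]

F = lambda t: 0                  # total state function of Y without X
H = lambda t: 0 if t < 3 else 1  # total state function of Y as affected by X

h_y, h_y_x = history(F), history(H)
print(acts_on(h_y_x, h_y))       # True: X acts on Y
print(events(h_y_x))             # [(0, 1)]: the event in Y triggered by X
```

Note that nothing in the comparison refers to the time ordering between X’s action and the event in Y, mirroring the time-free character of the definition above.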

Black Holes. Thought of the Day 23.0


The formation of black holes can be understood, at least partially, within the context of general relativity. According to general relativity, gravitational collapse leads to a spacetime singularity. But this spacetime singularity cannot be adequately described within general relativity, because the equivalence principle of general relativity is not valid for spacetime singularities; therefore, general relativity does not give a complete description of black holes. The same problem exists with regard to the postulated initial singularity of the expanding cosmos. In these cases, quantum mechanics and quantum field theory also reach their limit; they are not applicable to highly curved spacetimes. At a certain curvature scale (the famous Planck scale), gravity becomes as strong as the other interactions, and it is then no longer possible to ignore gravity in a quantum field theoretical description. So there exists no theory which is able to describe gravitational collapse, or which could explain why (although they are predicted by general relativity) spacetime singularities don’t happen, or why there is no spacetime singularity. And the real problems start if one brings general relativity and quantum field theory together to describe black holes. Then rather strange forms of contradiction arise, and the mutual conceptual incompatibility of general relativity and quantum field theory becomes very clear:

According to general relativity, black holes are surrounded by an event horizon. Material objects and radiation can enter the black hole, but nothing inside its event horizon can leave this region, because the gravitational pull is strong enough to hold back even radiation; the escape velocity is greater than the speed of light. Not even photons can leave a black hole. Black holes have a mass; in the case of the Schwarzschild metric, they have exclusively a mass. In the case of the Reissner-Nordström metric, they have a mass and an electric charge; in the case of the Kerr metric, they have a mass and an angular momentum; and in the case of the Kerr-Newman metric, they have mass, electric charge and angular momentum. These are, according to the no-hair theorem, all the characteristics a black hole has at its disposal. Let’s restrict the argument in the following to the Reissner-Nordström metric, in which a black hole has only mass and electric charge. In the classical picture, the electric charge of a black hole becomes noticeable in the form of a force exerted on an electrically charged probe outside its event horizon. In the quantum field theoretical picture, interactions are the result of the exchange of virtual interaction bosons; in the case of an electric charge, virtual photons. But how can photons be exchanged between an electrically charged black hole and an electrically charged probe outside its event horizon, if no photon can leave a black hole – which can be considered a definition of a black hole? One might think that virtual photons, mediating the electrical interaction, are possibly able (in contrast to real photons, representing radiation) to leave the black hole. But why? There is no good reason and no good answer for that within our present theoretical framework. The same problem exists for the gravitational interaction, for the gravitational pull of the black hole exerted on massive objects outside its event horizon, if the gravitational force is understood as an exchange of gravitons between massive objects, as the quantum field theoretical picture in its extrapolation to gravity suggests. How could (virtual) gravitons leave a black hole at all?
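The remark that the escape velocity at the horizon exceeds the speed of light can be made quantitative with a well-known heuristic (a Newtonian argument, not a derivation within general relativity): setting the escape velocity √(2GM/r) equal to c gives the Schwarzschild radius r_s = 2GM/c², which coincides with the horizon radius of the Schwarzschild metric. A quick numerical check in Python:

```python
# Schwarzschild radius r_s = 2GM/c^2: the radius at which the Newtonian
# escape velocity sqrt(2GM/r) reaches the speed of light.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Horizon radius (in metres) of a non-rotating, uncharged black hole."""
    return 2.0 * G * mass_kg / c**2

print(f"Sun   (1.989e30 kg): r_s ≈ {schwarzschild_radius(1.989e30):.0f} m")     # about 3 km
print(f"Earth (5.972e24 kg): r_s ≈ {schwarzschild_radius(5.972e24)*1e3:.1f} mm") # about 9 mm
```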

There are three possible scenarios resulting from the incompatibility of our assumptions about the characteristics of a black hole, based on general relativity, and on the picture quantum field theory draws with regard to interactions:

(i) Black holes don’t exist in nature. They are a theoretical artifact, demonstrating the asymptotic inadequacy of Einstein’s general theory of relativity. Only a quantum theory of gravity will explain where the general relativistic predictions fail, and why.

(ii) Black holes exist, as predicted by general relativity, and they have a mass and, in some cases, an electric charge, both leading to physical effects outside the event horizon. Then we would have to explain how these effects are physically realized. The quantum field theoretical picture of interactions is either fundamentally wrong, or we would have to explain why, with regard to black holes, virtual photons behave completely differently from real radiation photons. Or the features of a black hole – mass, electric charge and angular momentum – would be features imprinted during its formation onto the spacetime surrounding the black hole or onto its event horizon. Then, interactions between a black hole and its environment would rather be interactions between the environment and the event horizon, or even interactions within the environmental spacetime.

(iii) Black holes exist as the product of gravitational collapses, but they do not exert any effects on their environment. This is the craziest of all scenarios. For this scenario, general relativity would have to be fundamentally wrong. In contrast to the picture given by general relativity, black holes would have no physically effective features at all: no mass, no electric charge, no angular momentum, nothing. And after the formation of a black hole, there would be no spacetime curvature, because no mass remains. (Or the spacetime curvature would have to result from other effects.) The mass and the electric charge of objects falling (casually) into a black hole would be irretrievably lost. They would simply disappear from the universe when they pass the event horizon. Black holes would not exert any forces on massive or electrically charged objects in their environment. They would not pull any massive objects into their event horizon and thereby increase their mass. Moreover, their event horizon would mark a region causally disconnected from our universe: a region outside of our universe. Everything falling casually into the black hole, or thrown intentionally into this region, would disappear from the universe.

Cosmology: Friedmann-Lemaître Universes


Cosmology starts by assuming that the large-scale evolution of spacetime can be determined by applying Einstein’s field equations of gravitation everywhere: global evolution will follow from local physics. The standard models of cosmology are based on the assumption that once one has averaged over a large enough physical scale, isotropy is observed by all fundamental observers (the preferred family of observers associated with the average motion of matter in the universe). When this isotropy is exact, the universe is spatially homogeneous as well as isotropic. The matter motion is then along irrotational and shearfree geodesic curves with tangent vector ua, implying the existence of a canonical time-variable t obeying ua = −t,a. The Robertson-Walker (‘RW’) geometries used to describe the large-scale structure of the universe embody these symmetries exactly. Consequently they are conformally flat, that is, the Weyl tensor is zero:

Cijkl := Rijkl + 1/2(Rikgjl + Rjlgik − Ril gjk − Rjkgil) − 1/6R(gikgjl − gilgjk) = 0 —– (1)

This tensor represents the free gravitational field, enabling non-local effects such as tidal forces and gravitational waves, which do not occur in the exact RW geometries.

Comoving coordinates can be chosen so that the metric takes the form:

ds² = −dt² + S²(t)dσ², ua = δa0 (a = 0, 1, 2, 3) —– (2)

where S(t) is the time-dependent scale factor, and the worldlines with tangent vector ua = dxa/dt represent the histories of fundamental observers. The space sections {t = const} are surfaces of homogeneity and have maximal symmetry: they are 3-spaces of constant curvature K = k/S²(t), where k is the sign of K. The normalized metric dσ² characterizes a 3-space of normalized constant curvature k; coordinates (r, θ, φ) can be chosen such that

dσ² = dr² + f²(r)(dθ² + sin²θ dφ²) —– (3)

where f(r) = {sin r, r, sinh r} if k = {+1, 0, −1} respectively. The rate of expansion at any time t is characterized by the Hubble parameter H(t) = Ṡ/S.

To determine the metric’s evolution in time, one applies the Einstein Field Equations, showing the effect of matter on space-time curvature, to the metric (2,3). Because of local isotropy, the matter tensor Tab necessarily takes a perfect fluid form relative to the preferred worldlines with tangent vector ua:

Tab = (μ + p/c²)uaub + (p/c²)gab —– (4)

where c is the speed of light. The energy density μ(t) and pressure term p(t)/c² are the timelike and spacelike eigenvalues of Tab. The integrability condition for the Einstein Field Equations is the energy-density conservation equation

Tab;b = 0 ⇔ μ̇ + (μ + p/c²)3Ṡ/S = 0 —– (5)

This becomes determinate when a suitable equation of state function w := pc²/μ relates the pressure p to the energy density μ and temperature T: p = w(μ, T)μ/c² (w may or may not be constant). Baryons have {pbar = 0 ⇔ w = 0} and radiation has {pradc² = μrad/3 ⇔ w = 1/3, μrad = aTrad⁴}, which by (5) imply

μbar ∝ S⁻³, μrad ∝ S⁻⁴, Trad ∝ S⁻¹ —– (6)
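The scalings in (6) follow from (5) in a couple of lines once w is constant; a short derivation, spelled out for clarity (absorbing the factors of c into the equation-of-state parameter, so that the conservation equation reads μ̇ + (1 + w)μ 3Ṡ/S = 0):

```latex
\begin{align*}
  &\dot{\mu} + (1+w)\,\mu\,\frac{3\dot{S}}{S} = 0
  \;\Longrightarrow\;
  \frac{d\ln\mu}{d\ln S} = -3(1+w)
  \;\Longrightarrow\;
  \mu \propto S^{-3(1+w)} \\[4pt]
  &w = 0:\ \mu_{\mathrm{bar}} \propto S^{-3}, \qquad
  w = \tfrac{1}{3}:\ \mu_{\mathrm{rad}} \propto S^{-4}, \qquad
  \mu_{\mathrm{rad}} = a T_{\mathrm{rad}}^{4}
  \;\Rightarrow\;
  T_{\mathrm{rad}} \propto S^{-1}
\end{align*}
```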

The scale factor S(t) obeys the Raychaudhuri equation

3S̈/S = −1/2 κ(μ + 3p/c²) + Λ —– (7)

where κ is the gravitational constant and Λ is the cosmological constant. A cosmological constant can also be regarded as a fluid with pressure p related to the energy density μ by {pc² = −μ ⇔ w = −1}. Equation (7) shows that the active gravitational mass density of the matter and fields present is μgrav := μ + 3p/c². For ordinary matter this will be positive:

μ + 3p/c² > 0 ⇔ w > −1/3 —– (8)

(the ‘Strong Energy Condition’), so ordinary matter will tend to cause the universe to decelerate (S̈ < 0). It is also apparent that a positive cosmological constant on its own will cause an accelerating expansion (S̈ > 0). When matter and a cosmological constant are both present, either result may occur depending on which effect is dominant. The first integral of equations (5, 7) when Ṡ ≠ 0 is the Friedmann equation

Ṡ²/S² = κμ/3 + Λ/3 − k/S² —– (9)

This is just the Gauss equation relating the 3-space curvature to the 4-space curvature, showing how matter directly causes a curvature of 3-spaces. Because of the spacetime symmetries, the ten Einstein Field Equations are equivalent to the two equations (7, 9). Models of this kind, that is with a Robertson-Walker (‘RW’) geometry with metric (2, 3) and dynamics governed by equations (5, 7, 9), are called Friedmann-Lemaître universes (‘FL’). The Friedmann equation (9) controls the expansion of the universe, and the conservation equation (5) controls the density of matter as the universe expands; when Ṡ ≠ 0, equation (7) will necessarily hold if (5, 9) are both satisfied. Given a determinate matter description (specifying the equation of state w = w(μ, T) explicitly or implicitly) for each matter component, the existence and uniqueness of solutions follows both for a single matter component and for a combination of different kinds of matter, for example μ = μbar + μrad + μcdm + μν, where we include cold dark matter (cdm) and neutrinos (ν). Initial data for such solutions at an arbitrary time t0 (e.g. today) consists of:

• The Hubble constant H0 := (Ṡ/S)0 = 100h km/sec/Mpc;

• A dimensionless density parameter Ωi0 := κμi0/3H0² for each type of matter present (labelled by i);

• If Λ ≠ 0, either ΩΛ0 := Λ/3H0², or the dimensionless deceleration parameter q0 := −(S̈/S)0 H0⁻².

Given the equations of state for the matter, this data then determines a unique solution {S(t), μ(t)}, i.e. a unique corresponding universe history. The total matter density is the sum of the terms Ωi0 for each type of matter present, for example

Ωm0 = Ωbar0 + Ωrad0 + Ωcdm0 + Ων0, —– (10)

and the total density parameter Ω0 is the sum of that for matter and for the cosmological constant:

Ω0 = Ωm0 + ΩΛ0 —– (11)
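To make concrete how the data {H0, Ωm0, ΩΛ0} determine a unique history S(t), one can integrate the Friedmann equation (9) numerically. The sketch below is an illustration under assumed parameter values (a flat universe with pressure-free matter and a cosmological constant), not a fit to observational data:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Friedmann equation (9) for a flat (k = 0) universe containing pressure-free
# matter and a cosmological constant, normalized so that S(t0) = 1:
#   (dS/dt)^2 / S^2 = H0^2 * (Om0 * S**-3 + OL0)
# Parameter values below are assumptions for the example.

H0 = 70.0 / 978.0            # ~70 km/s/Mpc converted to 1/Gyr
Om0, OL0 = 0.3, 0.7          # density parameters for matter and Lambda

def dSdt(t, S):
    return H0 * S * np.sqrt(Om0 * S**-3 + OL0)

# Integrate forward from a tiny initial scale factor (near the big bang).
sol = solve_ivp(dSdt, (0.0, 30.0), [1e-3], dense_output=True, rtol=1e-8)

t = np.linspace(0.0, 30.0, 3001)
S = sol.sol(t)[0]
age = t[np.searchsorted(S, 1.0)]       # time at which S reaches 1 (today)
print(f"age of this model universe: ~{age:.1f} Gyr")   # ~13.5 Gyr
```

Changing Ωm0 and ΩΛ0 (or adding further components Ωi0 with their own equations of state) changes the history; the uniqueness claim above is reflected in the fact that this ODE with this initial data has a single solution.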

Evaluating the Raychaudhuri equation (7) at the present time gives an important relation between these parameters: when the pressure term p/c² can be ignored relative to the matter term μ (as is plausible at the present time, and assuming we represent ‘dark energy’ as a cosmological constant),

q0 = 1/2 Ωm0 − ΩΛ0 —– (12)

This shows that a cosmological constant Λ can cause an acceleration (negative q0); if it vanishes, the expression simplifies to Λ = 0 ⇒ q0 = 1/2 Ωm0, showing how matter causes a deceleration of the universe. Evaluating the Friedmann equation (9) at the time t0, the spatial curvature is

K0 := k/S0² = H0²(Ω0 − 1) —– (13)

The value Ω0 = 1 corresponds to spatially flat universes (K0 = 0), separating models with positive spatial curvature (Ω0 > 1 ⇔ K0 > 0) from those with negative spatial curvature (Ω0 < 1 ⇔ K0 < 0).
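Equations (12) and (13) turn the present-day density parameters directly into the deceleration parameter and the spatial curvature; a two-line check in Python, using the same assumed values as above:

```python
# q0 = Omega_m0 / 2 - Omega_Lambda0        (12)
# K0 = H0^2 * (Omega_0 - 1)                (13)

H0 = 70.0 / 978.0          # assumed Hubble constant, in 1/Gyr
Om0, OL0 = 0.3, 0.7

q0 = 0.5 * Om0 - OL0
K0 = H0**2 * ((Om0 + OL0) - 1.0)
print(q0)    # -0.55: q0 < 0, an accelerating expansion
print(K0)    #  0.0 : Omega_0 = 1, a spatially flat universe
```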
The FL models are the standard models of modern cosmology, surprisingly effective in view of their extreme geometrical simplicity. One of their great strengths is their explanatory role in terms of making explicit the way the local gravitational effect of matter and radiation determines the evolution of the universe as a whole, this in turn forming the dynamic background for local physics (including the evolution of the matter and radiation).