The Case of Morphisms of Representation Corresponding to A-Module Homomorphisms. Part 2


Representations of a quiver can be interpreted as modules over a non-commutative algebra A(Q) whose elements are linear combinations of paths in Q.

Let Q be a quiver. A non-trivial path in Q is a sequence of arrows am…a0 such that h(ai−1) = t(ai) for i = 1,…, m:

t(a0) • —a0→ • —a1→ ⋯ —am→ • h(am)

The path is p = am…a0. We write t(p) = t(a0) and say that p starts at t(a0); similarly, we write h(p) = h(am) and say that p finishes at h(am). For each vertex i ∈ Q0, we denote by ei the trivial path which starts and finishes at i. Two paths p and q are compatible if t(p) = h(q) and, in this case, the composition pq can be defined by juxtaposition of p and q. The length l(p) of a path is the number of arrows it contains; in particular, a trivial path has length zero.

The path algebra A(Q) of a quiver Q is the complex vector space with basis consisting of all paths in Q, equipped with the multiplication in which the product pq of paths p and q is defined to be the composition pq if t(p) = h(q), and 0 otherwise. Composition of paths is non-commutative; in most cases, if p and q can be composed one way, then they cannot be composed the other way, and even if they can, usually pq ≠ qp. Hence the path algebra is indeed non-commutative.

Let us define Al ⊂ A to be the subspace spanned by paths of length l. Then A = ⊕l≥0Al is a graded C-algebra. The subring A0 ⊂ A spanned by the trivial paths ei is a semisimple ring in which the elements ei are orthogonal idempotents, in other words eiej = ei when i = j, and 0 otherwise. The algebra A is finite-dimensional precisely if Q has no directed cycles.
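Since the paths form a linear basis of A(Q), its multiplication can be prototyped directly. The following Python sketch is our own toy encoding, not from the text: a path is a tuple of arrows in order of traversal, each arrow a pair (tail, head), a trivial path is ('e', i), and an element of A(Q) is a dict from basis paths to complex coefficients.

```python
from collections import defaultdict

def tail(p): return p[1] if p[0] == 'e' else p[0][0]    # t(p)
def head(p): return p[1] if p[0] == 'e' else p[-1][1]   # h(p)

def compose(p, q):
    """Composition pq (q is traversed first); None plays the role of 0."""
    if tail(p) != head(q):
        return None
    if p[0] == 'e': return q    # trivial paths act as identities when composable
    if q[0] == 'e': return p
    return q + p                # juxtaposition, arrows kept in order of traversal

def multiply(u, v):
    """Product in A(Q) of elements given as {path: coefficient} dicts."""
    out = defaultdict(complex)
    for p, cp in u.items():
        for q, cq in v.items():
            pq = compose(p, q)
            if pq is not None:
                out[pq] += cp * cq
    return {p: c for p, c in out.items() if c != 0}

a, b = (1, 2), (2, 3)                    # arrows a: 1 -> 2 and b: 2 -> 3
print(multiply({(b,): 1}, {(a,): 1}))    # the basis path ba, coefficient 1
print(multiply({(a,): 1}, {(b,): 1}))    # {} -- t(a) != h(b), so ab = 0
```

Note how the second product is zero one way and non-zero the other, exactly the non-commutativity discussed above.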

The category of finite-dimensional representations of a quiver Q is isomorphic to the category of finitely generated left A(Q)-modules. Let (V, φ) be a representation of Q. We can then define a left module V over the algebra A = A(Q) as follows: as a vector space it is

V = ⊕i∈Q0 Vi

and the A-module structure is extended linearly from

eiv = v, v ∈ Vi

= 0, v ∈ Vj for j ≠ i

for i ∈ Q0, and

av = φa(vt(a)), v ∈ Vt(a)

= 0, v ∈ Vj for j ≠ t(a)

for a ∈ Q1. This construction can be inverted as follows: given a left A-module V, we set Vi = eiV for i ∈ Q0 and define the map φa: Vt(a) → Vh(a) by v ↦ a(v). Morphisms of representations of Q correspond to A-module homomorphisms.
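Here is a small numpy sketch of this dictionary for the quiver 1 →a 2, with toy dimensions of our own choosing: ei acts as the projection onto the block Vi of V = ⊕i Vi, and the arrow a acts by φa on the Vt(a) block, landing in the Vh(a) block.

```python
import numpy as np

dims, offs = {1: 2, 2: 3}, {1: 0, 2: 2}   # dim V_1 = 2, dim V_2 = 3, V = V_1 ⊕ V_2
phi_a = np.arange(6.).reshape(3, 2)       # phi_a : V_1 -> V_2

def act_e(i, v):
    """e_i v: keep the V_i coordinates, zero out the rest."""
    out = np.zeros_like(v)
    s = slice(offs[i], offs[i] + dims[i])
    out[s] = v[s]
    return out

def act_arrow(t, h, phi, v):
    """a v = phi_a(v_{t(a)}), placed in the V_{h(a)} block."""
    out = np.zeros_like(v)
    out[offs[h]:offs[h] + dims[h]] = phi @ v[offs[t]:offs[t] + dims[t]]
    return out

v = np.ones(5)
print(act_e(1, v))                # [1. 1. 0. 0. 0.] -- projection onto V_1
print(act_arrow(1, 2, phi_a, v))  # [0. 0. 1. 5. 9.] -- phi_a applied to the V_1 block
```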

Disjointed Regularity in Open Classes of Elementary Topology


Let x, y, … denote first-order structures in St𝜏; x ≈ y will denote isomorphism.

x ∼n,𝜏 y means that there is a sequence ∅ ≠ I0 ⊆ … ⊆ In of sets of 𝜏-partial isomorphisms of finite domain such that, for i < j ≤ n, f ∈ Ii and a ∈ x (respectively, b ∈ y), there is g ∈ Ij such that g ⊇ f and a ∈ Dom(g) (respectively, b ∈ Im(g)). The latter condition is called the extension property.

x ∼𝜏 y means that the above holds for an infinite chain ∅ ≠ I0 ⊆ … ⊆ In ⊆ …

Fraïssé’s characterization of elementary equivalence says that for finite relational vocabularies: x ≡ y iff x ∼n,𝜏 y for all n. To have it available for vocabularies containing function symbols, one adds the complexity of terms in atomic formulas to the quantifier rank. It is well known that for countable x, y: x ∼𝜏 y implies x ≈ y.
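Fraïssé's condition is mechanically checkable on finite structures: x ∼n,𝜏 y amounts to the duplicator surviving n rounds of the back-and-forth game. Below is a brute-force Python sketch for a single binary relation; the encoding and the recursive formulation are ours, not from the text.

```python
def is_partial_iso(f, R1, R2):
    """f is an injective finite map preserving the binary relation both ways."""
    if len(set(f.values())) != len(f):
        return False
    items = list(f.items())
    return all(((a1, a2) in R1) == ((b1, b2) in R2)
               for a1, b1 in items for a2, b2 in items)

def extends(X, R1, Y, R2, f, n):
    """True if the partial isomorphism f survives n more back-and-forth rounds."""
    if n == 0:
        return True
    inv = {b: a for a, b in f.items()}
    for a in X:                                    # forth: every a must be matched
        cands = [f[a]] if a in f else Y
        if not any(is_partial_iso({**f, a: b}, R1, R2)
                   and extends(X, R1, Y, R2, {**f, a: b}, n - 1)
                   for b in cands):
            return False
    for b in Y:                                    # back: every b must be matched
        cands = [inv[b]] if b in inv else X
        if not any(is_partial_iso({**f, a: b}, R1, R2)
                   and extends(X, R1, Y, R2, {**f, a: b}, n - 1)
                   for a in cands):
            return False
    return True

C3 = {(0, 1), (1, 2), (2, 0)}                      # directed 3-cycle
C4 = {(0, 1), (1, 2), (2, 3), (3, 0)}              # directed 4-cycle
print(extends(range(3), C3, range(3), C3, {}, 3))  # True
print(extends(range(3), C3, range(4), C4, {}, 3))  # False: 0 and 2 are non-adjacent in C4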

Given a vocabulary 𝜏, let 𝜏* be a disjoint renaming of 𝜏. If x, y ∈ St𝜏 have the same power, let y* be an isomorphic copy of y sharing the universe with x and renamed to be of type 𝜏*. In this context, (x, y*) will denote the 𝜏 ∪ 𝜏*-structure that results from expanding x with the relations of y*.

Lemma: There is a vocabulary 𝜏+ ⊇ 𝜏 ∪ 𝜏* such that for each finite vocabulary 𝜏0 ⊆ 𝜏 there is a sequence of elementary classes Δ1 ⊇ Δ2 ⊇ Δ3 ⊇ … in St𝜏+ such that if 𝜋 = 𝜋𝜏+,𝜏∪𝜏* then (1) 𝜋(Δn) = {(x, y*) : |x| = |y| ≥ 𝜔, x ≡n,𝜏0 y}, (2) 𝜋(⋂n Δn) = {(x, y*) : |x| = |y| ≥ 𝜔, x ∼𝜏0 y}. Moreover, ⋂nΔn is the reduct of an elementary class.

Proof. Let Δ be the class of structures (x, y*, <, a, I) where < is a discrete linear order with minimum but no maximum and I codes, for each c ≤ a, a family Ic = {I(c, i, −, −)}i∈x of partial 𝜏0-𝜏0*-isomorphisms from x into y*, such that for c < c′ ≤ a: Ic ⊆ Ic′ and the extension property holds. Describe this by a first-order sentence 𝜃Δ of type 𝜏+ ⊇ 𝜏0 ∪ 𝜏0* and set Δn = Mod(𝜃Δ ∧ ∃≥n x(x ≤ a)). Then condition (1) in the Lemma is granted by Fraïssé’s characterization, and (2) is granted because (x, y*, <, a, I) ∈ ⋂nΔn iff < contains an infinite increasing 𝜔-chain below a, a Σ11 condition.

A topology on St𝜏 is invariant if its open (closed) classes are closed under isomorphic structures. Of course, this condition is superfluous if we identify isomorphic structures.

Theorem: Let Γ assign to each class St𝜏 a regular compact topology Γ𝜏 finer than the elementary topology, such that the countable structures are dense in St𝜏 and reducts and renamings are continuous for these topologies. Then Γ𝜏 is the elementary topology for every 𝜏.

Proof: We show that any pair of disjoint closed classes C1, C2 of Γ𝜏 may be separated by an elementary class. Assume this is not the case. Since the Ci are compact in the topology Γ𝜏, they are compact for the elementary topology and, by regularity of the latter, there exist xi ∈ Ci such that x1 ≡ x2 in L𝜔𝜔(𝜏). The xi must be infinite, for otherwise they would be isomorphic, contradicting the disjointness of the Ci. By normality of Γ𝜏, there are towers Ci ⊆ Ui ⊆ C′i, i = 1, 2, separating the Ci, with Ui open and C′i closed in Γ𝜏, and C′1, C′2 disjoint. Let I be a first-order sentence of type 𝜏′ ⊇ 𝜏 such that (z, ..) |= I ⇔ z is infinite, and let π be the corresponding reduct operation. For fixed n ∈ ω and finite 𝜏0 ⊆ 𝜏, let t be a first-order sentence describing the common ≡n,𝜏0-equivalence class of x1, x2. As

(xi, ..) ∈ Mod𝜏′(I) ∩ π−1Mod(t) ∩ π−1Ui, i = 1, 2,

and this class is open in Γ𝜏′ by continuity of π, by the density hypothesis there are countable xi ∈ Ui, i = 1, 2, such that x1 ≡n,𝜏0 x2. Thus for some expansion of (x1, x2*),

(x1, x2*, ..) ∈ Δn,𝜏0 ∩ 𝜋1−1(C′1) ∩ (𝜌𝜋2)−1(C′2) —– (1)

where Δn,𝜏0 is the class of the Lemma, 𝜋1, 𝜋2 are reducts, and 𝜌 is a renaming:

𝜋1(x1, x2*, …) = x1,  𝜋1 : St𝜏+ → St𝜏∪𝜏* → St𝜏

𝜋2(x1, x2*, …) = x2*,  𝜋2 : St𝜏+ → St𝜏∪𝜏* → St𝜏*

𝜌(x2*) = x2,  𝜌 : St𝜏* → St𝜏

Since the classes (1) are closed by continuity of the above functors, ⋂nΔn,𝜏0 ∩ 𝜋1−1(C′1) ∩ (𝜌𝜋2)−1(C′2) is non-empty by compactness of Γ𝜏+. But ⋂nΔn,𝜏0 = 𝜋(V) with V elementary of type 𝜏++ ⊇ 𝜏+. Then

V ∩ π−1𝜋1−1(U1) ∩ π−1(𝜌𝜋2)−1(U2) ≠ ∅

is open for Γ𝜏++ and by the density condition it must contain a countable structure (x1, x2*, ..). Thus (x1, x2*, ..) ∈ ⋂n Δn,𝜏0, with xi ∈ Ui ⊆ C′i. It follows that x1 ∼𝜏0 x2 and thus x1|𝜏0 ≈ x2|𝜏0. Let δ𝜏0 be a first-order sentence of type 𝜏 ∪ 𝜏* ∪ {h} such that (x, y*, h) |= δ𝜏0 ⇔ h : x|𝜏0 ≈ y|𝜏0. By compactness,

(⋂𝜏0 ⊆fin 𝜏 Mod𝜏∪𝜏*∪{h}(δ𝜏0)) ∩ 𝜋1−1(C′1) ∩ (𝜌𝜋2)−1(C′2) ≠ ∅

and we have h : x1 ≈ x2 with xi ∈ C′i, contradicting the disjointness of the C′i. Finally, if C is a closed class of Γ𝜏 and x ∉ C, then clΓ𝜏{x} is disjoint from C by regularity of Γ𝜏. Then clΓ𝜏{x} and C may be separated by open classes of the elementary topology, which implies that C is closed in this topology.

Universal Turing Machine: Algorithmic Halting


A natural number x will be identified with the x’th binary string in lexicographic order (Λ,0,1,00,01,10,11,000…), and a set X of natural numbers will be identified with its characteristic sequence, and with the real number between 0 and 1 having that sequence as its dyadic expansion. The length of a string x will be denoted |x|, the n’th bit of an infinite sequence X will be denoted X(n), and the initial n bits of X will be denoted Xn. Concatenation of strings p and q will be denoted pq.

We now define the information content (and later the depth) of finite strings using a universal Turing machine U. A universal Turing machine may be viewed as a partial recursive function of two arguments. It is universal in the sense that by varying one argument (“program”) any partial recursive function of the other argument (“data”) can be obtained. In the usual machine formats, program, data and output are all finite strings, or, equivalently, natural numbers. However, it is not possible to take a uniformly weighted average over a countably infinite set. Chaitin’s universal machine has two tapes: a read-only one-way tape containing the infinite program; and an ordinary two-way read/write tape, which is used for data input, intermediate work, and output, all of which are finite strings. Our machine differs from Chaitin’s in having some additional auxiliary storage (e.g. another read/write tape) which is needed only to improve the time efficiency of simulations.

We consider only terminating computations, during which, of course, only a finite portion of the program tape can be read. Therefore, the machine’s behavior can still be described by a partial recursive function of two string arguments U(p, w), if we use the first argument to represent that portion of the program that is actually read in the course of a particular computation. The expression U (p, w) = x will be used to indicate that the U machine, started with any infinite sequence beginning with p on its program tape and the finite string w on its data tape, performs a halting computation which reads exactly the initial portion p of the program, and leaves output data x on the data tape at the end of the computation. In all other cases (reading less than p, more than p, or failing to halt), the function U(p, w) is undefined. Wherever U(p, w) is defined, we say that p is a self-delimiting program to compute x from w, and we use T(p, w) to represent the time (machine cycles) of the computation. Often we will consider computations without input data; in that case we abbreviate U(p, Λ) and T(p, Λ) as U(p) and T(p) respectively.

The self-delimiting convention for the program tape forces the domain of U and T, for each data input w, to be a prefix set, that is, a set of strings no member of which is the extension of any other member. Any prefix set S obeys the Kraft inequality

∑p∈S 2−|p| ≤ 1
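As a quick illustration, one can check prefix-freeness and evaluate the Kraft sum exactly for a toy set of bit-string "programs" (our own example, not tied to any actual U machine):

```python
from fractions import Fraction

def is_prefix_set(S):
    """No member of S is a proper prefix of another member."""
    return not any(p != q and q.startswith(p) for p in S for q in S)

def kraft_sum(S):
    """Exact value of the sum over p in S of 2^-|p|."""
    return sum(Fraction(1, 2 ** len(p)) for p in S)

S = {"0", "10", "110", "111"}   # a complete prefix code
assert is_prefix_set(S)
print(kraft_sum(S))             # 1 -- this code attains the Kraft bound
```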

Besides being self-delimiting with regard to its program tape, the U machine must be efficiently universal in the sense of being able to simulate any other machine of its kind (Turing machines with self-delimiting program tape) with at most an additive constant increase in program size and a linear increase in execution time.

Without loss of generality we assume that there exists for the U machine a constant prefix r which has the effect of stacking an instruction to restart the computation when it would otherwise end. This gives the machine the ability to concatenate programs to run consecutively: if U(p, w) = x and U(q, x) = y, then U(rpq, w) = y. Moreover, this concatenation should be efficient in the sense that T (rpq, w) should exceed T (p, w) + T (q, x) by at most O(1). This efficiency of running concatenated programs can be realized with the help of the auxiliary storage to stack the restart instructions.

Sometimes we will generalize U to have access to an “oracle” A, i.e. an infinite look-up table which the machine can consult in the course of its computation. The oracle may be thought of as an arbitrary 0/1-valued function A(x) which the machine can cause to be evaluated by writing the argument x on a special tape and entering a special state of the finite control unit. In the next machine cycle the oracle responds by sending back the value A(x). The time required to evaluate the function is thus linear in the length of its argument. In particular we consider the case in which the information in the oracle is random, each location of the look-up table having been filled by an independent coin toss. Such a random oracle is a function whose values are reproducible, but otherwise unpredictable and uncorrelated.

Let {φAi(p, w) : i = 0, 1, 2, …} be an acceptable Gödel numbering of A-partial recursive functions of two arguments and {ΦAi(p, w)} an associated Blum complexity measure, henceforth referred to as time. An index j is called self-delimiting iff, for all oracles A and all values w of the second argument, the set {x : φAj(x, w) is defined} is a prefix set. A self-delimiting index has efficient concatenation if there exists a string r such that for all oracles A and all strings w, x, y, p, and q, if φAj(p, w) = x and φAj(q, x) = y, then φAj(rpq, w) = y and ΦAj(rpq, w) ≤ ΦAj(p, w) + ΦAj(q, x) + O(1). A self-delimiting index u with efficient concatenation is called efficiently universal iff, for every self-delimiting index j with efficient concatenation, there exists a simulation program s and a linear polynomial L such that for all oracles A and all strings p and w,

φAu(sp, w) = φAj (p, w)

and

ΦAu(sp, w) ≤ L(ΦAj (p, w))

The functions UA(p,w) and TA(p,w) are defined respectively as φAu(p, w) and ΦAu(p, w), where u is an efficiently universal index.

For any string x, the minimal program, denoted x∗, is min{p : U(p) = x}, the least self-delimiting program to compute x. For any two strings x and w, the minimal program of x relative to w, denoted (x/w)∗, is defined similarly as min{p : U(p,w) = x}.

By contrast to its minimal program, any string x also has a print program, of length |x| + O(log|x|), which simply transcribes the string x from a verbatim description of x contained within the program. The print program is logarithmically longer than x because, being self-delimiting, it must indicate the length as well as the contents of x. Because it makes no effort to exploit redundancies to achieve efficient coding, the print program can be made to run quickly (e.g. linear time in |x|, in the present formalism). Extra information w may help, but cannot significantly hinder, the computation of x, since a finite subprogram would suffice to tell U to simply erase w before proceeding. Therefore, a relative minimal program (x/w)∗ may be much shorter than the corresponding absolute minimal program x∗, but can only be longer by O(1), independent of x and w.

A string is compressible by s bits if its minimal program is shorter by at least s bits than the string itself, i.e. if |x∗| ≤ |x| − s. Similarly, a string x is said to be compressible by s bits relative to a string w if |(x/w)∗| ≤ |x| − s. Regardless of how compressible a string x may be, its minimal program x∗ is compressible by at most an additive constant depending on the universal computer but independent of x. [If (x∗)∗ were much smaller than x∗, then the role of x∗ as minimal program for x would be undercut by a program of the form “execute the result of executing (x∗)∗.”] Similarly, a relative minimal program (x/w)∗ is compressible relative to w by at most a constant number of bits independent of x or w.

The algorithmic probability of a string x, denoted P(x), is defined as ∑{2−|p| : U(p) = x}. This is the probability that the U machine, with a random program chosen by coin tossing and an initially blank data tape, will halt with output x. The time-bounded algorithmic probability, Pt(x), is defined similarly, except that the sum is taken only over programs which halt within time t. We use P(x/w) and Pt(x/w) to denote the analogous algorithmic probabilities of one string x relative to another w, i.e. for computations that begin with w on the data tape and halt with x on the data tape.

The algorithmic entropy H(x) is defined as the least integer greater than −log2P(x), and the conditional entropy H(x/w) is defined similarly as the least integer greater than − log2P(x/w). Among the most important properties of the algorithmic entropy is its equality, to within O(1), with the size of the minimal program:

∃c ∀x ∀w : H(x/w) ≤ |(x/w)∗| ≤ H(x/w) + c

The first part of the relation, viz. that algorithmic entropy should be no greater than minimal program size, is obvious, because of the minimal program’s own contribution to the algorithmic probability. The second half of the relation is less obvious. The approximate equality of algorithmic entropy and minimal program size means that there are few near-minimal programs for any given input/output pair (x/w), and that every string gets an O(1) fraction of its algorithmic probability from its minimal program.
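For a made-up finite machine table (purely illustrative; any real U has infinitely many halting programs), the two definitions can be evaluated side by side:

```python
import math
from collections import defaultdict
from fractions import Fraction

machine = {"0": "x", "10": "y", "110": "x"}   # toy prefix machine: program -> output

P = defaultdict(Fraction)
for prog, out in machine.items():
    P[out] += Fraction(1, 2 ** len(prog))     # P(x) = sum of 2^-|p| over U(p) = x

# H(x): least integer strictly greater than -log2 P(x)
H = {x: math.floor(-math.log2(p)) + 1 for x, p in P.items()}
print(dict(P))   # {'x': Fraction(5, 8), 'y': Fraction(1, 4)}
print(H)         # {'x': 1, 'y': 3}
```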

Finite strings, such as minimal programs, which are incompressible or nearly so are called algorithmically random. The definition of randomness for finite strings is necessarily a little vague because of the ±O(1) machine-dependence of H(x) and, in the case of strings other than self-delimiting programs, because of the question of how to count the information encoded in the string’s length, as opposed to its bit sequence. Roughly speaking, an n-bit self-delimiting program is considered random (and therefore not ad-hoc as a hypothesis) iff its information content is about n bits, i.e. iff it is incompressible; while an externally delimited n-bit string is considered random iff its information content is about n + H(n) bits, enough to specify both its length and its contents.

For infinite binary sequences (which may be viewed also as real numbers in the unit interval, or as characteristic sequences of sets of natural numbers) randomness can be defined sharply: a sequence X is incompressible, or algorithmically random, if there is an O(1) bound to the compressibility of its initial segments Xn. Intuitively, an infinite sequence is random if it is typical in every way of sequences that might be produced by tossing a fair coin; in other words, if it belongs to no informally definable set of measure zero. Algorithmically random sequences constitute a larger class, including sequences such as Ω which can be specified by ineffective definitions.

The busy beaver function B(n) is the greatest number computable by a self-delimiting program of n bits or fewer. The halting set K is {x : φx(x) converges}. This is the standard representation of the halting problem.

The self-delimiting halting set K0 is the (prefix) set of all self-delimiting programs for the U machine that halt: {p : U(p) converges}.

K and K0 are readily computed from one another (e.g. by regarding the self-delimiting programs as a subset of ordinary programs, the first 2n bits of K0 can be recovered from the first 2n+O(1) bits of K; by encoding each n-bit ordinary program as a self-delimiting program of length n + O(log n), the first 2n bits of K can be recovered from the first 2n+O(log n) bits of K0.)

The halting probability Ω is defined as ∑{2−|p| : U(p) converges}, the probability that the U machine would halt on an infinite input supplied by coin tossing. Ω is thus a real number between 0 and 1.

The first 2n bits of K0 can be computed from the first n bits of Ω, by enumerating halting programs until enough have halted to account for all but 2−n of the total halting probability. The time required for this decoding (between B(n − O(1)) and B(n + H(n) + O(1))) grows faster than any computable function of n. Although K0 is only slowly computable from Ω, the first n bits of Ω can be rapidly computed from the first 2n+H(n)+O(1) bits of K0, by asking about the halting of programs of the form “enumerate halting programs until (if ever) their cumulative weight exceeds q, then halt”, where q is an n-bit rational number…

|, ||, |||, ||||| . The Non-Metaphysics of Unprediction. Thought of the day 67.1


The cornerstone of Hilbert’s philosophy of mathematics was the so-called finitary standpoint. This methodological standpoint consists in a restriction of mathematical thought to objects which are “intuitively present as immediate experience prior to all thought,” and to those operations on and methods of reasoning about such objects which do not require the introduction of abstract concepts, in particular, require no appeal to completed infinite totalities.

Hilbert characterized the domain of finitary reasoning in a well-known paragraph:

[A]s a condition for the use of logical inferences and the performance of logical operations, something must already be given to our faculty of representation, certain extra-logical concrete objects that are intuitively present as immediate experience prior to all thought. If logical inference is to be reliable, it must be possible to survey these objects completely in all their parts, and the fact that they occur, that they differ from one another, and that they follow each other, or are concatenated, is immediately given intuitively, together with the objects, as something that can neither be reduced to anything else nor requires reduction. This is the basic philosophical position that I consider requisite for mathematics and, in general, for all scientific thinking, understanding, and communication.

These objects are, for Hilbert, the signs. For the domain of contentual number theory, the signs in question are sequences of strokes (“numerals”) such as

|, ||, |||, ||||| .

The question of how exactly Hilbert understood the numerals is difficult to answer. What is clear in any case is that they are logically primitive, i.e., they are neither concepts (as Frege’s numbers are) nor sets. For Hilbert, the important issue is not primarily their metaphysical status (abstract versus concrete in the current sense of these terms), but that they do not enter into logical relations, e.g., they cannot be predicated of anything.

Sometimes Hilbert’s view is presented as if Hilbert claimed that the numbers are signs on paper. It is important to stress that this is a misrepresentation: the numerals are not physical objects in a sense that would make the truths of elementary number theory dependent on external physical facts or even physical possibilities. Hilbert made too much of the fact that for all we know, neither the infinitely small nor the infinitely large are actualized in physical space and time, yet he certainly held that the number of strokes in a numeral is at least potentially infinite. It is also essential to the conception that the numerals are sequences of one kind of sign, and that they are somehow dependent on being grasped as such a sequence, that they do not exist independently of our intuition of them. Only our seeing or using “||||” as a sequence of 4 strokes as opposed to a sequence of 2 symbols of the form “||” makes “||||” into the numeral that it is. This raises the question of individuation of stroke symbols. An alternative account would have numerals be mental constructions. According to Hilbert, however, the numerals are given in our representation, but they are not merely subjective “mental cartoons”.

One version of this view would be to hold that the numerals are types of stroke-symbols as represented in intuition. At first glance, this seems to be a viable reading of Hilbert. It takes care of the difficulties that the reading of numerals-as-tokens (both physical and mental) faces, and it gives an account of how numerals can be dependent on their intuitive construction while at the same time not being created by thought.

Types are ordinarily considered to be abstract objects and not located in space or time. Taking the numerals as intuitive representations of sign types might commit us to taking these abstract objects as existing independently of their intuitive representation. That numerals are “space- and timeless” is a consequence that, one might think, could already be drawn from Hilbert’s statements. The reason this is problematic is that a view on which numerals are space- and timeless objects existing independently of us would be committed to them existing simultaneously as a completed totality, and this is exactly what Hilbert is objecting to.

It is by no means compatible, however, with Hilbert’s basic thoughts to introduce the numbers as ideal objects “with quite different determinations from those of sensible objects,” “which exist entirely independent of us.” By this we would go beyond the domain of the immediately certain. In particular, this would be evident in the fact that we would consequently have to assume the numbers as all existing simultaneously. But this would mean to assume at the outset that which Hilbert considers to be problematic.  Another open question in this regard is exactly what Hilbert meant by “concrete.” He very likely did not use it in the same sense as it is used today, i.e., as characteristic of spatio-temporal physical objects in contrast to “abstract” objects. However, sign types certainly are different from full-fledged abstracta like pure sets in that all their tokens are concrete.

Now what is the epistemological status of the finitary objects? In order to carry out the task of providing a secure foundation for infinitary mathematics, access to finitary objects must be immediate and certain. Hilbert’s philosophical background was broadly Kantian. Hilbert’s characterization of finitism often refers to Kantian intuition, and the objects of finitism as objects given intuitively. Indeed, in Kant’s epistemology, immediacy is a defining characteristic of intuitive knowledge. The question is, what kind of intuition is at play? Whereas the intuition involved in Hilbert’s early papers was a kind of perceptual intuition, in later writings it is identified as a form of pure intuition in the Kantian sense. Hilbert later sees the finite mode of thought as a separate source of a priori knowledge in addition to pure intuition (e.g., of space) and reason, claiming that he has “recognized and characterized the third source of knowledge that accompanies experience and logic.” Hilbert justifies finitary knowledge in broadly Kantian terms (without however going so far as to provide a transcendental deduction), characterizing finitary reasoning as the kind of reasoning that underlies all mathematical, and indeed, scientific, thinking, and without which such thought would be impossible.

The simplest finitary propositions are those about equality and inequality of numerals. The finite standpoint moreover allows operations on finitary objects. Here the most basic is that of concatenation. The concatenation of the numerals || and ||| is communicated as “2 + 3,” and the statement that || concatenated with ||| results in the same numeral as ||| concatenated with || by “2 + 3 = 3 + 2.” In actual proof-theoretic practice, as well as explicitly, these basic operations are generalized to operations defined by recursion, paradigmatically, primitive recursion, e.g., multiplication and exponentiation. Roughly, a primitive recursive definition of a numerical operation is one in which the function to be defined, f , is given by two equations

f(0, m) = g(m)

f(n′, m) = h(n, m, f(n, m)),

where g and h are functions already defined, and n′ is the successor numeral to n. For instance, if we accept the function g(m) = 0 (the constant zero function) and h(n, m, k) = m + k as finitary, then the equations above define a finitary function, in this case, multiplication f(n, m) = n × m. Similarly, finitary judgments may involve not just equality or inequality but also basic decidable properties, such as “is a prime.” This is finitarily acceptable as long as the characteristic function of such a property is itself finitary: for instance, the operation which transforms a numeral to | if it is prime and to || otherwise can be defined by primitive recursion and is hence finitary. Such finitary propositions may be combined by the usual logical operations of conjunction, disjunction, negation, but also bounded quantification. The problematic finitary propositions are those that express general facts about numerals, such as that 1 + n = n + 1 for any given numeral n. It is problematic because, for Hilbert, it is from the finitist point of view incapable of being negated. By this he means that the contradictory proposition, that there is a numeral n for which 1 + n ≠ n + 1, is not finitarily meaningful. A finitary general proposition is not to be understood as an infinite conjunction but only as a hypothetical judgment that comes to assert something when a numeral is given. Even though they are problematic in this sense, general finitary statements are of particular importance to Hilbert’s proof theory, since the statement of consistency of a formal system T is of such a general form: for any given sequence p of formulas, p is not a derivation of a contradiction in T. Even though in general existential statements are not finitarily meaningful, they may be given finitary meaning if the witness is given by a finitary function. For instance, the finitary content of Euclid’s theorem that for every prime p there is a prime > p is that, given a specific prime p, one can produce, by a finitary operation, another prime > p (viz., by testing all numbers between p and p! + 1).
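The schema can itself be run as a program. A minimal Python sketch (helper names are ours) that unfolds the recursion finitarily as a bounded loop, instantiated with the multiplication example just given:

```python
def primitive_rec(g, h):
    """Build f with f(0, m) = g(m) and f(n', m) = h(n, m, f(n, m))."""
    def f(n, m):
        acc = g(m)
        for k in range(n):        # apply the recursion step n times
            acc = h(k, m, acc)
        return acc
    return f

mult = primitive_rec(lambda m: 0, lambda n, m, k: m + k)
assert mult(3, 4) == 12           # 3 x 4 computed as iterated addition
```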

ε-calculus and Hilbert’s Contentual Number Theory: Proselytizing Intuitionism. Thought of the Day 67.0


Hilbert came to reject Russell’s logicist solution to the consistency problem for arithmetic, mainly for the reason that the axiom of reducibility cannot be accepted as a purely logical axiom. He concluded that the aim of reducing set theory, and with it the usual methods of analysis, to logic, has not been achieved today and maybe cannot be achieved at all. At the same time, Brouwer’s intuitionist mathematics gained currency. In particular, Hilbert’s former student Hermann Weyl converted to intuitionism.

According to Hilbert, there is a privileged part of mathematics, contentual elementary number theory, which relies only on a “purely intuitive basis of concrete signs.” Whereas the operating with abstract concepts was considered “inadequate and uncertain,” there is a realm of extra-logical discrete objects, which exist intuitively as immediate experience before all thought. If logical inference is to be certain, then these objects must be capable of being completely surveyed in all their parts, and their presentation, their difference, their succession (like the objects themselves) must exist for us immediately, intuitively, as something which cannot be reduced to something else.

The objects in question are signs, both numerals and the signs that make up formulas and formal proofs. The domain of contentual number theory consists in the finitary numerals, i.e., sequences of strokes. These have no meaning, i.e., they do not stand for abstract objects, but they can be operated on (e.g., concatenated) and compared. Knowledge of their properties and relations is intuitive and unmediated by logical inference. Contentual number theory developed this way is secure, according to Hilbert: no contradictions can arise simply because there is no logical structure in the propositions of contentual number theory. The intuitive-contentual operations with signs form the basis of Hilbert’s meta-mathematics. Just as contentual number theory operates with sequences of strokes, so meta-mathematics operates with sequences of symbols (formulas, proofs). Formulas and proofs can be syntactically manipulated, and the properties and relationships of formulas and proofs are similarly based in a logic-free intuitive capacity which guarantees certainty of knowledge about formulas and proofs arrived at by such syntactic operations. Mathematics itself, however, operates with abstract concepts, e.g., quantifiers, sets, functions, and uses logical inference based on principles such as mathematical induction or the principle of the excluded middle. These “concept-formations” and modes of reasoning had been criticized by Brouwer and others on grounds that they presuppose infinite totalities as given, or that they involve impredicative definitions. Hilbert’s aim was to justify their use. To this end, he pointed out that they can be formalized in axiomatic systems (such as that of Principia or those developed by Hilbert himself), and mathematical propositions and proofs thus turn into formulas and derivations from axioms according to strictly circumscribed rules of derivation. Mathematics, to Hilbert, “becomes an inventory of provable formulas.” In this way the proofs of mathematics are subject to metamathematical, contentual investigation. The goal of Hilbert is then to give a contentual, meta-mathematical proof that there can be no derivation of a contradiction, i.e., no formal derivation of a formula A and of its negation ¬A.

Hilbert and Bernays developed the ε-calculus as their definitive formalism for axiom systems for arithmetic and analysis, and the so-called ε-substitution method as the preferred approach to giving consistency proofs. Briefly, the ε-calculus is a formalism that includes ε as a term-forming operator. If A(x) is a formula, then εxA(x) is a term, which intuitively stands for a witness for A(x). In a logical formalism containing the ε-operator, the quantifiers can be defined by: ∃x A(x) ≡ A(εxA(x)) and ∀x A(x) ≡ A(εx¬A(x)). The only additional axiom necessary is the so-called “transfinite axiom,” A(t) → A(εxA(x)). Based on this idea, Hilbert and his collaborators developed axiomatizations of number theory and analysis. Consistency proofs for these systems were then given using the ε-substitution method. The idea of this method is, roughly, that the ε-terms εxA(x) occurring in a formal proof are replaced by actual numerals, resulting in a quantifier-free proof. Suppose we had a (suitably normalized) derivation of 0 = 1 that contains only one ε-term εxA(x). Replace all occurrences of εxA(x) by 0. The instances of the transfinite axiom then are all of the form A(t) → A(0). Since no other ε-terms occur in the proof, A(t) and A(0) are basic numerical formulas without quantifiers and, we may assume, also without free variables. So they can be evaluated by finitary calculation. If all such instances turn out to be true numerical formulas, we are done. If not, this must be because A(t) is true for some t, and A(0) is false. Then replace εxA(x) instead by n, where n is the numerical value of the term t. The resulting proof is then seen to be a derivation of 0 = 1 from true, purely numerical formulas using only modus ponens, and this is impossible. Indeed, the procedure works with only slight modifications even in the presence of the induction axiom, which in the ε-calculus takes the form of a least number principle: A(t) → εxA(x) ≤ t, which intuitively requires εxA(x) to be the least witness for A(x).
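The substitution step for the simple one-ε-term case just described can be mimicked in a few lines. This is a toy sketch with an illustrative decidable predicate and term list of our own choosing, not the Hilbert-Bernays formalism:

```python
def epsilon_substitute(A, terms):
    """Find a numeral for εx A(x) making every instance A(t) -> A(eps) true."""
    eps = 0                           # first trial: replace εx A(x) by 0
    for t in terms:
        if A(t) and not A(eps):       # found a false instance A(t) -> A(eps)
            eps = t                   # re-substitute the witness t
    return eps

A = lambda n: n % 3 == 0 and n > 0    # a decidable numerical predicate
terms = [1, 2, 6, 9]                  # the ε-term's substituends in the proof
eps = epsilon_substitute(A, terms)
assert all((not A(t)) or A(eps) for t in terms)   # all instances now true
print(eps)                            # 6
```

Once eps is a true witness, every instance of the transfinite axiom evaluates to a true numerical formula, which is exactly why the hypothetical derivation of 0 = 1 collapses.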

Dialectics: Mathematico-Philosophical Sequential Quantification. Drunken Risibility.


Figure: Graphical representation of the quantification of dialectics.

A sequence S of P philosophers along a given period of time would incorporate the P most prominent and visible philosophers in that interval. The use of such a criterion to build the time-sequence for the philosophers implies that the time-intervals between subsequent entries are not necessarily uniform.

The set of C measurements used to characterize the philosophers defines a C-dimensional feature space which will be henceforth referred to as the philosophical space. The characteristic vector v⃗i of each philosopher i defines a respective philosophical state in the philosophical space. Given a set of P philosophers, the average state at time i, i ≤ P, is defined as

a⃗i = 1/i ∑k=1i v⃗k

The opposite state of a given philosophical state v⃗i is defined as:

r⃗i = v⃗i +2(a⃗i −v⃗i) = 2a⃗i − v⃗i

The opposition vector of philosophical state v⃗i is given by D⃗i = r⃗i − v⃗i. The opposition amplitude of that same state is defined as ||D⃗i||.

An emphasis move taking place from the philosophical state v⃗i is any displacement from v⃗i along the direction −r⃗i. A contrary move from the philosophical state v⃗i is any displacement from v⃗i along the direction r⃗i.

Given a time-sequence S of P philosophers, the philosophical move implied by two successive philosophers i and j corresponds to the vector M⃗i,j extending from v⃗i to v⃗j, i.e.

M⃗i,j = v⃗j – v⃗i

In principle, an innovative or differentiated philosophical move would be one that departs substantially from the current philosophical state v⃗i. Innovation moves can be decomposed into two main subtypes: opposition and skewness.

The opposition index Wi,j of a given philosophical move M⃗i,j is defined as

Wi,j = 〈M⃗i,j, D⃗i〉 / ||D⃗i||2

This index quantifies the intensity of opposition of that respective philosophical move, in the sense of having a large projection along the vector D⃗i. It should also be noticed that the repetition of opposition moves leads to little innovation, as it would imply an oscillation around the average state. The skewness index si,j of that same philosophical move is the distance between v⃗j and the line L defined by the vector D⃗i, and therefore quantifies how much the new philosophical state departs from the respective opposition move. Actually, a sequence of moves with zero skewness would represent more trivial oscillations within the opposition line Li.

We also suggest an index to quantify the dialectics between a triple of successive philosophers i, j and k. More specifically, the philosophical state v⃗i is understood as the thesis, the state v⃗j is taken as the antithesis, and the synthesis is associated with the state v⃗k. The hypothesis that k is the consequence, among other forces, of a dialectics between the views v⃗i and v⃗j can be expressed by the fact that the philosophical state v⃗k is located near the middle line MLi,j defined by the thesis and the antithesis (i.e. the points which are at an equal distance to both v⃗i and v⃗j), relative to the opposition amplitude ||D⃗i||.

Therefore, the counter-dialectic index is defined as

ρi→k = di→k /||M⃗i,j||

where di→k is the distance between the philosophical state v⃗k and the middle-line MLi,j between v⃗i and v⃗j. Note that ρi→k ≥ 0. The choice of counter-dialectics instead of dialectics is justified to maintain compatibility with the use of a distance from point to line as adopted for the definition of skewness….
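The definitions above are plain vector algebra, so they can be computed directly. A numpy sketch follows (synthetic states and 1-based indices as in the text; the function names are ours):

```python
import numpy as np

def average_state(V, i):
    """a_i: mean of the first i states (rows of V)."""
    return V[:i].mean(axis=0)

def opposition_vector(V, i):
    r = 2 * average_state(V, i) - V[i - 1]   # opposite state r_i = 2 a_i - v_i
    return r - V[i - 1]                      # D_i = r_i - v_i

def opposition_index(V, i, j):
    D = opposition_vector(V, i)
    M = V[j - 1] - V[i - 1]                  # philosophical move M_ij
    return (M @ D) / (D @ D)                 # W_ij = <M_ij, D_i> / ||D_i||^2

def skewness_index(V, i, j):
    D = opposition_vector(V, i)
    M = V[j - 1] - V[i - 1]
    proj = ((M @ D) / (D @ D)) * D           # component of the move along D_i
    return np.linalg.norm(M - proj)          # distance from v_j to the line L_i

def counter_dialectic_index(V, i, j, k):
    M = V[j - 1] - V[i - 1]
    mid = (V[i - 1] + V[j - 1]) / 2          # a point on the middle line ML_ij
    n = M / np.linalg.norm(M)                # ML_ij is orthogonal to M_ij
    d = abs((V[k - 1] - mid) @ n)            # distance from v_k to ML_ij
    return d / np.linalg.norm(M)             # rho_{i->k} = d_{i->k} / ||M_ij||

V = np.random.default_rng(0).normal(size=(5, 3))   # 5 philosophers, 3 features
print(opposition_index(V, 2, 3), skewness_index(V, 2, 3),
      counter_dialectic_index(V, 2, 3, 4))
```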

Right-(Left-)derived Functors


Fix an abelian category A, let J be a Δ-subcategory of K(A), let DJ be the corresponding derived category, and let

Q = QJ : J → DJ

be the canonical Δ-functor. For any Δ-functors F and G from J to another Δ-category E, or from DJ to E, Hom(F, G) will denote the abelian group of Δ-functor morphisms from F to G.

A Δ-functor F : J → E is right-derivable if there exists a Δ-functor

RF : DJ → E

and a morphism of Δ-functors

ζ : F → RF ◦ Q

such that for every Δ-functor G : DJ → E the composed map

Hom(RF, G) →natural Hom(RF ◦ Q, G ◦ Q) →via ζ Hom(F, G ◦ Q)

is an isomorphism. The Δ-functor F is left-derivable if there exists a Δ-functor

LF : DJ → E

and a morphism of Δ-functors

ζ : LF ◦ Q → F

such that for every Δ-functor G : DJ → E the composed map

Hom(G, LF) →natural Hom(G ◦ Q, LF ◦ Q) →via ζ Hom(G ◦ Q, F)

is an isomorphism.

The pairs (RF, ζ) and (LF, ζ) are called the right-derived and left-derived functors of F, respectively. Composition with Q gives an embedding of Δ-functor categories…

High Frequency Traders: A Case in Point.

Events on 6th May 2010:

At 2:32 p.m., against [a] backdrop of unusually high volatility and thinning liquidity, a large fundamental trader (a mutual fund complex) initiated a sell program to sell a total of 75,000 E-Mini [S&P 500 futures] contracts (valued at approximately $4.1 billion) as a hedge to an existing equity position. […] This large fundamental trader chose to execute this sell program via an automated execution algorithm (“Sell Algorithm”) that was programmed to feed orders into the June 2010 E-Mini market to target an execution rate set to 9% of the trading volume calculated over the previous minute, but without regard to price or time. The execution of this sell program resulted in the largest net change in daily position of any trader in the E-Mini since the beginning of the year (from January 1, 2010 through May 6, 2010). [. . . ] This sell pressure was initially absorbed by: high frequency traders (“HFTs”) and other intermediaries in the futures market; fundamental buyers in the futures market; and cross-market arbitrageurs who transferred this sell pressure to the equities markets by opportunistically buying E-Mini contracts and simultaneously selling products like SPY [(S&P 500 exchange-traded fund (“ETF”))], or selling individual equities in the S&P 500 Index. […] Between 2:32 p.m. and 2:45 p.m., as prices of the E-Mini rapidly declined, the Sell Algorithm sold about 35,000 E-Mini contracts (valued at approximately $1.9 billion) of the 75,000 intended. [. . . ] By 2:45:28 there were less than 1,050 contracts of buy-side resting orders in the E-Mini, representing less than 1% of buy-side market depth observed at the beginning of the day. [. . . ] At 2:45:28 p.m., trading on the E-Mini was paused for five seconds when the Chicago Mercantile Exchange (“CME”) Stop Logic Functionality was triggered in order to prevent a cascade of further price declines. […] When trading resumed at 2:45:33 p.m., prices stabilized and shortly thereafter, the E-Mini began to recover, followed by the SPY. [. . . ] Even though after 2:45 p.m. prices in the E-Mini and SPY were recovering from their severe declines, sell orders placed for some individual securities and Exchange Traded Funds (ETFs) (including many retail stop-loss orders, triggered by declines in prices of those securities) found reduced buying interest, which led to further price declines in those securities. […] [B]etween 2:40 p.m. and 3:00 p.m., over 20,000 trades (many based on retail-customer orders) across more than 300 separate securities, including many ETFs, were executed at prices 60% or more away from their 2:40 p.m. prices. [. . . ] By 3:08 p.m., [. . . ] the E-Mini prices [were] back to nearly their pre-drop level [. . . and] most securities had reverted back to trading at prices reflecting true consensus values.

In the ordinary course of business, HFTs use their technological advantage to profit from aggressively removing the last few contracts at the best bid and ask levels and then establishing new best bids and asks at adjacent price levels ahead of an immediacy-demanding customer. As an illustration of this “immediacy absorption” activity, consider the following stylized example, presented in the Figure and described below.

[Figure: stylized example of the “immediacy absorption” activity]

Suppose that we observe the central limit order book for a stock index futures contract. The notional value of one stock index futures contract is $50 times the value of the index. The market is very liquid – on average there are hundreds of resting limit orders to buy or sell multiple contracts at either the best bid or the best offer. At some point during the day, due to temporary selling pressure, there is a total of just 100 contracts left at the best bid price of 1000.00. Recognizing that the queue at the best bid is about to be depleted, HFTs submit executable limit orders to aggressively sell a total of 100 contracts, thus completely depleting the queue at the best bid, and very quickly submit sequences of new limit orders to buy a total of 100 contracts at the new best bid price of 999.75, as well as to sell 100 contracts at the new best offer of 1000.00. If the selling pressure continues, then HFTs are able to buy 100 contracts at 999.75 and make a profit of $1,250 among them. If, however, the selling pressure stops and the new best offer price of 1000.00 attracts buyers, then HFTs would very quickly sell 100 contracts (which are at the very front of the new best offer queue), “scratching” the trade at the same price as they bought, and getting rid of the risky inventory in a few milliseconds.
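The quoted profit is one tick on 100 contracts, using the $50-per-index-point contract multiplier:

```python
# Arithmetic behind the stylized example: buy 100 contracts at 999.75 that were
# sold at 1000.00, i.e. one 0.25-point tick captured on each contract.
contracts, tick, dollars_per_point = 100, 0.25, 50
print(contracts * tick * dollars_per_point)   # 1250.0 -- the $1,250 shared profit
```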

This type of trading activity reduces, albeit for only a few milliseconds, the latency of a price move. Under normal market conditions, this trading activity somewhat accelerates price changes and adds to the trading volume, but does not result in a significant directional price move. In effect, this activity imparts a small “immediacy absorption” cost on all traders, including the market makers, who are not fast enough to cancel the last remaining orders before an imminent price move.

This activity, however, makes it both costlier and riskier for the slower market makers to maintain continuous market presence. In response to the additional cost and risk, market makers lower their acceptable inventory bounds to levels that are too small to offset temporary liquidity imbalances of any significant size. When the diminished liquidity buffer of the market makers is pierced by a sudden order flow imbalance, they begin to demand a progressively greater compensation for maintaining continuous market presence, and prices start to move directionally. Just as the prices are moving directionally and volatility is elevated, immediacy absorption activity of HFTs can exacerbate a directional price move and amplify volatility. Higher volatility further increases the speed at which the best bid and offer queues are being depleted, inducing HFT algorithms to demand immediacy even more, fueling a spike in trading volume, and making it more costly for the market makers to maintain continuous market presence. This forces more risk averse market makers to withdraw from the market, which results in a full-blown market crash.

Empirically, immediacy absorption activity of the HFTs should manifest itself in the data very differently from the liquidity provision activity of the Market Makers. To establish the presence of these differences in the data, we test the following hypotheses:

Hypothesis H1: HFTs are more likely than Market Makers to aggressively execute the last 100 contracts before a price move in the direction of the trade. Market Makers are more likely than HFTs to have the last 100 resting contracts against which aggressive orders are executed.

Hypothesis H2: HFTs trade aggressively in the direction of the price move. Market Makers get run over by a price move.

Hypothesis H3: Both HFTs and Market Makers scratch trades, but HFTs scratch more.

To statistically test our “immediacy absorption” hypotheses against the “liquidity provision” hypotheses, we divide all of the trades during the 405 minute trading day into two subsets: Aggressive Buy trades and Aggressive Sell trades. Within each subset, we further aggregate multiple aggressive buy or sell transactions resulting from the execution of the same order into Aggressive Buy or Aggressive Sell sequences. The intuition is as follows. Often a specific trade is not a stand-alone event, but a part of a sequence of transactions associated with the execution of the same order. For example, an order to aggressively sell 10 contracts may result in four Aggressive Sell transactions: for 2 contracts, 1 contract, 4 contracts, and 3 contracts, respectively, due to the specific sequence of resting bids against which this aggressive sell order was executed. Using the order ID number, we are able to aggregate these four transactions into one Aggressive Sell sequence for 10 contracts.
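A pandas sketch of this order-ID aggregation, with hypothetical column names and toy fills (the data layout is our assumption, not the study's actual schema):

```python
import pandas as pd

trades = pd.DataFrame({
    "order_id": [17, 17, 17, 17, 42],
    "side":     ["sell"] * 4 + ["buy"],
    "qty":      [2, 1, 4, 3, 5],
    "price":    [999.75, 999.75, 999.75, 999.50, 999.75],
})

# Collapse the fills of each aggressive order into one sequence.
sequences = (trades
             .groupby(["order_id", "side"], as_index=False)
             .agg(contracts=("qty", "sum"), first_px=("price", "first")))
print(sequences)   # order 17 becomes one Aggressive Sell sequence of 10 contracts
```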

Testing Hypothesis H1. Aggressive removal of the last 100 contracts by HFTs; passive provision of the last 100 resting contracts by the Market Makers. Using the Aggressive Buy sequences, we label as a “price increase event” all occurrences of trading sequences in which at least 100 contracts consecutively executed at the same price are followed by some number of contracts at a higher price. To examine indications of low latency, we focus on the last 100 contracts traded before the price increase and the first 100 contracts at the next higher price (or fewer if the price changes again before 100 contracts are executed). Although we do not look directly at the limit order book data, price increase events are defined to capture occasions where traders use executable buy orders to lift the last remaining offers in the limit order book. Using Aggressive sell trades, we define “price decrease events” symmetrically as occurrences of sequences of trades in which 100 contracts executed at the same price are followed by executions at lower prices. These events are intended to capture occasions where traders use executable sell orders to hit the last few best bids in the limit order book. The results are presented in the Table below.
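Before the Table, here is a simplified sketch of the event definition on a synthetic trade tape (our own simplification: a run of at least 100 contracts at one price followed by a trade at a higher price marks a price increase event):

```python
def price_increase_events(trades):
    """trades: list of (price, qty); return indices of first trades at the new price."""
    events, run_qty, prev_px = [], 0, None
    for k, (px, qty) in enumerate(trades):
        if prev_px is None or px == prev_px:
            run_qty += qty                 # extend the run at the current price
        elif px > prev_px and run_qty >= 100:
            events.append(k)               # event: run of >=100 then a higher price
            run_qty = qty
        else:
            run_qty = qty                  # price fell or run too short: reset
        prev_px = px
    return events

ticks = [(1000.00, 60), (1000.00, 50), (1000.25, 10), (1000.25, 5)]
print(price_increase_events(ticks))        # [2]
```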

[Table: trader categories’ shares of Aggressive and Passive volume for the last 100 contracts at the old price and the first 100 contracts at the new price, around price increase and price decrease events]

For price increase and price decrease events, we calculate each of the six trader categories’ shares of Aggressive and Passive trading volume for the last 100 contracts traded at the “old” price level before the price increase or decrease and the first 100 contracts traded at the “new” price level (or fewer if the number of contracts is less than 100) after the price increase or decrease event.

Table above presents, for the six trader categories, volume shares for the last 100 contracts at the old price and the first 100 contracts at the new price. For comparison, the unconditional shares of aggressive and passive trading volume of each trader category are also reported. Table has four panels covering (A) price increase events on May 3-5, (B) price decrease events on May 3-5, (C) price increase events on May 6, and (D) price decrease events on May 6. In each panel there are six rows of data, one row for each trader category. Relative to panels A and C, the rows for Fundamental Buyers (BUYER) and Fundamental Sellers (SELLER) are reversed in panels B and D to emphasize the symmetry between buying during price increase events and selling during price decrease events. The first two columns report the shares of Aggressive and Passive contract volume for the last 100 contracts before the price change; the next two columns report the shares of Aggressive and Passive volume for up to the next 100 contracts after the price change; and the last two columns report the “unconditional” market shares of Aggressive and Passive sides of all Aggressive buy volume or sell volume. For May 3-5, the data are based on volume pooled across the three days.

Consider panel A, which describes price increase events associated with Aggressive buy trades on May 3-5, 2010. High Frequency Traders participated on the Aggressive side of 34.04% of all aggressive buy volume. Strongly consistent with the immediacy absorption hypothesis, the participation rate rises to 57.70% of the Aggressive side of trades on the last 100 contracts of Aggressive buy volume before price increase events and falls to 14.84% of the Aggressive side of trades on the first 100 contracts of Aggressive buy volume after price increase events.

High Frequency Traders participated on the Passive side of 34.33% of all aggressive buy volume. Consistent with the hypothesis, the participation rate on the Passive side of Aggressive buy volume falls to 28.72% of the last 100 contracts before a price increase event. It rises to 37.93% of the first 100 contracts after a price increase event.

These results are inconsistent with the idea that high frequency traders behave like textbook market makers, suffering adverse selection losses associated with being picked off by informed traders. Instead, when the price is about to move to a new level, high frequency traders tend to avoid being run over and take the price to the new level with Aggressive trades of their own.

Market Makers follow a noticeably more passive trading strategy than High Frequency Traders. According to panel A, Market Makers are 13.48% of the Passive side of all Aggressive trades, but they are only 7.27% of the Aggressive side of all Aggressive trades. On the last 100 contracts at the old price, Market Makers’ share of volume increases only modestly, from 7.27% to 8.78% of trades. Their share of Passive volume at the old price increases, from 13.48% to 15.80%. These facts are consistent with the interpretation that Market Makers, unlike High Frequency Traders, do engage in a strategy similar to traditional passive market making, buying at the bid price, selling at the offer price, and suffering losses when the price moves against them. These facts are also consistent with the hypothesis that High Frequency Traders have lower latency than Market Makers.

Intuition might suggest that Fundamental Buyers would tend to place the Aggressive trades which move prices up from one tick level to the next. This intuition does not seem to be corroborated by the data. According to panel A, Fundamental Buyers are 21.53% of all Aggressive trades but only 11.61% of the last 100 Aggressive contracts traded at the old price. Instead, Fundamental Buyers increase their share of Aggressive buy volume to 26.17% of the first 100 contracts at the new price.

Taking into account symmetry between buying and selling, panel B shows the results for Aggressive sell trades during May 3-5, 2010, are almost the same as the results for Aggressive buy trades. High Frequency Traders are 34.17% of all Aggressive sell volume, increase their share to 55.20% of the last 100 Aggressive sell contracts at the old price, and decrease their share to 15.04% of the last 100 Aggressive sell contracts at the new price. Market Makers are 7.45% of all Aggressive sell contracts, increase their share to only 8.57% of the last 100 Aggressive sell trades at the old price, and decrease their share to 6.58% of the last 100 Aggressive sell contracts at the new price. Fundamental Sellers’ shares of Aggressive sell trades behave similarly to Fundamental Buyers’ shares of Aggressive Buy trades. Fundamental Sellers are 20.91% of all Aggressive sell contracts, decrease their share to 11.96% of the last 100 Aggressive sell contracts at the old price, and increase their share to 24.87% of the first 100 Aggressive sell contracts at the new price.

Panels C and D report results for Aggressive Buy trades and Aggressive Sell trades for May 6, 2010. Taking into account symmetry between buying and selling, the results for Aggressive buy trades in panel C are very similar to the results for Aggressive sell trades in panel D. For example, Aggressive sell trades by Fundamental Sellers were 17.55% of Aggressive sell volume on May 6, while Aggressive buy trades by Fundamental Buyers were 20.12% of Aggressive buy volume on May 6. In comparison with the share of Fundamental Buyers and in comparison with May 3-5, the Flash Crash of May 6 is associated with a slightly lower – not higher – share of Aggressive sell trades by Fundamental Sellers.

The number of price increase and price decrease events increased dramatically on May 6, consistent with the increased volatility of the market on that day. On May 3-5, there were 4100 price increase events and 4062 price decrease events. On May 6 alone, there were 4101 price increase events and 4377 price decrease events. There were therefore approximately three times as many price increase events per day on May 6 as on the three preceding days.

A comparison of May 6 with May 3-5 reveals significant changes in the trading patterns of High Frequency Traders. Compared with May 3-5 in panels A and B, the share of Aggressive trades by High Frequency Traders drops from 34.04% of Aggressive buys and 34.17% of Aggressive sells on May 3-5 to 26.98% of Aggressive buy trades and 26.29% of Aggressive sell trades on May 6. The share of Aggressive trades for the last 100 contracts at the old price declines by even more. High Frequency Traders’ participation rate on the Aggressive side of Aggressive buy trades drops from 57.70% on May 3-5 to only 38.86% on May 6. Similarly, the participation rate on the Aggressive side of Aggressive sell trades drops from 55.20% to 38.67%. These declines are largely offset by increases in the participation rate by Opportunistic Traders on the Aggressive side of trades. For example, Opportunistic Traders’ share of the Aggressive side of the last 100 contracts traded at the old price rises from 19.21% to 34.26% for Aggressive buys and from 20.99% to 33.86% for Aggressive sells. These results suggest that some Opportunistic Traders follow trading strategies for which low latency is important, such as index arbitrage, cross-market arbitrage, or opportunistic strategies mimicking market making.

Testing Hypothesis H2. HFTs trade aggressively in the direction of the price move; Market Makers get run over by a price move. To examine this hypothesis, we analyze whether High Frequency Traders use Aggressive trades to trade in the direction of contemporaneous price changes, while Market Makers use Passive trades to trade in the opposite direction from price changes. To this end, we estimate the regression equation

Δyt = α + Φ·Δyt−1 + δ·yt−1 + Σi=0…20 βi·[Δpt−i / 0.25] + εt

(where yt and Δyt denote inventories and changes in inventories of High Frequency Traders for each second of a trading day; t = 0 corresponds to the opening of stock trading on the NYSE at 8:30:00 a.m. CT (9:30:00 ET) and t = 24,300 denotes the close of Globex at 15:15:00 CT (4:15 p.m. ET); Δpt denotes the price change in index point units between the high-low midpoint of second t−1 and the high-low midpoint of second t. We regress second-by-second changes in inventory levels of High Frequency Traders on the level of their inventories the previous second, the change in their inventory levels the previous second, the change in prices during the current second, and lagged price changes for each of the previous 20 seconds.)

for Passive and Aggressive inventory changes separately.
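As a minimal sketch of how this specification could be estimated, assuming a per-second DataFrame with an inventory column `y` and a high-low midpoint price column `p` (the names, layout, and helper function are illustrative, not the authors’ code):

```python
import pandas as pd
import statsmodels.api as sm

def estimate_inventory_regression(df: pd.DataFrame, n_lags: int = 20, tick: float = 0.25):
    """OLS estimate of Δy_t = α + φ·Δy_{t-1} + δ·y_{t-1}
                              + Σ_{i=0..n_lags} β_i·[Δp_{t-i}/tick] + ε_t.

    Assumed layout: one row per second, with columns
      'y' -- end-of-second inventory (contracts) for the trader group
      'p' -- high-low midpoint price for that second (index points)

    For the Aggressive/Passive decomposition, replace the dependent
    variable with net Aggressive or net Passive volume per second while
    keeping 'y' as the total inventory.
    """
    data = df.copy()
    data["dy"] = data["y"].diff()
    data["dy_lag"] = data["dy"].shift(1)
    data["y_lag"] = data["y"].shift(1)
    dp_ticks = data["p"].diff() / tick        # price changes in ticks

    X = pd.DataFrame({"dy_lag": data["dy_lag"], "y_lag": data["y_lag"]})
    for i in range(n_lags + 1):               # i = 0 is the contemporaneous change
        X[f"dp_lag{i}"] = dp_ticks.shift(i)
    X = sm.add_constant(X)

    # Drop the seconds lost to differencing and lagging.
    return sm.OLS(data["dy"], X, missing="drop").fit()
```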

[Table: regressions of second-by-second Aggressive and Passive inventory changes on lagged inventories, lagged inventory changes, and current and lagged price changes, for High Frequency Traders and Market Makers]

The table above presents the regression results of the two components of the change in holdings on lagged inventory, the lagged change in holdings, and lagged price changes over one-second intervals. Panel A and Panel B report the results for May 3-5 and May 6, respectively. Each panel has four columns, reporting estimated coefficients where the dependent variables are net Aggressive volume (Aggressive buys minus Aggressive sells) by High Frequency Traders (∆AHFT), net Passive volume by High Frequency Traders (∆PHFT), net Aggressive volume by Market Makers (∆AMM), and net Passive volume by Market Makers (∆PMM).

We observe that for lagged inventories (NPHFT,t−1), the estimated coefficients for Aggressive and Passive trades by High Frequency Traders are δAHFT = −0.005 (t = −9.55) and δPHFT = −0.001 (t = −3.13), respectively. These coefficient estimates have the interpretation that High Frequency Traders use Aggressive trades to liquidate inventories more intensively than Passive trades. In contrast, the results for Market Makers are very different. For lagged inventories (NPMM,t−1), the estimated coefficients for Aggressive and Passive volume by Market Makers are δAMM = −0.002 (t = −6.73) and δPMM = −0.002 (t = −5.26), respectively. The similarity of these coefficient estimates has the interpretation that Market Makers favor neither Aggressive nor Passive trades when liquidating inventories.

For contemporaneous price changes, i.e., price changes in the current second (∆pt), the estimated coefficients for Aggressive and Passive volume by High Frequency Traders are β0 = 57.78 (t = 31.94) and β0 = −25.69 (t = −28.61), respectively. For Market Makers, the estimated coefficients for Aggressive and Passive trades are β0 = 6.38 (t = 18.51) and β0 = −19.92 (t = −37.68). These estimated coefficients have the interpretation that in seconds in which prices move up one tick, High Frequency Traders are net buyers of about 58 contracts with Aggressive trades and net sellers of about 26 contracts with Passive trades in that same second, while Market Makers are net buyers of about 6 contracts with Aggressive trades and net sellers of about 20 contracts with Passive trades. High Frequency Traders and Market Makers are similar in that both use Aggressive trades to trade in the direction of price changes, and both use Passive trades to trade against the direction of price changes. They are different in that Aggressive net purchases by High Frequency Traders are greater in magnitude than their Passive net sales, while the reverse is true for Market Makers.

For lagged price changes, the coefficient estimates for Aggressive trades by High Frequency Traders and Market Makers are positive and statistically significant at lags 1-4 and lags 1-10, respectively. These results have the interpretation that both High Frequency Traders and Market Makers trade on recent price momentum, but the trading is compressed into a shorter time frame for High Frequency Traders than for Market Makers.

For lagged price changes, the coefficient estimates for Passive volume by High Frequency Traders and Market Makers are negative and statistically significant at lag 1 and lags 1-3, respectively. Panel B of the table presents results for May 6. Similar to May 3-5, High Frequency Traders tend to use Aggressive trades more intensively than Passive trades to liquidate inventories, while Market Makers do not show this pattern. Also similar to May 3-5, High Frequency Traders and Market Makers use Aggressive trades to trade in the contemporaneous direction of price changes and use Passive trades to trade in the direction opposite to price changes, with Aggressive trading greater than Passive trading for High Frequency Traders and the reverse for Market Makers. In comparison with May 3-5, the coefficients are smaller in magnitude on May 6, indicating reduced liquidity at each tick. For lagged price changes, the coefficients associated with Aggressive trading by High Frequency Traders change from positive to negative at lags 1-4, and the coefficients associated with Aggressive trading by Market Makers change from being positive and statistically significant at lags 1-10 to being positive and statistically significant only at lags 1-3. These results illustrate the accelerated trading velocity in the volatile market conditions of May 6.

We further examine how high frequency trading activity is related to market prices. The figure below illustrates how prices change after HFT trading activity in a given second. The upper-left panel presents results for buy trades on May 3-5, the upper-right panel presents results for buy trades on May 6, and the lower-left and lower-right panels present corresponding results for sell trades. For each “event” second in which High Frequency Traders are net buyers, net Aggressive buyers, or net Passive buyers, the value-weighted average price paid by the High Frequency Traders in that second is subtracted from the value-weighted average price for all trades in the same second and in each of the following 20 seconds. The results are averaged across event seconds, weighted by the magnitude of High Frequency Traders’ net position change in the event second. Price differences on the vertical axis are scaled so that one unit equals one tick ($12.50 per contract).

[Figure: average price paths, in ticks, in the 20 seconds following seconds in which High Frequency Traders are net buyers (upper panels) and net sellers (lower panels), for May 3-5 (left) and May 6 (right)]
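The averaging procedure just described could be sketched as follows. This is a hedged illustration under stated assumptions: the inputs (`trades`, a per-second VWAP series for all trades; `events`, a DataFrame of event seconds with the HFT VWAP and the weighting variable) and their names are constructions for the sketch, not the authors’ data structures.

```python
import numpy as np
import pandas as pd

def event_price_paths(trades: pd.Series, events: pd.DataFrame, horizon: int = 20,
                      tick: float = 0.25) -> np.ndarray:
    """Weighted-average price path after event seconds, in ticks.

    trades: value-weighted average price of all trades per second,
            indexed by integer second of day (index points).
    events: indexed by event second, with columns
            'hft_vwap' -- VWAP paid by HFTs in that second
            'weight'   -- absolute HFT net position change (contracts)
    Assumes every horizon window stays inside the trading day.
    One tick = 0.25 index points = $12.50 per contract.
    """
    paths = []
    for s, row in events.iterrows():
        # Price difference at the event second and the following `horizon` seconds.
        future = trades.reindex(range(s, s + horizon + 1))
        paths.append((future.values - row["hft_vwap"]) / tick)
    # Weight each event second by the magnitude of the HFT position change.
    return np.average(np.array(paths), axis=0, weights=events["weight"].values)
```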

When High Frequency Traders are net buyers on May 3-5, prices rise by 17% of a tick in the next second. When HFTs execute Aggressively or Passively, prices rise by 20% and 2% of a tick in the next second, respectively. In all cases, prices then trend downward by about 5% of a tick over the subsequent 19 seconds. For May 3-5, the results are almost symmetric for selling.

When High Frequency Traders are buying on May 6, prices increase by 7% of a tick in the next second. When they are Aggressive buyers or Passive buyers, prices increase by 25% of a tick or decrease by 5% of a tick in the next second, respectively. In subsequent seconds, prices generally tend to drift downwards. The downward drift is especially pronounced after Passive buying, consistent with the interpretation that High Frequency Traders’ resting limit buy orders were “run over” in the down phase of the Flash Crash. When High Frequency Traders are net sellers, the results after one second are analogous to buying. After Aggressive selling, prices continue to drift down for 20 seconds, consistent with the interpretation that High Frequency Traders made profits from Aggressive sales during the down phase of the Flash Crash.

Testing Hypothesis H3. Both HFTs and Market Makers scratch trades; HFTs scratch more. A textbook market maker will try to buy at the bid price, sell at the offer price, and capture the bid-ask spread as a profit. Sometimes, after buying at the bid price, market prices begin to fall before the market maker can make a one-tick profit by selling his inventory at the best offer price. To avoid taking losses in this situation, one component of a traditional market making strategy is to “scratch” trades in the presence of changing market conditions by quickly liquidating a position at the same price at which it was acquired. These scratched trades represent inventory management trades designed to lower the cost of adverse selection. Since many competing market makers may try to scratch trades at the same time, traders with the lowest latency will tend to be more successful in their attempts to scratch trades and thus more successful in their ability to avoid losses when market conditions change.

To examine whether and to what extent traders engage in trade scratching, we sequence each trader’s trades for the day using audit trail sequence numbers, which not only sort trades by second but also sort trades chronologically within each second. We define an “immediately scratched trade” as a trade with the properties that the next trade in the sorted sequence (1) occurred in the same second, (2) was executed at the same price, and (3) was in the opposite direction, i.e., a buy followed by a sell or a sell followed by a buy. For each of the trading accounts in our sample, we calculate the number of immediately scratched trades, then compare the number of scratched trades across the six trader categories.
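A minimal sketch of this detection logic, assuming each account’s trades are already sorted by audit-trail sequence number and carry `second`, `price`, and `side` columns (illustrative names, not the actual audit-trail schema):

```python
import pandas as pd

def count_immediately_scratched(trades: pd.DataFrame) -> int:
    """Count immediately scratched trades for one trading account.

    Assumes `trades` is sorted by audit-trail sequence number, with columns
      'second' -- integer second of the trading day
      'price'  -- execution price
      'side'   -- +1 for a buy, -1 for a sell

    A trade is immediately scratched if the account's next trade occurred
    in the same second, at the same price, and in the opposite direction.
    """
    nxt = trades.shift(-1)   # align each trade with the account's next trade
    scratched = (
        (nxt["second"] == trades["second"])
        & (nxt["price"] == trades["price"])
        & (nxt["side"] == -trades["side"])
    )
    return int(scratched.sum())
```

Applied per account, e.g. `trades.sort_values("seq").groupby("account").apply(count_immediately_scratched)` (with `seq` as the hypothetical sequence-number column), this yields the per-trader counts that are then aggregated by category.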

The results of this analysis are presented in the table below. Panel A provides results for May 3-5 and panel B for May 6. In each panel, there is one row of data for each trader category. The first three columns report the total number of trades, the total number of immediately scratched trades, and the percentage of trades that are immediately scratched for each category. For May 3-5, the reported numbers are from the data pooled across the three days.

[Table: number and percentage of immediately scratched trades by trader category, May 3-5 (Panel A) and May 6 (Panel B)]

This table presents statistics for immediate trade scratching, which measures how many times a trader reverses the direction of trading at the same price within the same second, aggregated over a day. We define a trade direction change as a buy trade immediately following a sell trade, or vice versa, at the same price level in the same second.

This table shows that High Frequency Traders scratched 2.84% of trades on May 3-5 and 4.26% on May 6; Market Makers scratched 2.49% of trades on May 3-5 and 5.53% of trades on May 6. While the percentage of immediately scratched trades by Market Makers is slightly higher than that for High Frequency Traders on May 6, the percentages for both groups are very similar. The fourth, fifth, and sixth columns of the table report the mean, standard deviation, and median of the number of scratched trades for the traders in each category.

Although the percentages of scratched trades are similar, the mean number of immediately scratched trades by High Frequency Traders is much greater than for Market Makers: 540.56 per day on May 3-5 and 1610.75 on May 6 for High Frequency Traders, versus 13.35 and 72.92 for Market Makers. The differences between High Frequency Traders and Market Makers reflect differences in volume traded. The table also shows that High Frequency Traders and Market Makers scratch a significantly larger percentage of their trades than the other trader categories.