# Lemke-Howson Algorithm – Symmetric Game with Symmetric or Nonsymmetric Equilibria. Note Quote.

The Lemke-Howson algorithm (LHA) computes a sample mixed-strategy Nash equilibrium in a bimatrix game. It is a complementary pivoting algorithm, a variant of Lemke's algorithm for linear complementarity problems (LCPs). The LHA not only provides an elementary proof for the existence of equilibrium points, but also an efficient computational method for finding at least one equilibrium point. The LHA follows a path (called an LH path) of vertex pairs (x, y) of P × Q, for the polytopes P and Q,

P = {x ∈ RM | x ≥ 0, B⊤x ≤ 1},

Q = {y ∈ RN |Ay ≤ 1, y ≥ 0}

that starts at (0, 0) and ends at a Nash equilibrium. An LH path alternately follows edges of P and Q, keeping the vertex in the other polytope fixed. Because the game is nondegenerate, a vertex of P is given by m labels, and a vertex of Q is given by n labels. An edge of P is defined by m−1 labels.

For example, in the above figure, the edge defined by labels 1 and 3 joins the vertices 0 and c. Dropping a label l of a vertex x of P, say, means traversing the unique edge that has all the labels of x except for l. For example, dropping label 2 of the vertex 0 of P in the figure gives the edge, defined by labels 1 and 3, that joins 0 to vertex c. The endpoint of the edge has a new label, which is said to be picked up, for example, label 5 is picked up at vertex c.

The LHA starts from (0, 0) in P × Q. This is called the artificial equilibrium, which is a completely labeled vertex pair because every pure strategy has probability zero. It does not represent a Nash equilibrium of the game because the zero vector cannot be rescaled to a mixed strategy vector. An initial free choice of the LHA is a pure strategy k of a player (any label in M ∪ N ), called the missing label. Starting with (x, y) = (0, 0), label k is dropped. At the endpoint of the corresponding edge (of P if k ∈ M, of Q if k ∈ N), the new label that is picked up is duplicate because it was present in the other polytope. That duplicate label is then dropped in the other polytope, picking up a new label. If the newly picked label is the missing label, the algorithm terminates and has found a Nash equilibrium. Otherwise, the algorithm repeats by dropping the duplicate label in the other polytope, and continues in this fashion.
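The label bookkeeping above can be made concrete in code. The sketch below (Python) numbers the labels 0..m−1 for rows and m..m+n−1 for columns, takes the polytopes as P = {x ≥ 0, B⊤x ≤ 1} and Q = {Ay ≤ 1, y ≥ 0}, and uses exact rationals; the particular game in the usage note is an illustrative assumption, not one from the text:

```python
from fractions import Fraction

def labels_P(x, B):
    """Labels of a point x in P = {x >= 0, B^T x <= 1}: every pure strategy
    i in M with x_i = 0, plus every j in N whose constraint
    sum_i B[i][j] x_i <= 1 is tight."""
    m, n = len(B), len(B[0])
    labs = {i for i in range(m) if x[i] == 0}
    labs |= {m + j for j in range(n)
             if sum(Fraction(B[i][j]) * x[i] for i in range(m)) == 1}
    return labs

def labels_Q(y, A):
    """Labels of a point y in Q = {Ay <= 1, y >= 0}."""
    m, n = len(A), len(A[0])
    labs = {m + j for j in range(n) if y[j] == 0}
    labs |= {i for i in range(m)
             if sum(Fraction(A[i][j]) * y[j] for j in range(n)) == 1}
    return labs

def completely_labeled(x, y, A, B):
    """(x, y) is completely labeled iff every label in M ∪ N appears."""
    m, n = len(A), len(A[0])
    return labels_P(x, B) | labels_Q(y, A) == set(range(m + n))
```

For the 2 × 2 coordination game A = B = I, both the artificial pair (0, 0) and the rescaled mixed equilibrium x = y = (1, 1) come out completely labeled, while a non-equilibrium pair such as ((1, 0), (1, 1)) does not.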

Input: Nondegenerate bimatrix game.

Output: One Nash equilibrium of the game.

Method: Choose k ∈ M ∪ N , called the missing label. Let (x, y) = (0, 0) ∈ P × Q. Drop label k (from x in P if k ∈ M, from y in Q if k ∈ N).

Loop: Call the new vertex pair (x, y). Let l be the label that is picked up. If l = k, terminate with Nash equilibrium (x, y) (rescaled as mixed strategy pair). Otherwise, drop l in the other polytope and repeat.

The LHA terminates, and finds a Nash equilibrium, because P × Q has only finitely many vertex pairs and the next vertex pair on the path is always unique. Hence, a given vertex pair cannot be revisited: a revisit would require a second way of continuing from that pair, contradicting this uniqueness.

We have described the LH path for missing label k by means of alternating edges between the two polytopes. In fact, it is a path on the product polytope P × Q, given by the set of pairs (x, y) of P × Q that are k-almost completely labeled, meaning that every label in M ∪ N − {k} appears as a label of either x or y. In the above figure, for k = 2, the vertex pairs on the path are (0, 0), (c, 0), (c, p), (d, p), (d, q).

For a fixed missing label k, the k-almost completely labeled vertices and edges of the product polytope P × Q form a graph in which every vertex has degree 1 or 2. Clearly, such a graph consists of disjoint paths and cycles. The endpoints of the paths are completely labeled. They are the Nash equilibria of the game and the artificial equilibrium (0, 0).

A corollary follows: a nondegenerate bimatrix game has an odd number of Nash equilibria. The LHA can start at any Nash equilibrium, not just the artificial equilibrium. In the figure with missing label 2, starting the algorithm at the Nash equilibrium (d, q) would just generate the known LH path backward to (0, 0). When started at the Nash equilibrium (a, s), the LH path for the missing label 2 gives the vertex pair (b, s), where label 5 is duplicate, and then the equilibrium (b, r). This path cannot go back to (0, 0) because the path leading to (0, 0) starts at (d, q). This gives the three Nash equilibria of the game as endpoints of the two LH paths for missing label 2. These three equilibria can also be found by the LHA by varying the missing label.
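The odd-equilibrium count can be checked on small games by brute force. The sketch below is not the LH pivoting itself but a support-enumeration search over a nondegenerate bimatrix game, in exact rational arithmetic; the example game in the usage note is an illustrative assumption:

```python
from fractions import Fraction
from itertools import combinations

def solve(M, b):
    """Solve the square linear system M x = b over the rationals by
    Gauss-Jordan elimination; return None if M is singular."""
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [v - f * M[col][c] for c, v in enumerate(M[r])]
    return [M[r][n] for r in range(n)]

def support_equilibria(A, B):
    """All Nash equilibria of a nondegenerate bimatrix game (A, B), found by
    enumerating equal-size supports I ⊆ M, J ⊆ N (feasible for small games)."""
    m, n = len(A), len(A[0])
    A = [[Fraction(v) for v in row] for row in A]
    B = [[Fraction(v) for v in row] for row in B]
    found = []
    for size in range(1, min(m, n) + 1):
        for I in combinations(range(m), size):
            for J in combinations(range(n), size):
                # y on J equalising player 1's payoff u over the rows in I
                rows = [[A[i][j] for j in J] + [Fraction(-1)] for i in I]
                rows.append([Fraction(1)] * size + [Fraction(0)])
                sol = solve(rows, [Fraction(0)] * size + [Fraction(1)])
                if sol is None:
                    continue
                y, u = dict(zip(J, sol[:size])), sol[size]
                # x on I equalising player 2's payoff w over the columns in J
                rows = [[B[i][j] for i in I] + [Fraction(-1)] for j in J]
                rows.append([Fraction(1)] * size + [Fraction(0)])
                sol = solve(rows, [Fraction(0)] * size + [Fraction(1)])
                if sol is None:
                    continue
                x, w = dict(zip(I, sol[:size])), sol[size]
                if min(y.values()) < 0 or min(x.values()) < 0:
                    continue
                # no pure strategy outside the support may do strictly better
                if any(sum(A[i][j] * y[j] for j in J) > u
                       for i in range(m) if i not in I):
                    continue
                if any(sum(B[i][j] * x[i] for i in I) > w
                       for j in range(n) if j not in J):
                    continue
                found.append((tuple(x.get(i, Fraction(0)) for i in range(m)),
                              tuple(y.get(j, Fraction(0)) for j in range(n))))
    return found
```

For the 2 × 2 game A = [[2, 0], [0, 1]], B = [[1, 0], [0, 2]], this returns the two pure equilibria together with the mixed one ((2/3, 1/3), (1/3, 2/3)) – three in total, an odd number, as the corollary requires.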

However, some Nash equilibria can remain elusive to the LHA. An example is the following symmetric 3 × 3 game with

A = B =

Every Nash equilibrium (x, y) of this game is symmetric, i.e., x = y, where x is (0, 0, 1), (1/2, 1/4, 1/4), or (3/4, 1/4, 0). Only the first of these is found by the LHA, for any missing label; because the game is symmetric, it suffices to consider the missing labels 1, 2, 3. (A symmetric game remains unchanged when the players are exchanged; a symmetric game always has a symmetric equilibrium, but may also have nonsymmetric equilibria, which obviously come in pairs.)

# Superconformal Spin/Field Theories: When Vector Spaces have same Dimensions: Part 1, Note Quote.

A spin structure on a surface means a double covering of its space of non-zero tangent vectors which is non-trivial on each individual tangent space. On an oriented 1-dimensional manifold S it means a double covering of the space of positively-oriented tangent vectors. For purposes of gluing, this is the same thing as a spin structure on a ribbon neighbourhood of S in an orientable surface. Each spin structure has an automorphism which interchanges its sheets, and this will induce an involution T on any vector space which is naturally associated to a 1-manifold with spin structure, giving the vector space a mod 2 grading by its ±1-eigenspaces. A topological-spin theory is a functor from the cobordism category of manifolds with spin structures to the category of super vector spaces with its graded tensor structure. The functor is required to take disjoint unions to super tensor products, and additionally it is required that the automorphism of the spin structure of a 1-manifold induces the grading automorphism T = (−1)degree of the super vector space. This choice of the supersymmetry of the tensor product rather than the naive symmetry which ignores the grading is forced by the geometry of spin structures if the possibility of a semisimple category of boundary conditions is to be allowed. There are two non-isomorphic circles with spin structure: S1ns, with the Möbius or “Neveu-Schwarz” structure, and S1r, with the trivial or “Ramond” structure. A topological-spin theory gives us state spaces Cns and Cr, corresponding respectively to S1ns and S1r.

There are four cobordisms with spin structures which cover the standard annulus. The double covering can be identified with its incoming end times the interval [0,1], but then one has a binary choice when one identifies the outgoing end of the double covering over the annulus with the chosen structure on the outgoing boundary circle. In other words, alongside the cylinders A+ns,r = S1ns,r × [0,1] which induce the identity maps of Cns,r there are also cylinders A−ns,r which connect S1ns,r to itself while interchanging the sheets. These cylinders A−ns,r induce the grading automorphism on the state spaces. But because A−ns ≅ A+ns by an isomorphism which is the identity on the boundary circles – the Dehn twist which “rotates one end of the cylinder by 2π” – the grading on Cns must be purely even. The space Cr can have both even and odd components. The situation is a little more complicated for “U-shaped” cobordisms, i.e., cylinders with two incoming or two outgoing boundary circles. If the boundaries are S1ns there is only one possibility, but if the boundaries are S1r there are two, corresponding to A±r. The complication is that there seems no special reason to prefer either of the spin structures as “positive”. We shall simply choose one – let us call it P – with incoming boundary S1r ⊔ S1r, and use P to define a pairing Cr ⊗ Cr → C. We then choose a preferred cobordism Q in the other direction so that when we sew its right-hand outgoing S1r to the left-hand incoming one of P the resulting S-bend is the “trivial” cylinder A+r. We shall need to know, however, that the closed torus formed by the composition P ◦ Q has an even spin structure. The Frobenius structure θ on C restricts to 0 on Cr.

There is a unique spin structure on the pair-of-pants cobordism in the figure below, which restricts to S1ns on each boundary circle, and it makes Cns into a commutative Frobenius algebra in the usual way.

If one incoming circle is S1ns and the other is S1r then the outgoing circle is S1r, and there are two possible spin structures, but the one obtained by removing a disc from the cylinder A+r is preferred: it makes Cr into a graded module over Cns. The chosen U-shaped cobordism P, with two incoming circles S1r, can be punctured to give us a pair of pants with an outgoing S1ns, and it induces a graded bilinear map Cr × Cr → Cns which, composing with the trace on Cns, gives a non-degenerate inner product on Cr. At this point the choice of symmetry of the tensor product becomes important. Let us consider the diffeomorphism of the pair of pants which shows us in the usual case that the Frobenius algebra is commutative. When we lift it to the spin structure, this diffeomorphism induces the identity on one incoming circle but reverses the sheets over the other incoming circle, and this proves that the cobordism must have the same output when we change the input from S(φ1 ⊗ φ2) to T(φ1) ⊗ φ2, where T is the grading involution and S : Cr ⊗ Cr → Cr ⊗ Cr is the symmetry of the tensor category. If we take S to be the symmetry of the tensor category of vector spaces which ignores the grading, this shows that the product on the graded vector space Cr is graded-symmetric with the usual sign; but if S is the graded symmetry then we see that the product on Cr is symmetric in the naive sense.

There is an analogue for spin theories of the theorem which tells us that a two-dimensional topological field theory “is” a commutative Frobenius algebra. It asserts that a spin-topological theory “is” a Frobenius algebra C = Cns ⊕ Cr with the following property. Let {φk} be a basis for Cns, with dual basis {φk} such that θ(φkφm) = δmk, and let βk and βk be similar dual bases for Cr. Then the Euler elements χns := ∑ φkφk and χr := ∑ βkβk are independent of the choices of bases, and the condition we need on the algebra C is that χns = χr. In particular, this condition implies that the vector spaces Cns and Cr have the same dimension. In fact, the Euler elements can be obtained from cutting a hole out of the torus. There are actually four spin structures on the torus. The output state is necessarily in Cns. The Euler elements for the three even spin structures are equal to χe = χns = χr. The Euler element χo corresponding to the odd spin structure, on the other hand, is given by χo = ∑k (−1)deg(βk) βkβk.

A spin theory is very similar to a Z/2-equivariant theory, which is the structure obtained when the surfaces are equipped with principal Z/2-bundles (i.e., double coverings) rather than spin structures.

It seems reasonable to call a spin theory semisimple if the algebra Cns is semisimple, i.e., is the algebra of functions on a finite set X. Then Cr is the space of sections of a vector bundle E on X, and it follows from the condition χns = χr that the fibre at each point must have dimension 1. Thus the whole structure is determined by the Frobenius algebra Cns together with a binary choice at each point x ∈ X of the grading of the fibre Ex of the line bundle E at x.

We can now see that if we had not used the graded symmetry in defining the tensor category we should have forced the grading of Cr to be purely even. For on the odd part the inner product would have had to be skew, and that is impossible on a 1-dimensional space. And if both Cns and Cr are purely even then the theory is in fact completely independent of the spin structures on the surfaces.

A concrete example of a two-dimensional topological-spin theory is given by C = C ⊕ Cη where η2 = 1 and η is odd. The Euler elements are χe = 1 and χo = −1. It follows that the partition function of a closed surface with spin structure is ±1 according as the spin structure is even or odd.
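This example can be verified by direct computation. A minimal sketch (Python), encoding a·1 + b·η as the pair (a, b) and taking the trace θ(1) = 1, θ(η) = 0, so that θ restricts to 0 on Cr as required:

```python
# Element a·1 + b·η of C = Cns ⊕ Cr encoded as the pair (a, b); η is odd, η² = 1.
def mul(u, v):
    (a, b), (c, d) = u, v
    return (a * c + b * d, a * d + b * c)

ONE, ETA = (1, 0), (0, 1)

# With θ(1) = 1, θ(η) = 0, the dual basis of {1} in Cns is {1}
# and the dual basis of {η} in Cr is {η}.
chi_ns = mul(ONE, ONE)                      # χns = Σ φk φ^k
chi_r  = mul(ETA, ETA)                      # χr  = Σ βk β^k, so χns = χr
chi_o  = tuple(-t for t in mul(ETA, ETA))   # χo carries the extra sign (−1)^deg
```

The computation gives χe = χns = χr = 1 and χo = −1, reproducing the partition-function values ±1 quoted above.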

The most common theories defined on surfaces with spin structure are not topological: they are 2-dimensional conformal field theories with N = 1 supersymmetry. It should be noticed that if the theory is not topological then one does not expect the grading on Cns to be purely even: states can change sign on rotation by 2π. If a surface Σ has a conformal structure then a double covering of the non-zero tangent vectors is the complement of the zero-section in a two-dimensional real vector bundle L on Σ which is called the spin bundle. The covering map then extends to a symmetric pairing of vector bundles L ⊗ L → TΣ which, if we regard L and TΣ as complex line bundles in the natural way, induces an isomorphism L ⊗C L ≅ TΣ. An N = 1 superconformal field theory is a conformal-spin theory which assigns a vector space HS,L to the 1-manifold S with the spin bundle L, and is equipped with an additional map

Γ(S,L) ⊗ HS,L → HS,L

(σ,ψ) ↦ Gσψ,

where Γ(S,L) is the space of smooth sections of L, such that Gσ is real-linear in the section σ, and satisfies G2σ = Dσ2, where Dσ2 is the Virasoro action of the vector field σ2 related to σ ⊗ σ by the isomorphism L ⊗C L ≅ TΣ. Furthermore, when we have a cobordism (Σ,L) from (S0,L0) to (S1,L1) and a holomorphic section σ of L which restricts to σi on Si we have the intertwining property

Gσ1 ◦ UΣ,L = UΣ,L ◦ Gσ0

….

# Dynamics of Point Particles: Orthogonality and Proportionality

Let γ be a smooth, future-directed, timelike curve with unit tangent field ξa in our background spacetime (M, gab). We suppose that some massive point particle O has (the image of) this curve as its worldline. Further, let p be a point on the image of γ and let λa be a vector at p. Then there is a natural decomposition of λa into components proportional to, and orthogonal to, ξa:

λa = (λbξb)ξa + (λa − (λbξb)ξa) —– (1)

Here, the first part of the sum is proportional to ξa, whereas the second one is orthogonal to ξa.

These are standardly interpreted, respectively, as the “temporal” and “spatial” components of λa relative to ξa (or relative to O). In particular, the three-dimensional vector space of vectors at p orthogonal to ξa is interpreted as the “infinitesimal” simultaneity slice of O at p. If we introduce the tangent and orthogonal projection operators

kab = ξa ξb —– (2)

hab = gab − ξa ξb —– (3)

then the decomposition can be expressed in the form

λa = kab λb + hab λb —– (4)

We can think of kab and hab as the relative temporal and spatial metrics determined by ξa. They are symmetric and satisfy

kabkbc = kac —– (5)

habhbc = hac —– (6)
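The decomposition (4) and the projection identities (5) and (6) can be spot-checked numerically. A minimal sketch (Python) in Minkowski space with signature (1, 3); the particular ξa and λa are illustrative assumptions:

```python
import math

ETA = [1.0, -1.0, -1.0, -1.0]             # Minkowski metric diag(1, -1, -1, -1)

def lower(v):                              # v_a = g_ab v^b
    return [e * c for e, c in zip(ETA, v)]

xi = [math.sqrt(2.0), 1.0, 0.0, 0.0]       # unit timelike: ξaξa = 2 − 1 = 1
xi_l = lower(xi)

def k_apply(lam):                          # kab λb = (ξb λb) ξa, from equation (2)
    s = sum(b * l for b, l in zip(xi_l, lam))
    return [s * c for c in xi]

def h_apply(lam):                          # hab λb = λa − (ξb λb) ξa, from equation (3)
    return [l - kl for l, kl in zip(lam, k_apply(lam))]
```

Applying both projections to any λa returns the temporal and spatial components; their sum restores λa, each projection is idempotent, and the spatial part is orthogonal to ξa.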

Many standard textbook assertions concerning the kinematics and dynamics of point particles can be recovered using these decomposition formulas. For example, suppose that the worldline of a second particle O′ also passes through p and that its four-velocity at p is ξ′a. (Since ξa and ξ′a are both future-directed, they are co-oriented; i.e., ξa ξ′a > 0.) We compute the speed of O′ as determined by O. To do so, we take the spatial magnitude of ξ′a relative to O and divide by its temporal magnitude relative to O:

v = speed of O′ relative to O = ∥hab ξ′b∥ / ∥kab ξ′b∥ —– (7)

For any vector μa, ∥μa∥ is (μaμa)1/2 if μ is causal, and it is (−μaμa)1/2 otherwise.

We have from equations 2, 3, 5 and 6

∥kab ξ′b∥ = (kab ξ′b kac ξ′c)1/2 = (kbc ξ′bξ′c)1/2 = (ξ′bξb)

and

∥hab ξ′b∥ = (−hab ξ′b hac ξ′c)1/2 = (−hbc ξ′bξ′c)1/2 = ((ξ′bξb)2 − 1)1/2

so

v = ((ξ′bξb)2 − 1)1/2 / (ξ′bξb) < 1 —– (8)

Thus, as measured by O, no massive particle can ever attain the maximal speed 1. We note that equation (8) implies that

(ξ′bξb) = 1/√(1 − v2) —– (9)

It is a basic fact of relativistic life that there is associated with every point particle, at every event on its worldline, a four-momentum (or energy-momentum) vector Pa that is tangent to its worldline there. The length ∥Pa∥ of this vector is what we would otherwise call the mass (or inertial mass or rest mass) of the particle. So, in particular, if Pa is timelike, we can write it in the form Pa =mξa, where m = ∥Pa∥ > 0 and ξa is the four-velocity of the particle. No such decomposition is possible when Pa is null and m = ∥Pa∥ = 0.

Suppose a particle O with positive mass has four-velocity ξa at a point, and another particle O′ has four-momentum Pa there. The latter can either be a particle with positive mass or mass 0. We can recover the usual expressions for the energy and three-momentum of the second particle relative to O if we decompose Pa in terms of ξa. By equations (4) and (2), we have

Pa = (Pbξb) ξa + habPb —– (10)

the first part of the sum is the energy component, while the second is the three-momentum. The energy relative to O is the coefficient in the first term: E = Pbξb. If O′ has positive mass and Pa = mξ′a, this yields, by equation (9),

E = m (ξ′bξb) = m/√(1 − v2) —– (11)

(If we had not chosen units in which c = 1, the numerator in the final expression would have been mc2 and the denominator √(1 − (v2/c2)).) The three-momentum relative to O is the second term habPb in the decomposition of Pa, i.e., the component of Pa orthogonal to ξa. It follows from equations (8) and (9) that it has magnitude

p = ∥hab mξ′b∥ = m((ξ′bξb)2 − 1)1/2 = mv/√(1 − v2) —– (12)
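Equations (8), (9), (11), and (12) can be checked numerically in Minkowski space. A sketch (Python) with c = 1; the 3-velocity 0.6 and the mass m = 2 are illustrative assumptions:

```python
import math

ETA = [1.0, -1.0, -1.0, -1.0]             # Minkowski metric diag(1, -1, -1, -1)

def dot(u, v):                             # g_ab u^a v^b
    return sum(e * a * b for e, a, b in zip(ETA, u, v))

def four_velocity(v3):
    """Unit future-directed timelike vector for a 3-velocity v3 with |v3| < 1."""
    g = 1.0 / math.sqrt(1.0 - sum(c * c for c in v3))
    return [g] + [g * c for c in v3]

xi  = four_velocity([0.0, 0.0, 0.0])       # observer O, at rest in these coordinates
xi2 = four_velocity([0.6, 0.0, 0.0])       # particle O′ moving at speed 0.6

u = dot(xi, xi2)                           # ξ′bξb
v = math.sqrt(u * u - 1.0) / u             # equation (8)
m = 2.0
E = m * u                                  # equation (11): E = m/√(1 − v²)
p = m * math.sqrt(u * u - 1.0)             # equation (12): p = mv/√(1 − v²)
```

The recovered speed v equals the 3-velocity 0.6 put in, ξ′bξb equals 1/√(1 − v²) as equation (9) asserts, and E and p take the familiar values mγ and mvγ.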

An interpretive principle asserts that the worldlines of free particles with positive mass are the images of timelike geodesics. It can be thought of as a relativistic version of Newton’s first law of motion. Now we consider acceleration and a relativistic version of the second law. Once again, let γ : I → M be a smooth, future-directed, timelike curve with unit tangent field ξa. Just as we understand ξa to be the four-velocity field of a massive point particle (that has the image of γ as its worldline), so we understand ξn∇nξa – the directional derivative of ξa in the direction ξa – to be its four-acceleration (or just acceleration) field. The four-acceleration vector at any point is orthogonal to ξa, since ξa(ξn∇nξa) = 1/2 ξn∇n(ξaξa) = 1/2 ξn∇n(1) = 0. The magnitude ∥ξn∇nξa∥ of the four-acceleration vector at a point is just what we would otherwise describe as the curvature of γ there. It is a measure of the rate at which γ “changes direction.” (And γ is a geodesic precisely if its curvature vanishes everywhere.)

The notion of spacetime acceleration requires attention. Consider an example. Suppose you decide to end it all and jump off the tower. What would your acceleration history be like during your final moments? One is accustomed in such cases to think in terms of acceleration relative to the earth. So one would say that you undergo acceleration between the time of your jump and your calamitous arrival. But on the present account, that description has things backwards. Between jump and arrival, you are not accelerating. You are in a state of free fall and moving (approximately) along a spacetime geodesic. But before the jump, and after the arrival, you are accelerating. The floor of the observation deck, and then later the sidewalk, push you away from a geodesic path. The all-important idea here is that we are incorporating the “gravitational field” into the geometric structure of spacetime, and particles traverse geodesics iff they are acted on by no forces “except gravity.”

The acceleration of our massive point particle – i.e., its deviation from a geodesic trajectory – is determined by the forces acting on it (other than “gravity”). If it has mass m, and if the vector field Fa on I represents the vector sum of the various (non-gravitational) forces acting on it, then the particle’s four-acceleration ξn∇nξa satisfies

Fa = mξn∇nξa —– (13)

This is Newton’s second law of motion. Consider an example. Electromagnetic fields are represented by smooth, anti-symmetric fields Fab. If a particle with mass m > 0, charge q, and four-velocity field ξa is present, the force exerted by the field on the particle at a point is given by qFabξb. If we use this expression for the left side of equation (13), we arrive at the Lorentz law of motion for charged particles in the presence of an electromagnetic field:

qFabξb = mξb∇bξa —– (14)

This equation makes geometric sense. The acceleration field on the right is orthogonal to ξa. But so is the force field on the left, since ξa(Fabξb) = ξaξbFab = ξaξbF(ab), and F(ab) = 0 by the anti-symmetry of Fab.

# Unique Derivative Operator: Reparametrization. Metric Part 2.

Moving on from the first part.

Suppose ∇ is a derivative operator, and gab is a metric, on the manifold M. Then ∇ is compatible with gab iff ∇a gbc = 0.

Suppose γ is an arbitrary smooth curve with tangent field ξa and λa is an arbitrary smooth field on γ satisfying ξn∇nλa = 0. Then

ξn∇n(gabλaλb) = gabλaξn∇nλb + gabλbξn∇nλa + λaλbξn∇ngab

= λaλbξn∇ngab

Suppose first that ∇ngab = 0. Then it follows immediately that ξn∇n(gabλaλb) = 0. So ∇ is compatible with gab. Suppose next that ∇ is compatible with gab. Then ∀ choices of γ and λa (satisfying ξn∇nλa = 0), we have λaλbξn∇ngab = 0. Since the choice of λa (at any particular point) is arbitrary and gab is symmetric, it follows that ξn∇ngab = 0. But this must be true for arbitrary ξa (at any particular point), and so we have ∇ngab = 0.

Note that the condition of compatibility is also equivalent to ∇agbc = 0, where gbc is the inverse field. Hence,

0 = gbn∇aδnc = gbn∇a(gnrgrc) = gbngnr∇agrc + gbngrc∇agnr

= δrb∇agrc + gbngrc∇agnr = ∇agbc + gbngrc∇agnr.

So if ∇agnr = 0 for the inverse field, it follows immediately that ∇agbc = 0. Conversely, if ∇agbc = 0, then gbngrc∇agnr = 0. And therefore,

0 = gpbgscgbngrc∇agnr = δnpδrs∇agnr = ∇agps

The basic fact about compatible derivative operators is the following.

Suppose gab is a metric on the manifold M. Then there is a unique derivative operator on M that is compatible with gab.

(It turns out that if a manifold admits a metric, then it necessarily satisfies the countable cover condition, and that condition in turn guarantees the existence of a derivative operator.) Here we prove only that if M admits a derivative operator ∇, then it admits exactly one ∇′ that is compatible with gab.

Every derivative operator ∇′ on M can be realized as ∇′ = (∇, Cabc), where Cabc is a smooth, symmetric field on M. Now

∇′agbc = ∇agbc + gnc Cnab + gbn Cnac = ∇agbc + Ccab + Cbac. So ∇′ will be compatible with gab (i.e., ∇′agbc = 0) iff

∇agbc = −Ccab − Cbac —– (1)

Thus it suffices for us to prove that there exists a unique smooth, symmetric field Cabc on M satisfying equation (1). To do so, we write equation (1) twice more after permuting the indices:

∇cgab = −Cbca − Cacb,

∇bgac = −Ccba − Cabc

If we subtract these two from the first equation, and use the fact that Cabc is symmetric in (b, c), we get

Cabc = 1/2 (∇agbc − ∇bgac − ∇cgab) —– (2)

and, therefore,

Cabc = 1/2 gan (∇ngbc − ∇bgnc − ∇cgnb) —– (3)

This establishes uniqueness. But clearly the field Cabc defined by equation (3) is smooth, symmetric, and satisfies equation (1). So we have existence as well.
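Equation (3) can be evaluated directly in coordinates. The sketch below (Python) takes ∇ to be the flat coordinate derivative operator (partial differentiation) and, as an illustrative assumption, the Euclidean metric of the plane in polar coordinates, g = diag(1, r²), with partials approximated by central differences. With these sign conventions, Cabc comes out as the negative of the familiar Christoffel symbols:

```python
def metric(x):
    """Euclidean metric in polar coordinates (r, θ): g = diag(1, r²)."""
    r, th = x
    return [[1.0, 0.0], [0.0, r * r]]

def partial(f, x, k, h=1e-5):
    """Central-difference partial derivative of the matrix-valued f along x^k."""
    xp, xm = list(x), list(x)
    xp[k] += h; xm[k] -= h
    return [[(a - b) / (2 * h) for a, b in zip(rp, rm)]
            for rp, rm in zip(f(xp), f(xm))]

def inverse2(g):
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return [[ g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det,  g[0][0] / det]]

def C_field(x):
    """C^a_bc = 1/2 g^an (∇n g_bc − ∇b g_nc − ∇c g_nb), equation (3),
    with ∇ the coordinate partial-derivative operator."""
    ginv = inverse2(metric(x))
    dg = [partial(metric, x, k) for k in range(2)]   # dg[k][i][j] = ∂k g_ij
    C = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    for a in range(2):
        for b in range(2):
            for c in range(2):
                C[a][b][c] = 0.5 * sum(
                    ginv[a][n] * (dg[n][b][c] - dg[b][n][c] - dg[c][n][b])
                    for n in range(2))
    return C
```

At the point (r, θ) = (2, 0.3) this gives C with C of index (r, θθ) equal to r and index (θ, rθ) equal to −1/r, symmetric in its last two indices, as the derivation requires.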

In the case of positive definite metrics, there is another way to capture the significance of compatibility of derivative operators with metrics. Suppose the metric gab on M is positive definite and γ : [s1, s2] → M is a smooth curve on M. We associate with γ a length

|γ| = ∫s1s2 (gabξaξb)1/2 ds,

where ξa is the tangent field to γ. This assigned length is invariant under reparametrization. For suppose σ : [t1, t2] → [s1, s2] is a diffeomorphism (we shall write s = σ(t)) and ξ′a is the tangent field of γ′ = γ ◦ σ : [t1, t2] → M. Then ξ′a = (ds/dt)ξa.

We may as well require that the reparametrization preserve the orientation of the original curve – i.e., require that σ(t1) = s1 and σ(t2) = s2. In this case, ds/dt > 0 everywhere. (Only small changes are needed if we allow the reparametrization to reverse the orientation of the curve; in that case, ds/dt < 0 everywhere.) It follows that

|γ′| = ∫t1t2 (gabξ′aξ′b)1/2 dt = ∫t1t2 (gabξaξb)1/2 (ds/dt) dt

= ∫s1s2 (gabξaξb)1/2 ds = |γ|

Let us say that γ : I → M is a curve from p to q if I is of the form [s1, s2], p = γ(s1), and q = γ(s2). In this (positive definite) case, we take the distance from p to q to be

d(p, q) = g.l.b. {|γ| : γ is a smooth curve from p to q}.

Further, we say that a curve γ : I → M is minimal if, for all s ∈ I, ∃ an ε > 0 such that, for all s1, s2 ∈ I with s1 ≤ s ≤ s2, if s2 − s1 < ε and if γ′ = γ|[s1, s2] (the restriction of γ to [s1, s2]), then |γ′| = d(γ(s1), γ(s2)) . Intuitively, minimal curves are “locally shortest curves.” Certainly they need not be “shortest curves” outright. (Consider, for example, two points on the “equator” of a two-sphere that are not antipodal to one another. An equatorial curve running from one to the other the “long way” qualifies as a minimal curve.)

One can characterize the unique derivative operator compatible with a positive definite metric gab in terms of the latter’s associated minimal curves. But in doing so, one has to pay attention to parametrization.

Let us say that a smooth curve γ : I → M with tangent field ξa is parametrized by arc length if gabξaξb = 1 everywhere. In this case, if I = [s1, s2], then

|γ| = ∫s1s2 (gabξaξb)1/2 ds = ∫s1s2 1 ds = s2 − s1

Any non-trivial smooth curve can always be reparametrized by arc length.
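The reparametrization invariance of |γ| can be spot-checked numerically. A sketch (Python) for the Euclidean metric on R², using finite-difference tangents and a midpoint rule; the half-circle curve and the reparametrization s = σ(t) = t² are illustrative assumptions:

```python
import math

def length(gamma, s1, s2, n=20000):
    """|γ| = ∫ (g_ab ξ^a ξ^b)^(1/2) ds for the Euclidean metric on R²,
    approximated with central-difference tangents and a midpoint rule."""
    h = (s2 - s1) / n
    total = 0.0
    for i in range(n):
        s = s1 + (i + 0.5) * h
        (x1, y1), (x2, y2) = gamma(s - h / 2), gamma(s + h / 2)
        xi = ((x2 - x1) / h, (y2 - y1) / h)     # tangent field ξ at s
        total += math.sqrt(xi[0] ** 2 + xi[1] ** 2) * h
    return total

circle = lambda s: (math.cos(s), math.sin(s))          # unit-speed half circle
L1 = length(circle, 0.0, math.pi)                      # |γ|
# orientation-preserving reparametrization s = σ(t) = t², t ∈ [0, √π]
L2 = length(lambda t: circle(t * t), 0.0, math.sqrt(math.pi))  # |γ′|
```

Both computations return the half-circle’s length π (to the accuracy of the discretization), even though the second curve is not parametrized by arc length.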

# Metric. Part 1.

A (semi-Riemannian) metric on a manifold M is a smooth field gab on M that is symmetric and invertible; i.e., there exists an (inverse) field gbc on M such that gabgbc = δac.

The inverse field gbc of a metric gab is symmetric and unique. It is symmetric since

gcb = gnb δnc = gnb(gnm gmc) = (gmn gnb)gmc = δmb gmc = gbc

(Here we use the symmetry of gnm for the third equality.) It is unique because if g′bc is also an inverse field, then

g′bc = g′nc δnb = g′nc(gnm gmb) = (gmn g′nc) gmb = δmc gmb = gcb = gbc

(Here again we use the symmetry of gnm for the third equality; and we use the symmetry of gcb for the final equality.) The inverse field gbc of a metric gab is smooth. This follows, essentially, because given any invertible square matrix A (over R), the components of the inverse matrix A−1 depend smoothly on the components of A.

The requirement that a metric be invertible can be given a second formulation. Indeed, given any field gab on the manifold M (not necessarily symmetric and not necessarily smooth), the following conditions are equivalent.

(1) There is a tensor field gbc on M such that gabgbc = δac.

(2) ∀ p in M, and all vectors ξa at p, if gabξa = 0, then ξa =0.

(When the conditions obtain, we say that gab is non-degenerate.) To see this, assume first that (1) holds. Then given any vector ξa at any point p, if gab ξa = 0, it follows that ξc = δac ξa = gbc gab ξa = 0. Conversely, suppose that (2) holds. Then at any point p, the map from (Mp)a to (Mp)b defined by ξa → gab ξa is an injective linear map. Since (Mp)a and (Mp)b have the same dimension, it must be surjective as well. So the map must have an inverse gbc defined by gbc(gab ξa) = ξc or gbc gab = δac.

In the presence of a metric gab, it is customary to adopt a notation convention for “lowering and raising indices.” Consider first the case of vectors. Given a contravariant vector ξa at some point, we write gab ξa as ξb; and given a covariant vector ηb, we write gbc ηb as ηc. The notation is evidently consistent in the sense that first lowering and then raising the index of a vector (or vice versa) leaves the vector intact.

One would like to extend this notational convention to tensors with more complex index structure. But now one confronts a problem. Given a tensor αcab at a point, for example, how should we write gmc αcab? As αmab? Or as αamb? Or as αabm? In general, these three tensors will not be equal. To get around the problem, we introduce a new convention. In any context where we may want to lower or raise indices, we shall write indices, whether contravariant or covariant, in a particular sequence. So, for example, we shall write αabc or αacb or αcab. (These tensors may be equal – they belong to the same vector space – but they need not be.) Clearly this convention solves our problem. We write gmc αabc as αabm; gmc αacb as αamb; and so forth. No ambiguity arises. (And it is still the case that if we first lower an index on a tensor and then raise it (or vice versa), the result is to leave the tensor intact.)

We claimed in the preceding paragraph that the tensors αabc and αacb (at some point) need not be equal. Here is an example. Suppose ξ1a, ξ2a, … , ξna is a basis for the tangent space at a point p. Further suppose αabc = ξia ξjb ξkc at the point. Then αacb = ξia ξjc ξkb. Hence, lowering indices, we have αabc = ξia ξjb ξkc but αacb = ξia ξjc ξkb at p. These two will not be equal unless j = k.
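The point can be exercised numerically. A minimal sketch (Python) with an assumed 2-dimensional metric diag(1, −1) in an orthonormal basis: lowering then raising an index restores a vector, while lowering into different index sequences yields genuinely different arrays:

```python
G = [[1.0, 0.0], [0.0, -1.0]]   # g_ab; for this metric g^ab has the same matrix

def lower(v):                    # ξ_a = g_ab ξ^b
    return [sum(G[a][b] * v[b] for b in range(2)) for a in range(2)]

raise_ = lower                   # raising uses g^ab, which here equals g_ab

def outer3(p, q, r):             # array of the rank-3 tensor with slots p, q, r
    return [[[p[a] * q[b] * r[c] for c in range(2)]
             for b in range(2)] for a in range(2)]

u, v, w = [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]
lu, lv, lw = lower(u), lower(v), lower(w)

alpha_abc = outer3(lu, lv, lw)   # α lowered in the index sequence abc
alpha_acb = outer3(lu, lw, lv)   # α lowered in the sequence acb (last slots swapped)
```

Since the second and third vectors differ (j ≠ k in the text’s notation), the two lowered arrays disagree, while lower-then-raise is the identity on any vector.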

We have reserved special notation for two tensor fields: the index substiution field δba and the Riemann curvature field Rabcd (associated with some derivative operator). Our convention will be to write these as δab and Rabcd – i.e., with contravariant indices before covariant ones. As it turns out, the order does not matter in the case of the first since δab = δba. (It does matter with the second.) To verify the equality, it suffices to observe that the two fields have the same action on an arbitrary field αb:

δbaαb = (gbngamδnm)αb = gbnganαb = gbngnaαb = δabαb

Now suppose gab is a metric on the n-dimensional manifold M and p is a point in M. Then there exists an m, with 0 ≤ m ≤ n, and a basis ξ1a, ξ2a,…, ξna for the tangent space at p such that

gabξia ξib = +1 if 1≤i≤m

gabξiaξib = −1 if m<i≤n

gabξiaξjb = 0 if i ≠ j

Such a basis is called orthonormal. Orthonormal bases at p are not unique, but all have the same associated number m. We call the pair (m, n − m) the signature of gab at p. (The existence of orthonormal bases and the invariance of the associated number m are basic facts of linear algebraic life.) A simple continuity argument shows that any connected manifold must have the same signature at each point. We shall henceforth restrict attention to connected manifolds and refer simply to the “signature of gab”.

A metric with signature (n, 0) is said to be positive definite. With signature (0, n), it is said to be negative definite. With any other signature it is said to be indefinite. A Lorentzian metric is a metric with signature (1, n − 1). The mathematics of relativity theory is, to some degree, just a chapter in the theory of four-dimensional manifolds with Lorentzian metrics.

Suppose gab has signature (m, n − m), and ξ1a, ξ2a, . . . , ξna is an orthonormal basis at a point. Further, suppose μa and νa are vectors there. If

μa = ∑ni=1 μi ξia and νa = ∑ni=1 νi ξia, then it follows from the bilinearity of gab that

gabμa νb = μ1ν1 +…+ μmνm − μ(m+1)ν(m+1) −…−μnνn.

In the special case where the metric is positive definite, this comes to

gabμaνb = μ1ν1 +…+ μnνn

And where it is Lorentzian,

gab μaνb = μ1ν1 − μ2ν2 −…− μnνn

Metrics and derivative operators are not just independent objects, but, in a quite natural sense, a metric determines a unique derivative operator.

Suppose gab and ∇ are both defined on the manifold M. Further suppose

γ : I → M is a smooth curve on M with tangent field ξa and λa is a smooth field on γ. Both ∇ and gab determine a criterion of “constancy” for λa: λa is constant with respect to ∇ if ξn∇nλa = 0, and is constant with respect to gab if gabλaλb is constant along γ – i.e., if ξn∇n(gabλaλb) = 0. It seems natural to consider pairs gab and ∇ for which the first condition of constancy implies the second. Let us say that ∇ is compatible with gab if, for all γ and λa as above, λa is constant with respect to gab whenever it is constant with respect to ∇.

# Mappings, Manifolds and Kantian Abstract Properties of Synthesis

An inverse system is a collection of sets which are connected by mappings. We start off with the definitions before relating these to abstract properties of synthesis.

Definition: A directed set is a set T together with an ordering relation ≤ such that

(1) ≤ is a partial order, i.e. transitive, reflexive, anti-symmetric

(2) ≤ is directed, i.e. for any s, t ∈ T there is r ∈ T with s, t ≤ r

Definition: An inverse system indexed by T is a set D = {Ds|s ∈ T} together with a family of mappings F = {hst|s ≥ t, hst : Ds → Dt}. The mappings in F must satisfy the coherence requirement that if s ≥ t ≥ r, htr ◦ hst = hsr.
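The definition can be sketched in a toy example (our own construction, chosen only to make the coherence requirement checkable): take T to be the natural numbers with the usual order, Ds the set of 0/1 tuples of length s, and hst the truncation map.

```python
from itertools import product

# A toy inverse system indexed by the directed set (N, <=):
# D_s = all 0/1 tuples of length s, and for s >= t the map
# h_st truncates a tuple to its first t entries.
def D(s):
    return set(product((0, 1), repeat=s))

def h(s, t, d):
    """The mapping h_st : D_s -> D_t, defined for s >= t."""
    assert s >= t and len(d) == s
    return d[:t]

# Coherence requirement: for s >= t >= r, h_tr . h_st = h_sr
s, t, r = 3, 2, 1
for d in D(s):
    assert h(t, r, h(s, t, d)) == h(s, r, d)
print("coherence holds")
```

Truncating in two steps agrees with truncating in one, which is exactly the condition htr ◦ hst = hsr.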

Interpretation of the index set: The index set represents some abstract properties of synthesis. The ‘synthesis of apprehension in intuition’ proceeds by a ‘running through and holding together of the manifold’ and is thus a process that takes place in time. We may now think of an index s ∈ T as an interval of time available for the process of ‘running through and holding together’. More formally, s can be taken to be a set of instants or events, ordered by a ‘precedes’ relation; the relation t ≤ s then stands for: t is a substructure of s. It is immediate that on this interpretation ≤ is a partial order. The directedness is related to what Kant called ‘the formal unity of the consciousness in the synthesis of the manifold of representations’ or ‘the necessary unity of self-consciousness, thus also of the synthesis of the manifold, through a common function of the mind for combining it in one representation’ – the requirement that ‘for any s, t ∈ T there is r ∈ T with s, t ≤ r’ creates the formal conditions for combining the syntheses executed during s and t in one representation, coded by r.

Interpretation of the Ds and the mappings hst : Ds → Dt. An object in Ds can be thought of as a possible ‘indeterminate object of empirical intuition’ synthesised in the interval s. If s ≥ t, the mapping hst : Ds → Dt expresses a consistency requirement: if d ∈ Ds represents an indeterminate object of empirical intuition synthesised in interval s, so that a particular manifold of features can be ‘run through and held together’ during s, some indeterminate object of empirical intuition must already be synthesisable by ‘running through and holding together’ in interval t, e.g. by combining a subset of the features characterising d. This interpretation justifies the coherence condition (if s ≥ t ≥ r, then htr ◦ hst = hsr): the synthesis obtained by first restricting the interval available for ‘running through and holding together’ to interval t, and then to interval r, should not differ from the synthesis obtained by restricting to r directly.

We do not put any further requirements on the mappings hst : Ds → Dt, such as surjectivity or injectivity. Some indeterminate object of experience in Dt may have disappeared in Ds: more time for ‘running through and holding together’ may actually yield fewer features that can be combined. Thus we do not require the mappings to be surjective. It may also happen that an indeterminate object of experience in Dt corresponds to two or more of such objects in Ds, as when a building viewed from afar upon closer inspection turns out to be composed of two spatially separated buildings; thus the mappings need not be injective.

The interaction of the directedness of the index set and the mappings hst is of some interest. If r ≥ s, t, there are mappings hrs : Dr → Ds and hrt : Dr → Dt. Each ‘indeterminate object of empirical intuition’ d ∈ Dr can be seen as a synthesis of such objects hrs(d) ∈ Ds and hrt(d) ∈ Dt. For example, the ‘manifold of a house’ can be viewed as synthesised from a ‘manifold of the front’ and a ‘manifold of the back’. The operation just described has some of the characteristics of the synthesis of reproduction in imagination: the fact that the front of the house can be unified with the back to produce a coherent object presupposes that the front can be reproduced as it is while we are staring at the back. The mappings hrs : Dr → Ds and hrt : Dr → Dt capture the idea that d ∈ Dr arises from reproductions of hrs(d) and hrt(d) in r.

# Leibniz’s Compossibility and Compatibility

Leibniz believed in discovering a suitable logical calculus of concepts that would enable its user to solve any rational question. Assuming this done, he was in a position to sketch the full ontological system – from monads and qualities to the real world.

Thus let some logical calculus of concepts (names? predicates?) be given. Cn is its consequence operator, whereas – for any x – Th(x) is the Cn-theory generated by x.

Leibniz defined modal concepts by the following metalogical conditions:

M(x) :↔ ⊥ ∉ Th(x)

x is possible (its theory is consistent)

L(x) :↔ ⊥ ∈ Th(¬x)

x is necessary (its negation is impossible)

C(x,y) :↔ ⊥ ∉ Cn(Th(x) ∪ Th(y))

x and y are compossible (their common theory is consistent).

Immediately we obtain the Leibnizian “soundness” conditions:

C(x, y) ↔ C(y, x) Compossibility relation is symmetric.

M(x) ↔ C(x, x) Possibility means self-compossibility.

C(x, y) → M(x)∧M(y) Compossibility implies possibility.

When can the above implication be reversed?
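The three soundness conditions, and the failure of the reverse implication, can be checked in a toy model (entirely our own: a “concept” is a set of signed literals, its theory is the set itself, and a theory is consistent iff it contains no literal together with its negation):

```python
# Toy model of the Leibnizian definitions: a "concept" is a set of
# signed literals such as {("p", True), ("q", False)}.
def consistent(theory):
    return not any((p, not v) in theory for (p, v) in theory)

def neg(x):
    return {(p, not v) for (p, v) in x}

def M(x):        # x is possible: its theory is consistent
    return consistent(x)

def L(x):        # x is necessary: its negation is impossible
    return not consistent(neg(x))

def C(x, y):     # x and y are compossible: their common theory is consistent
    return consistent(x | y)

a = {("p", True)}
b = {("p", False), ("q", True)}

assert C(a, b) == C(b, a)                  # compossibility is symmetric
assert M(a) == C(a, a)                     # possibility = self-compossibility
assert (not C(a, b)) or (M(a) and M(b))    # compossibility implies possibility
print(M(a), M(b), C(a, b))
```

Here a and b are each possible, yet not compossible (they disagree on p), so M(x) ∧ M(y) → C(x, y) fails in general – which is precisely why the question of reversing the implication is non-trivial.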

Ontological construction

Observe that in the framework of combination ontology we have already defined M(x) in a way respecting M(x) ↔ C(x, x).

On the other hand, between MP( , ) and C( , ) there is another relation, more fundamental than compossibility: the so-called compatibility relation. Indeed, putting

CP(x, y) :↔ MP(x, y) ∧ MP(y, x) – for compatibility, and C(x, y) :↔ M(x) ∧ M(y) ∧ CP(x, y) – for compossibility,

we obtain a manageable compossibility relation obeying the above Leibnizian “soundness” conditions.

Wholes are combinations of compossible collections, whereas possible worlds are obtained by maximalization of wholes.

Observe that we start with one basic ontological making: MP(x, y) – a modality more fundamental than Leibnizian compossibility, for the latter is definable from it in two steps. Observe also that the above construction can be carried out for ‘making impossible’, and for both basic ontological modalities together (producing a quite Hegelian output in this case!).

# Nomological Possibility and Necessity

An event E is nomologically possible in history h at time t if the initial segment of that history up to t admits at least one continuation in Ω that lies in E; and E is nomologically necessary in h at t if every continuation of the history’s initial segment up to t lies in E.

More formally, we say that one history, h’, is accessible from another, h, at time t if the initial segments of h and h’ up to time t coincide, i.e., ht = h’t. We then write h’Rth. The binary relation Rt on possible histories is in fact an equivalence relation (reflexive, symmetric, and transitive). Now, an event E ⊆ Ω is nomologically possible in history h at time t if some history h’ in Ω that is accessible from h at t is contained in E. Similarly, an event E ⊆ Ω is nomologically necessary in history h at time t if every history h’ in Ω that is accessible from h at t is contained in E.

In this way, we can define two modal operators, ♦t and ¤t, to express possibility and necessity at time t. We define each of them as a mapping from events to events. For any event E ⊆ Ω,

♦t E = {h ∈ Ω : for some h’ ∈ Ω with h’Rth, we have h’ ∈ E},

¤t E = {h ∈ Ω : for all h’ ∈ Ω with h’Rth, we have h’ ∈ E}.

So, ♦t E is the set of all histories in which E is possible at time t, and ¤t E is the set of all histories in which E is necessary at time t. Accordingly, we say that “♦t E” holds in history h if h is an element of ♦t E, and “¤t E” holds in h if h is an element of ¤t E. As one would expect, the two modal operators are duals of each other: for any event E ⊆ Ω, we have ¤t E = ~♦t ~E and ♦t E = ~¤t ~E.
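The operators ♦t and ¤t can be sketched in a small finite system (a toy illustration of our own; the set `Omega`, the event E, and the names `poss`/`nec` are assumptions, not from the text):

```python
# Toy system: three time points 0..2, states 0/1, Omega a set of histories.
Omega = {(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)}

def accessible(h1, h2, t):
    """h1 R_t h2: the initial segments of h1 and h2 up to t coincide."""
    return h1[:t] == h2[:t]

def poss(E, t):   # diamond_t E: histories in which E is possible at t
    return {h for h in Omega if any(accessible(h2, h, t) for h2 in E)}

def nec(E, t):    # box_t E: histories in which E is necessary at t
    return {h for h in Omega
            if all(h2 in E for h2 in Omega if accessible(h2, h, t))}

E = {h for h in Omega if h[2] == 1}   # the event "state 1 at the last time"

# Duality: box_t E = ~ diamond_t ~E at every time point
for t in range(3):
    assert nec(E, t) == Omega - poss(Omega - E, t)

# Possibility becomes more demanding as time progresses: diamond_t' E <= diamond_t E
for t in range(2):
    assert poss(E, t + 1) <= poss(E, t)
print(sorted(poss(E, 2)), sorted(nec(E, 1)))
```

The final assertions check the duality and the monotonicity claims made in the surrounding text on this small example.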

Two remarks are in order. First, although we have here defined nomological possibility and necessity, we can analogously define logical possibility and necessity. To do this, we must simply replace every occurrence of the set Ω of nomologically possible histories in our definitions with the set H of logically possible histories. Second, by defining the operators ♦t and ¤t as functions from events to events, we have adopted a semantic definition of these modal notions. However, we could also define them syntactically, by introducing an explicit modal logic. For each point in time t, the logic corresponding to the operators ♦t and ¤t would then be an instance of a standard S5 modal logic.

The analysis shows how nomological possibility and necessity depend on the dynamics of the system. In particular, as time progresses, the notion of possibility becomes more demanding: fewer events remain possible at each time. And the notion of necessity becomes less demanding: more events become necessary at each time, for instance due to having been “settled” in the past. Formally, for any t and t’ in T with t < t’ and any event E ⊆ Ω,

if ♦t’ E then ♦t E,

if ¤t E then ¤t’ E.

Furthermore, in a deterministic system, for every event E and any time t, we have ♦t E = ¤t E. In other words, an event is possible in any history h at time t if and only if it is necessary in h at t. In an indeterministic system, by contrast, necessity and possibility come apart.

Let us say that one history, h’, is accessible from another, h, relative to a set T’ of time points, if the restrictions of h and h’ to T’ coincide, i.e., h’T’ = hT’. We then write h’RT’h. Accessibility at time t is the special case where T’ is the set of points in time up to time t. We can define nomological possibility and necessity relative to T’ as follows. For any event E ⊆ Ω,

♦T’ E = {h ∈ Ω : for some h’ ∈ Ω with h’RT’h, we have h’ ∈ E},

¤T’ E = {h ∈ Ω : for all h’ ∈ Ω with h’RT’h, we have h’ ∈ E}.

Although these modal notions are much less familiar than the standard ones (possibility and necessity at time t), they are useful for some purposes. In particular, they allow us to express the fact that the states of a system during a particular period of time, T’ ⊆ T, render some events E possible or necessary.

Finally, our definitions of possibility and necessity relative to some general subset T’ of T also allow us to define completely “atemporal” notions of possibility and necessity. If we take T’ to be the empty set, then the accessibility relation RT’ becomes the universal relation, under which every history is related to every other. An event E is possible in this atemporal sense (i.e., ♦E) iff E is a non-empty subset of Ω, and it is necessary in this atemporal sense (i.e., ¤E) iff E coincides with all of Ω. These notions might be viewed as possibility and necessity from the perspective of some observer who has no temporal or historical location within the system and looks at it from the outside.
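The atemporal special case can be checked directly (again a toy sketch of our own; `Omega` and the names are assumptions): with T’ = ∅ the accessibility relation is universal, so possibility reduces to non-emptiness and necessity to coincidence with Ω.

```python
# Toy system with two time points; histories are pairs of states.
Omega = {(0, 0), (0, 1), (1, 1)}

def accessible(h1, h2, Tprime):
    """h1 R_T' h2: the restrictions of h1 and h2 to T' coincide."""
    return all(h1[t] == h2[t] for t in Tprime)

def poss(E, Tprime):   # diamond_T' E
    return {h for h in Omega if any(accessible(h2, h, Tprime) for h2 in E)}

def nec(E, Tprime):    # box_T' E
    return {h for h in Omega
            if all(h2 in E for h2 in Omega if accessible(h2, h, Tprime))}

E = {(0, 1), (1, 1)}

# With T' = {} the relation is universal: E non-empty makes it possible
# everywhere, while only E = Omega would make it necessary anywhere.
assert poss(E, set()) == Omega
assert nec(E, set()) == set()
assert nec(Omega, set()) == Omega
print("atemporal operators behave as described")
```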

# Speculations

Any system that uses only a single asset’s price as input (and possibly the prices of multiple assets, but this case is not completely clear) cannot make money. The price is actually secondary and typically fluctuates by a few percent a day, in contrast with liquidity flow, which fluctuates over orders of magnitude. This also allows one to estimate the maximal workable time scale: the scale on which execution flow fluctuates by at least an order of magnitude (i.e., by a factor of 10).

Any system that has a built-in fixed time scale (e.g. a moving-average type of system) cannot make money. The market has no specific time scale.

Any “symmetric” system with just two signals, “buy” and “sell”, cannot make money. The minimal number of signals is four: “buy”, “sell position”, “sell short”, “cover short”. A system where e.g. “buy” and “cover short” are the same signal will eventually catastrophically lose money on an event when the market goes against the position held. Short covering is buying back borrowed securities in order to close an open short position. Short covering refers to the purchase of the exact same security that was initially sold short, since the short-sale process involved borrowing the security and selling it in the market. For example, assume you sold short 100 shares of XYZ at $20 per share, based on your view that the shares were headed lower. When XYZ declines to $15, you buy back 100 shares of XYZ in the market to cover your short position (and pocket a gross profit of $500 from your short trade).
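The gross-profit arithmetic in that example can be checked directly (a minimal sketch; the function name is ours):

```python
# Worked example from the text: short 100 shares of XYZ at $20,
# cover at $15; gross profit = (sell price - buy-back price) * shares.
def short_gross_profit(shares, sell_price, cover_price):
    return (sell_price - cover_price) * shares

profit = short_gross_profit(100, 20.0, 15.0)
print(profit)  # 500.0
```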

Any system entering a position (no matter whether long or short) during a liquidity excess (e.g. I > IIH) cannot make money. During a liquidity excess, price movement is typically large, and “revert to the moving average” types of system often use such an event as a position-entering signal. The market after a liquidity excess event bounces a little, then typically continues in the same direction. This creates a risk over what to bet on: the “little bounce” or “follow the market”. What one should do during a liquidity excess event is to CLOSE the existing position. This is very fundamental: if you have a position during market uncertainty, eventually you will lose money; you must have ZERO position during a liquidity excess. This is a very important element of the P&L trading strategy.

Any system not entering a position during a liquidity deficit event (e.g. I < IIL) typically loses money. Liquidity deficit periods are characterized by small price movements and are difficult to identify by price-based trading systems. A liquidity deficit actually means that at the current price buyers and sellers do not match well, and a substantial price movement is expected. This is very well known to most traders: before a large market movement, volatility (and e.g. the standard deviation as its crude measure) becomes very low. The direction (whether one should go long or short) during a liquidity deficit event can, to some extent, be determined by the balance of the supply–demand generalization.

An important issue to discuss is: what would happen to the markets if this strategy (enter on liquidity deficit, exit on liquidity excess) were applied on a mass scale by market participants? In contrast with other trading strategies, which reduce liquidity at the current price when applied (when the price is moved into uncharted territory, the liquidity drains out because supply or demand drains), this strategy actually increases market liquidity at the current price. This insensitivity to the price value is expected to lead not to the strategy ceasing to work when applied on a mass scale, but to its working better and better, and to the markets’ destabilization in the end.