# Coarse Philosophies of Coarse Embeddabilities: Metric Space Conjectures Act Algorithmically On Manifolds – Thought of the Day 145.0

A coarse structure on a set X is defined to be a collection of subsets of X × X, called the controlled sets or entourages for the coarse structure, which satisfy some simple axioms. The most important of these states that if E and F are controlled then so is

E ◦ F := {(x, z) : ∃y, (x, y) ∈ E, (y, z) ∈ F}

Consider the metric spaces Zn and Rn. Their small-scale structure, their topology, is entirely different, but on the large scale they resemble each other closely: any geometric configuration in Rn can be approximated by one in Zn, to within a uniformly bounded error. We think of such spaces as “coarsely equivalent”. The other axioms require that the diagonal should be a controlled set, and that subsets, transposes, and (finite) unions of controlled sets should be controlled. It is more accurate to say that a coarse structure is the large-scale counterpart of a uniformity than of a topology.

Coarse structures and coarse spaces enjoy a philosophical advantage over metric spaces, in that all left-invariant bounded geometry metrics on a countable group induce the same metric coarse structure, which is therefore transparently uniquely determined by the group. On the other hand, the absence of a natural gauge complicates the notion of a coarse family: while it is natural to speak of sets of uniform size in different metric spaces, it is not possible to do so in different coarse spaces without imposing additional structure.

Mikhail Leonidovich Gromov introduced the notion of coarse embedding for metric spaces. Let X and Y be metric spaces.

A map f : X → Y is said to be a coarse embedding if there exist nondecreasing functions ρ1 and ρ2 from R+ = [0, ∞) to R such that

• ρ1(d(x,y)) ≤ d(f(x),f(y)) ≤ ρ2(d(x,y)) ∀ x, y ∈ X.
• limr→∞ ρi(r) = +∞ (i=1, 2).
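The two conditions can be checked numerically for a concrete map. A minimal sketch, assuming the illustrative map f(n) = 2n + sin(n) from Z to R together with the hypothetical control functions ρ1(r) = 2r − 2 and ρ2(r) = 2r + 2 (both nondecreasing and tending to +∞):

```python
import numpy as np

def f(n):
    """Illustrative candidate coarse embedding of Z into R."""
    return 2 * n + np.sin(n)

# Control functions: nondecreasing, and rho_i(r) -> +infinity as r -> infinity.
rho1 = lambda r: 2 * r - 2
rho2 = lambda r: 2 * r + 2

# Sample pairs of integers and verify rho1(d(x,y)) <= |f(x)-f(y)| <= rho2(d(x,y)).
points = np.arange(-500, 500)
for x in points[::37]:
    for y in points[::41]:
        d = abs(x - y)            # metric on Z
        df = abs(f(x) - f(y))     # metric on R
        assert rho1(d) <= df <= rho2(d)
```

Here the bounds hold because |sin x − sin y| ≤ 2, so f distorts distances by at most an additive constant.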

Intuitively, coarse embeddability of a metric space X into Y means that we can draw a picture of X in Y which reflects the large-scale geometry of X. In the early 1990s, Gromov suggested that coarse embeddability of a discrete group into Hilbert space or into certain Banach spaces should be relevant to solving the Novikov conjecture. The connection between large-scale geometry and differential topology and differential geometry, embodied in statements such as the Novikov conjecture, is built by index theory. Recall that an elliptic differential operator D on a compact manifold M is Fredholm, in the sense that the kernel and cokernel of D are finite dimensional. The Fredholm index of D, which is defined by

index(D) = dim(kerD) − dim(cokerD),

has the following fundamental properties:

(1) it is an obstruction to invertibility of D;

(2) it is invariant under homotopy equivalence.
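In finite dimensions the index is a toy model of both properties: for any linear map A : C^n → C^m, rank-nullity gives index(A) = n − m, independent of A, so the index is constant along deformations. A minimal numpy sketch (the matrices are illustrative, not from the text):

```python
import numpy as np

def index(A):
    """dim ker(A) - dim coker(A) for a matrix A : C^n -> C^m."""
    m, n = A.shape
    r = np.linalg.matrix_rank(A)
    dim_ker = n - r        # rank-nullity theorem
    dim_coker = m - r      # coker(A) = C^m / im(A)
    return dim_ker - dim_coker

rng = np.random.default_rng(0)
# Any two maps C^5 -> C^3 have the same index 5 - 3 = 2, even though the
# kernel and cokernel dimensions vary separately (homotopy invariance in toy form).
A = rng.normal(size=(3, 5))     # generic: rank 3, ker = 2, coker = 0
B = np.zeros((3, 5))            # rank 0, ker = 5, coker = 3
assert index(A) == index(B) == 2
```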

The celebrated Atiyah-Singer index theorem computes the Fredholm index of elliptic differential operators on compact manifolds and has important applications. However, an elliptic differential operator on a noncompact manifold is in general not Fredholm in the usual sense, but Fredholm in a generalized sense. The generalized Fredholm index for such an operator is called the higher index. In particular, on a general noncompact complete Riemannian manifold M, John Roe (Coarse Cohomology and Index Theory on Complete Riemannian Manifolds) introduced a higher index theory for elliptic differential operators on M.

The coarse Baum-Connes conjecture is an algorithm to compute the higher index of an elliptic differential operator on noncompact complete Riemannian manifolds. By the descent principle, the coarse Baum-Connes conjecture implies the Novikov higher signature conjecture. Guoliang Yu has proved the coarse Baum-Connes conjecture for bounded geometry metric spaces which are coarsely embeddable into Hilbert space. The metric spaces which admit coarse embeddings into Hilbert space form a large class, including e.g. all amenable groups and hyperbolic groups. In general, however, there are counterexamples to the coarse Baum-Connes conjecture; notorious ones come from expander graphs. On the other hand, the coarse Novikov conjecture (i.e. the injectivity part of the coarse Baum-Connes conjecture) is an algorithm for determining the non-vanishing of the higher index. Kasparov-Yu have proved the coarse Novikov conjecture for spaces which admit coarse embeddings into a uniformly convex Banach space.

# Game Theory and Finite Strategies: Nash Equilibrium Takes Quantum Computations to Optimality

Finite games of strategy, within the framework of noncooperative quantum game theory, can be approached from finite chain categories, where, by finite chain category, it is understood a category C(n;N) that is generated by n objects and N morphic chains, called primitive chains, linking the objects in a specific order, such that there is a single labelling. C(n;N) is, thus, generated by N primitive chains of the form:

x0 →f1 x1 →f2 x2 → … xn−1 →fn xn —– (1)

A finite chain category is interpreted as a finite game category as follows: to each morphism in a chain xi−1 →fi xi, there corresponds a strategy played by a player that occupies the position i; in this way, a chain corresponds to a sequence of strategic choices available to the players. A quantum formal theory, for a finite game category C(n;N), is defined as a formal structure such that each morphic fundament fi of the morphic relation xi−1 →fi xi is a tuple of the form:

fi := (Hi, Pi, Pˆfi) —– (2)

where Hi is the i-th player’s Hilbert space, Pi is a complete set of projectors onto a basis that spans the Hilbert space, and Pˆfi ∈ Pi. This structure is interpreted as follows: from the strategic Hilbert space Hi, given the pure strategies’ projectors Pi, the player chooses to play Pˆfi .

From the morphic fundament (2), an assumption has to be made on composition in the finite category; we assume the following tensor product composition operation:

fj ◦ fi = fji —– (3)

fji = (Hji = Hj ⊗ Hi, Pji = Pj ⊗ Pi, Pˆfji = Pˆfj ⊗ Pˆfi) —– (4)
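The composition rule (3)-(4) can be checked concretely with Kronecker products. A minimal sketch, assuming illustrative strategic space dimensions dim Hi = 2 and dim Hj = 3:

```python
import numpy as np

def projector(dim, k):
    """Projector onto the k-th basis vector of C^dim."""
    P = np.zeros((dim, dim))
    P[k, k] = 1.0
    return P

# Each player chooses a basis projector in their own strategic Hilbert space.
P_fi = projector(2, 0)      # P^fi in Pi, Hi = C^2
P_fj = projector(3, 2)      # P^fj in Pj, Hj = C^3

# Composed fundament (4): P^fji = P^fj (x) P^fi on Hj (x) Hi.
P_fji = np.kron(P_fj, P_fi)

assert P_fji.shape == (6, 6)                 # dim(Hj (x) Hi) = 3 * 2
assert np.allclose(P_fji @ P_fji, P_fji)     # still a projector
assert np.isclose(np.trace(P_fji), 1.0)      # rank-one game-path marker
```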

From here, a morphism for a game choice path could be introduced as:

x0 →fn…21 xn —– (5)

fn…21 = (HG = ⊗i=n…1 Hi, PG = ⊗i=n…1 Pi, Pˆfn…21 = ⊗i=n…1 Pˆfi) —– (6)

in this way, the choices along the chain of players are completely encoded in the tensor product projectors Pˆfn…21. There are N = ∏i=1n dim(Hi) such morphisms, a number that coincides with the number of primitive chains in the category C(n;N).

Each projector can be addressed as a strategic marker of a game path, and leads to the matrix form of an Arrow-Debreu security; we therefore call it a game Arrow-Debreu projector. While, in traditional financial economics, the Arrow-Debreu securities pay one unit of numeraire per state of nature, in the present game setting they pay one unit of payoff per game path at the beginning of the game. However far this analogy may be taken, it must be treated with some care: these are not securities; rather, they represent, projectively, strategic choice chains in the game, so that the price of a projector Pˆfn…21 (the Arrow-Debreu price) is the price of a strategic choice and, therefore, the result of the strategic evaluation of the game by the different players.

Now, let |Ψ⟩ be a ket vector in the game’s Hilbert space HG, such that:

|Ψ⟩ = ∑fn…21 ψ(fn…21)|fn…21⟩ —– (7)

where ψ(fn…21) is the Arrow-Debreu price amplitude, with the condition:

∑fn…21 |ψ(fn…21)|2 = D —– (8)

for D > 0; then, the |ψ(fn…21)|2 correspond to the Arrow-Debreu prices for the game paths fn…21, and D is the discount factor in riskless borrowing, defining an economic scale for temporal connections between one unit of payoff now and one unit of payoff at the end of the game: one unit of payoff now can be capitalized to the end of the game (when the decision takes place) through a multiplication by 1/D, while one unit of payoff at the end of the game can be discounted to the beginning of the game through multiplication by D.

In this case, the unit operator 1ˆ = ∑fn…21 Pˆfn…21 has a profile similar to that of a bond in standard financial economics, with ⟨Ψ|1ˆ|Ψ⟩ = D. The general payoff system, for each player, can be addressed from an operator expansion:

πiˆ = ∑fn…21 πi (fn…21) Pˆfn…21 —– (9)

where each weight πi(fn…21) corresponds to quantities associated with each Arrow-Debreu projector that can be interpreted as similar to the quantities of each Arrow-Debreu security for a general asset. Multiplying each weight by the corresponding Arrow-Debreu price, one obtains the payoff value for each alternative such that the total payoff for the player at the end of the game is given by:

⟨Ψ|πˆi|Ψ⟩/D = ∑fn…21 πi(fn…21) |ψ(fn…21)|2/D —– (10)

We can discount the total payoff to the beginning of the game using the discount factor D, leading to the present value payoff for the player:

PVi = D ∑fn…21 πi(fn…21) |ψ(fn…21)|2/D = ⟨Ψ|πˆi|Ψ⟩ —– (11)

where πi(fn…21) represents quantities, while the ratio |ψ(fn…21)|2/D represents the future value at the decision moment of the quantum Arrow-Debreu prices (capitalized quantum Arrow-Debreu prices). Introducing the ket

|Q⟩ ∈ HG, such that:

|Q⟩ = 1/√D |Ψ⟩ —– (12)

then, |Q⟩ is a normalized ket for which the price amplitudes are expressed in terms of their future value. Replacing in (11), we have:

PVi = D ⟨Q|πˆi|Q⟩ —– (13)
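Equations (8)-(13) can be exercised on a toy game with two game paths. A minimal sketch with illustrative amplitudes and payoffs, verifying that the sum formula for the present value and the operator formula PVi = D⟨Q|πˆi|Q⟩ agree:

```python
import numpy as np

D = 0.9                             # discount factor (illustrative)
# Squared Arrow-Debreu price amplitudes for two game paths; they sum to D as in (8).
prices = np.array([0.6, 0.3])
psi = np.sqrt(prices)               # price amplitudes psi(f)
assert np.isclose(np.sum(np.abs(psi) ** 2), D)

payoff = np.array([10.0, 4.0])      # payoff weights pi_i per game path (illustrative)

# Present value via the sum formula: PV_i = D * sum_f pi_i(f) |psi(f)|^2 / D.
pv_sum = D * np.sum(payoff * np.abs(psi) ** 2) / D

# Present value via (12)-(13): |Q> = |Psi>/sqrt(D), PV_i = D <Q| pi^_i |Q>.
q = psi / np.sqrt(D)                # capitalized amplitudes, normalized ket
pi_op = np.diag(payoff)             # payoff operator (9) in the game-path basis
pv_op = D * (q @ pi_op @ q)

assert np.isclose(pv_sum, pv_op)
```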

In the quantum game setting, the capitalized Arrow-Debreu price amplitudes ⟨fn…21|Q⟩ become quantum strategic configurations, resulting from the strategic cognition of the players with respect to the game. Given |Q⟩, each player’s strategic valuation of each pure strategy can be obtained by introducing the projector chains:

Cˆfi = ∑fn…i+1, fi−1…1 Pˆfn…i+1 ⊗ Pˆfi ⊗ Pˆfi−1…1 —– (14)

with ∑fi Cˆfi = 1ˆ. For each alternative choice of the player i, the chain sums over all of the other choice paths for the rest of the players; such chains are called coarse-grained chains in the decoherent histories approach to quantum mechanics. Following this approach, one may introduce the pricing functional from the expression for the decoherence functional:

D (fi, gi : |Q⟩) = ⟨Q| Cˆfi Cˆgi|Q⟩ —– (15)

we then have, for each player:

D (fi, gi : |Q⟩) = 0, ∀ fi ≠ gi —– (16)

this is the usual quantum mechanics condition for additivity of measure (also known as the decoherence condition), which means that the capitalized prices for two different alternative choices of player i are additive. Then, we can work with the pricing functional D(fi, fi : |Q⟩) as giving, for each player, an Arrow-Debreu capitalized price associated with the pure strategy expressed by fi. Given that (16) is satisfied, each player’s quantum Arrow-Debreu pricing matrix, defined analogously to the decoherence matrix from the decoherent histories approach, is a diagonal matrix and can be expanded as a linear combination of the projectors for each player’s pure strategies as follows:

Di (|Q⟩) = ∑fi D(fi, fi : |Q⟩) Pˆfi —– (17)

which has the mathematical expression of a mixed strategy. Thus, each player chooses, from all of the possible quantum computations, the one that maximizes the present value payoff function with all the other strategies held fixed, which is in agreement with Nash’s notion of equilibrium.
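For a two-player game with two pure strategies each, the coarse-grained chains (14), the decoherence condition (16), and the diagonal pricing matrix (17) can be verified directly. A minimal sketch with illustrative amplitudes:

```python
import numpy as np

def basis_projector(dim, k):
    P = np.zeros((dim, dim))
    P[k, k] = 1.0
    return P

d1, d2 = 2, 2                      # dim H1, dim H2 (illustrative)

# Coarse-grained chain (14) for player 1's choice f1: sum over player 2's
# alternatives of P^f2 (x) P^f1.
def chain(f1):
    return sum(np.kron(basis_projector(d2, f2), basis_projector(d1, f1))
               for f2 in range(d2))

# The chains resolve the identity: sum_f1 C^f1 = 1^.
assert np.allclose(sum(chain(f1) for f1 in range(d1)), np.eye(d1 * d2))

# A normalized strategic configuration |Q> (illustrative amplitudes).
Q = np.array([0.5, 0.5, 0.5, 0.5])

# Decoherence condition (16): D(f1, g1) = <Q| C^f1 C^g1 |Q> = 0 for f1 != g1,
# since chains built on orthogonal projectors are orthogonal.
D01 = Q @ chain(0) @ chain(1) @ Q
assert np.isclose(D01, 0.0)

# Diagonal entries give each pure strategy's capitalized Arrow-Debreu price (17);
# they sum to <Q|Q> = 1.
prices = [Q @ chain(f1) @ chain(f1) @ Q for f1 in range(d1)]
assert np.isclose(sum(prices), 1.0)
```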

# Categories of Pointwise Convergence Topology: Theory(ies) of Bundles.

Let H be a fixed, separable Hilbert space of dimension ≥ 1. Let us denote the associated projective space of H by P = P(H). It is compact iff H is finite-dimensional. Let PU = PU(H) = U(H)/U(1) be the projective unitary group of H equipped with the compact-open topology. A projective bundle over X is a locally trivial bundle of projective spaces, i.e., a fibre bundle P → X with fibre P(H) and structure group PU(H). An application of the Banach-Steinhaus theorem shows that we may identify projective bundles with principal PU(H)-bundles, with PU(H) carrying the pointwise convergence topology.

If G is a topological group, let GX denote the sheaf of germs of continuous functions from X to G, i.e., the sheaf associated to the constant presheaf given by U → F(U) = G. Given a projective bundle P → X and a sufficiently fine good open cover {Ui}i∈I of X, the transition functions between trivializations P|Ui can be lifted to bundle isomorphisms gij on double intersections Uij = Ui ∩ Uj which are projectively coherent, i.e., over each of the triple intersections Uijk = Ui ∩ Uj ∩ Uk the composition gki gjk gij is given as multiplication by a U(1)-valued function fijk : Uijk → U(1). The collection {(Uij, fijk)} defines a U(1)-valued two-cocycle called a B-field on X, which represents a class BP in the sheaf cohomology group H2(X, U(1)X). On the other hand, the sheaf cohomology H1(X, PU(H)X) consists of isomorphism classes of principal PU(H)-bundles, and we can consider the isomorphism class [P] ∈ H1(X, PU(H)X).

There is an isomorphism

H1(X, PU(H)X) → H2(X, U(1)X)

provided by the boundary map [P] ↦ BP. There is also an isomorphism

H2(X, U(1)X) → H3(X, ZX) ≅ H3(X, Z)

The image δ(P) ∈ H3(X, Z) of BP is called the Dixmier-Douady invariant of P. When δ(P) = [H] is represented in H3(X, R) by a closed three-form H on X, called the H-flux of the given B-field BP, we will write P = PH. One has δ(P) = 0 iff the projective bundle P comes from a vector bundle E → X, i.e., P = P(E). By Serre’s theorem, every torsion element of H3(X, Z) arises from a finite-dimensional bundle P. Explicitly, consider the commutative diagram of exact sequences of groups in which we identify the cyclic group Zn with the group of n-th roots of unity. Let P be a projective bundle with structure group PU(n), i.e., with fibres P(Cn). Then the commutative diagram of long exact sequences of sheaf cohomology groups associated to this commutative diagram of groups implies that the element BP ∈ H2(X, U(1)X) comes from H2(X, (Zn)X), and therefore its order divides n.

One also has δ(P1 ⊗ P2) = δ(P1) + δ(P2) and δ(P∗) = −δ(P). This follows from the commutative diagram and the fact that P ⊗ P∗ = P(E), where E is the vector bundle of Hilbert-Schmidt endomorphisms of P. Putting everything together, it follows that the cohomology group H3(X, Z) is isomorphic to the group of stable equivalence classes of principal PU(H)-bundles P → X with the operation of tensor product.

We are now ready to define the twisted K-theory of the manifold X equipped with a projective bundle P → X, such that Px = P(H) ∀ x ∈ X. We will first give a definition in terms of Fredholm operators, and then provide some equivalent, but more geometric definitions. Let H be a Z2-graded Hilbert space. We define Fred0(H) to be the space of self-adjoint degree 1 Fredholm operators T on H such that T2 − 1 ∈ K(H), together with the subspace topology induced by the embedding Fred0(H) ↪ B(H) × K(H) given by T ↦ (T, T2 − 1), where the algebra of bounded linear operators B(H) is given the compact-open topology and the Banach algebra of compact operators K = K(H) is given the norm topology.

Let P = PH → X be a projective Hilbert bundle. Then we can construct an associated bundle Fred0(P) whose fibres are Fred0(H). We define the twisted K-theory group of the pair (X, P) to be the group of homotopy classes of maps

K0(X, H) = [X, Fred0(PH)]

The group K0(X, H) depends functorially on the pair (X, PH), and an isomorphism of projective bundles ρ : P → P′ induces a group isomorphism ρ∗ : K0(X, H) → K0(X, H′). Addition in K0(X, H) is defined by fibre-wise direct sum, so that the sum of two elements lies in K0(X, H2) with [H2] = δ(P ⊗ P(C2)) = δ(P) = [H]. Under the isomorphism H ⊗ C2 ≅ H, there is a projective bundle isomorphism P → P ⊗ P(C2) for any projective bundle P and so K0(X, H2) is canonically isomorphic to K0(X, H). When [H] is a non-torsion element of H3(X, Z), so that P = PH is an infinite-dimensional bundle of projective spaces, then the index map K0(X, H) → Z is zero, i.e., any section of Fred0(P) takes values in the index zero component of Fred0(H).

Let us now describe some other models for twisted K-theory which will be useful in our physical applications later on. A definition in algebraic K-theory may be given as follows. A bundle of projective spaces P yields a bundle End(P) of algebras. However, if H is an infinite-dimensional Hilbert space, then one has natural isomorphisms H ≅ H ⊕ H and

End(H) ≅ Hom(H ⊕ H, H) ≅ End(H) ⊕ End(H)

as left End(H)-modules, and so the algebraic K-theory of the algebra End(H) is trivial. Instead, we will work with the Banach algebra K(H) of compact operators on H with the norm topology. Given that the unitary group U(H) with the compact-open topology acts continuously on K(H) by conjugation, to a given projective bundle PH we can associate a bundle of compact operators EH → X given by

EH = PH ×PU K

with δ(EH) = [H]. The Banach algebra AH := C0(X, EH) of continuous sections of EH vanishing at infinity is the continuous trace C∗-algebra CT(X, H). Then the twisted K-theory group K(X, H) of X is canonically isomorphic to the algebraic K-theory group K(AH).

We will also need a smooth version of this definition. Let A∞H be the smooth subalgebra of AH given by the algebra CT∞(X, H) = C∞(X, L1PH),

where L1PH = PH ×PU L1. Then the inclusion CT∞(X, H) → CT(X, H) induces an isomorphism K(CT∞(X, H)) → K(CT(X, H)) of algebraic K-theory groups. Upon choosing a bundle gerbe connection, one has an isomorphism K(CT∞(X, H)) ≅ K(X, H) with the twisted K-theory defined in terms of projective Hilbert bundles P = PH over X.

Finally, we propose a general definition based on K-theory with coefficients in a sheaf of rings. It parallels the bundle gerbe approach to twisted K-theory. Let B be a Banach algebra over C. Let E(B, X) be the category of continuous B-bundles over X, and let C(X, B) be the sheaf of continuous maps X → B. The ring structure in B equips C(X, B) with the structure of a sheaf of rings over X. We can therefore consider left (or right) C(X, B)-modules, and in particular the category LF C(X, B) of locally free C(X, B)-modules. Passing to sections in the usual way, one obtains for X an equivalence of additive categories

E(B, X) ≅ LF (C(X, B))

Since these are both additive categories, we can apply the Grothendieck functor to each of them and obtain the abelian groups K(LF(C(X, B))) and K(E(B, X)). The equivalence of categories ensures that there is a natural isomorphism of groups

K(LF (C(X, B))) ≅ K(E(B, X))

This motivates the following general definition. If A is a sheaf of rings over X, then we define the K-theory of X with coefficients in A to be the abelian group

K(X, A) := K(LF(A))

For example, consider the case B = C. Then C(X, C) is just the sheaf of continuous functions X → C, while E(C, X) is the category of complex vector bundles over X. Using the isomorphism of K-theory groups we then have

K(X, C(X,C)) := K(LF (C(X, C))) ≅ K (E(C, X)) = K0(X)

The definition of twisted K-theory uses another special instance of this general construction. For this, we define an Azumaya algebra over X of rank m to be a locally trivial algebra bundle over X with fibre isomorphic to the algebra Mm(C) of m × m complex matrices. An example is the algebra End(E) of endomorphisms of a complex vector bundle E → X. We can define an equivalence relation on the set A(X) of Azumaya algebras over X in the following way. Two Azumaya algebras A, A′ are called equivalent if there are vector bundles E, E′ over X such that the algebras A ⊗ End(E), A′ ⊗ End(E′) are isomorphic. Then every Azumaya algebra of the form End(E) is equivalent to the algebra of functions C(X) on X. The set of all equivalence classes is a group under the tensor product of algebras, called the Brauer group of X and denoted Br(X). By Serre’s theorem there is an isomorphism

δ : Br(X) → tor(H3(X, Z))

where tor(H3(X, Z)) is the torsion subgroup of H3(X, Z).

If A is an Azumaya algebra bundle, then the space C(X, A) of continuous sections of A is a ring, and we can consider the algebraic K-theory group K(A) := K0(C(X, A)) of equivalence classes of projective C(X, A)-modules, which depends only on the equivalence class of A in the Brauer group. Under the equivalence, we can represent the Brauer group Br(X) as the set of isomorphism classes of sheaves of Azumaya algebras. Let A be a sheaf of Azumaya algebras, and LF(A) the category of locally free A-modules. Then as above there is an isomorphism

K(X, C(X, A)) ≅ K(Proj(C(X, A)))

where Proj (C(X, A)) is the category of finitely-generated projective C(X, A)-modules. The group on the right-hand side is the group K(A). For given [H] ∈ tor(H3(X, Z)) and A ∈ Br(X) such that δ(A) = [H], this group can be identified as the twisted K-theory group K0(X, H) of X with twisting A. This definition is equivalent to the description in terms of bundle gerbe modules, and from this construction it follows that K0(X, H) is a subgroup of the ordinary K-theory of X. If δ(A) = 0, then A is equivalent to C(X) and we have K(A) := K0(C(X)) = K0(X). The projective C(X, A)-modules over a rank m Azumaya algebra A are vector bundles E → X with fibre Cnm ≅ (Cm)⊕n, which is naturally an Mm(C)-module.

# Intuitive Algebra (Groupoid/Categorical Structure) of Open Strings As Morphisms

A geometric Dirichlet brane is a triple (L, E, ∇E) – a submanifold L ⊂ M, carrying a vector bundle E, with connection ∇E.

The real dimension of L is also often brought into the nomenclature, so that one speaks of a Dirichlet p-brane if p = dimRL.

An open string which stretches from a Dirichlet brane (L, E, ∇E) to a Dirichlet brane (K, F, ∇F) is a map X from an interval I ≅ [0, 1] to M, such that X(0) ∈ L and X(1) ∈ K. An “open string history” is a map from R into open strings, or equivalently a map from a two-dimensional surface with boundary, say Σ ≡ I × R, to M, such that the two boundaries embed into L and K. The quantum theory of these open strings is defined by a functional integral over these histories, with a weight which depends on the connections ∇E and ∇F. It describes the time evolution of an open string state which is a wave function in a Hilbert space HB,B′ labelled by the two choices of brane B = (L, E, ∇E) and B′ = (K, F, ∇F). Distinct Dirichlet branes can embed into the same submanifold L. One way to represent this would be to specify the configurations of Dirichlet branes as a set of submanifolds with multiplicity. However, we can also represent this choice by using the choice of bundle E. Thus, a set of N identical branes will be represented by tensoring the bundle E with CN. The connection is also obtained by tensor product. An N-fold copy of the Dirichlet brane (L, E, ∇E) is thus a triple (L, E ⊗ CN, ∇E ⊗ idN).

In physics, one visualizes this choice by labelling each open string boundary with a basis vector of CN, which specifies a choice among the N identical branes. These labels are called Chan-Paton factors. One then uses them to constrain the interactions between open strings. If we picture such an interaction as the joining of two open strings to one, the end of the first to the beginning of the second, we require not only the positions of the two ends to agree, but also the Chan-Paton factors. This operation is the intuitive algebra of open strings.
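The constraint that strings join only when their Chan-Paton-labelled ends meet is exactly a categorical composition law. A minimal sketch (the class and names are illustrative, not from the text), in which composition multiplies Chan-Paton matrices and is undefined when the boundary branes disagree:

```python
import numpy as np

class OpenString:
    """An open string from brane stack `source` to brane stack `target`,
    carrying a Chan-Paton matrix in Hom(C^{N_source}, C^{N_target})."""

    def __init__(self, source, target, cp):
        self.source, self.target = source, target
        self.cp = np.asarray(cp)

    def join(self, other):
        """Join the end of self to the beginning of other: defined only
        when the boundary labels match, as for morphism composition."""
        if self.target != other.source:
            raise ValueError("ends lie on different branes: no interaction")
        return OpenString(self.source, other.target, other.cp @ self.cp)

# Two stacks of 2 identical branes wrapping submanifolds L and K.
s1 = OpenString("L", "K", [[1, 0], [0, 2]])
s2 = OpenString("K", "L", [[0, 1], [1, 0]])
s3 = s1.join(s2)                  # a composable pair: an L-to-L string
assert s3.source == "L" and s3.target == "L"
```

Composition with mismatched boundaries raises an error, which is the groupoid/categorical point: the operation is partial, defined only for strings sharing a boundary condition.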

Mathematically, an algebra of open strings can always be tensored with a matrix algebra, in general producing a noncommutative algebra. More generally, if there is more than one possible boundary condition, then, rather than an algebra, it is better to think of this as a groupoid or categorical structure on the boundary conditions and the corresponding open strings. In the language of groupoids, particular open strings are elements of the groupoid, and the composition law is defined only for pairs of open strings with a common boundary. In the categorical language, boundary conditions are objects, and open strings are morphisms. The simplest intuitive argument that a non-trivial choice can be made here is to call upon the general principle that any local deformation of the world-sheet action should be a physically valid choice. In particular, particles in physics can be charged under a gauge field, for example the Maxwell field for an electron, the color Yang-Mills field for a quark, and so on. The wave function for a charged particle is then not complex-valued, but takes values in a bundle E.

Now, the effect of a general connection ∇E is to modify the functional integral by modifying the weight associated to a given history of the particle. Suppose the trajectory of a particle is defined by a map φ : R → M; then a natural functional on trajectories associated with a connection ∇ on a bundle E → M is simply its holonomy along the trajectory, a linear map from E|φ(t1) to E|φ(t2). The functional integral is now defined physically as a sum over trajectories with this holonomy included in the weight.

The simplest way to generalize this to a string is to consider the ls → 0 limit. Now the constraint of finiteness of energy is satisfied only by a string of vanishingly small length, effectively a particle. In this limit, both ends of the string map to the same point, which must therefore lie on L ∩ K.

The upshot is that, in this limit, the wave function of an open string between Dirichlet branes (L, E, ∇E) and (K, F, ∇F) transforms as a section of E∨ ⊠ F over L ∩ K, with the natural connection on the product. In the special case of (L, E, ∇E) ≅ (K, F, ∇F), this reduces to the statement that an open string state is a section of End E. Open string states are sections of a graded vector bundle End E ⊗ Λ•T∗L, the degree-1 part of which corresponds to infinitesimal deformations of ∇E. In fact, these open string states are the infinitesimal deformations of ∇E, in the standard sense of quantum field theory, i.e., a single open string is a localized excitation of the field obtained by quantizing the connection ∇E. Similarly, other open string states are sections of the normal bundle of L within M, and are related in the same way to infinitesimal deformations of the submanifold. These relations, and their generalizations to open strings stretched between Dirichlet branes, define the physical sense in which the particular set of Dirichlet branes associated to a specified background M can be deduced from string theory.

# Microcausality

If e0 ∈ R1+1 is a future-directed timelike unit vector, and if e1 is the unique spacelike unit vector with e0e1 = 0 that “points to the right,” then coordinates x0 and x1 on R1+1 are defined by x0(q) := qe0 and x1(q) := qe1. The partial differential operator

□ := ∂2x0 − ∂2x1

does not depend on the choice of e0.

The Fourier transform of the Klein-Gordon equation

(□ + m2)u = 0 —– (1)

where m > 0 is a given mass, is

(−p2 + m2)û(p) = 0 —– (2)

As a consequence, the support of û has to be a subset of the hyperbola Hm ⊂ R1+1 specified by the condition p2 = m2. One connected component of Hm consists of positive-energy vectors only; it is called the upper mass shell Hm+. The elements of Hm+ are the momenta of classical relativistic point particles.

Denote by L1 the restricted Lorentz group, i.e., the connected component of the Lorentz group containing its unit element. In 1 + 1 dimensions, L1 coincides with the one-parameter Abelian group B(χ), χ ∈ R, of boosts. Hm+ is an orbit of L1 without fixed points. So if one chooses any point p′ ∈ Hm+, then there is, for each p ∈ Hm+, a unique χ(p) ∈ R with p = B(χ(p))p′. By construction, χ(B(ξ)p) = χ(p) + ξ, so the measure dχ on Hm+ is invariant under boosts and does not depend on the choice of p′.

For each p ∈ Hm+, the plane wave q ↦ e±ipq on R1+1 is a classical solution of the Klein-Gordon equation. The Klein-Gordon equation is linear, so if a+ and a− are, say, integrable functions on Hm+, then

F(q) := ∫Hm+ (a+(p)e−ipq + a−(p)eipq) dχ(p) —– (3)

is a solution of the Klein-Gordon equation as well. If the functions a± are not integrable, the field F may still be well defined as a distribution. As an example, put a± ≡ (2π)−1, then

F(q) = (2π)−1 ∫Hm+ (e−ipq + eipq) dχ(p) = π−1 ∫Hm+ cos(pq) dχ(p) =: Φ(q) —– (4)

and for a± ≡ ∓(2πi)−1, F equals

F(q) = (2πi)−1 ∫Hm+ (eipq − e−ipq) dχ(p) = π−1 ∫Hm+ sin(pq) dχ(p) =: ∆(q) —– (5)
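The claim that each plane wave on the upper mass shell solves the Klein-Gordon equation can be verified symbolically. A minimal sympy sketch, parametrizing Hm+ by rapidity, p = (m cosh χ, m sinh χ), so that p2 = p02 − p12 = m2 automatically:

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1', real=True)
chi = sp.symbols('chi', real=True)
m = sp.symbols('m', positive=True)

# Upper mass shell parametrized by rapidity chi.
p0, p1 = m * sp.cosh(chi), m * sp.sinh(chi)

pq = p0 * x0 - p1 * x1        # Minkowski product pq
u = sp.exp(-sp.I * pq)        # plane wave q -> e^{-ipq}

# (box + m^2) u should vanish identically, using cosh^2 - sinh^2 = 1.
box_u = sp.diff(u, x0, 2) - sp.diff(u, x1, 2)
assert sp.simplify(box_u + m**2 * u) == 0
```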

Quantum fields are obtained by “plugging” classical field equations and their solutions into the well-known second quantization procedure. This procedure replaces the complex (or, more generally speaking, finite-dimensional vector) field values by linear operators in an infinite-dimensional Hilbert space, namely, a Fock space. The Hilbert space of the hermitian scalar field is constructed from wave functions that are considered as the wave functions of one or several particles of mass m. The single-particle wave functions are the elements of the Hilbert space H1 := L2(Hm+, dχ). Put the vacuum (zero-particle) space H0 equal to C, define the vacuum vector Ω := 1 ∈ H0, and define the N-particle space HN as the Hilbert space of symmetric wave functions in L2((Hm+)N, dNχ), i.e., all wave functions ψ with

ψ(pπ(1) ···pπ(N)) = ψ(p1 ···pN)

∀ permutations π ∈ SN. The bosonic Fock space H is defined by

H := ⊕N∈N HN.

The subspace

D := ∪M∈N ⊕0≤N≤M HN is called the finite-particle space.

The definition of the N-particle wave functions as symmetric functions endows the field with Bose-Einstein statistics. To each wave function φ ∈ H1, assign a creation operator a+(φ) by

a+(φ)ψ := CNφ ⊗s ψ, ψ ∈ D,

where ⊗s denotes the symmetrized tensor product and where CN is a constant.

(a+(φ)ψ)(p1 ···pN) = (CN/N) ∑ν=1N φ(pν) ψ(p1 ···p̂ν ···pN) —– (6)

where the hat symbol indicates omission of the argument. This defines a+(φ) as a linear operator on the finite-particle space D.

The adjoint operator a(φ) := a+(φ)∗ is called an annihilation operator; it assigns to each ψ ∈ HN, N ≥ 1, the wave function a(φ)ψ ∈ HN−1 defined by

(a(φ)ψ)(p1 ···pN−1) := CN ∫Hm+ φ̄(p) ψ(p1 ···pN−1, p) dχ(p)

together with a(φ)Ω := 0, this suffices to specify a(φ) on D. Annihilation operators can also be defined for sharp momenta. Namely, one can define for each p ∈ Hm+ the annihilation operator a(p), assigning to

each ψ ∈ HN, N ≥ 1, the wave function a(p)ψ ∈ HN−1 given by

(a(p)ψ)(p1 ···pN−1) := CN ψ(p, p1 ···pN−1), ψ ∈ HN,

and assigning 0 ∈ H to Ω. a(p) is, like a(φ), well defined on the finite-particle space D as an operator, but its hermitian adjoint is ill-defined as an operator, since the symmetric tensor product of a wave function with a delta function is not a wave function.

Given any single-particle wave functions ψ, φ ∈ H1, the commutators [a(ψ), a(φ)] and [a+(ψ), a+(φ)] vanish by construction. It is customary to choose the constants CN in such a fashion that creation and annihilation operators exhibit the commutation relation

[a(φ), a+(ψ)] = ⟨φ, ψ⟩ —– (7)

which requires CN = √N. With this choice, all creation and annihilation operators are unbounded, i.e., they are not continuous.
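The normalization can be illustrated in a truncated single-mode analogue of the Fock space, where a†|n⟩ = √(n+1)|n+1⟩ plays the role of the √N factor. A minimal numpy sketch (the truncation level is illustrative, and the commutation relation necessarily fails in the top truncated level):

```python
import numpy as np

N = 8                                # truncation level (illustrative)
# Creation operator on span{|0>, ..., |N-1>}: a_dag |n> = sqrt(n+1) |n+1>.
a_dag = np.diag(np.sqrt(np.arange(1, N)), k=-1)
a = a_dag.conj().T                   # annihilation operator, the adjoint

comm = a @ a_dag - a_dag @ a         # [a, a_dag]
# Equals the identity except in the truncated top level |N-1>.
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))
assert not np.isclose(comm[-1, -1], 1.0)   # artefact of the truncation
```

This is the single-mode analogue only; the text's operators act on the full multi-particle Fock space over H1.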

When defining the hermitian scalar field as an operator valued distribution, it must be taken into account that an annihilation operator a(φ) depends on its argument φ in an antilinear fashion. The dependence is, however, R-linear, and one can define the scalar field as a C-linear distribution in two steps.

For each real-valued test function φ on R1+1, define

Φ(φ) := a(φˆ|Hm+) + a+(φˆ|Hm+)

then one can define for an arbitrary complex-valued φ

Φ(φ) := Φ(Re(φ)) + iΦ(Im(φ))

Referring to (4), Φ is called the hermitian scalar field of mass m.

One then finds

[Φ(q), Φ(q′)] = i∆(q − q′) —– (8)

which, with ∆ as in (5), is to be read as an equation of distributions. The distribution ∆ vanishes outside the light cone, i.e., ∆(q) = 0 if q2 < 0: the integrand in (5) is odd under reflection about a suitable p′ ∈ Hm+ if q is spacelike. Note that pq > 0 for all p ∈ Hm+ if q ∈ V+. The consequence of this is called microcausality: field operators located in spacelike separated regions commute (for the hermitian scalar field).

# Geometry and Localization: An Unholy Alliance? Thought of the Day 95.0

There are many misleading metaphors obtained from naively identifying geometry with localization. One which is very close to String Theory is the idea that one can embed a lower dimensional Quantum Field Theory (QFT) into a higher dimensional one. This is not possible; what one can do is restrict a QFT on a spacetime manifold to a submanifold. However, if the submanifold contains the time axis (a ”brane”), the restricted theory has too many degrees of freedom to merit the name ”physical”: it contains as many as the unrestricted theory. The naive idea that by using a subspace one only gets a fraction of the phase space degrees of freedom is a delusion; this can only happen if the subspace does not contain a timelike line, as for a null-surface (holographic projection onto a horizon).

The geometric picture of a string in terms of a multi-component conformal field theory is that of an embedding of an n-component chiral theory into its n-dimensional component space (referred to as a target space), which is certainly a string. But this is not what modular localization reveals: the oscillatory degrees of freedom of the multi-component chiral current go into an infinite-dimensional Hilbert space over one localization point and do not arrange themselves according to the geometric source-target idea. A theory of this kind is of course consistent, but String Theory is certainly a very misleading terminology for this state of affairs. Any attempt to imitate Feynman rules by replacing world lines with world sheets (of strings) may produce prescriptions for cooking up some mathematically interesting functions, but those results cannot be brought into the only form which counts in a quantum theory, namely a perturbative approach in terms of operators and states.

String Theory is by no means the only area in particle theory where geometry and modular localization are at loggerheads. Closely related is the interpretation of the Riemann surfaces which result from the analytic continuation of chiral theories on the lightray/circle as the "living space" in the sense of localization. The mathematical theory of Riemann surfaces does not specify how they should be realized; whether one refers to surfaces in an ambient space, to a distinguished subgroup of a Fuchsian group, or to any other of the many possible realizations is of no concern to the mathematician. But in the context of chiral models it is important not to confuse the living space of a QFT with its analytic continuation.

Whereas geometry as a mathematical discipline does not care about how it is concretely realized, the geometrical aspects of modular localization in spacetime have a very specific content: namely, that which can be encoded in the subspaces (Reeh-Schlieder spaces) generated by operator subalgebras acting on the vacuum reference state. In other words, the physically relevant spacetime geometry and the symmetry group of the vacuum are contained in the abstract positioning of certain subalgebras in a common Hilbert space, and not in that which comes with classical theories.

# Evolutionary Game Theory. Note Quote In classical evolutionary biology the fitness landscape for possible strategies is considered static. Optimization theory is therefore the usual tool for analyzing the evolution of strategies, which consequently tend to climb the peaks of the static landscape. In more realistic scenarios, however, the evolution of populations modifies the environment, so that the fitness landscape becomes dynamic. In other words, the maxima of the fitness landscape depend on the number of specimens that adopt each strategy (a frequency-dependent landscape). In this case, when the evolution depends on the agents' actions, game theory is the adequate mathematical tool to describe the process. And this is precisely the scheme in which evolving physical laws (i.e. algorithms or strategies) are generated from agent-agent interactions (a bottom-up process) subject to natural selection.

The concept of an evolutionarily stable strategy (ESS) is central to evolutionary game theory. An ESS is a strategy that, when followed by the great majority – almost all – of the systems in a population, cannot be displaced by any alternative strategy. In general, an ESS is not necessarily optimal; however, it may be assumed that in the last stages of evolution – before the quantum equilibrium is reached – the fitness landscape of possible strategies can be considered static, or at least slowly varying. In this simplified case an ESS would be the strategy with the highest payoff, and would therefore satisfy an optimality criterion. Different ESSs could exist in other regions of the fitness landscape.
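The convergence of a population to a mixed ESS on a frequency-dependent landscape can be illustrated with the classical Hawk-Dove game, which is not part of the text's theory; the payoff values V = 2, C = 4 and the discrete replicator dynamics below are illustrative assumptions:

```python
import numpy as np

# Hawk-Dove payoff matrix (illustrative values: resource V = 2, fight cost C = 4).
# Rows index the focal strategy (Hawk, Dove); columns index the opponent.
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i (f_i - f_bar)."""
    f = A @ x          # fitness of each pure strategy against the population
    f_bar = x @ f      # mean population fitness
    return x + dt * x * (f - f_bar)

# Start from a mostly-Dove population and let selection act.
x = np.array([0.1, 0.9])
for _ in range(20_000):
    x = replicator_step(x)

print(x)  # converges to ≈ [0.5, 0.5]: the mixed ESS plays Hawk with probability V/C
```

For these payoffs the unique ESS is mixed, playing Hawk with probability V/C = 0.5, and the replicator trajectory approaches it from any interior starting point; note also that the Euler step preserves the total population share exactly, since the increments sum to zero.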

In the information-theoretic Darwinian approach it seems plausible to assume as optimization criterion the optimization of information flows for the system. A set of three regulating principles could be:

Structure: The complexity of the system is optimized (maximized). The definition adopted for complexity is Bennett's logical depth, which for a binary string is the time needed to execute the minimal program that generates that string. There is no general acceptance of any one definition of complexity, nor is there a consensus on the relation between the increase of complexity – for a given definition – and Darwinian evolution. However, there seems to be some agreement that, in the long term, Darwinian evolution should drive an increase in complexity in the biological realm, for an adequate natural definition of this concept. The complexity of a system at time t in this theory would then be the Bennett logical depth of the program stored at time t in its Turing machine. The increase of complexity is a characteristic of Lamarckian evolution, and it is also admitted that the trend of evolution in the Darwinian theory is in the direction in which complexity grows, although whether this tendency depends on the timescale – or on other factors – is still not very clear.
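Bennett's logical depth is uncomputable, so any concrete illustration must rely on a proxy. The sketch below is a rough heuristic of our own, not part of the theory: it uses the zlib-compressed form of a string as a stand-in for its minimal program, and the decompression time as a stand-in for the execution time. Note that this conflates depth with compressibility; truly random strings are shallow in Bennett's sense even though they are incompressible.

```python
import random
import time
import zlib

def depth_proxy(s: bytes, repeats: int = 200) -> float:
    """Crude proxy for Bennett's logical depth: the time to 'run' (decompress)
    a compressed stand-in for the minimal program that generates s."""
    program = zlib.compress(s, level=9)
    t0 = time.perf_counter()
    for _ in range(repeats):
        zlib.decompress(program)
    return (time.perf_counter() - t0) / repeats

random.seed(0)
regular = b"ab" * 50_000                                       # highly regular string
noise = bytes(random.randrange(256) for _ in range(100_000))   # incompressible string

# The regular string has a much shorter "program" (compressed form) ...
print(len(zlib.compress(regular)) < len(zlib.compress(noise)))  # True
# ... and the proxy assigns a positive running time to both.
print(depth_proxy(regular) > 0, depth_proxy(noise) > 0)
```

The gap between this heuristic and true logical depth is exactly the gap the text flags: there is no agreed, computable measure of the kind of organized complexity Darwinian evolution is supposed to increase.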

Dynamics: The information outflow of the system is optimized (minimized). The information measure is the Fisher information of the probability density function of the system's position. According to S. A. Frank, natural selection acts by maximizing the Fisher information within a Darwinian system. As a consequence, assuming that the flow of information between a system and its surroundings can be modeled as a zero-sum game, Darwinian systems would follow minimax dynamics.
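The minimum-Fisher-information principle invoked here can be checked numerically: among densities of fixed variance, the Gaussian minimizes the location Fisher information I[p] = ∫ (p′(x))²/p(x) dx. The grid, the unit-variance normalization, and the Laplace comparison density below are illustrative choices, not from the text:

```python
import numpy as np

def fisher_information(pdf, x):
    """Location Fisher information I[p] = ∫ (p'(x))^2 / p(x) dx on a uniform grid."""
    p = pdf(x)
    dp = np.gradient(p, x)                  # central-difference derivative
    dx = x[1] - x[0]
    return float(np.sum(dp**2 / np.maximum(p, 1e-300)) * dx)

x = np.linspace(-20.0, 20.0, 200_001)

# Unit-variance Gaussian: the analytic Fisher information is 1/sigma^2 = 1.
gauss = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

# Unit-variance Laplace density (scale b = 1/sqrt(2)): analytic value 1/b^2 = 2.
b = 1 / np.sqrt(2)
laplace = lambda t: np.exp(-np.abs(t) / b) / (2 * b)

I_gauss = fisher_information(gauss, x)
I_laplace = fisher_information(laplace, x)
print(I_gauss, I_laplace)  # ≈ 1 and ≈ 2: same variance, but more Fisher information
```

A density that minimizes Fisher information at fixed variance is, in this reading, a system that minimizes its information outflow, which is why the Gaussian (and, dynamically, Schrödinger-type evolution) appears as the optimum.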

Interaction: The interaction between two subsystems optimizes (maximizes) the complexity of the total system. The complexity is again equated with Bennett's logical depth. The role of Interaction is central in the generation of composite systems, and therefore in the structure of the information processor of a composite system, which results from the logical interconnections among the processors of its constituents. There is an enticing option of defining the complexity of a system in contextual terms, as the capacity of a system for anticipating the behavior at t + ∆t of the surrounding systems included in the sphere of radius r centered at the position X(t) occupied by the system. This definition would lead directly to the maximization of the predictive power of the systems that maximized their complexity. However, this magnitude would be very difficult even to estimate, in principle much more so than the usual definitions of complexity.

Quantum behavior of microscopic systems should now emerge from the ESS. In other words, the postulates of quantum mechanics should be deducible from the application of the three regulating principles to physical systems endowed with an information processor.

Let us apply Structure. It is reasonable to consider that maximizing the complexity of a system would in turn maximize its predictive power, and this optimal capacity for statistical inference would plausibly induce the complex Hilbert space structure on the system's space of states. Let us now consider Dynamics. This is basically the application of the principle of minimum Fisher information, or maximum Cramér-Rao bound, to the probability distribution function for the position of the system. The concept of entanglement seems to be decisive for studying the generation of composite systems, in particular in this theory through the application of Interaction. The theory admits a simple model that characterizes the entanglement between two subsystems as the mutual exchange of randomizers (R1, R2), programs (P1, P2) – with their respective anticipation modules (A1, A2) – and wave functions (Ψ1, Ψ2). In this way, both subsystems can anticipate not only the behavior of their corresponding surrounding systems, but also that of the environment of their partner entangled subsystem. In addition, entanglement can be considered a natural phenomenon in this theory, a consequence of the tendency to increase complexity, and therefore, in a certain sense, an experimental support for the theory.

In addition, the information-theoretic Darwinian approach is a minimalist realist theory: every system follows a continuous trajectory in time, as in Bohmian mechanics, and the theory is local in physical space. Apparent nonlocality, as in violations of Bell's inequality, would be an artifact of the anticipation module in information space. Randomness, however, would necessarily be intrinsic to nature, through the random number generator methodologically associated with every fundamental system at t = 0, an essential ingredient to start and fuel – through variation – Darwinian evolution. As time increases, random events determined by the random number generators would progressively be replaced by causal events determined by the evolving programs that gradually take control of the elementary systems. Randomness would be displaced by causality as physical Darwinian evolution gave rise to the quantum equilibrium regime, but not completely, since randomness would play a crucial role in the optimization of strategies – and thus of information flows – as game theory states.

# Noneism. Part 1. Noneism was created by Richard Routley. Its point of departure is the rejection of what Routley calls "The Ontological Assumption". This assumption consists in the explicit or, more frequently, implicit belief that denoting always refers to existing objects. If the object or objects that a proposition is about do not exist, then those objects can only be one thing: the null entity. It is incredible that Frege believed that denoting descriptions without a real (empirical, theoretical, or ideal) referent denote only the null set. And it is also difficult to believe that Russell sustained the thesis that non-existing objects cannot have properties and that propositions about such objects are false.

This means that we can have a very clear apprehension of imaginary objects, and a quite clear intellection of abstract objects that are not real. This is possible because to determine an object we only need to describe it through its distinctive traits; an object is always characterized through some definite notes. The number of traits necessary to identify an object varies greatly. In some cases we need only a few, for instance the golden mountain or the blue bird; in other cases we need more, for instance the goddess Venus or the centaur Chiron. In other instances the traits can be very numerous, even infinite. For instance, the chiliedron, and the decimal number 0,0000…009, in which 9 comes after the first million zeros, have many traits. And the ordinal omega, or any Hilbert space, has infinitely many traits (although these traits can be reckoned through finite definitions). These examples show, in a convincing manner, that the Ontological Assumption is untenable. We must reject it and replace it with what Routley dubs the Characterization Postulate. The Characterization Postulate says that to be an object is to be characterized by determinate traits. The set of the characterizing traits of an object can be called its "characteristic". Once the characteristic of an object is set up, the object is perfectly recognizable.

Once this postulate is adopted, its consequences are far reaching. Since we can characterize objects through any traits whatsoever, an object can not only be nonexistent, it can even be absurd or inconsistent. For instance, the "squond" (the circle that is square and round). And we can make perfectly valid logical inferences from the premise "x is the squond":

(1) if x is the squond, then x is square
(2) if x is the squond, then x is round

So, the theory of objects has the widest possible realm of application. It is clear that the Ontological Assumption imposes unacceptable limits on logic. As a matter of fact, the existential quantifier of classical logic could not have been conceived without the Ontological Assumption. The expression "(∃x)Fx" means that there exists at least one object that has the property F (or, in extensional language, that there exists an x that is a member of the extension of F). For this reason, "∃x" is inapplicable to non-existing objects. Of course, in classical logic we can deny the existence of an object, but we cannot say anything about objects that have never existed and shall never exist (speaking strictly of classical logic). We cannot quantify over individual variables of a first-order predicate that do not refer to a real entity – actual, past, or future. For instance, we cannot say "(∃x)(x is the eye of Polyphemus)". This would be false, of course, because Polyphemus does not exist. But if the Ontological Assumption is set aside, it is true, within a mythological frame, that Polyphemus has a single eye and many other properties. And now we can understand why noneism leads to the material-dependence of logic.
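The contrast between a quantifier restricted to existents and a neutral quantifier over all characterized objects can be mimicked in a few lines of code. The toy model below is our own illustration, not Routley's formalism; in particular, treating existence as just one more trait is the simplification being sketched:

```python
# Toy Characterization Postulate: an "object" is just the set of traits
# that characterizes it, independently of whether it exists.
# The names and traits are illustrative only.
objects = {
    "golden mountain": {"golden", "mountain"},
    "squond":          {"square", "round"},           # inconsistent, yet characterizable
    "Polyphemus":      {"cyclops", "one-eyed", "giant"},
    "Mount Everest":   {"mountain", "exists"},        # existence as one more trait
}

def has_trait(name: str, trait: str) -> bool:
    """An object has a property iff the property belongs to its characteristic."""
    return trait in objects[name]

# Valid inferences about a nonexistent (indeed inconsistent) object:
assert has_trait("squond", "square")
assert has_trait("squond", "round")

# A neutral quantifier ranges over all characterized objects ...
some_one_eyed = any(has_trait(n, "one-eyed") for n in objects)
# ... while the classical existential quantifier is restricted to existents.
some_existing_one_eyed = any(
    has_trait(n, "one-eyed") and has_trait(n, "exists") for n in objects
)
print(some_one_eyed, some_existing_one_eyed)  # True False
```

"Polyphemus has one eye" comes out true under the neutral quantifier and false under the existentially loaded one, which is exactly the asymmetry the Ontological Assumption forces on classical logic.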

As we have anticipated, there must be some limitations concerning the selection of contradictory properties; otherwise the whole theory becomes inconsistent and is trivialized. To avoid trivialization, neutral (noneist) logic distinguishes between two sorts of negation: the classical propositional negation, "x is not P", and the narrower negation, "x is non-P". In this way, and by applying some other technicalities (for instance, in case a universe is inconsistent, some kind of paraconsistent logic must be used), trivialization is avoided. With these provisions, the Characterization Postulate can be applied to create inconsistent universes in which classical logic is not valid. For instance, a world in which there is a mysterious personage who, under determined but very subtle circumstances, is and is not at the same time in two different places. In this case the logic to be applied is, obviously, some kind of paraconsistent logic (the type to be selected depends on the characteristic of the personage). And in another universe there could be a jewel with two false properties: it is false that it is transparent and it is false that it is opaque. In this kind of world we must clearly use some kind of paracomplete logic. To develop naive set theory (in Halmos' sense), we must use some type of paraconsistent logic to cope with the paradoxes that are produced through a natural way of mathematical reasoning; this logic can be of several orders, just like the classical one. In other cases, we can use some kind of relevant – and, a fortiori, paraconsistent – logic; and so on, ad infinitum.
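One standard setting in which a paraconsistent glut and a paracomplete gap can live side by side is the Belnap-Dunn four-valued logic FDE. The sketch below uses FDE as a stand-in; it is not Routley's own neutral logic, and the choice of designated values {T, B} is the usual convention, assumed here:

```python
# Belnap-Dunn four-valued logic (FDE), used here to illustrate how gluts
# (paraconsistency) and gaps (paracompleteness) avoid trivialization.
# Values: 'T' (true only), 'F' (false only), 'B' (both), 'N' (neither).

NEG = {'T': 'F', 'F': 'T', 'B': 'B', 'N': 'N'}   # De Morgan negation

# Conjunction is the meet in the truth-order lattice F < {B, N} < T,
# with B and N incomparable (their meet is F). Stored symmetrically.
AND = {('T', 'T'): 'T', ('T', 'B'): 'B', ('T', 'N'): 'N', ('T', 'F'): 'F',
       ('B', 'B'): 'B', ('B', 'N'): 'F', ('B', 'F'): 'F',
       ('N', 'N'): 'N', ('N', 'F'): 'F', ('F', 'F'): 'F'}

def conj(a, b):
    return AND.get((a, b)) or AND[(b, a)]

def disj(a, b):
    return NEG[conj(NEG[a], NEG[b])]             # De Morgan dual of conjunction

DESIGNATED = {'T', 'B'}                          # values that count as "holding"

# Paraconsistency: a glut makes A ∧ ¬A hold, yet nothing forces an
# unrelated falsehood to hold, so explosion fails.
glut = 'B'
print(conj(glut, NEG[glut]) in DESIGNATED)       # True: the contradiction holds ...
print('F' in DESIGNATED)                         # False: ... without trivialization

# Paracompleteness: a gap makes even A ∨ ¬A fail to hold.
gap = 'N'
print(disj(gap, NEG[gap]) in DESIGNATED)         # False: excluded middle fails
```

The glut value plays the role of the text's inconsistent personage (a true contradiction that does not trivialize the universe), while the gap value models the jewel that is neither transparent nor opaque.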

But if logic is content-dependent, and this dependence is a consequence of the rejection of the Ontological Assumption, what about ontology? Because the universes determined through the application of the Characterization Postulate may have no being (in fact, most of them do not), we cannot say that the objects populating such universes are entities, for entities exist in the empirical world, or in the real world that underpins the phenomena, or (in a somewhat different way) in an ideal Platonic world. Instead of speaking about ontology, we should speak about objectology. In essence, objectology is the discipline founded by Meinong (the Theory of Objects), but enriched and made more precise by Routley and other noneist logicians. Its main divisions would be Ontology (the study of real physical and Platonic objects) and Medenology (the study of objects that have no existence).

# Emancipating Microlinearity from within a Well-adapted Model of Synthetic Differential Geometry towards an Adequately Restricted Cartesian Closed Category of Frölicher Spaces. Thought of the Day 15.0 Differential geometry of finite-dimensional smooth manifolds has been generalized by many authors to the infinite-dimensional case by replacing finite-dimensional vector spaces by Hilbert spaces, Banach spaces, Fréchet spaces or, more generally, convenient vector spaces as the local prototype. We know well that the category of smooth manifolds of any kind, whether finite-dimensional or infinite-dimensional, is not cartesian closed, while Frölicher spaces, introduced by Frölicher, do form a cartesian closed category. It seems that Frölicher and his followers have not determined which kind of Frölicher space, besides convenient vector spaces, should become the basic object of research for infinite-dimensional differential geometry. The category of Frölicher spaces and smooth mappings should be restricted adequately to a cartesian closed subcategory. Synthetic differential geometry is differential geometry with a cornucopia of nilpotent infinitesimals. Roughly speaking, a space of nilpotent infinitesimals of some kind, which exists only within an imaginary world, corresponds to a Weil algebra, which is an entity of the real world. The central objects of study in synthetic differential geometry are microlinear spaces. Although the notion of a manifold (= a pasting of copies of a certain linear space) is defined on the local level, the notion of microlinearity is defined absolutely on the genuinely infinitesimal level. What we should do to get an adequately restricted cartesian closed category of Frölicher spaces is to emancipate microlinearity from within a well-adapted model of synthetic differential geometry.

Although nilpotent infinitesimals exist only within a well-adapted model of synthetic differential geometry, the notion of a Weil functor has been formulated both for finite-dimensional and for infinite-dimensional manifolds. This is the first step towards microlinearity for Frölicher spaces. Then all Frölicher spaces which believe, in fantasy, that all Weil functors are really exponentiations by some adequate imaginary infinitesimal objects form a cartesian closed category. This is the second step towards microlinearity for Frölicher spaces. Introducing the notion of a "transversal limit diagram of Frölicher spaces", after the manner of that of a "transversal pullback", is the third and final step towards microlinearity for Frölicher spaces. Just as microlinearity is closed under arbitrary limits within a well-adapted model of synthetic differential geometry, microlinearity for Frölicher spaces is closed under arbitrary transversal limits.

# Vector Representations and Why Would They Deviate From Projective Geometry? Note Quote. There is, of course, a definite reason why von Neumann used the mathematical structure of a complex Hilbert space for the formalization of quantum mechanics, but this reason is much less profound than the one tying Riemannian geometry to general relativity. The reason is that Heisenberg's matrix mechanics and Schrödinger's wave mechanics turned out to be equivalent, the first being a formalization of the new mechanics making use of l2, the set of all square-summable complex sequences, and the second making use of L2(R3), the set of all square-integrable complex functions of three real variables. The two spaces l2 and L2(R3) are canonical examples of a complex Hilbert space. This means that Heisenberg and Schrödinger were already working in a complex Hilbert space when they formulated matrix mechanics and wave mechanics, without being aware of it. This made it a straightforward choice for von Neumann to propose a formulation of quantum mechanics in an abstract complex Hilbert space, reducing matrix mechanics and wave mechanics to two possible specific representations.

One problem with the Hilbert space representation was known from the start. A (pure) state of a quantum entity is represented by a unit vector or ray of the complex Hilbert space, not by an arbitrary vector: vectors contained in the same ray represent the same state, and one has to renormalize the vector representing the state after it has been changed in one way or another. It is well known that if the rays of a vector space are called points and the two-dimensional subspaces of this vector space are called lines, the points and lines corresponding in this way to a vector space form a projective geometry. What we just remarked about the unit vector or ray representing the state of a quantum entity means that, in some way, the projective geometry corresponding to the complex Hilbert space represents the physics of the quantum world more intrinsically than does the Hilbert space itself. This state of affairs is revealed explicitly in the dynamics of quantum entities, which is built using group representations: one has to consider projective representations, i.e. representations in the corresponding projective geometry, and not vector representations.
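The claim that physics sees only the ray, not the vector, is easy to verify directly: rescaling a state vector by any nonzero complex number changes neither its Born probabilities (after renormalization) nor its rank-one projector. The four-dimensional example below is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A state vector and a second vector on the same ray (global phase and scale).
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)              # unit vector
phi = 2.7 * np.exp(1j * 0.9) * psi      # same ray, different vector

def probabilities(v):
    """Born probabilities in the standard basis, after renormalization."""
    v = v / np.linalg.norm(v)
    return np.abs(v) ** 2

def projector(v):
    """Rank-one projector |v><v| / <v|v>, which depends only on the ray."""
    return np.outer(v, v.conj()) / (v.conj() @ v)

print(np.allclose(probabilities(psi), probabilities(phi)))  # True
print(np.allclose(projector(psi), projector(phi)))          # True
```

Because every physical prediction factors through the projector, the natural state space is the set of rays, i.e. the projective geometry of the Hilbert space, which is exactly why dynamics is implemented by projective rather than vector representations.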