Homotopically Truncated Spaces.

The Eckmann–Hilton dual of the Postnikov decomposition of a space is the homology decomposition (or Moore space decomposition) of a space.

A Postnikov decomposition for a simply connected CW-complex X is a commutative diagram

[Commutative diagram: the Postnikov tower of X, with approximations pn : X → Pn(X) and maps qn : Pn(X) → Pn−1(X).]

such that pn∗ : πr(X) → πr(Pn(X)) is an isomorphism for r ≤ n and πr(Pn(X)) = 0 for r > n. Let Fn be the homotopy fiber of qn. Then the exact sequence

πr+1(PnX) →qn∗ πr+1(Pn−1X) → πr(Fn) → πr(PnX) →qn∗ πr(Pn−1X)

shows that Fn is an Eilenberg–MacLane space K(πnX, n). Constructing Pn+1(X) inductively from Pn(X) requires knowing the nth k-invariant, which is a map of the form kn : Pn(X) → Yn. The space Pn+1(X) is then the homotopy fiber of kn. Thus there is a homotopy fibration sequence

K(πn+1X, n+1) → Pn+1(X) → Pn(X) → Yn

This means that K(πn+1X, n+1) is homotopy equivalent to the loop space ΩYn. Consequently,

πr(Yn) ≅ πr−1(ΩYn) ≅ πr−1(K(πn+1X, n+1)) = πn+1X, if r = n+2,

= 0, otherwise,

and we see that Yn is a K(πn+1X, n+2). Thus the nth k-invariant is a map kn : Pn(X) → K(πn+1X, n+2).

Note that it induces the zero map on all homotopy groups, but is not necessarily homotopic to the constant map. The original space X is weakly homotopy equivalent to the inverse limit of the Pn(X).
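For a concrete illustration (a classical computation, recalled here without proof): for X = S² we have P2(S²) = K(Z, 2) ≃ CP∞ and π3(S²) ≅ Z, so the first nontrivial k-invariant is a map

\[
k_{2} \colon K(\mathbb{Z}, 2) \longrightarrow K(\mathbb{Z}, 4), \qquad
[k_{2}] \in H^{4}(K(\mathbb{Z}, 2); \mathbb{Z}) \cong \mathbb{Z},
\]

namely the cup square ι² of the fundamental class ι ∈ H²(K(Z, 2); Z): it induces zero on all homotopy groups, yet it is essential, exactly as noted above.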

Applying the paradigm of Eckmann–Hilton duality, we arrive at the homology decomposition principle from the Postnikov decomposition principle by changing:

    • the direction of all arrows
    • π to H
    • loops Ω to suspensions S
    • fibrations to cofibrations and fibers to cofibers
    • Eilenberg–MacLane spaces K(G, n) to Moore spaces M(G, n)
    • inverse limits to direct limits

A homology decomposition (or Moore space decomposition) for a simply connected CW-complex X is a commutative diagram

[Commutative diagram: the homology decomposition of X, with maps jn : X≤n → X and in : X≤n−1 → X≤n.]

such that jn∗ : Hr(X≤n) → Hr(X) is an isomorphism for r ≤ n and Hr(X≤n) = 0 for r > n. Let Cn be the homotopy cofiber of in. Then the exact sequence

Hr(X≤n−1) →in∗ Hr(X≤n) → Hr(Cn) →∂ Hr−1(X≤n−1) →in∗ Hr−1(X≤n)

shows that Cn is a Moore space M(HnX, n). Constructing X≤n+1 inductively from X≤n requires knowing the nth k-invariant, which is a map of the form kn : Yn → X≤n.

The space X≤n+1 is then the homotopy cofiber of kn. Thus there is a homotopy cofibration sequence

Yn →kn X≤n →in+1 X≤n+1 → M(Hn+1X, n+1)

This means that M(Hn+1X, n+1) is homotopy equivalent to the suspension SYn. Consequently,

H̃r(Yn) ≅ H̃r+1(SYn) ≅ H̃r+1(M(Hn+1X, n+1)) = Hn+1X, if r = n,

= 0, otherwise,

and we see that Yn is an M(Hn+1X, n). Thus the nth k-invariant is a map kn : M(Hn+1X, n) → X≤n.

It induces the zero map on all reduced homology groups, which is a nontrivial statement to make in degree n:

kn∗ : Hn(M(Hn+1X, n)) ≅ Hn+1(X) → Hn(X≤n) ≅ Hn(X).
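A standard concrete example (recalled without proof): for X = CP², whose reduced homology is Z in degrees 2 and 4 and zero otherwise, the homology decomposition is

\[
X_{\le 2} = X_{\le 3} = S^{2}, \qquad X_{\le 4} = \mathbb{CP}^{2} = S^{2} \cup_{\eta} e^{4},
\]

with k-invariant the Hopf map η : S³ = M(H4X, 3) → S² = X≤3, whose homotopy cofiber is CP². Here too the k-invariant induces zero on reduced homology while being essential.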

The original space X is homotopy equivalent to the direct limit of the X≤n.

The Eckmann–Hilton duality paradigm, while being a very valuable organizational principle, does have its natural limitations. Postnikov approximations possess rather good functorial properties: Let pn(X) : X → Pn(X) be a stage-n Postnikov approximation for X, that is, pn(X)∗ : πr(X) → πr(Pn(X)) is an isomorphism for r ≤ n and πr(Pn(X)) = 0 for r > n. If Z is a space with πr(Z) = 0 for r > n, then any map g : X → Z factors up to homotopy uniquely through Pn(X). In particular, if f : X → Y is any map and pn(Y) : Y → Pn(Y) is a stage-n Postnikov approximation for Y, then, taking Z = Pn(Y) and g = pn(Y) ◦ f, there exists, uniquely up to homotopy, a map pn(f) : Pn(X) → Pn(Y) such that

[Commutative square: f : X → Y on top, pn(f) : Pn(X) → Pn(Y) on the bottom, with vertical maps pn(X) and pn(Y).]

homotopy commutes. Let X = S2 ∪2 e3 (the 2-sphere with a 3-cell attached by a degree 2 map) be a Moore space M(Z/2, 2) and let Y = X ∨ S3. If X≤2 and Y≤2 denote stage-2 Moore approximations for X and Y, respectively, then X≤2 = X and Y≤2 = X. We claim that whatever maps i : X≤2 → X and j : Y≤2 → Y one takes, such that i∗ : Hr(X≤2) → Hr(X) and j∗ : Hr(Y≤2) → Hr(Y) are isomorphisms for r ≤ 2, there is always a map f : X → Y that cannot be compressed into the stage-2 Moore approximations, i.e. there is no map f≤2 : X≤2 → Y≤2 such that

[Commutative square: f : X → Y on top, f≤2 : X≤2 → Y≤2 on the bottom, with vertical maps i and j.]

commutes up to homotopy. We shall employ the universal coefficient exact sequence for homotopy groups with coefficients. If G is an abelian group and M(G, n) a Moore space, then there is a short exact sequence

0 → Ext(G, πn+1Y) →ι [M(G, n), Y] →η Hom(G, πnY) → 0,

where Y is any space and [−,−] denotes pointed homotopy classes of maps. The map η is given by taking the induced homomorphism on πn and using the Hurewicz isomorphism. This universal coefficient sequence is natural in both variables. Hence, the following diagram commutes:

[Commutative ladder: the universal coefficient sequences for [X, Y] and [X≤2, Y≤2], connected by the homomorphisms induced by i and j.]

Here we will briefly write E2(−) = Ext(Z/2, −), so that E2(G) = G/2G, and EY(−) = Ext(−, π3Y). By the Hurewicz theorem, π2(X) ≅ H2(X) ≅ Z/2, π2(Y) ≅ H2(Y) ≅ Z/2, and π2(i) : π2(X≤2) → π2(X), as well as π2(j) : π2(Y≤2) → π2(Y), are isomorphisms, hence the identity. If a homomorphism φ : A → B of abelian groups is onto, then E2(φ) : E2(A) = A/2A → B/2B = E2(B) remains onto. By the Hurewicz theorem, Hur : π3(Y) → H3(Y) = Z is onto. Consequently, the induced map E2(Hur) : E2(π3Y) → E2(H3Y) = E2(Z) = Z/2 is onto. Let ξ ∈ E2(H3Y) be the generator. Choose a preimage x ∈ E2(π3Y) with E2(Hur)(x) = ξ, and set [f] = ι(x) ∈ [X, Y]. Suppose there existed a homotopy class [f≤2] ∈ [X≤2, Y≤2] such that

j[f≤2] = i[f].

Then

η≤2[f≤2] = π2(j)η≤2[f≤2] = ηj[f≤2] = ηi[f] = π2(i)η[f] = π2(i)ηι(x) = 0.

Thus there is an element ε ∈ E2(π3Y≤2) such that ι≤2(ε) = [f≤2]. From ι(E2(π3(j))(ε)) = jι≤2(ε) = j[f≤2] = i[f] = iι(x) = ι(EY(π2(i))(x))

we conclude that E2(π3(j))(ε) = x, since ι is injective and π2(i) is the identity. By naturality of the Hurewicz map, the square

[Commutative square: the Hurewicz maps Hur : π3(Y≤2) → H3(Y≤2) and Hur : π3(Y) → H3(Y), connected by π3(j) and H3(j).]

commutes and induces a commutative diagram upon application of E2(−):

[The same square with E2(−) applied, relating E2(π3(j)) and E2(H3(j)) via E2(Hur).]

It follows that

ξ = E2(Hur)(x) = E2(Hur)E2(π3(j))(ε) = E2(H3(j))E2(Hur)(ε) = 0 (the last term vanishes since H3(Y≤2) = H3(X) = 0),

a contradiction. Therefore, no compression [f≤2] of [f] exists.
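As an aside, the universal coefficient sequence is also a practical computational device; a minimal instance, using only the standard facts π2(S³) = 0, π3(S³) ≅ Z and Ext(Z/2, Z) ≅ Z/2:

\[
0 \to \mathrm{Ext}(\mathbb{Z}/2, \pi_{3}S^{3}) \xrightarrow{\iota} [M(\mathbb{Z}/2, 2), S^{3}] \xrightarrow{\eta} \mathrm{Hom}(\mathbb{Z}/2, \pi_{2}S^{3}) \to 0,
\]

so [M(Z/2, 2), S³] ≅ Z/2: there is exactly one essential homotopy class of maps M(Z/2, 2) → S³, and η sends it to zero.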

Given a cellular map, it is not always possible to adjust the extra structure on the source and on the target of the map so that the map preserves the structures. Thus the category-theoretic setup automatically, and in a natural way, singles out those continuous maps that can be compressed into homologically truncated spaces.


10 or 11 Dimensions? Phenomenological Conundrum. Drunken Risibility.


It is not the fact that we are living in a ten-dimensional world which forces string theory into a ten-dimensional description. It is that perturbative string theories are only anomaly-free in ten dimensions, and they contain gravitons only in a ten-dimensional formulation. The resulting question, how the four-dimensional spacetime of phenomenology arises from ten-dimensional perturbative string theories (or their eleven-dimensional non-perturbative extension: the mysterious M-theory), led to the compactification idea and to the braneworld scenarios.

It is not the fact that empirical indications for supersymmetry were found that forces consistent string theories to include supersymmetry. Without supersymmetry, string theory has no fermions and no chirality, but there are tachyons which make the vacuum unstable; and supersymmetry has certain conceptual advantages: it very probably leads to the finiteness of the perturbation series, thereby avoiding the problem of non-renormalizability which haunted all former attempts at a quantization of gravity; and there is a close relation between supersymmetry and Poincaré invariance which seems reasonable for quantum gravity. But it is clear that not all conceptual advantages are necessarily part of nature, as the example of the elegant but unsuccessful Grand Unified Theories demonstrates.

Apart from its ten (or eleven) dimensions and the inclusion of supersymmetry (both of which have more or less the character of conceptually, but not empirically, motivated ad hoc assumptions), string theory consists of a rather careful adaptation of the mathematical and model-theoretical apparatus of perturbative quantum field theory to the quantized, one-dimensionally extended, oscillating string (and, finally, of a minimal extension of its methods into the non-perturbative regime, for which the declarations of intent exceed by far the conceptual successes). Without any empirical data transcending the context of our established theories, there remains for string theory only the minimal conceptual integration of basic parts of the phenomenology already reproduced by these established theories. And a significant component of this phenomenology, namely the phenomenology of gravitation, was already used up in the selection of string theory as an interesting approach to quantum gravity. Only because string theory, containing gravitons as string states, reproduces in a certain way the phenomenology of gravitation is it taken seriously.

But consistency requirements, the minimal inclusion of basic phenomenological constraints, and the careful extension of the model-theoretical basis of quantum field theory are not sufficient to establish an adequate theory of quantum gravity. Shouldn’t the landscape scenario of string theory be understood as a clear indication, not only of fundamental problems with the reproduction of the gauge invariances of the standard model of quantum field theory (and the corresponding phenomenology), but of much more severe conceptual problems? Almost all attempts at a solution of the immanent and transcendental problems of string theory seem to end in the ambiguity and contingency of the multitude of scenarios of the string landscape. That no physically motivated basic principle is known for string theory and its model-theoretical procedures might be seen as a problem which possibly could be overcome in future developments. But, what about the use of a static background spacetime in string theory which falls short of the fundamental insights of general relativity and which therefore seems to be completely unacceptable for a theory of quantum gravity?

At least since the change of context (and strategy) from hadron physics to quantum gravity, the development of string theory has been dominated by immanent problems whose attempted solutions led ever deeper into the formalism. The result of this successively increasing self-referentiality is an ever greater decoupling from phenomenological boundary conditions and necessities. The contact with the empirical does not increase; it grows weaker and weaker. The result of this process is a labyrinthine mathematical structure of completely unclear physical relevance.

Geometry and Localization: An Unholy Alliance? Thought of the Day 95.0


There are many misleading metaphors obtained from naively identifying geometry with localization. One which is very close to that of String Theory is the idea that one can embed a lower dimensional Quantum Field Theory (QFT) into a higher dimensional one. This is not possible, but what one can do is restrict a QFT on a spacetime manifold to a submanifold. However, if the submanifold contains the time axis (a "brane"), the restricted theory has too many degrees of freedom to merit the name "physical": it contains as many as the unrestricted theory. The naive idea that by using a subspace one only gets a fraction of the phase space degrees of freedom is a delusion; this can only happen if the subspace does not contain a timelike line, as for a null-surface (holographic projection onto a horizon).

The geometric picture of a string in terms of a multi-component conformal field theory is that of an embedding of an n-component chiral theory into its n-dimensional component space (referred to as a target space), which is certainly a string. But this is not what modular localization reveals: rather, the oscillatory degrees of freedom of the multi-component chiral current go into an infinite-dimensional Hilbert space over one localization point and do not arrange themselves according to the geometric source-target idea. A theory of this kind is of course consistent, but String Theory is certainly a very misleading terminology for this state of affairs. Any attempt to imitate Feynman rules by replacing world lines with world sheets (of strings) may produce prescriptions for cooking up some mathematically interesting functions, but those results cannot be brought into the only form which counts in a quantum theory, namely a perturbative approach in terms of operators and states.

String Theory is by no means the only area in particle theory where geometry and modular localization are at loggerheads. Closely related is the interpretation of the Riemann surfaces which result from the analytic continuation of chiral theories on the lightray/circle as the "living space" in the sense of localization. The mathematical theory of Riemann surfaces does not specify how they should be realized; whether they refer to surfaces in an ambient space, to a distinguished subgroup of a Fuchsian group, or to any other of the many possible realizations is of no concern to the mathematician. But in the context of chiral models it is important not to confuse the living space of a QFT with its analytic continuation.

Whereas geometry as a mathematical discipline does not care about how it is concretely realized, the geometrical aspects of modular localization in spacetime have a very specific content, namely that which can be encoded in subspaces (Reeh-Schlieder spaces) generated by operator subalgebras acting on the vacuum reference state. In other words, the physically relevant spacetime geometry and the symmetry group of the vacuum are contained in the abstract positioning of certain subalgebras in a common Hilbert space, and not in that which comes with classical theories.

Kant and Non-Euclidean Geometries. Thought of the Day 94.0


The argument that non-Euclidean geometries contradict Kant’s doctrine on the nature of space apparently goes back to Hermann Helmholtz and was taken up by several philosophers of science such as Hans Reichenbach (The Philosophy of Space and Time), who devoted much work to this subject. In an essay written in 1870, Helmholtz argued that the axioms of geometry are not a priori synthetic judgments (in the sense given by Kant), since they can be subjected to experiments. Given that Euclidean geometry is not the only possible geometry, as was believed in Kant’s time, it should be possible to determine by means of measurements whether, for instance, the sum of the three angles of a triangle is 180 degrees or whether two straight parallel lines always keep the same distance between them. If that were not the case, then it would have been demonstrated experimentally that space is not Euclidean. Thus the possibility of verifying the axioms of geometry would prove that they are empirical and not given a priori.

Helmholtz developed his own version of a non-Euclidean geometry on the basis of what he believed to be the fundamental condition for all geometries: “the possibility of figures moving without change of form or size”; without this possibility, it would be impossible to define what a measurement is. According to Helmholtz:

the axioms of geometry are not concerned with space-relations only but also at the same time with the mechanical deportment of solidest bodies in motion.

Nevertheless, he was aware that a strict Kantian might argue that the rigidity of bodies is an a priori property, but

then we should have to maintain that the axioms of geometry are not synthetic propositions… they would merely define what qualities and deportment a body must have to be recognized as rigid.

At this point, it is worth noticing that Helmholtz’s formulation of geometry is a rudimentary version of what was later developed as the theory of Lie groups. As for the transport of rigid bodies, it is well known that rigid motion cannot be defined in the framework of the theory of relativity: since there is no absolute simultaneity of events, it is impossible to move all parts of a material body in a coordinated and simultaneous way. What is defined as the length of a body depends on the reference frame from which it is observed. Thus, it is meaningless to invoke the rigidity of bodies as the basis of a geometry that pretends to describe the real world; it is only in the mathematical realm that the rigid displacement of a figure can be defined in terms of what mathematicians call a congruence.

Arguments similar to those of Helmholtz were given by Reichenbach in his attempt to refute Kant’s doctrine on the nature of space and time. Essentially, the argument boils down to the following: Kant assumed that the axioms of geometry are given a priori, and he only had classical geometry in mind; Einstein demonstrated that space is not Euclidean and that this could be verified empirically; ergo, Kant was wrong. However, Kant did not state that space must be Euclidean; instead, he argued that it is a pure form of intuition. As such, space has no physical reality of its own, and therefore it is meaningless to ascribe physical properties to it. Actually, Kant never mentioned Euclid directly in his work, but he did refer many times to the physics of Newton, which is based on classical geometry. Kant had in mind the axioms of this geometry, which is a most powerful tool of Newtonian mechanics. He did not even exclude the possibility of other geometries, as can be seen in his early speculations on the dimensionality of space.

The important point missed by Reichenbach is that Riemannian geometry is necessarily based on Euclidean geometry. More precisely, a Riemannian space must be considered as locally Euclidean in order to be able to define basic concepts such as distance and parallel transport; this is achieved by defining a flat tangent space at every point, and then extending all properties of this flat space to the globally curved space (Luther Pfahler Eisenhart, Riemannian Geometry). To begin with, the structure of a Riemannian space is given by its metric tensor gμν, from which the (differential) length is defined as ds2 = gμν dxμ dxν; but this is nothing other than a generalization of the usual Pythagorean theorem in Euclidean space. As for the fundamental concept of parallel transport, it is taken directly from its analogue in Euclidean space: it refers to the transport of abstract (not material, as Helmholtz believed) figures in such a space. Thus Riemann’s geometry cannot be free of synthetic a priori propositions, because it is entirely based upon concepts such as length and congruence taken from Euclid. We may conclude that Euclid’s geometry is the condition of possibility for a more general geometry, such as Riemann’s, simply because it is the natural geometry adapted to our understanding; Kant would say that it is our form of grasping space intuitively. The possibility of constructing abstract spaces does not refute Kant’s thesis; on the contrary, it reinforces it.

Individuation. Thought of the Day 91.0


The first distinction is between two senses of the word “individuation” – one semantic, the other metaphysical. In the semantic sense of the word, to individuate an object is to single it out for reference in language or in thought. By contrast, in the metaphysical sense of the word, the individuation of objects has to do with “what grounds their identity and distinctness.” Sets are often used to illustrate the intended notion of “grounding.” The identity or distinctness of sets is said to be “grounded” in accordance with the principle of extensionality, which says that two sets are identical iff they have precisely the same elements:

SET(x) ∧ SET(y) → [x = y ↔ ∀u(u ∈ x ↔ u ∈ y)]

The metaphysical and semantic senses of individuation are quite different notions, neither of which appears to be reducible to or fully explicable in terms of the other. Since sufficient sense cannot be made of the notion of “grounding of identity” on which the metaphysical notion of individuation is based, focusing on the semantic notion of individuation is an easy way out. This choice of focus means that our investigation is a broadly empirical one, drawing on empirical linguistics and psychology.

What is the relation between the semantic notion of individuation and the notion of a criterion of identity? It is by means of criteria of identity that semantic individuation is effected. Singling out an object for reference involves being able to distinguish this object from other possible referents with which one is directly presented. The final distinction is between two types of criteria of identity. A one-level criterion of identity says that two objects of some sort F are identical iff they stand in some relation RF:

Fx ∧ Fy → [x = y ↔ RF(x,y)]

Criteria of this form operate at just one level in the sense that the condition for two objects to be identical is given by a relation on these objects themselves. An example is the set-theoretic principle of extensionality.

A two-level criterion of identity relates the identity of objects of one sort to some condition on entities of another sort. The former sort of objects are typically given as functions of items of the latter sort, in which case the criterion takes the following form:

f(α) = f(β) ↔ α ≈ β

where the variables α and β range over the latter sort of item and ≈ is an equivalence relation on such items. An example is Frege’s famous criterion of identity for directions:

d(l1) = d(l2) ↔ l1 || l2

where the variables l1 and l2 range over lines or other directed items. An analogous two-level criterion relates the identity of geometrical shapes to the congruence of things or figures having the shapes in question. The decision to focus on the semantic notion of individuation makes it natural to focus on two-level criteria. For two-level criteria of identity are much more useful than one-level criteria when we are studying how objects are singled out for reference. A one-level criterion provides little assistance in the task of singling out objects for reference. In order to apply a one-level criterion, one must already be capable of referring to objects of the sort in question. By contrast, a two-level criterion promises a way of singling out an object of one sort in terms of an item of another and less problematic sort. For instance, when Frege investigated how directions and other abstract objects “are given to us”, although “we cannot have any ideas or intuitions of them”, he proposed that we relate the identity of two directions to the parallelism of the two lines in terms of which these directions are presented. This would be explanatory progress since reference to lines is less puzzling than reference to directions.
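A toy computational rendering of a two-level criterion may make the mechanics vivid (a sketch only; the encoding of lines by coefficient triples and the helper "direction" are ours, purely for illustration). A line ax + by = c is presented by its coefficients, and its direction is singled out as a canonical representative, so that identity of directions reduces to parallelism of the presenting lines:

from fractions import Fraction
from typing import Tuple

# A line ax + by = c, encoded by its coefficients; assumes (a, b) != (0, 0).
Line = Tuple[Fraction, Fraction, Fraction]

def direction(line: Line) -> Tuple[Fraction, Fraction]:
    """Two-level criterion: direction(l1) == direction(l2) iff l1 || l2.
    Returns a canonical representative of the direction of the line."""
    a, b, _c = line
    if b != 0:                           # non-vertical: normalize so b = 1
        return (a / b, Fraction(1))
    return (Fraction(1), Fraction(0))    # all vertical lines share one direction

l1: Line = (Fraction(1), Fraction(-2), Fraction(0))   # x - 2y = 0
l2: Line = (Fraction(2), Fraction(-4), Fraction(6))   # 2x - 4y = 6, parallel to l1
l3: Line = (Fraction(1), Fraction(0), Fraction(3))    # x = 3, vertical

assert direction(l1) == direction(l2)   # d(l1) = d(l2) <-> l1 || l2
assert direction(l1) != direction(l3)

The point of the two-level form is visible here: identity of directions is never decided directly, but only via the equivalence (parallelism) on the lines that present them.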

Contact Geometry and Manifolds


Let M be a manifold of dimension 2n + 1. A contact structure on M is a distribution ξ ⊂ TM of dimension 2n, such that the defining 1-form α satisfies

α ∧ (dα)n ≠ 0 —– (1)

A 1-form α satisfying (1) is said to be a contact form on M. Let α be a contact form on M; then there exists a unique vector field Rα on M such that

α(Rα) = 1, ιRα dα = 0,

where ιRα dα denotes the contraction of dα along Rα. By definition Rα is called the Reeb vector field of the contact form α. A contact manifold is a pair (M, ξ) where M is a 2n + 1-dimensional manifold and ξ is a contact structure. Let (M, ξ) be a contact manifold and fix a defining (contact) form α. Then the 2-form κ = 1/2 dα defines a symplectic form on the contact structure ξ; therefore the pair (ξ, κ) is a symplectic vector bundle over M. A complex structure on ξ is the datum of J ∈ End(ξ) such that J2 = −Iξ.
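For concreteness, the standard example with n = 1 (a routine verification): on R³ with coordinates (x, y, z), take α = dz − y dx. Then

\[
d\alpha = dx \wedge dy, \qquad \alpha \wedge d\alpha = dz \wedge dx \wedge dy \neq 0,
\]

so α is a contact form, and its Reeb vector field is Rα = ∂/∂z, since α(∂z) = 1 and ι∂z(dx ∧ dy) = 0.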

Let α be a contact form on M, with ξ = ker α, and let κ = 1/2 dα. A complex structure J on ξ is said to be κ-calibrated if gJ[x](·, ·) := κ[x](·, Jx ·) is a Jx-Hermitian inner product on ξx for any x ∈ M.

The set of κ-calibrated complex structures on ξ will be denoted by Cα(M). If J is a complex structure on ξ = ker α, then we extend it to an endomorphism of TM by setting

J(Rα) = 0.

Note that such a J satisfies

J2 = −I + α ⊗ Rα
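Indeed, this is a one-line check from the extension convention: decomposing a tangent vector v into its ξ-part and its Reeb part,

\[
v = v^{\xi} + \alpha(v)\,R_{\alpha}, \qquad
J^{2}v = J^{2}v^{\xi} = -v^{\xi} = -v + \alpha(v)\,R_{\alpha} = (-I + \alpha \otimes R_{\alpha})(v),
\]

using J(Rα) = 0 and J2 = −Iξ on ξ.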

If J is κ-calibrated, then it induces a Riemannian metric g on M given by

g := gJ + α ⊗ α —– (2)

Furthermore the Nijenhuis tensor of J is defined by

NJ(X, Y) = [JX, JY] − J[JX, Y] − J[X, JY] + J2[X, Y] for any X, Y ∈ TM

A Sasakian structure on a 2n + 1-dimensional manifold M is a pair (α, J), where

• α is a contact form;

• J ∈ Cα(M) satisfies NJ = −dα ⊗ Rα

The triple (M, α, J) is said to be a Sasakian manifold. Let (M, ξ) be a contact manifold. A differential r-form γ on M is said to be basic if

ιRα γ = 0, LRα γ = 0,

where L denotes the Lie derivative and Rα is the Reeb vector field of an arbitrary contact form defining ξ. We will denote by ΛrB(M) the set of basic r-forms on (M, ξ). Note that

dΛrB(M) ⊂ Λr+1B(M), so that the basic forms give a subcomplex of the de Rham complex of M.

The cohomology HB(M) of this complex is called the basic cohomology of (M, ξ). If (M, α, J) is a Sasakian manifold, then

J(ΛrB(M)) = ΛrB(M), where, as usual, the action of J on r-forms is defined by

Jφ(X1,…, Xr) = φ(JX1,…, JXr)

Consequently ΛrB(M) ⊗ C splits as

ΛrB(M) ⊗ C = ⊕p+q=r Λp,qJ(ξ)

and, according to this grading, it is possible to define the cohomology groups Hp,qB(M). The r-forms belonging to Λp,qJ(ξ) are said to be of type (p, q) with respect to J. Note that κ = 1/2 dα ∈ Λ1,1J(ξ) and it determines a non-vanishing cohomology class in H1,1B(M). The Sasakian structure (α, J) also induces a natural connection ∇ξ on ξ, given (∇ denoting the Levi-Civita connection of the metric g) by

∇ξX Y = (∇X Y)ξ if X ∈ ξ

= [Rα, Y] if X = Rα

where the subscript ξ denotes the projection onto ξ. One easily gets

∇ξX J = 0, ∇ξX gJ = 0, ∇ξX dα = 0, ∇ξX Y − ∇ξY X = [X, Y]ξ,

for any X, Y ∈ TM. Consequently we have Hol(∇ξ) ⊆ U(n).

The basic cohomology class

cB1(M) = 1/2π [ρT] ∈ H1,1B(M)

is called the first basic Chern class of (M, α, J); here ρT denotes the Ricci form of the transverse Kähler structure. If cB1(M) vanishes, then (M, α, J) is said to be null-Sasakian.

Furthermore, a Sasakian manifold is called α-Einstein if there exist λ, ν ∈ C∞(M, R) such that

Ric = λg + να ⊗ α, where Ric is the Ricci tensor of g.

A submanifold p : L ↪ M of a 2n + 1-dimensional contact manifold (M, ξ) is said to be Legendrian if:

1) dimRL = n,

2) p∗(TL) ⊂ ξ

Observe that, if α is a defining form of the contact structure ξ, then condition 2) is equivalent to saying that p∗α = 0. Hence Legendrian submanifolds are the contact-geometry analogue of Lagrangian submanifolds.
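A minimal example, in the standard contact R³ above (n = 1): the x-axis L = {y = z = 0} is Legendrian, since

\[
\dim_{\mathbb{R}} L = 1 = n, \qquad (dz - y\,dx)\big|_{TL} = 0,
\]

because TL is spanned by ∂/∂x, dz(∂/∂x) = 0, and y ≡ 0 along L.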

Cryptocurrency and Efficient Market Hypothesis. Drunken Risibility.

According to the traditional definition, a currency has three main properties: (i) it serves as a medium of exchange, (ii) it is used as a unit of account and (iii) it allows one to store value. Throughout economic history, money has been tied to political power. In the beginning, coins were minted in precious metals, so the value of a coin was intrinsically determined by the value of the metal itself. Later, money was printed on paper bank notes, but its value was linked to a quantity of gold guarded in the vault of a central bank. Nation states have used their political power to regulate the use of currencies and to impose one currency (usually the one issued by the same nation state) as legal tender for obligations within their territory. In the twentieth century a major change took place: the abandonment of the gold standard. The detachment of currencies (especially the US dollar) from the gold standard meant a recognition that the value of a currency (especially in a world of fractional banking) was not related to its content or representation in gold, but to a broader concept such as confidence in the economy in which the currency is based. At present, the value of a currency reflects the best judgment about the monetary policy and the “health” of its economy.

In recent years a new type of currency, a synthetic one, emerged. We call this new type “synthetic” because it is not created by the decision of a nation state, nor does it represent any underlying asset or tangible source of wealth. It appears as a new tradable asset resulting from a private agreement and facilitated by the anonymity of the internet. Among these synthetic currencies, Bitcoin (BTC) emerges as the most important one, with a market capitalization a few hundred million short of $80 billion.


Bitcoin Price Chart from Bitstamp

There are other cryptocurrencies based on blockchain technology, such as Litecoin (LTC), Ethereum (ETH) and Ripple (XRP). The website https://coinmarketcap.com/currencies/ counts up to 641 such currencies. However, as we can observe in the figure below, Bitcoin represents 89% of the total market capitalization of all cryptocurrencies.


Cryptocurrencies. Share of market capitalization of each currency.

One open question today is whether Bitcoin is in fact, or may be considered, a currency. Until now, we cannot observe that Bitcoin fulfills the main properties of a standard currency. It is barely (though increasingly so!) accepted as a medium of exchange (e.g. to buy some products online), it is not used as a unit of account (there are no financial statements valued in Bitcoins), and we can hardly believe that, given the great swings in price, anyone can consider Bitcoin a suitable option to store value. Given these characteristics, Bitcoin could fit as an ideal asset for speculative purposes: there is no underlying asset to relate its value to, and an open platform allows trading round the clock.


Bitcoin returns, sampled every 5 hours.

Speculation has a long history and seems inherent to capitalism. One common feature of speculative assets in history has been the difficulty of valuation. Tulipmania, the South Sea bubble, and many other episodes reflect, on one side, human greed, and on the other, the difficulty of setting an objective value for an asset. All these speculative episodes were reflected in a super-exponential growth of the price time series.

Cryptocurrencies can be seen as the libertarian response to central bank failure to manage financial crises, such as the one that occurred in 2008. Cryptocurrencies can also bypass national restrictions on international transfers, probably at a cheaper cost. Bitcoin was created by a person or group of persons under the pseudonym Satoshi Nakamoto. The discussion of Bitcoin has several perspectives. The computer science perspective deals with the strengths and weaknesses of blockchain technology. In fact, according to R. Ali et al., the introduction of a “distributed ledger” is the key innovation. Traditional means of payment (e.g. a credit card) rely on a central clearing house that validates operations, acting as “middleman” between buyer and seller. By contrast, the payment validation system of Bitcoin is decentralized. There is a growing army of miners, who put their computer power at the disposal of the network, validating transactions by gathering them into blocks, adding these to the ledger and thus forming a ‘block chain’. This work is remunerated by giving the miners Bitcoins, which (until now) has made validation cheaper than in a centralized system. Validation is carried out by solving a kind of algorithmic puzzle. Over time, solving it becomes harder, since the whole ledger must be validated; consequently it takes more time to solve.

Contrary to traditional currencies, the total number of Bitcoins to be issued is fixed beforehand: 21 million. In fact, the issuance rate of Bitcoins is expected to diminish over time. According to Laursen and Kyed, validating the public ledger was initially rewarded with 50 Bitcoins, but the protocol foresaw halving this quantity every four years. At the current pace, the maximum number of Bitcoins will be reached in 2140. Taking into account its decentralized character, Bitcoin transactions seem secure. All transactions are recorded in several computer servers around the world. In order to commit fraud, a person would have to change and validate (simultaneously) several ledgers, which is almost impossible. Additionally, the ledgers are public, with encrypted identities of the parties, making transactions “pseudonymous, not anonymous”. The legal perspective of Bitcoin is fuzzy: it is not issued, nor endorsed, by a nation state; it is not an illegal substance; and, as such, its transaction is not regulated.
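A minimal sketch of the hash-linked ledger idea just described (illustrative only, not Bitcoin's actual data structures: real blocks carry Merkle roots of the transactions, difficulty targets and nonces, and validation runs over a peer-to-peer consensus protocol that this toy omits):

import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash the block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """Each block commits to its predecessor via prev_hash;
    this chaining is what turns a list of blocks into a ledger."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }

# Tampering with any earlier block changes its hash and breaks every later
# prev_hash link, which is why fraud would require rewriting and re-validating
# the whole suffix of the ledger on most copies of it at once.
genesis = make_block(["genesis"], prev_hash="0" * 64)
block1 = make_block(["alice->bob: 1 BTC"], prev_hash=block_hash(genesis))
block2 = make_block(["bob->carol: 0.5 BTC"], prev_hash=block_hash(block1))

assert block2["prev_hash"] == block_hash(block1)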

In particular, the nonexistence of savings accounts in Bitcoin, and consequently the absence of a Bitcoin interest rate, precludes the idea of studying the price behavior in relation to cash flows generated by Bitcoins. As a consequence, the study of the underlying dynamics of the price signal finds its theoretical framework in the Efficient Market Hypothesis. The Efficient Market Hypothesis (EMH) is the cornerstone of financial economics. One of the seminal works on the stochastic dynamics of speculative prices is due to L. Bachelier, who in his doctoral thesis developed the first mathematical model concerning the behavior of stock prices. The systematic study of informational efficiency began in the 1960s, when financial economics was born as a new area within economics. The classical definition due to Eugene Fama (Foundations of Finance: Portfolio Decisions and Securities Prices, 1976) says that a market is informationally efficient if it “fully reflects all available information”. Therefore, the key element in assessing efficiency is to determine the appropriate set of information that impels prices. Following Efficient Capital Markets, informational efficiency can be divided into three categories: (i) weak efficiency, if prices reflect the information contained in the past series of prices, (ii) semi-strong efficiency, if prices reflect all public information and (iii) strong efficiency, if prices reflect all public and private information. As a corollary of the EMH, one cannot accept the presence of long memory in financial time series, since its existence would allow a riskless profitable trading strategy. If markets are informationally efficient, arbitrage prevents the possibility of such strategies. If we consider the financial market as a dynamical structure, short term memory can exist (to some extent) without contradicting the EMH. In fact, the presence of some mispriced assets is the necessary stimulus for individuals to trade and reach an (almost) arbitrage free situation. However, the presence of long range memory is at odds with the EMH, because it would allow stable trading rules to beat the market.

The presence of long range dependence in financial time series generates a vivid debate. Whereas the presence of short term memory can stimulate investors to exploit small extra returns, making them disappear, long range correlations pose a challenge to the established financial model. As recognized by Ciaian et al., Bitcoin price is not driven by macro-financial indicators. Consequently a detailed analysis of the underlying dynamics (Hurst exponent) becomes important to understand its emerging behavior. There are several methods (both parametric and non-parametric) to calculate the Hurst exponent, which has become a mandatory framework to tackle BTC trading; one of the simplest non-parametric estimators is sketched below.
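A minimal sketch of rescaled-range (R/S) analysis (the choice of window sizes and the plain log-log fit are simplistic compared to the estimators used in the literature, and serve only to illustrate the idea):

import numpy as np

def hurst_rs(series: np.ndarray, window_sizes=None) -> float:
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis:
    regress log(R/S) on log(window size); the slope estimates H.
    H near 0.5: no long memory; H > 0.5: persistence; H < 0.5: anti-persistence."""
    n = len(series)
    if window_sizes is None:
        window_sizes = [n // k for k in (2, 4, 8, 16, 32) if n // k >= 8]
    log_sizes, log_rs = [], []
    for w in window_sizes:
        rs_values = []
        for start in range(0, n - w + 1, w):       # non-overlapping windows
            chunk = series[start:start + w]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviations
            r = dev.max() - dev.min()              # range
            s = chunk.std()                        # scale
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_sizes.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_sizes, log_rs, 1)
    return float(slope)

# For i.i.d. Gaussian returns the estimate should hover near 0.5.
returns = np.random.default_rng(0).normal(size=4096)
print(f"H = {hurst_rs(returns):.2f}")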