Market Liquidity


The notion of market liquidity is nowadays almost ubiquitous. It quantifies the ability of a financial market to match buyers and sellers efficiently, without causing a significant movement in the price, thus delivering low transaction costs. It is the lifeblood of financial markets, without which market dislocations can surface, as in the recent well-documented crises: the 2007 Yen carry trade unwind, the 2008 Credit Crunch, the May 6th 2010 Flash Crash, and the numerous Mini Flash Crashes occurring in US equity markets, as well as many other cases that go unnoticed but are potent candidates to become more important. While omnipresent, liquidity is an elusive concept, and several reasons may account for this ambiguity. First, some markets, such as the foreign exchange (FX) market with its daily turnover of roughly $5.3 trillion, are mistakenly assumed to be extremely liquid because the volume they generate is equated with liquidity. Secondly, the structure of modern markets, with its high degree of decentralization, generates fragmentation and low transparency of transactions, which complicates any attempt to define the liquidity of a market as a whole: aggregating liquidity from all trading sources can be quite daunting, and the fragmentation only grows as new venues with different market structures continue to be launched. Furthermore, the landscape is continuously changing as new players emerge, such as the high frequency traders that have taken over the role of liquidity intermediation in many markets, accounting for between 50% and 70% (and ever rising) of all trading.
Last, but not least, the central banks are important participants influencing the markets through their myriad interventions: indirectly, through the monetization of substantial amounts of sovereign and mortgage debt under various quantitative easing programs, or directly, as with the Swiss National Bank setting a floor on the EUR/CHF exchange rate. This provides plenty of arguments that they have overstepped their role as liquidity providers of last resort, and that at this stage they hamper market liquidity, potentially exposing themselves to massive losses in the near future.

Despite the obvious importance of liquidity, there is little agreement on the best way to define and measure it. Liquidity measures can be classified into several categories. Volume-based measures (the liquidity ratio, Martin index, Hui and Heubel ratio, turnover ratio, and market-adjusted liquidity index) compare, over a fixed period of time, the exchanged volume to the price changes; this class implies non-trivial assumptions about the relation between volume and price movements. Other classes include price-based measures (the Marsh and Rock ratio, variance ratio, vector autoregressive models); transaction-cost-based measures (the spread, implied spread, absolute spread, or relative spread); and time-based measures (the number of transactions or orders per unit of time). The aforementioned approaches suffer from many drawbacks. They analyse a complex system top-down, studying the impact of variations in liquidity, rather than bottom-up, identifying and quantifying the times at which liquidity is lacking. They also depend on a specific choice of physical time that does not reflect the intrinsic, multi-scale nature of financial markets. Here, instead, liquidity is defined as an information-theoretic measurement that characterises the unlikeliness of price trajectories; this new metric has the ability to detect and predict stress in financial markets, as examples within the FX market show, and the optimal choice of scales is derived using the Maximum Entropy Principle.
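Two of the measure classes above are simple enough to sketch directly. The following is a minimal, illustrative implementation of a transaction-cost-based measure (the relative spread) and a generic volume-based measure (exchanged volume per unit of absolute price change); the function names and the sample quotes and volumes are made up for illustration, not taken from any of the cited indices.

```python
def relative_spread(bid, ask):
    """Transaction-cost-based measure: (ask - bid) / midpoint."""
    mid = (bid + ask) / 2.0
    return (ask - bid) / mid

def liquidity_ratio(prices, volumes):
    """Volume-based measure over a fixed window: total exchanged volume
    per unit of absolute price change (higher => more liquid).
    `volumes` holds one entry per interval between consecutive prices."""
    total_volume = sum(volumes)
    abs_price_change = sum(abs(p2 - p1) for p1, p2 in zip(prices, prices[1:]))
    if abs_price_change == 0:
        return float("inf")  # price unmoved despite volume: very liquid
    return total_volume / abs_price_change

# EUR/USD-style toy quotes and a short intraday price/volume window
print(relative_spread(1.0850, 1.0852))
print(liquidity_ratio([1.0850, 1.0851, 1.0849, 1.0850], [5e6, 3e6, 4e6]))
```

Note how the volume-based measure bakes in exactly the assumption criticized above: it presumes a stable relation between volume and price movement, so a market that prints huge volume while barely moving is scored as maximally liquid.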

Left/Right Paradigm, According to the Left


There is the traditional “right” (a bunch of NeoCons and RINOs in the pockets of the same oligarchical scumbags who own the Democrats) who consistently (by pure coincidence, of course) lose every significant battle, even when controlling both the Executive and Legislative branches of the US government. Plus the normalcy-bias-afflicted status quo worshipers, moderates, and coincidence theorists who vote for them because Fox News tells them anyone better is too radical. Both the office-holding surrender monkeys and their gullible voter base have recently been labeled “cuckservatives”, or “cucks” for short.

And now there is the “Alt Right”–those who intentionally lump you together with the cucks if you value individual liberty and representative government higher than racial identity. In other words, the loudest voices in the “Alt Right” are both the mirror image of the goose-stepping collectivist SJWs on the left, and the caricature of a “right-winger” that those goose-stepping collectivist SJWs cling to as a vital component of The Narrative.

If you don’t want to join the hive mind of the left, then your choice is either to assimilate with the socialists in “conservative” drag (NeoCons, RINOs, cuckservatives, the GOPe, etc.) or to buy into white supremacy (also referred to as “Western Civilization” in the white tribalist blogosphere).

NRx seemed to provide a space where intelligent ideas could be discussed freely, and a rallying point for those intelligent but dissatisfied people of the right. However, with the infusion of the alt-Right, thought policing (admittedly of a different kind) has returned, with the methods of the SJW, driving away the intelligent people.

For the Left, this state of affairs is particularly fortunate, and sometimes you have to wonder if they bring out their alt-Right hitmen every now and then to discredit intelligent Rightists through guilt by association.

Homogeneity: Leibniz Contra Euler. Note Quote.


Euler insists that the relation of equality holds between any infinitesimal and zero. Similarly, Leibniz worked with a generalized relation of “equality” which was an equality up to a negligible term. Leibniz codified this relation in terms of his transcendental law of homogeneity (TLH), or lex homogeneorum transcendentalis in the original Latin. Leibniz had already referred to the law of homogeneity in his first work on the calculus: “the only remaining differential quantities, namely dx, dy, are found always outside the numerators and roots, and each member is acted on by either dx, or by dy, always with the law of homogeneity maintained with regard to these two quantities, in whatever manner the calculation may turn out.”

The TLH governs equations involving differentials. Bos interprets it as follows:

A quantity which is infinitely small with respect to another quantity can be neglected if compared with that quantity. Thus all terms in an equation except those of the highest order of infinity, or the lowest order of infinite smallness, can be discarded. For instance,

a + dx = a —– (1)

dx + ddx = dx

etc. The resulting equations satisfy this . . . requirement of homogeneity.

(here the expression ddx denotes a second-order differential obtained as a second difference). Thus, formulas like Euler’s

a + dx = a —– (2)

(where a “is any finite quantity” (Euler)) belongs in the Leibnizian tradition of drawing inferences in accordance with the TLH, as reported by Bos in formula (1) above. The principle of cancellation of infinitesimals was, of course, the very basis of the technique. However, it was also the target of Berkeley’s charge of a logical inconsistency (Berkeley), which can be expressed in modern notation by the conjunction (dx ≠ 0) ∧ (dx = 0). But the Leibnizian framework does not suffer from an inconsistency of this type, given the more general relation of “equality up to”: the dx is not identical to zero but is merely discarded at the end of the calculation, in accordance with the TLH.
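One modern rendering of “equality up to a negligible term” is the algebra of dual numbers, where an infinitesimal part is carried exactly through the calculation (with dx·dx = 0) and only discarded at the very end, just as the TLH prescribes. The sketch below is an illustration in that spirit, not a historical reconstruction; the class and method names are invented for the example.

```python
class Dual:
    """Numbers of the form a + b*dx, with the convention dx*dx = 0."""
    def __init__(self, real, inf=0.0):
        self.real, self.inf = real, inf

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.inf + other.inf)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b dx)(c + e dx) = ac + (ae + bc) dx; the dx*dx term is dropped
        return Dual(self.real * other.real,
                    self.real * other.inf + self.inf * other.real)
    __rmul__ = __mul__

    def discard(self):
        """Apply the TLH: neglect the remaining infinitesimal part."""
        return self.real

dx = Dual(0.0, 1.0)
a = 5.0
print((a + dx).discard())               # a + dx "=" a, as in formula (1)
prod = (3 + dx) * (3 + dx)              # (3 + dx)^2 = 9 + 6 dx
print(prod.discard())                   # the 6 dx term is discarded: 9
```

Note that before the final `discard`, the infinitesimal coefficient of `prod` is 6, i.e. exactly the derivative of x² at x = 3: the dx is not identical to zero during the calculation, which is what blocks Berkeley’s (dx ≠ 0) ∧ (dx = 0).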

Relations of equality: What Euler and Leibniz appear to have realized more clearly than their contemporaries is that there is more than one relation falling under the general heading of “equality”. Thus, to explain formulas like (2), Euler elaborated two distinct ways, arithmetic and geometric, of comparing quantities. He described the two modalities of comparison in the following terms:

Since we are going to show that an infinitely small quantity is really zero (cyphra), we must meet the objection of why we do not always use the same symbol 0 for infinitely small quantities, rather than some special ones…

[S]ince we have two ways to compare them [a more precise translation would be “there are two modalities of comparison”], either arithmetic or geometric, let us look at the quotients of quantities to be compared in order to see the difference. (Euler)

Furthermore,

If we accept the notation used in the analysis of the infinite, then dx indicates a quantity that is infinitely small, so that both dx = 0 and a dx = 0, where a is any finite quantity. Despite this, the geometric ratio a dx : dx is finite, namely a : 1. For this reason, these two infinitely small quantities, dx and a dx, both being equal to 0, cannot be confused when we consider their ratio. In a similar way, we will deal with infinitely small quantities dx and dy.

Having defined the two modalities of comparison of quantities, arithmetic and geometric, Euler proceeds to clarify the difference between them as follows:

Let a be a finite quantity and let dx be infinitely small. The arithmetic ratio of equals is clear:

Since ndx = 0, we have

a ± ndx − a = 0 —– (3)

On the other hand, the geometric ratio is clearly of equals, since

(a ± ndx)/a =1 —– (4)

While Euler speaks of distinct modalities of comparison, he writes them down symbolically in terms of two distinct relations, both denoted by the equality sign “=”; namely, (3) and (4). Euler concludes as follows:

From this we obtain the well-known rule that the infinitely small vanishes in comparison with the finite and hence can be neglected [with respect to it].

Note that in the Latin original, the italicized phrase reads infinite parva prae finitis evanescant, atque adeo horum respectu reiici queant. The term evanescant can mean either vanish or lapse, but the term prae makes it read literally as “the infinitely small vanishes before (or by the side of) the finite,” implying that the infinitesimal disappears because of the finite, and only once it is compared to the finite.

A possible interpretation is that any motion or activity involved in the term evanescant does not indicate that the infinitesimal quantity is a dynamic entity that is (in and of itself) in a state of disappearing, but rather is a static entity that changes, or disappears, only “with respect to” (horum respectu) a finite entity. To Euler, the infinitesimal has a different status depending on what it is being compared to. The passage suggests that Euler’s usage accords more closely with reasoning exploiting static infinitesimals than with dynamic limit-type reasoning.
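Read through modern limits, Euler’s two modalities of comparison can be checked numerically: as dx shrinks, the arithmetic difference a ± n·dx − a of formula (3) tends to 0, while the geometric ratio (a ± n·dx)/a of formula (4) tends to 1, and yet the ratio a·dx : dx stays finite at a : 1 throughout. The values of a and n below are arbitrary illustrations; the limit reading itself is an interpretive gloss, not Euler’s own formulation.

```python
a, n = 7.0, 3.0

# Formulas (3) and (4): the two comparisons of a ± n*dx with a
for dx in (1e-2, 1e-6, 1e-12):
    arithmetic = (a + n * dx) - a      # tends to 0 as dx shrinks
    geometric = (a + n * dx) / a       # tends to 1 as dx shrinks
    print(dx, arithmetic, geometric)

# The geometric ratio a*dx : dx, by contrast, is a : 1 for every dx > 0,
# which is why dx and a*dx, though both "equal to 0", must not be confused.
for dx in (1e-2, 1e-6, 1e-12):
    print((a * dx) / dx)               # approximately a, independent of dx
```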

Computer Algebra Systems (CAS): Mathematica. Note Quote.


If we are generous, there is one clear analogue to Leibniz’s vision in the contemporary world, and that is a computer algebra system (CAS), such as Mathematica, Maple, or Macsyma. A computer algebra system is a piece of software which allows one to perform specific mathematical computations, such as differentiation or integration, as well as basic programming tasks, such as list manipulation, and so on. As the development of CAS’s has progressed hand in hand with the growth of the software industry and scientific computing, they have come to incorporate a large amount of functionality, spanning many different scientific and technical domains.
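The core of what a CAS does can be sketched in a few lines: expressions are represented as data, and mathematical operations become rule-driven transformations of that data. The toy below implements symbolic differentiation for sums and products only; it is a minimal illustration of the idea, nothing like the scope of an actual CAS, and its representation and function names are invented for the example.

```python
def diff(expr, var):
    """Differentiate a nested-tuple expression with respect to var.
    An expression is a number, a variable name, or ('+'|'*', lhs, rhs)."""
    if isinstance(expr, (int, float)):
        return 0
    if isinstance(expr, str):
        return 1 if expr == var else 0
    op, lhs, rhs = expr
    if op == '+':  # sum rule
        return ('+', diff(lhs, var), diff(rhs, var))
    if op == '*':  # product rule
        return ('+', ('*', diff(lhs, var), rhs), ('*', lhs, diff(rhs, var)))
    raise ValueError(f"unknown operator {op!r}")

def evaluate(expr, env):
    """Evaluate an expression tree in an environment of variable values."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, lhs, rhs = expr
    left, right = evaluate(lhs, env), evaluate(rhs, env)
    return left + right if op == '+' else left * right

# d/dx (x*x + 3*x) evaluated at x = 2 gives 2*2 + 3 = 7
f = ('+', ('*', 'x', 'x'), ('*', 3, 'x'))
print(evaluate(diff(f, 'x'), {'x': 2}))   # 7
```

The point of the sketch is the Leibnizian one: once knowledge (here, an algebraic expression) is encoded symbolically, new truths (here, its derivative) fall out of mechanical manipulation of the symbols.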

In this sense, CAS’s bear some resemblance to Leibniz’s vision. Mathematica, for example, has all of the special functions of mathematical physics as well as data from many different sources, which can be systematically and uniformly manipulated by the symbolic representation of the Wolfram Language. One could reasonably claim that it incorporates many of the basic desiderata of Leibniz’s universal calculus: it has both the structured data of Leibniz’s hypothetical encyclopedia and symbolic means for manipulating this data.

However, Leibniz’s vision was significantly more ambitious than any contemporary CAS can claim to have realized. For instance, while a CAS incorporates mathematical knowledge from different domains, these domains are effectively different modules within the software that can be used in a standalone fashion. Consider, for example, that one can use a CAS to perform calculations relevant to both quantum mechanics and general relativity. The existence of both of these capabilities in a single piece of software says nothing about the long-standing theoretical obstacles to creating a unified theory of quantum gravity. Indeed, as has long been bemoaned in the formal verification and theorem proving communities, CAS’s are effectively a large number of small pieces of software neatly packaged into a single bundle that the user interacts with in a monolithic way. This fact has consequences for those interested in the robustness of the underlying computations, but in the present context, it simply serves to highlight a fundamental problem in Leibniz’s agenda.

So, in effect, one way to describe Leibniz’s universal calculus is as an attempt to create something like a modern computer algebra system, but one which extended across all areas of human knowledge. This goal alone would be quite ambitious, but in addition Leibniz wanted the symbolic representation to have a transparent relationship to the corresponding encyclopedia, as well as to possess the mnemonic capacity to be memorized with ease. To quote Leibniz himself,

My invention contains all the functions of reason: it is a judge for controversies; an interpreter of notions; a scale for weighing probabilities; a compass which guides us through the ocean of experience; an inventory of things; a table of thoughts; a microscope for scrutinizing things close at hand; an innocent magic; a non-chimerical cabala; a writing which everyone can read in his own language; and finally a language which can be learnt in a few weeks, traveling swiftly across the world, carrying the true religion with it, wherever it goes.

It is difficult not to be swept away by the beauty of Leibniz’s imagery. And yet, from our modern vantage point, there is hardly any doubt that this agenda could not possibly have worked.

Leibnizian Mnemonics


By any standard, Leibniz’s effort to create a “universal calculus” should be considered one of the most ambitious intellectual agendas ever conceived. Building on his previous successes in developing the infinitesimal calculus, Leibniz aimed to extend the notion of a symbolic calculus to all domains of human thought, from law, to medicine, to biology, to theology. The ultimate vision was a pictorial language which could be learned by anyone in a matter of weeks and which would transparently represent the factual content of all human knowledge. This would be the starting point for developing a logical means for manipulating the associated symbolic representation, thus giving rise to the ability to model nature and society, to derive new logic truths, and to eliminate logical contradictions from the foundations of Christian thought.

Astonishingly, many elements of this agenda are quite familiar when examined from the perspective of modern computer science. The starting point for this agenda would be an encyclopedia of structured knowledge, not unlike our own contemporary efforts related to the Semantic Web, Web 2.0, or LinkedData. Rather than consisting of prose descriptions, this encyclopedia would consist of taxonomies of basic concepts extending across all subjects.

Leibniz then wanted to create a symbolic representation of each of the fundamental concepts in this repository of structured information. It is the choice of the symbolic representation that is particularly striking. Unlike the usual mathematical symbols that comprise the differential calculus, Leibniz’s effort would rely on mnemonic images which were useful for memorizing facts.

Whereas modern thinkers usually imagine memorization to be a task accomplished through pure repetition, 16th and 17th-century Europe saw fundamental innovation in the theory and practice of memory. During this period, practitioners of the memory arts relied on a diverse array of visualization techniques that allowed them to recall massive amounts of information with extraordinary precision. These memory techniques were part of a broader intellectual culture which viewed memorization as a foundational methodology for structuring knowledge.

The basic elements of this methodology were mnemonic techniques. Not the simple catch phrases that we typically associate with mnemonics, but rather, elaborate visualized scenes or images that represented what was to be remembered. It is these same memory techniques that are used in modern memory competitions and which allow competitors to perform such superhuman feats as memorizing the order of a deck of cards in under 25 seconds, or thousands of random numbers in an hour. The basic principle behind these techniques is the same, namely, that a striking and inventive visual image can dramatically aid the memory.

Leibniz and many of his contemporaries had a much more ambitious vision for mnemonics than our modern day competitive memorizers. They believed that the process of memorization went hand in hand with structuring knowledge, and furthermore, that there were better and worse mnemonics and that the different types of pictorial representations could have different philosophical and scientific implications.

For instance, if the purpose was merely to memorize, one might create the most lewd and absurd possible images in order to remember some list of facts. Indeed, this was recommended by enterprising memory theorists of the day trying to make money by selling pamphlets on how to improve one’s memory. Joshua Foer’s memoir Moonwalking with Einstein is an engaging and insightful first-person account of the “competitive memory circuit,” where techniques such as this one are the bread and butter of how elite competitors are able to perform feats of memory that boggle the mind.

But whereas in the modern world, mnemonic techniques have been relegated to learning vocabulary words and the competitive memory circuit, elite intellectuals several centuries ago had a much more ambitious vision of the ultimate implications of this methodology. In particular, Leibniz hoped that through a rigorous process of notation engineering one might be able to preserve the memory-aiding properties of mnemonics while eliminating the inevitable conceptual interference that arises in creating absurdly comical, lewd, or provocative mnemonics. By drawing inspiration from Chinese characters and Egyptian hieroglyphics, he hoped to create a language that could be learned by anyone in a short period of time and which would transparently – through the pictorial dimension – represent the factual content of a curated encyclopedia. Furthermore, by building upon his successes in developing the infinitesimal calculus, Leibniz hoped that a logical structure would emerge which would allow novel insights to be derived by manipulating the associated symbolic calculus.

Leibniz’s motivations extended far beyond the realm of the natural sciences. Using mnemonics as the core alphabet to engineer a symbolic system with complete notational transparency would mean that all people would be able to learn this language, regardless of their level of education or cultural background. It would be a truly universal language, one that would unite the world, end religious conflict, and bring about widespread peace and prosperity. It was a beautiful and humane vision, although it goes without saying that it did not materialize.

Geach and Relative Identity


The Theory of Relative Identity is a logical innovation due to Peter Thomas Geach (P.T. Geach, Logic Matters), motivated by the same sort of mathematical examples as Frege’s definition by abstraction. Like Frege, Geach seeks to give a logical sense to mathematical talk “up to” a given equivalence E by replacing E with identity, but unlike Frege he purports, in doing so, to avoid the introduction of new abstract objects (which in his view causes unnecessary ontological inflation). The price of this ontological parsimony is Geach’s repudiation of Frege’s principle of a unique and absolute identity for the objects in the domain over which quantified variables range. According to Geach, things can be the same in one way while differing in others. For example, two printed letters aa are the same as a type but different as tokens. In Geach’s view this distinction does not commit us to a-tokens and a-types as entities but presents two different ways of describing the same reality. The unspecified (or “absolute”, in Geach’s terminology) notion of identity so important for Frege is, in Geach’s view, incoherent.

Geach’s proposal appears to account better for the way the notion of identity is employed in mathematics since it does not invoke “directions” or other mathematically redundant concepts. It captures particularly well the way the notion of identity is understood in Category theory. According to Baez & Dolan

In a category, two objects can be “the same in a way” while still being different.

So in Category theory the notion of identity is relative in exactly Geach’s sense. But from the logical point of view the notion of relative identity remains highly controversial. Let x, y be identical in one way but not in another, or in symbols: Id(x,y) & ¬Id′(x,y). The intended interpretation assumes that x in the left part of the formula and x in the right part have the same referent, where this last “same” apparently expresses absolute, not relative, identity. So talk of relative identity arguably smuggles in the usual absolute notion of identity anyway. If so, there seems good reason to take a standard line and reserve the term “identity” for absolute identity.

We see that Plato, Frege and Geach propose three different views of identity in mathematics. Plato notes that the sense of “the same” as applied to mathematical objects and to the ideas is different: properly speaking, sameness (identity) applies only to ideas, while in mathematics sameness means equality or some other equivalence relation. Although Plato certainly recognizes essential links between mathematical objects and Ideas (recall the “ideal numbers”), he keeps the two domains apart. Unlike Plato, Frege supposes that identity is a purely logical and domain-independent notion, which mathematicians must rely upon in order to talk about the sameness or difference of mathematical objects, or of objects of any other kind at all. Geach’s proposal has the opposite aim: to provide a logical justification for the way of thinking about the (relativized) notions of sameness and difference which he takes to be usual in mathematical contexts, and then to extend it to contexts outside mathematics. As Geach says:

Any equivalence relation … can be used to specify a criterion of relative identity. The procedure is common enough in mathematics: e.g. there is a certain equivalence relation between ordered pairs of integers by virtue of which we may say that x and y, though distinct ordered pairs, are one and the same rational number. The absolute identity theorist regards this procedure as unrigorous, but on a relative identity view it is fully rigorous.
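Geach’s own example can be sketched directly: ordered pairs of integers that are distinct as pairs yet “one and the same rational number” under the cross-multiplication equivalence (p, q) ~ (r, s) iff p·s = r·q. The class and method names below are invented for the illustration; `same_pair` plays the role of one identity predicate (Id) and `same_rational` the other (Id′).

```python
class RatPair:
    """An ordered pair of integers, comparable in two different ways."""
    def __init__(self, p, q):
        if q == 0:
            raise ValueError("denominator must be nonzero")
        self.p, self.q = p, q

    def same_pair(self, other):
        """Identity qua ordered pair: componentwise equality."""
        return (self.p, self.q) == (other.p, other.q)

    def same_rational(self, other):
        """Identity qua rational number: p/q = r/s iff p*s == r*q."""
        return self.p * other.q == other.p * self.q

x, y = RatPair(1, 2), RatPair(2, 4)
print(x.same_pair(y))      # False: different as ordered pairs
print(x.same_rational(y))  # True: the same rational number
```

The two predicates give opposite verdicts on the same x and y, which is the relative-identity situation Id(x,y) & ¬Id′(x,y) in miniature; the absolutist would reply that each predicate is just ordinary identity applied to a different object (the pair versus the equivalence class).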