Arbitrage, or Tensors thereof…


What is an arbitrage? Loosely, it means "getting something from nothing", a free lunch. A stricter definition states that an arbitrage is an operational opportunity to make a risk-free profit at a rate of return higher than the risk-free interest rate accrued on a deposit.

Arbitrage appears in the theory when we consider the curvature of the connection. The rate of excess return for an elementary arbitrage operation (the difference between the rate of return of the operation and the risk-free interest rate) is an element of the curvature tensor calculated from the connection. This can be understood by recalling that the elements of the curvature tensor measure the difference between the results of two infinitesimal parallel transports performed in different orders. In financial terms, the curvature tensor elements measure the difference in gains accrued from two financial operations with the same initial and final points, or, in other words, the gain from an arbitrage operation.
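As a toy numerical illustration (not from the source; the exchange rates below are made-up numbers): take the closed loop of conversions USD → EUR → GBP → USD. Converting a unit of money around the loop and comparing the result with simply holding it is a comparison of two operations with the same initial and final points, and the deviation of the round-trip factor from 1 is the excess return of the elementary arbitrage, i.e. the curvature element associated with that loop.

```python
# Minimal sketch (illustrative, not from the source): parallel transport of one
# unit of money around a closed loop of exchange rates. The deviation of the
# round-trip factor from 1 is the excess return of the elementary arbitrage.

# Hypothetical exchange rates (made-up numbers, one time step).
usd_to_eur = 0.92      # 1 USD -> 0.92 EUR
eur_to_gbp = 0.86      # 1 EUR -> 0.86 GBP
gbp_to_usd = 1.27      # 1 GBP -> 1.27 USD

# Transport 1 USD around the closed loop USD -> EUR -> GBP -> USD.
round_trip = usd_to_eur * eur_to_gbp * gbp_to_usd

# "Curvature" of this loop: zero round-trip excess means no arbitrage.
excess_return = round_trip - 1.0
print(f"round-trip factor = {round_trip:.4f}, arbitrage excess return = {excess_return:+.4%}")
```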

In a certain sense, the rate of excess return for an elementary arbitrage operation is an analogue of the electromagnetic field. In the absence of any uncertainty (that is, in the absence of random walks of prices, exchange rates and interest rates), the only state realised is the state of zero arbitrage. However, once uncertainty enters the game, prices and rates move and virtual arbitrage opportunities appear. We can therefore say that uncertainty plays the same role in the developing theory as quantization does in quantum gauge theory.

What, then, of the “matter” fields, which interact through the connection? The “matter” fields are money-flow fields, which have to be gauged by the connection. Dilatations of money units (which do not change real wealth) play the role of gauge transformations; the effect of a dilatation is eliminated by properly tuning the connection (interest rates, exchange rates, prices and so on), exactly as the Fisher formula does for the real interest rate in the case of inflation. The invariance of real wealth under local dilatations of money units (security splits and the like) is the gauge symmetry of the theory.

A theory may contain several types of “matter” fields, which may differ, for example, in the sign of the connection term, as positive and negative charges do in electrodynamics. On the financial stage this corresponds to different preferences of investors. An investor’s strategy is not always optimal: partly because of the incomplete information at hand and the choice procedure, and partly because of the investors’ (or managers’) internal objectives. (Physics of Finance)

 

 


Belief Networks “Acyclicity”. Thought of the Day 69.0

Belief networks are used to model uncertainty in a domain. The term “belief networks” encompasses a whole range of different but related techniques which deal with reasoning under uncertainty. Both quantitative (mainly using Bayesian probabilistic methods) and qualitative techniques are used. Influence diagrams are an extension to belief networks; they are used when working with decision making. Belief networks are used to develop knowledge-based applications in domains which are characterised by inherent uncertainty. Increasingly, belief network techniques are being employed to deliver advanced knowledge-based systems that solve real-world problems. Belief networks are particularly useful for diagnostic applications and have been used in many deployed systems; the free-text help facility in the Microsoft Office product, for example, employs Bayesian belief network technology. Within a belief network the belief of each node (the node’s conditional probability) is calculated on the basis of observed evidence. Various methods have been developed for evaluating node beliefs and for performing probabilistic inference. Influence diagrams provide facilities for structuring the goals of the diagnosis and for ascertaining the value (the influence) that given information will have when determining a diagnosis. In influence diagrams there are three types of node: chance nodes, which correspond to the nodes in Bayesian belief networks; utility nodes, which represent the utilities of decisions; and decision nodes, which represent decisions which can be taken to influence the state of the world. Influence diagrams are useful in real-world applications where there is often a cost, both in terms of time and money, in obtaining information.

The basic idea in belief networks is that the problem domain is modelled as a set of nodes interconnected with arcs to form a directed acyclic graph. Each node represents a random variable, or uncertain quantity, which can take two or more possible values. The arcs signify the existence of direct influences between the linked variables, and the strength of each influence is quantified by a forward conditional probability.

The Belief Network, which is also called the Bayesian Network, is a directed acyclic graph for probabilistic reasoning. It defines the conditional dependencies of the model by associating each node X with a conditional probability P(X|Pa(X)), where Pa(X) denotes the parents of X. Here are two of its conditional independence properties:

1. Each node is conditionally independent of its non-descendants given its parents.

2. Each node is conditionally independent of all other nodes given its Markov blanket, which consists of its parents, children, and children’s parents.
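For example, in a small network A → C ← B, the Markov blanket of A consists of its child C and C’s other parent B; conditioning on both renders A independent of any remaining nodes.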

Inference in a Belief Network computes the posterior probability distribution

P(H|V) = P(H,V) / ∑H P(H,V)

where H is the set of query variables and V is the set of evidence variables. Approximate inference involves sampling to compute posteriors. The Sigmoid Belief Network is a type of Belief Network such that

P(Xi = 1|Pa(Xi)) = σ( ∑Xj ∈ Pa(Xi) WjiXj + bi)

where Wji is the weight assigned to the edge from Xj to Xi, and σ is the sigmoid function.
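A minimal sketch of these two definitions (my own illustration with made-up weights and biases, not taken from the source): a three-node sigmoid belief network in which root nodes X1 and X2 feed into X3, with the posterior of a query variable computed by enumerating P(H, V) and normalising, as in the formula above.

```python
# Sketch (illustrative parameters): a tiny sigmoid belief network X1, X2 -> X3,
# showing (i) the sigmoid conditional P(Xi = 1 | Pa(Xi)) and (ii) exact posterior
# inference P(H | V) by enumerating P(H, V) and normalising.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Biases b_i and edge weights W_ji (made-up numbers for illustration).
b = {1: -0.5, 2: 0.2, 3: -1.0}
W = {(1, 3): 2.0, (2, 3): -1.5}
parents = {1: [], 2: [], 3: [1, 2]}

def p_node(i, value, assignment):
    """P(X_i = value | parents of X_i), sigmoid parameterisation."""
    z = b[i] + sum(W[(j, i)] * assignment[j] for j in parents[i])
    p1 = sigmoid(z)
    return p1 if value == 1 else 1.0 - p1

def joint(assignment):
    """P(X1, X2, X3) factorised over the DAG."""
    return math.prod(p_node(i, assignment[i], assignment) for i in (1, 2, 3))

# Posterior of the query variable H = X1 given evidence V = {X3 = 1}:
# P(X1 | X3 = 1) = P(X1, X3 = 1) / sum_{X1} P(X1, X3 = 1), marginalising X2.
evidence = {3: 1}
unnormalised = {}
for x1 in (0, 1):
    unnormalised[x1] = sum(joint({1: x1, 2: x2, **evidence}) for x2 in (0, 1))
z = sum(unnormalised.values())
posterior = {x1: p / z for x1, p in unnormalised.items()}
print("P(X1 | X3 = 1) =", posterior)
```

For larger networks this enumeration becomes infeasible, which is where the sampling-based approximate inference mentioned above takes over.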


Accelerated Capital as an Anathema to the Principles of Communicative Action. A Note Quote on the Reciprocity of Capital and Ethicality of Financial Economics


Markowitz portfolio theory explicitly observes that portfolio managers are not (expected) utility maximisers, as they diversify, and offers the hypothesis that a desire for reward is tempered by a fear of uncertainty. The model concludes that all investors should hold the same portfolio; their individual risk-reward objectives are satisfied by the weighting of this ‘index portfolio’ against riskless cash in the bank, a point on the capital market line. The slope of the Capital Market Line is the market price of risk, an important parameter in arbitrage arguments.

Merton had initially attempted to provide an alternative to Markowitz based on utility maximisation employing stochastic calculus. He was only able to resolve the problem by employing the hedging arguments of Black and Scholes, and in doing so built a model based on the absence of arbitrage, free of turpe-lucrum. The prescriptive statement “it should not be possible to make sure profits” is explicit in the Efficient Markets Hypothesis and in employing an Arrow security in the context of the Law of One Price. Based on these observations, we conjecture that the whole paradigm of financial economics is built on the principle of balanced reciprocity. In order to explore this conjecture we shall examine the relationship between commerce and themes in Pragmatic philosophy. Specifically, we highlight Robert Brandom’s (Making It Explicit: Reasoning, Representing, and Discursive Commitment) position that there is a pragmatist conception of norms – a notion of primitive correctnesses of performance implicit in practice that precede and are presupposed by their explicit formulation in rules and principles.

The ‘primitive correctnesses’ of commercial practices were recognised by Aristotle when he investigated the nature of Justice in the context of commerce, and then by Olivi when he looked favourably on merchants. They are exhibited in the doux-commerce thesis: compare Fourcade and Healey’s contemporary description of the thesis, that commerce teaches ethics mainly through its communicative dimension, that is, by promoting conversations among equals and exchange between strangers, with Putnam’s description of Habermas’ communicative action as based on “the norm of sincerity, the norm of truth-telling, and the norm of asserting only what is rationally warranted … [and] is contrasted with manipulation” (Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays).

There are practices (that should be) implicit in commerce that make it an exemplar of communicative action. A further expression of markets as centres of communication is manifested in the Asian description of a market, which brings to mind Donald Davidson’s (Subjective, Intersubjective, Objective) argument that knowledge is not the product of a bipartite conversation but of a tripartite relationship between two speakers and their shared environment. Replacing the negotiation between market agents with an algorithm that delivers a theoretical price replaces ‘knowledge’, generated through communication, with dogma. The problem with the performativity that Donald MacKenzie (An Engine, Not a Camera: How Financial Models Shape Markets) is concerned with is one of monism. In employing pricing algorithms, the markets cannot perform to something that comes close to ‘true belief’, which can only be identified through communication between sapient humans. This is an almost trivial observation to (successful) market participants, but difficult to appreciate by spectators who seek to attain ‘objective’ knowledge of markets from a distance. The relevance of this position to financial crises is that ‘true belief’ is about establishing coherence through myriad triangulations centred on an asset, rather than relying on a theoretical model.

Shifting gears now: unless the martingale measure is a by-product of a hedging approach, the price given by such a martingale measure is not related to the cost of a hedging strategy, and therefore the meaning of such ‘prices’ is not clear. If the hedging argument cannot be employed, as in the markets studied by Cont and Tankov (Financial Modelling with Jump Processes), there is no conceptual framework supporting the prices obtained from the Fundamental Theorem of Asset Pricing. This lack of meaning can be interpreted as a consequence of the strict fact/value dichotomy in contemporary mathematics that came with the eclipse of Poincaré’s Intuitionism by Hilbert’s Formalism and Bourbaki’s Rationalism. The practical problem of supporting the social norms of market exchange has been replaced by a theoretical problem of developing formal models of markets. These models then legitimate the actions of agents in the market without having to make reference to explicitly normative values.

The Efficient Market Hypothesis is based on the axiom that the market price is determined by the balance between supply and demand, so that an increase in trading facilitates the convergence to equilibrium. If this axiom is replaced by the axiom of reciprocity, the justification for speculative activity in support of efficient markets disappears. In fact, the axiom of reciprocity would de-legitimise ‘true’ arbitrage opportunities as being unfair. This would not necessarily make the activities of actual market arbitrageurs illicit, since there are rarely strategies without the risk of a loss; it would, however, place more emphasis on the risks of speculation and inhibit the hubris associated with the prelude to the recent Crisis. These points raise the question of the legitimacy of speculation in the markets. In an attempt to understand this issue, Gabrielle and Reuven Brenner identify three types of market participant. ‘Investors’ are preoccupied with future scarcity and so defer income. Because uncertainty exposes the investor to the risk of loss, investors wish to minimise uncertainty at the cost of potential profits; this is the basis of classical investment theory. ‘Gamblers’ bet on an outcome taking odds that have been agreed on by society, such as with a sporting bet or in a casino; this relates to de Moivre’s and Montmort’s ‘taming of chance’. ‘Speculators’ bet on a mis-calculation of the odds quoted by society; the reason speculators are regarded as socially questionable is that they hold opinions explicitly at odds with the consensus: they are practitioners who rebel against a theoretical ‘Truth’. This is captured in Arjun Appadurai’s argument that the leading agents in modern finance “believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control . . . [they] are not those who wish to ‘tame chance’ but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]”.

In the context of Pragmatism, financial speculators embody pluralism, a concept essential to Pragmatic thinking and an antidote to the problem of radical uncertainty. Appadurai was motivated to study finance by Marcel Mauss’ essay Le Don (The Gift), which explores the moral force behind reciprocity in primitive and archaic societies, and he goes on to say that the contemporary financial speculator is “betting on the obligation of return”, and that this is the fundamental axiom of contemporary finance. David Graeber (Debt: The First 5,000 Years) also recognises the fundamental position reciprocity has in finance, but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores uncertainty in his analysis, which ends with the conclusion that “we don’t ‘all’ have to pay our debts”. In advocating that reciprocity need not be honoured, Graeber is not just challenging contemporary capitalism but also the foundations of the civitas, based on equality and reciprocity. The origins of Graeber’s argument are in the first half of the nineteenth century. In 1836 John Stuart Mill defined political economy as being concerned with “[man] solely as a being who desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end”.

In Principles of Political Economy with Some of Their Applications to Social Philosophy, Mill defended Thomas Malthus’ An Essay on the Principle of Population, which focused on scarcity. Mill was writing at a time when Europe was struck by the Cholera pandemic of 1829–1851 and the famines of 1845–1851, and while Lord Tennyson was describing nature as “red in tooth and claw”. At this time, society’s fear of uncertainty seems to have been replaced by a fear of scarcity, and these standards of objectivity dominated economic thought through the twentieth century. Almost a hundred years after Mill, Lionel Robbins defined economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses”. Dichotomies emerge in the aftermath of the Cartesian revolution that aims to remove doubt from philosophy. Theory and practice, subject and object, facts and values, means and ends are all separated. In this environment ex cathedra norms, in particular utility (profit) maximisation, encroach on commercial practice.

In order to set boundaries on commercial behaviour motivated by profit maximisation, particularly when market uncertainty returned after the Nixon shock of 1971, society imposes regulations on practice. As a consequence, two competing ethics, a functional Consequentialist ethic guiding market practices and a regulatory Deontological ethic attempting to stabilise the system, vie for supremacy. It is in this debilitating competition between two essentially theoretical ethical frameworks that we offer an explanation for the Financial Crisis of 2007-2009: profit maximisation, not speculation, is destabilising in the presence of radical uncertainty, and regulation cannot keep up with motivated profit maximisers who can justify their actions through abstract mathematical models that bear little resemblance to actual markets. An implication of reorienting financial economics to focus on markets as centres of ‘communicative action’ is that markets could become self-regulating, in the same way that the legal or medical spheres are self-regulated through professions. This is not a ‘libertarian’ argument based on freeing the Consequentialist ethic from a Deontological brake. Rather, it argues that being a market participant entails norms that restrict the agent, such as sincerity and truth-telling, which support the creation of knowledge of asset prices within a broader objective of social cohesion. This immediately calls into question the legitimacy of algorithmic/high-frequency trading, which seems anathema to the principles of communicative action.

Complexity Wrapped Uncertainty in the Bazaar


One could conceive of a financial market as a set of N agents, each of them taking a binary decision at every time step. This is an extremely crude representation, but it captures the essential feature that decisions can be coded by binary symbols (buy = 0, sell = 1, for example). Despite the extreme simplification, the above setup allows a “stylized” definition of price.

Let Nt0, Nt1 be the number of agents taking the decision 0 and 1, respectively, at time t. Obviously, N = Nt0 + Nt1 for every t. Then, with the above binary coding, the price can be defined as:

pt = f(Nt0/Nt1)

where f is an increasing, concave function which also satisfies:

a) f(0)=0

b) limx→∞ f(x) = ∞

c) limx→∞ f'(x) = 0

The above definition agrees perfectly with the common belief about how supply and demand work. If Nt0 is small and Nt1 large, then there are few agents willing to buy and many agents willing to sell, hence the price should be low. If, on the contrary, Nt0 is large and Nt1 is small, then there are many agents willing to buy and just a few willing to sell, hence the price should be high. Notice that the winning choice is related to the minority choice. We exploit the above analogy to construct a binary time series associated with each real time series of financial markets. Let {pt}t∈N be the original real time series. Then we construct a binary time series {at}t∈N by the rule:

at = 1 if pt > pt−1

at = 0 if pt < pt−1
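A minimal sketch of this construction (illustrative assumptions, not prescribed by the source: random agent decisions, arbitrary N and T, and f(x) = log(1 + x) chosen as one concrete function satisfying (a)–(c)):

```python
# Sketch: N agents make random binary decisions (buy = 0, sell = 1), the price is
# p_t = f(N0_t / N1_t) with f(x) = log(1 + x) as an example of an increasing,
# concave function with f(0) = 0, f -> infinity, f' -> 0, and the binary series
# a_t records whether the price rose or fell.
import math
import random

random.seed(0)
N = 1000          # number of agents (illustrative)
T = 200           # number of time steps (illustrative)

def f(x):
    return math.log(1.0 + x)

prices = []
for t in range(T):
    n0 = sum(1 for _ in range(N) if random.random() < 0.5)  # agents deciding "buy"
    n1 = max(N - n0, 1)                                      # agents deciding "sell" (guard against 0)
    prices.append(f(n0 / n1))

# Binary series a_t: 1 if the price rose, 0 if it fell (exact ties are ignored here).
a = [1 if prices[t] > prices[t - 1] else 0 for t in range(1, T)]
print(a[:20])
```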

Physical complexity is defined as the number of binary digits in a string η that are explainable (or meaningful) with respect to the environment. In our problem, the only physical record one gets is the binary string built up from the original real time series, and we consider it as the environment ε. We study the physical complexity of substrings of ε. The comprehension of their complex features has high practical importance. The amount of data agents take into account in order to elaborate their choice is finite and of short range. For every time step t, the binary digits at−l, at−l+1, …, at−1 carry some information about the behaviour of agents. Hence, the complexity of these finite strings is a measure of how complex the information faced by agents is. The Kolmogorov–Chaitin complexity is defined as the length of the shortest program π producing the sequence η when run on a universal Turing machine T:

K(η) = min {|π|: η = T(π)}

where |π| represents the length of π in bits, T(π) the result of running π on Turing machine T, and K(η) the Kolmogorov-Chaitin complexity of the sequence η. In the framework of this theory, a string is said to be regular if K(η) < |η|. It means that η can be described by a program π whose length is smaller than the length of η. The interpretation of a string should be done in the framework of an environment. Hence, let us imagine a Turing machine that takes the string ε as input. We can define the conditional complexity K(η / ε) as the length of the smallest program that computes η on a Turing machine having ε as input:

K(η / ε) = min {|π|: η = CT(π, ε)}

We want to stress that K(η / ε) represents those bits in η that are random with respect to ε. Finally, the physical complexity can be defined as the number of bits that are meaningful in η with respect to ε :

K(η : ε) = |η| – K(η / ε)

Here |η| also represents the unconditional complexity of the string η, i.e., the value of the complexity if the input were ε = ∅. Of course, the measure K(η : ε) as defined in the above equation has few practical applications, mainly because it is impossible to know the way in which information about ε is encoded in η. However, if a statistical ensemble of strings is available to us, then the determination of complexity becomes an exercise in information theory. It can be proved that the average value C(|η|) of the physical complexity K(η : ε), taken over an ensemble Σ of strings of length |η|, can be approximated by:

C(|η|) = 〈K(η : ε)〉 ≅ |η| – 〈K(η / ε)〉, where

〈K(η / ε)〉 = −∑η∈Σ p(η / ε) log2 p(η / ε)

and the sum is taken over all the strings η in the ensemble Σ. In a population of N strings in environment ε, the quantity n(η)/N, where n(η) denotes the number of strings equal to η in Σ, approximates p(η / ε) as N → ∞.

Let ε = {at}t∈N and let l be a positive integer, l ≥ 2. Let Σl be the ensemble of sequences of length l built up by a moving window of length l, i.e., if η ∈ Σl then η = ai ai+1 … ai+l−1 for some value of i. The strings ε are selected from periods before crashes and, in contrast, from periods of low uncertainty in the market…..
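Under the definitions above, the ensemble average can be estimated directly from data. A minimal sketch (my own illustration, consistent with the formulas above): build Σl with a moving window, approximate 〈K(η / ε)〉 by the Shannon entropy of the empirical window distribution, and take C(l) ≅ l minus that entropy.

```python
# Sketch: estimate the physical complexity C(l) ~= l - <K(eta/eps)> of a binary
# series by approximating <K(eta/eps)> with the Shannon entropy of the ensemble
# Sigma_l of length-l substrings obtained with a moving window. In practice `a`
# would be the binary series built from the price series as above; a short toy
# series is used here so the example is self-contained.
import math
from collections import Counter

def physical_complexity(a, l):
    # Ensemble Sigma_l: all length-l substrings taken with a moving window.
    windows = ["".join(map(str, a[i:i + l])) for i in range(len(a) - l + 1)]
    counts = Counter(windows)
    total = len(windows)
    # Entropy (in bits) of the empirical distribution p(eta | eps) = n(eta) / total.
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return l - entropy   # number of "meaningful" bits in a length-l window

a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0]   # toy binary series
for l in (2, 3, 4):
    print(l, physical_complexity(a, l))
```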

Not Just Any Lair of Filth….Investment Environment = Ratio of Ordinary Profits to Total Capital – Long-Term Interest Rate


In the stock market, price changes are subject to the law of demand and supply: the price rises when there is excess demand, and the price falls when there is excess supply. It seems natural to assume that the price rises if the number of buyers exceeds the number of sellers, because there may be excess demand, and that the price falls if the number of sellers exceeds the number of buyers, because there may be excess supply. Thus a trader, who expects a certain exchange profit through trading, will predict every other trader’s behaviour, and will choose the same behaviour as the other traders as far as possible. The decision-making of traders will also be influenced by changes in the firm’s fundamental value, which can be derived from analysis of the present conditions and future prospects of the firm, and by the return on an alternative asset (e.g. bonds). For the sake of an empirical analysis, let us use the ratio of ordinary profits to total capital, which is a typical measure of investment, as a proxy for changes in the fundamental value, and the long-term interest rate as a proxy for changes in the return on the alternative asset.

An investment environment is defined as

investment environment = ratio of ordinary profits to total capital – long-term interest rate
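For example, if the ratio of ordinary profits to total capital is 5% and the long-term interest rate is 3%, the investment environment is 0.05 − 0.03 = 0.02 (the figures are purely illustrative).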

When the investment environment increases (decreases), a trader may think that now is the time for him to buy (sell) the stock. Formally, let us assume that the investment attitude of trader i is determined by minimisation of the following disagreement function ei(x),

ei(x) = −1/2 ∑j=1N aij xi xj − bi s xi —– (1)

where aij denotes the strength of trader j’s influence on trader i, bi denotes the strength of the reaction of trader i to a change in the investment environment s, which may be interpreted as an external field, and x denotes the vector of investment attitudes x = (x1, x2, …, xN). The optimisation problem that should be solved for every trader, so that all of them minimise their disagreement functions ei(x) at the same time, is formalised by

min E(x) = −1/2 ∑i=1N ∑j=1N aij xi xj − ∑i=1N bi s xi —– (2)

Now let us assume that each trader’s decision making is subject to a probabilistic rule. The summation over all possible configurations of the agents’ investment attitude x = (x1, …, xN) is computationally explosive in the number of traders N. Therefore, under the circumstance that a large number of traders participates in trading, a probabilistic setting may be one of the best means to analyse the collective behaviour of the many interacting traders. Let us introduce a random variable xk = (xk1, xk2, …, xkN), k = 1, 2, …, K. The state of the agents’ investment attitude xk occurs with probability P(xk) = Prob(xk), with the requirements 0 < P(xk) < 1 and ∑k=1K P(xk) = 1. The amount of uncertainty before the occurrence of the state xk with probability P(xk) is defined as the logarithmic function I(xk) = −log P(xk). Under these assumptions the above optimisation problem is formalised as

min 〈E(x)〉 = ∑k=1K P(xk) E(xk) —– (3)

subject to H = − ∑k=1K P(xk) log P(xk), ∑k=1K P(xk) = 1

where E(xk) = ∑i=1N ei(xk)

xk is a state, and H is the information entropy. P(xk) is the relative frequency of occurrence of the state xk. The well-known solution of the above optimisation problem is

P(xk) = (1/Z) e^(−μE(xk)), Z = ∑k=1K e^(−μE(xk)), k = 1, 2, …, K —– (4)

where the parameter μ may be interpreted as a market temperature describing the degree of randomness in the behaviour of traders. The probability distribution P(xk) is called the Boltzmann distribution, where P(xk) is the probability that the traders’ investment attitude is in the state xk with value E(xk), and Z is the partition function. We call the optimising behaviour of the traders with interaction among the other traders a relative expectation formation.
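A minimal sketch of equations (2) and (4) for a handful of traders (illustrative assumptions, none of which are specified in the source: investment attitudes coded as xi ∈ {−1, +1} with +1 read as buying, uniform couplings aij, and made-up values of bi, s and μ):

```python
# Sketch: for a small number of traders, enumerate every configuration of
# investment attitudes x_i in {-1, +1}, evaluate E(x) as in equation (2), and
# form the Boltzmann distribution of equation (4). All parameter values below
# are made-up numbers for illustration.
import itertools
import math

N = 4                                  # number of traders (kept tiny so enumeration is feasible)
a = [[0.0 if i == j else 1.0 for j in range(N)] for i in range(N)]  # uniform couplings a_ij
b = [0.5] * N                          # reaction strengths b_i to the investment environment
s = 0.2                                # investment environment (profits ratio minus interest rate)
mu = 1.0                               # "market temperature" parameter

def energy(x):
    # E(x) = -1/2 sum_ij a_ij x_i x_j - sum_i b_i s x_i, as in equation (2)
    pair = -0.5 * sum(a[i][j] * x[i] * x[j] for i in range(N) for j in range(N))
    field = -sum(b[i] * s * x[i] for i in range(N))
    return pair + field

states = list(itertools.product([-1, +1], repeat=N))
weights = [math.exp(-mu * energy(x)) for x in states]
Z = sum(weights)                       # partition function
P = [w / Z for w in weights]           # Boltzmann distribution over the 2^N states

# With these parameters the most probable configuration is the one in which all
# traders agree, tilted towards the attitude favoured by the sign of s.
best = max(range(len(states)), key=lambda k: P[k])
print(states[best], round(P[best], 3))
```

For a realistic number of traders the 2^N enumeration is infeasible, which is exactly why the probabilistic, entropy-constrained treatment above is introduced.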