Bayesianism in Game Theory. Thought of the Day 24.0

Bayesianism in game theory can be characterised as the view that it is always possible to define probabilities for anything that is relevant for the players’ decision-making. In addition, it is usually taken to imply that the players use Bayes’ rule for updating their beliefs. If the probabilities are always to be definable, one also has to specify what the players’ beliefs are before the play is supposed to begin. The standard assumption is that such prior beliefs are the same for all players. This common prior assumption (CPA) means that the players have the same prior probabilities for all those aspects of the game for which the description of the game itself does not specify different probabilities. Common priors are usually justified with the so-called Harsanyi doctrine, according to which all differences in probabilities are to be attributed solely to differences in the experiences that the players have had. Different priors for different players would imply that there are some factors that affect the players’ beliefs even though they have not been explicitly modelled. The CPA is sometimes considered to be equivalent to the Harsanyi doctrine, but there seems to be a difference between them: the Harsanyi doctrine is best viewed as a metaphysical doctrine about the determination of beliefs, and it is hard to see why anybody would be willing to argue against it: if everything that might affect the determination of beliefs is included in the notion of ‘experience’, then experience alone does determine the beliefs. The Harsanyi doctrine has some affinity to convergence theorems in Bayesian statistics: if individuals are fed similar information indefinitely, their probabilities will ultimately be the same, irrespective of their original priors.
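
As a minimal illustration of the updating mechanism this view presupposes, consider a player revising her belief about an opponent’s type after observing a single action. The two types, the prior and the likelihoods below are invented for this sketch and are not taken from any particular game:

```python
# Minimal sketch of Bayes-rule belief updating about an opponent's type.
# The two types, the prior and the likelihoods are invented for illustration.

prior = {"aggressive": 0.5, "passive": 0.5}               # common prior over types
likelihood_raise = {"aggressive": 0.8, "passive": 0.3}    # P(observe "raise" | type)

# Posterior after observing the opponent raise, via Bayes' rule
evidence = sum(prior[t] * likelihood_raise[t] for t in prior)
posterior = {t: prior[t] * likelihood_raise[t] / evidence for t in prior}

print(posterior)   # {'aggressive': 0.727..., 'passive': 0.272...}
```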

The CPA, however, is a methodological injunction to include everything that may affect the players’ behaviour in the game: not just everything that motivates the players, but also everything that affects the players’ beliefs should be explicitly modelled by the game. If players had different priors, this would mean that the game structure was not completely specified, because there would be differences in players’ behaviour that are not explained by the model. In a dispute over the status of the CPA, Faruk Gul essentially argues that the CPA does not follow from the Harsanyi doctrine. He does this by distinguishing between two different interpretations of the common prior, the ‘prior view’ and the ‘infinite hierarchy view’. The former is a genuinely dynamic story in which it is assumed that there really is a prior stage in time. The latter refers to Mertens and Zamir’s construction in which prior beliefs can be consistently formulated. This framework, however, is static in the sense that the players do not have any information at a prior stage; indeed, the ‘priors’ in this framework do not even pin down a player’s priors for his own types. Thus, the existence of a common prior in the latter framework does not have anything to do with the view that differences in beliefs reflect differences in information only.

It is agreed by everyone that for most (real-world) problems there is no prior stage in which the players know each other’s beliefs, let alone one at which those beliefs are the same. The CPA, if understood as a modelling assumption, is clearly false. Robert Aumann, however, defends the CPA by arguing that whenever there are differences in beliefs, there must have been a prior stage in which the priors were the same, and from which the current beliefs can be derived by conditioning on the differentiating events. If players differ in their present beliefs, they must have received different information at some previous point in time, and they must have processed this information correctly. Based on this assumption, he further argues that players cannot ‘agree to disagree’: if a player knows that his opponents’ beliefs are different from his own, he should revise his beliefs to take the opponents’ information into account. The only case where the CPA would be violated, then, is when players have different beliefs and have common knowledge about each other’s different beliefs and about each other’s epistemic rationality. Aumann’s argument seems perfectly legitimate if it is taken as a metaphysical one, but we do not see how it could be used as a justification for using the CPA as a modelling assumption in this or that application of game theory, and Aumann does not argue that it should.

Sustainability of Debt

For economies with fractional-reserve-generated fiat money, balancing the budget is characterized by an exponential growth D(t) ≈ D0(1 + r)^t of any initial debt D0 subjected to interest rate r as a function of time t, due to compound interest; a fact known since antiquity. At the same time, besides default, this increasing debt can only be reduced by the following five, mostly linear, measures:

(i) more income or revenue I (in the case of sovereign debt: higher taxation or higher tax base);

(ii) less spending S;

(iii) increase of borrowing L;

(iv) acquisition of external resources, and

(v) inflation; that is, devaluation of money.

Whereas (i), (ii) and (iv) without inflation are essentially measures contributing linearly (or polynomially) to the acquisition or compensation of debt, inflation also grows exponentially with time t at some (supposedly constant) rate f ≥ 1; that is, the value of an initial debt D0, without interest (r = 0), in terms of the initial values, gets reduced to F(t) = D0/f^t. Conversely, the capacity of an economy to compensate debt will increase with compound inflation: for instance, the initial income or revenue I will, through adaptations, usually increase exponentially with time in an inflationary regime by I·f^t.

Because these are the only possibilities, we can consider such economies as closed systems (with respect to money flows), characterized by the (continuity) equation

I·f^t + S + L ≈ D0(1 + r)^t, or

L ≈ D0(1 + r)^t − I·f^t − S.
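
A minimal numerical sketch of this bookkeeping, simply evaluating the right-hand side for a few horizons; all parameter values are invented for illustration and not calibrated to any economy:

```python
# Evaluate the borrowing requirement L(t) ≈ D0*(1 + r)**t − I*f**t − S
# implied by the continuity equation. All parameter values are illustrative.

D0 = 100.0   # initial debt
r  = 0.05    # interest rate on the debt
I  = 90.0    # (initial) income/revenue put toward the debt
S  = 5.0     # spending term
f  = 1 + r   # inflation factor per period; f = 1 means no inflation

def borrowing(t, f):
    return D0 * (1 + r) ** t - I * f ** t - S

for t in (0, 10, 25, 50):
    # with f = 1 + r the income term keeps pace with compound interest;
    # with f = 1 the debt term dominates and L(t) grows exponentially
    print(t, round(borrowing(t, f), 1), round(borrowing(t, 1.0), 1))
```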

Let us concentrate on sovereign debt and briefly discuss the fiscal, social and political options. With regard to the five ways to compensate debt, the following assumptions will be made. First, in non-despotic forms of government (e.g., representative democracies and constitutional monarchies), increases of taxation, related to (i), as well as spending cuts, related to (ii), are very unpopular, and can thus be enforced only in very limited, that is, polynomial, forms.

Second, the acquisition of external resources, related to (iv), is often blocked for various obvious reasons, including military-strategic limitations and lack of opportunities. We shall therefore disregard the acquisition of external resources entirely and set this contribution to A = 0.

As a consequence, without inflation (i.e., for f = 1), the increase of debt

L ≈ D0(1 + r)^t − I − S

grows exponentially. This is only “felt” after leaving a quasi-linear region in which, due to a Taylor expansion around t = 0, D(t) = D0(1 + r)^t ≈ D0 + D0·r·t.
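
A quick check of where the quasi-linear approximation stops being a good description, again with purely illustrative values:

```python
# Compare compound debt D0*(1 + r)**t with its first-order Taylor
# approximation D0 + D0*r*t around t = 0. Values are illustrative only.

D0, r = 100.0, 0.05

for t in (1, 5, 10, 20, 40):
    exact  = D0 * (1 + r) ** t
    linear = D0 + D0 * r * t
    # relative error grows from ~0% at t = 1 to over 50% at t = 40
    print(t, round(exact, 1), round(linear, 1), f"{(exact - linear) / exact:.1%}")
```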

So, under the political and social assumptions made, compound debt without inflation is unsustainable. Furthermore, inflation, with all its inconvenient consequences and re-appropriation, seems inevitable for the continuous existence of economies based on fractional-reserve-generated fiat money, at least in the long run.

High Frequency Markets and Leverage

Leverage effect is a well-known stylized fact of financial data. It refers to the negative correlation between price returns and volatility increments: when the price of an asset is increasing, its volatility drops, while when it decreases, the volatility tends to become larger. The name “leverage” comes from the following interpretation of this phenomenon: when an asset price declines, the associated company automatically becomes more leveraged since the ratio of its debt to its equity value becomes larger. Hence the risk of the asset, namely its volatility, should increase. Another economic interpretation of the leverage effect, inverting the causality, is that a forecast increase in volatility must be compensated by a higher rate of return, which can only be obtained through a decrease in the asset value.
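
A hedged sketch of how this stylized fact can be eyeballed on data: compute the correlation between returns and the subsequent volatility increments and check its sign. The data-generating process below is a toy invented purely for the illustration; on real data one would plug in a price series and a volatility proxy instead.

```python
import numpy as np

# Toy check of the leverage effect: the correlation between returns and the
# subsequent volatility increment should come out negative. The dynamics
# below are invented for illustration; real data would replace them.

rng = np.random.default_rng(0)
n = 100_000
vol = np.empty(n)
ret = np.empty(n)
vol[0], ret[0] = 0.01, 0.0
for t in range(1, n):
    # volatility mean-reverts and reacts negatively to the previous return
    vol[t] = max(0.01 + 0.9 * (vol[t - 1] - 0.01) - 0.05 * ret[t - 1]
                 + 0.0005 * rng.standard_normal(), 1e-4)
    ret[t] = vol[t] * rng.standard_normal()

dvol = np.diff(vol)                       # volatility increments
print(np.corrcoef(ret[:-1], dvol)[0, 1])  # negative: leverage effect
```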

Statistical methods enabling the use of high frequency data to measure volatility have been developed. In financial engineering, it became clear in the late eighties that it is necessary to introduce leverage effect in derivatives pricing frameworks in order to accurately reproduce the behavior of the implied volatility surface. This led to the rise of the famous stochastic volatility models, in which the Brownian motion driving the volatility is (negatively) correlated with the one driving the price.

Traditional explanations for leverage effect are based on “macroscopic” arguments from financial economics. Could microscopic interactions between agents naturally lead to leverage effect at larger time scales? We would like to know whether part of the foundations for leverage effect could be microstructural. To do so, our idea is to consider a very simple agent-based model, encoding well-documented and understood behaviors of market participants at the microscopic scale. Then we aim at showing that in the long run, this model leads to a price dynamic exhibiting leverage effect. This would demonstrate that typical strategies of market participants at the high frequency level naturally induce leverage effect.

One could argue that since transactions take place at the finest frequencies and prices are revealed through order-book-type mechanisms, it is obvious that leverage effect must arise from high frequency properties. The point here is more specific: under certain market conditions, typical high frequency behaviors, probably having no connection with the concepts of financial economics, may give rise to some leverage effect at lower frequency scales. It is important to emphasize that this does not mean leverage effect is fully explained by high frequency features.

Another important stylized fact of financial data is the rough nature of the volatility process. Indeed, for a very wide range of assets, historical volatility time series exhibit behavior which is much rougher than that of a Brownian motion. More precisely, the dynamics of the log-volatility are typically very well modeled by a fractional Brownian motion with Hurst parameter around 0.1, that is, a process with Hölder regularity of order 0.1. Furthermore, using a fractional Brownian motion with small Hurst index also makes it possible to reproduce very accurately the features of the volatility surface.
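
For intuition, here is a sketch that simulates such a rough path: a fractional Brownian motion with H = 0.1, built from the exact covariance of fractional Gaussian noise. The Cholesky construction is chosen for brevity (it is O(n³)); faster exact methods such as Davies-Harte would be the standard choice for long paths.

```python
import numpy as np

# Illustration only: simulate a fractional Brownian motion with Hurst
# index H = 0.1 (a proxy for rough log-volatility) via the exact
# covariance of fractional Gaussian noise and a Cholesky factorisation.

def fbm(n, H, rng):
    k = np.arange(n)
    # autocovariance of fractional Gaussian noise with unit time step
    gamma = 0.5 * (np.abs(k - 1) ** (2 * H) - 2 * k ** (2 * H) + (k + 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    increments = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.cumsum(increments)

rng = np.random.default_rng(1)
log_vol = fbm(500, H=0.1, rng=rng)   # much rougher than a Brownian path (H = 0.5)
print(log_vol[:5])
```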

The fact that for basically all reasonably liquid assets, volatility is rough, with the same order of magnitude for the roughness parameter, is of course very intriguing. The tick-by-tick price model is based on a bi-dimensional Hawkes process, that is, a bivariate point process (N_t^+, N_t^−)_{t≥0} taking values in (R^+)^2 and with intensity (λ_t^+, λ_t^−) of the form

λ_t^+ = μ^+ + ∫_0^t φ_1(t − s) dN_s^+ + ∫_0^t φ_2(t − s) dN_s^−,
λ_t^− = μ^− + ∫_0^t φ_3(t − s) dN_s^+ + ∫_0^t φ_4(t − s) dN_s^−.

Here μ^+ and μ^− are positive constants and the functions (φ_i)_{i=1,…,4} are non-negative, with the associated matrix called the kernel matrix. Hawkes processes are said to be self-exciting, in the sense that the instantaneous jump probability depends on the locations of the past events. Hawkes processes are nowadays in standard use in finance, not only in the field of microstructure but also in risk management or contagion modeling. The Hawkes process generates behavior that mimics financial data in a pretty impressive way, and back-fitting yields correspondingly good results. Yet some key problems remain the same whether you use a simple Brownian motion model or this marvelous technical apparatus.

In short, back-fitting only goes so far.

  • The essentially random nature of living systems can lead to entirely different outcomes if said randomness had occurred at some other point in time or magnitude. Due to randomness, entirely different groups would likely succeed and fail every time the “clock” was turned back to time zero, and the system allowed to unfold all over again. Goldman Sachs would not be the “vampire squid”. The London whale would never have been. This will boggle the mind if you let it.

  • Extraction of unvarying physical laws governing a living system from data is in many cases NP-hard. There are far too many varieties of actors and of interactions for the exercise to be tractable.

  • Even granting the possibility of their extraction, the components of a living system are not fixed, nor subject to unvarying physical laws – not even probability laws.

  • The conscious behavior of some actors in a financial market can change the rules of the game (some of those rules, some of the time) or completely rewire the system from the bottom up. This is really just an extension of the previous point.

  • Natural mutations lead to markets reworking their laws over time through an evolutionary process, with never a thought of doing so.

Thus, in this approach, N_t^+ corresponds to the number of upward jumps of the asset in the time interval [0, t] and N_t^− to the number of downward jumps. Hence, the instantaneous probability to get an upward (downward) jump depends on the arrival times of the past upward and downward jumps. Furthermore, by construction, the price process lives on a discrete grid, which is obviously a crucial feature of high frequency prices in practice.
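
A compact sketch of this price mechanism: a bivariate Hawkes process simulated by Ogata-style thinning, with the tick price read off as P_t = N_t^+ − N_t^−. Exponential kernels and all parameter values are assumptions made for the illustration; the stylized facts discussed below call for nearly unstable, heavy-tailed kernels instead.

```python
import numpy as np

# Sketch: bivariate Hawkes (N+, N-) with exponential kernels a_ij*exp(-b*t),
# simulated by Ogata's thinning. Tick price: P_T = N_T^+ - N_T^-.
# All parameters are illustrative assumptions, not calibrated values.

rng = np.random.default_rng(2)
mu = np.array([0.5, 0.5])              # baseline intensities (mu+, mu-)
A = np.array([[0.0, 0.6],              # kernel amplitudes: rows = excited component,
              [0.6, 0.0]])             # columns = exciting component (cross-excitation)
b = 1.5                                # common exponential decay rate
T = 500.0

events = [[], []]                      # event times of N+ and N-

def intensity(t):
    lam = mu.copy()
    for j in (0, 1):
        past = np.asarray(events[j], dtype=float)
        past = past[past < t]
        if past.size:
            lam = lam + A[:, j] * np.exp(-b * (t - past)).sum()
    return lam

t = 0.0
while t < T:
    lam_bar = intensity(t).sum() + A.sum()   # conservative upper bound just after t
    t += rng.exponential(1.0 / lam_bar)
    lam = intensity(t)
    if rng.random() < lam.sum() / lam_bar:   # accept the candidate time
        k = 0 if rng.random() < lam[0] / lam.sum() else 1
        events[k].append(t)

price = len(events[0]) - len(events[1])      # P_T = N_T^+ - N_T^-
print(len(events[0]), len(events[1]), price)
```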

This simple tick-by-tick price model makes it easy to encode the following important stylized facts of modern electronic markets in the context of high frequency trading:

  1. Markets are highly endogenous, meaning that most of the orders have no real economic motivation but are rather sent by algorithms in reaction to other orders.
  2. Mechanisms preventing statistical arbitrages take place on high frequency markets. Indeed, at the high frequency scale, building strategies which are on average profitable is hardly possible.
  3. There is some asymmetry in the liquidity on the bid and ask sides of the order book. This simply means that buying and selling are not symmetric actions. Indeed, consider for example a market maker whose inventory is typically positive. She is likely to raise the price by less following a buy order than to lower it following a sell order of the same size. This is because her inventory becomes smaller after a buy order, which is a good thing for her, whereas it increases after a sell order.
  4. A significant proportion of transactions is due to large orders, called metaorders, which are not executed at once but split in time by trading algorithms.

In a Hawkes process framework, the first of these properties corresponds to the case of so-called nearly unstable Hawkes processes, that is, Hawkes processes for which the stability condition is almost saturated: the spectral radius of the integral of the kernel matrix is smaller than, but close to, unity. The second and third properties impose a specific structure on the kernel matrix, and the fourth one leads to functions φ_i with heavy tails.
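
A small sketch of what the near-instability condition in point 1 means concretely: with exponential kernels, the integrated kernel matrix is simply the matrix of amplitudes divided by the decay rate, and stability requires its spectral radius to be below one. The amplitudes below are illustrative assumptions chosen so that the radius sits just under the critical value.

```python
import numpy as np

# Stability check for a bivariate Hawkes kernel matrix. With kernels
# phi(t) = a * exp(-b * t), the integrated kernel matrix is K = A / b;
# the process is stable iff its spectral radius is < 1, and "nearly
# unstable" means the radius is just below 1. Amplitudes are illustrative.

A = np.array([[0.2, 1.2],
              [1.2, 0.2]])       # kernel amplitudes
b = 1.5                          # common exponential decay rate

K = A / b                        # integrated kernel matrix
rho = max(abs(np.linalg.eigvals(K)))
print(rho)                       # ~0.93: stable, but close to the critical value 1
```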

The Political: NRx, Neoreactionism Archived.

This one is eclectic and for the record.

The techno-commercialists appear to have largely arrived at neoreaction via right-wing libertarianism. They are defiant free marketeers, sharing with other ultra-capitalists such as Randian Objectivists a preoccupation with “efficiency,” a blind trust in the power of the free market, private property, globalism and the onward march of technology. However, they are also believers in the ideal of small states, free movement and absolute or feudal monarchies with no form of democracy. The idea of “exit,” predominantly a techno-commercialist viewpoint but found among other neoreactionaries too, essentially comes down to the idea that people should be able to freely exit their native country if they are unsatisfied with its governance; in essence an application of market economics and consumer action to statehood. Indeed, countries are often described in corporate terms, with the King being the CEO and the aristocracy the shareholders.

The “theonomists” place more emphasis on the religious dimension of neoreaction. They emphasise tradition, divine law, religion (rather than race) as the defining characteristic of “tribes” of peoples, and traditional, patriarchal families. They are the closest group in terms of ideology to “classical” or, if you will, “palaeo-reactionaries” such as the High Tories, the Carlists and French Ultra-royalists; often Catholic and often ultramontanist. Finally, there’s the “ethnicist” lot, who believe in racial segregation and have developed a new form of racial ideology called “Human Biodiversity” (HBD), which says people of African heritage are naturally less intelligent than people of Caucasian and east Asian heritage. Of course, the scientific community considers the idea that there are any genetic differences between human races beyond melanin levels in the skin and other cosmetic factors to be utterly false, but presumably this is because they are controlled by “The Cathedral.” They like “tribal solidarity,” tribes being defined by shared ethnicity, and distrust outsiders.

dark-enlightenment

Overlap between these groups is considerable, but there are also vast differences not just between them but within them. What binds them together is common opposition to “The Cathedral” and to “progressive” ideology. Some of their criticisms of democracy and modern society are well-founded, and some of them make good points in defence of the monarchical system. However, I don’t much like them, and I doubt they’d much like me.

Whereas neoreactionaries are keen on the free market and praise capitalism, unregulated capitalism is something I am wary of. Capitalism saw the collapse of traditional monarchies in Europe in the 19th century, and the first revolutions were by capitalists seeking to establish democratic, capitalist republics where the bourgeoisie replaced the aristocratic elite as the ruling class, setting an example revolutionary socialists would later follow. Capitalism, when unregulated, leads to monopolies, exploitation of the working class, unsustainable practices in pursuit of increased short-term profits, globalisation and materialism. Personally, I prefer distributist economics, which embrace private property rights but emphasise widespread ownership of wealth, with small partnerships and cooperatives replacing private corporations as the basic units of the nation’s economy. And although I am critical of democracy, the idea that any form of elected representation for the lower classes is anathema is not consistent with my viewpoint; my ideal government would not be absolute or feudal monarchy, but executive constitutional monarchy with a strong monarch exercising executive powers and the legislative role being at least partially controlled by an elected parliament, more like the Bourbon Restoration than the Ancien Régime, though I occasionally say “Vive l’Ancien Régime!” on forums or in comments to annoy progressive types. Finally, I don’t believe in racialism in any form. I tend to attribute preoccupations with racial superiority to deep insecurity which people find the need to suppress by convincing themselves that they are “racially superior” to others, in the absence of any actual talent or especial ability to take pride in. The 20th century has shown us where dividing people up based on their genetics leads, and it is not somewhere I care to return to.

I do think it is important to go into why Reactionaries think Cthulhu always swims left, because without that they’re vulnerable to the charge that they have no a priori reason to expect our society to have the biases it does, and then the whole meta-suspicion of the modern Inquisition doesn’t work or at least doesn’t work in that particular direction. Unfortunately (for this theory) I don’t think their explanation is all that great (though this deserves substantive treatment) and we should revert to a strong materialist prior, but of course I would say that, wouldn’t I.

And of course you could get locked up for wanting fifty Stalins! Just try saying how great Enver Hoxha was at certain places and times. Of course saying you want fifty Stalins is not actually advocating that Stalinism become more like itself – as Leibniz pointed out, a neat way of telling whether something is something is checking whether it is exactly like that thing, and nothing could possibly be more like Stalinism than Stalinism. Of course fifty Stalins is further in the direction that one Stalin is from our implied default of zero Stalins. But then from an implied default of 1.3 kSt it’s a plea for moderation among hypostalinist extremists. As Mayberry Mobmuck himself says, “sovereign is he who determines the null hypothesis.”

Speaking of Stalinism, I think it does provide plenty of evidence that policy can do wonderful things for life expectancy and so on, and I mean that in a totally unironic “hail glorious comrade Stalin!” way, not in a “ha ha Stalin sure did kill a lot of people” way. But this is a super-unintuitive claim to most people today, so I’ll try to get around to summarizing the evidence at some point.

‘Neath an eyeless sky, the inkblack sea
Moves softly, utters not save a quiet sound
A lapping-sound, not saying what may be
The reach of its voice a furthest bound;
And beyond it, nothing, nothing known
Though the wind the boat has gently blown
Unsteady on shifting and traceless ground
And quickly away from it has flown.

Allow us a map, and a lamp electric
That by instrument we may probe the dark
Unheard sounds and an unseen metric
Keep alive in us that unknown spark
To burn bright and not consume or mar
Has the unbounded one come yet so far
For night over night the days to mark
His journey — adrift, without a star?

Chaos is the substrate, and the unseen action (or non-action) against disorder, the interloper. Disorder is a mere ‘messing up order’.  Chaos is substantial where disorder is insubstantial. Chaos is the ‘quintessence’ of things, chaotic itself and yet always-begetting order. Breaking down disorder, since disorder is maladaptive. Exit is a way to induce bifurcation, to quickly reduce entropy through separation from the highly entropic system. If no immediate exit is available, Chaos will create one.

Market Liquidity

The notion of market liquidity is nowadays almost ubiquitous. It quantifies the ability of a financial market to match buyers and sellers in an efficient way, without causing a significant movement in the price, thus delivering low transaction costs. It is the lifeblood of financial markets, without which market dislocations can appear, as in the recent well-documented crises: the 2007 Yen carry trade unwind, the 2008 Credit Crunch, the May 6th 2010 Flash Crash or the numerous Mini Flash Crashes occurring in US equity markets, but also in many other cases that go unnoticed yet are potent candidates to become more important. While omnipresent, liquidity is an elusive concept, and several reasons may account for this ambiguity. First, some markets, such as the foreign exchange (FX) market with its daily turnover of roughly $5.3 trillion, are mistakenly assumed to be extremely liquid, when in fact the generated volume is simply being equated with liquidity. Secondly, the structure of modern markets, with its high degree of decentralization, generates fragmentation and low transparency of transactions, which complicates any attempt to define market liquidity as a whole; aggregating liquidity from all trading sources can be quite daunting, and the fragmentation only grows as new venues with different market structures continue to be launched. Furthermore, the landscape is continuously changing as new players emerge, such as high frequency traders, who have taken over the role of liquidity intermediation in many markets, accounting for between 50% and 70% (and ever rising) of all trading. Last, but not least, important participants influencing the markets are the central banks, with their myriad market interventions, whether indirect, through the monetization of substantial amounts of sovereign and mortgage debt in various quantitative easing programs, or direct, as with the Swiss National Bank setting a floor on the EUR/CHF exchange rate; there are plenty of arguments that they have overstepped their role of last-resort liquidity providers and at this stage hamper market liquidity, potentially exposing themselves to massive losses in the near future.

Despite the obvious importance of liquidity there is little agreement on the best way to measure and define it. Liquidity measures can be classified into different categories. Volume-based measures: liquidity ratio, Martin index, Hui and Heubel ratio, turnover ratio, market-adjusted liquidity index; here, over a fixed period of time, the exchanged volume is compared to price changes. This class implies that non-trivial assumptions are made about the relation between volume and price movements. Other classes of measures include price-based measures: Marsh and Rock ratio, variance ratio, vector autoregressive models; transaction-cost-based measures: spread, implied spread, absolute spread or relative spread; and time-based measures: number of transactions or orders per time unit. The aforementioned approaches suffer from many drawbacks. They provide a top-down approach to analysing a complex system, where the impact of the variation of liquidity is analysed, rather than a bottom-up approach in which liquidity-lacking times are identified and quantified. These approaches also rely on a specific choice of physical time, which does not reflect the correct, multi-scale nature of any financial market. Here, liquidity is instead defined as an information-theoretic measure that characterises the unlikeliness of price trajectories; the claim is that this new metric has the ability to detect and predict stress in financial markets, with examples shown within the FX market, and that the optimal choice of scales can be derived using the Maximum Entropy Principle.
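
For concreteness, a minimal sketch of one representative of the volume-based class above: a simple liquidity ratio computed as traded volume per unit of absolute price change over fixed windows. The synthetic prices, volumes and the window length are assumptions for illustration only.

```python
import numpy as np

# Sketch of a simple volume-based liquidity ratio: traded volume per unit of
# absolute price change over fixed windows (a basic variant of the class of
# measures listed above). Synthetic data and window length are illustrative.

rng = np.random.default_rng(3)
n = 10_000
prices = 100 * np.exp(np.cumsum(0.0002 * rng.standard_normal(n)))
volumes = rng.lognormal(mean=10.0, sigma=1.0, size=n)

window = 500
ratios = []
for start in range(0, n - window, window):
    p = prices[start:start + window]
    v = volumes[start:start + window]
    price_move = abs(p[-1] - p[0])
    ratios.append(v.sum() / price_move if price_move > 0 else np.inf)

print(np.round(ratios[:5], 1))   # higher ratio = more volume absorbed per unit price move
```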

Unconditional Accelerationists: Park Chung-Hee and Napoleon

Land’s Teleoplexy,

Some instance of intermediate individuation—most obviously the state—could be strategically invested by a Left Accelerationism, precisely in order to submit the virtual-teleoplexic lineage of Terrestrial Capitalism (or Techonomic Singularity) to effacement and disruption.

For the unconditional accelerationist as much as for the social historian, of course, the voluntarist quality of this image is a lie. Napoleon’s supposed flight from history can amply be recuperated within the process of history itself, if only we revise our image of what this is: not a flat space or a series of smooth curves, but rather a tangled, homeorhetic, deep-subversive spiral-complex. Far from shaping history like putty, Napoleon, like all catastrophic agents of time-anomaly, unleashed forces that ran far ahead of his own intentions: pushing Europe’s engagement with Africa and the Middle East onto a new plane, promulgating the Code Napoleon that would shape and selectively boost the economic development of continental Europe. In this respect, the image of him offered later by Marinetti is altogether more interesting. In his 1941 ‘Qualitative Imaginative Futurist Mathematics’, Marinetti claimed that Futurist military ‘calculations are as precise as those of Napoleon who in some battles had all of his couriers killed and hence his generals autonomous‘. Far from the prideful image of a singular genius strutting as he pleases across the stage of world history, here Napoleon becomes something altogether more monstrous. Foreshadowing Bataille’s argument a few years later that the apex of sovereignty is precisely an absolute moment of unknowing, he becomes a head that has severed itself from its limbs, falling from its body as it gives way to the sharp and militant positive feedback it has unleashed.

To understand its significance, we must begin by recognising that far from being a story of the triumph of a free capitalism over communism, the reality of Park Chung-hee’s rule and the overtaking of the North by the South is more than a little uncomfortable for a right-libertarian (though not, perhaps, for someone like Peter Thiel). Park was not just a sovereign dictator but an inveterate interventionist, who constructed an entire sequence of bureaucracies to oversee the expansion of the economy according to determinate Five-Year Plans. In private notes, he emphasised the ideology of the February 26 incident in Japan, the militarised attempt to effect a ‘Shōwa Restoration’ that would have united the Japanese race politically and economically behind a totalitarian emperor. In Japan this had failed: in Korea, Park himself could be the president-emperor, declaiming on his ‘sacred military revolution’ of 1961 that had brought together the ‘Korean race’. At the same time, he explicitly imitated the communist North, proclaiming the need for spiritual mobilisation and a ‘path of the leader’ 지도자의길 around which the nation would cohere. The carefully-coordinated mass histrionics after his death in 1979 echoed closely the spectacle with which we are still familiar in North Korea.

Park Chung-hee and Napoleon demonstrate at its extreme the tangled structure of the history of capital. Capitalism’s intensities are geographically and temporally uneven; they spread through loops and spectacular digressions. Human agencies and the mechanisms of the state have their important place within this capitalist megamachine. But things never quite work out the way they plan.

Political Ideology Chart

It displays anarchism (lower end) and authoritarianism (higher end) as the extremes of the vertical axis, a social measure, while left-right is the horizontal axis, an economic measure.

Anarchism is about self-governance, having as little hierarchy as possible. As you go to the left, the means of production are distributed more equally; and as you go to the right, individuals and corporations own more of the means of production and accumulate capital.

On the upper left you have an authoritarian state, distributing the means of production to the people as equally as possible; on the lower left you have the collectives, getting together voluntarily, utilizing their local means of production and sharing the products; on the lower right you have the anarcho-capitalists, with no state, tax or public service, everything operated by private companies in a completely free and global market; and finally on the upper right you have both a powerful state and powerful corporations (pretty much all actual countries).

But after all, these terms change meaning through history and across cultures. Under unrestrained capitalism the accumulation of wealth creates monopolies and, more importantly, political influence, which in turn shapes state interference and civil liberties. Unrestrained capitalism also aspires to infinite growth, which leads to the depletion of natural resources, further diminishing people’s quality of life. At that point it favors conservatism rather than progressive scientific thinking. Under collective anarchism, since it is localized, it is quite difficult to create global catastrophes; this is why, in today’s world, the terms anarchism and capitalism seem like opposites.