Game Theory and Finite Strategies: Nash Equilibrium Takes Quantum Computations to Optimality.


Finite games of strategy, within the framework of noncooperative quantum game theory, can be approached from finite chain categories, where a finite chain category is understood as a category C(n;N) generated by n objects and N morphic chains, called primitive chains, linking the objects in a specific order, such that there is a single labelling. C(n;N) is, thus, generated by N primitive chains of the form:

$x_0 \xrightarrow{f_1} x_1 \xrightarrow{f_2} x_2 \rightarrow \dots \rightarrow x_{n-1} \xrightarrow{f_n} x_n$ —– (1)

A finite chain category is interpreted as a finite game category as follows: to each morphism in a chain $x_{i-1} \xrightarrow{f_i} x_i$, there corresponds a strategy played by a player that occupies position i; in this way, a chain corresponds to a sequence of strategic choices available to the players. A quantum formal theory, for a finite game category C(n;N), is defined as a formal structure such that each morphic fundament $f_i$ of the morphic relation $x_{i-1} \xrightarrow{f_i} x_i$ is a tuple of the form:

$f_i := (H_i, P_i, \hat{P}_{f_i})$ —– (2)

where $H_i$ is the i-th player’s Hilbert space, $P_i$ is a complete set of projectors onto a basis that spans the Hilbert space, and $\hat{P}_{f_i} \in P_i$. This structure is interpreted as follows: from the strategic Hilbert space $H_i$, given the pure strategies’ projectors $P_i$, the player chooses to play $\hat{P}_{f_i}$.

From the morphic fundament (2), an assumption has to be made regarding composition in the finite category; we assume the following tensor product composition operation:

$f_j \circ f_i = f_{ji}$ —– (3)

$f_{ji} = (H_{ji} = H_j \otimes H_i,\; P_{ji} = P_j \otimes P_i,\; \hat{P}_{f_{ji}} = \hat{P}_{f_j} \otimes \hat{P}_{f_i})$ —– (4)

From here, a morphism for a game choice path could be introduced as:

$x_0 \xrightarrow{f_{n\cdots 21}} x_n$ —– (5)

$f_{n\cdots 21} = (H_G = \bigotimes_{i=1}^{n} H_i,\; P_G = \bigotimes_{i=1}^{n} P_i,\; \hat{P}_{f_{n\cdots 21}} = \bigotimes_{i=1}^{n} \hat{P}_{f_i})$ —– (6)

in this way, the choices along the chain of players are completely encoded in the tensor product projectors $\hat{P}_{f_{n\cdots 21}}$. There are $N = \prod_{i=1}^{n} \dim(H_i)$ such morphisms, a number that coincides with the number of primitive chains in the category C(n;N).
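
To make the construction concrete, here is a minimal numerical sketch (an illustration added here, not part of the original formulation), assuming a toy game with two players, each with a two-dimensional strategic Hilbert space; it builds the game Arrow-Debreu projectors of (4)–(6) as tensor (Kronecker) products and checks that they are N in number and sum to the unit operator on $H_G$:

```python
import numpy as np
from itertools import product

# Minimal sketch (illustrative, not from the original text): two players,
# each with a two-dimensional strategic Hilbert space H_i spanned by the
# pure-strategy basis {|0>, |1>}.
dims = [2, 2]                      # dim(H_i) for i = 1, 2
basis = [np.eye(d) for d in dims]  # columns are the pure-strategy kets

def projector(ket):
    """Rank-one projector |k><k| onto a pure-strategy ket."""
    return np.outer(ket, ket.conj())

# Game Arrow-Debreu projectors: one tensor-product projector per game path,
# i.e. per choice of a pure strategy for every player (equations (4)-(6)).
game_paths = list(product(*[range(d) for d in dims]))
AD_projectors = {}
for path in game_paths:
    P = np.array([[1.0]])
    for i, choice in enumerate(path):
        P = np.kron(P, projector(basis[i][:, choice]))
    AD_projectors[path] = P

# Number of game-path morphisms N = prod_i dim(H_i), and completeness:
# the projectors sum to the unit operator on H_G.
N = np.prod(dims)
unit = sum(AD_projectors.values())
assert len(AD_projectors) == N
assert np.allclose(unit, np.eye(N))
```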

Each projector can be addressed as a strategic marker of a game path, and leads to the matrix form of an Arrow-Debreu security; therefore, we call it a game Arrow-Debreu projector. While, in traditional financial economics, the Arrow-Debreu securities pay one unit of numeraire per state of nature, in the present game setting they pay one unit of payoff per game path at the beginning of the game. However far this analogy may be taken, it must be addressed with some care, since these are not securities; rather, they represent, projectively, strategic choice chains in the game, so that the price of a projector $\hat{P}_{f_{n\cdots 21}}$ (the Arrow-Debreu price) is the price of a strategic choice and, therefore, the result of the strategic evaluation of the game by the different players.

Now, let $|\Psi\rangle$ be a ket vector in the game’s Hilbert space $H_G$, such that:

$|\Psi\rangle = \sum_{f_{n\cdots 21}} \psi(f_{n\cdots 21})\, |f_{n\cdots 21}\rangle$ —– (7)

where $\psi(f_{n\cdots 21})$ is the Arrow-Debreu price amplitude, with the condition:

$\sum_{f_{n\cdots 21}} |\psi(f_{n\cdots 21})|^2 = D$ —– (8)

for D > 0. Then the $|\psi(f_{n\cdots 21})|^2$ correspond to the Arrow-Debreu prices for the game paths $f_{n\cdots 21}$, and D is the discount factor for riskless borrowing, defining an economic scale for temporal connections between one unit of payoff now and one unit of payoff at the end of the game: one unit of payoff now can be capitalized to the end of the game (when the decision takes place) through multiplication by 1/D, while one unit of payoff at the end of the game can be discounted to the beginning of the game through multiplication by D.

In this case, the unit operator $\hat{1} = \sum_{f_{n\cdots 21}} \hat{P}_{f_{n\cdots 21}}$ has a profile similar to that of a bond in standard financial economics, with $\langle\Psi|\hat{1}|\Psi\rangle = D$. On the other hand, the general payoff system, for each player, can be addressed from an operator expansion:

$\hat{\pi}_i = \sum_{f_{n\cdots 21}} \pi_i(f_{n\cdots 21})\, \hat{P}_{f_{n\cdots 21}}$ —– (9)

where each weight $\pi_i(f_{n\cdots 21})$ corresponds to quantities associated with each Arrow-Debreu projector, which can be interpreted as analogous to the quantities of each Arrow-Debreu security held in a general asset. Multiplying each weight by the corresponding Arrow-Debreu price, one obtains the payoff value for each alternative, such that the total payoff for the player at the end of the game is given by:

$\frac{1}{D}\langle\Psi|\hat{\pi}_i|\Psi\rangle = \sum_{f_{n\cdots 21}} \pi_i(f_{n\cdots 21})\, \frac{|\psi(f_{n\cdots 21})|^2}{D}$ —– (10)

We can discount the total payoff to the beginning of the game using the discount factor D, leading to the present value payoff for the player:

$PV_i = \langle\Psi|\hat{\pi}_i|\Psi\rangle = D \sum_{f_{n\cdots 21}} \pi_i(f_{n\cdots 21})\, \frac{|\psi(f_{n\cdots 21})|^2}{D}$ —– (11)

where $\pi_i(f_{n\cdots 21})$ represents quantities, while the ratio $|\psi(f_{n\cdots 21})|^2/D$ represents the future value, at the decision moment, of the quantum Arrow-Debreu prices (the capitalized quantum Arrow-Debreu prices). Introducing the ket $|Q\rangle \in H_G$, such that:

$|Q\rangle = \frac{1}{\sqrt{D}}\,|\Psi\rangle$ —– (12)

then $|Q\rangle$ is a normalized ket for which the price amplitudes are expressed in terms of their future value. Substituting into (11), we have:

$PV_i = D\,\langle Q|\hat{\pi}_i|Q\rangle$ —– (13)
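
A minimal numerical sketch of (8)–(13), with made-up amplitudes and payoff weights (purely illustrative): given price amplitudes $\psi(f_{n\cdots 21})$ and a payoff weight $\pi_i(f_{n\cdots 21})$ per game path, the present value payoff can be computed either from $|\Psi\rangle$ directly or through the normalized ket $|Q\rangle$:

```python
import numpy as np

# Hypothetical inputs for the four game paths of the toy two-player example:
psi = np.array([0.6, 0.4, 0.5, 0.3])        # Arrow-Debreu price amplitudes psi(f)
payoff_i = np.array([3.0, 0.0, 1.0, 2.0])   # payoff weights pi_i(f) for player i

D = np.sum(np.abs(psi) ** 2)                # discount factor, equation (8)

# Present value payoff, equation (11): PV_i = <Psi| pi_i |Psi>
PV_direct = np.sum(payoff_i * np.abs(psi) ** 2)

# Same quantity via the normalized ket |Q> = |Psi>/sqrt(D), equations (12)-(13):
# PV_i = D <Q| pi_i |Q>
q = psi / np.sqrt(D)
PV_via_Q = D * np.sum(payoff_i * np.abs(q) ** 2)

assert np.isclose(PV_direct, PV_via_Q)
print(f"D = {D:.3f}, PV_i = {PV_direct:.3f}")
```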

In the quantum game setting, the capitalized Arrow-Debreu price amplitudes $\langle f_{n\cdots 21}|Q\rangle$ become quantum strategic configurations, resulting from the strategic cognition of the players with respect to the game. Given $|Q\rangle$, each player’s strategic valuation of each pure strategy can be obtained by introducing the projector chains:

$\hat{C}_{f_i} = \sum_{f_{n\cdots i+1}} \sum_{f_{i-1\cdots 1}} \hat{P}_{f_{n\cdots i+1}} \otimes \hat{P}_{f_i} \otimes \hat{P}_{f_{i-1\cdots 1}}$ —– (14)

with $\sum_{f_i} \hat{C}_{f_i} = \hat{1}$. For each alternative choice of player i, the chain sums over all of the other choice paths for the rest of the players; such chains are called coarse-grained chains in the decoherent histories approach to quantum mechanics. Following this approach, one may introduce the pricing functional from the expression for the decoherence functional:

$D(f_i, g_i : |Q\rangle) = \langle Q|\, \hat{C}_{f_i}\, \hat{C}_{g_i}\, |Q\rangle$ —– (15)

we then have, for each player:

$D(f_i, g_i : |Q\rangle) = 0, \quad \forall f_i \neq g_i$ —– (16)

this is the usual quantum mechanics condition for additivity of measure (also known as the decoherence condition), which means that the capitalized prices for two different alternative choices of player i are additive. We can then work with the pricing functional $D(f_i, f_i : |Q\rangle)$ as giving, for each player, an Arrow-Debreu capitalized price associated with the pure strategy expressed by $f_i$. Given that (16) is satisfied, each player’s quantum Arrow-Debreu pricing matrix, defined analogously to the decoherence matrix from the decoherent histories approach, is a diagonal matrix and can be expanded as a linear combination of the projectors for each player’s pure strategies as follows:

$D_i(|Q\rangle) = \sum_{f_i} D(f_i, f_i : |Q\rangle)\, \hat{P}_{f_i}$ —– (17)

which has the mathematical expression of a mixed strategy. Thus, each player chooses, from all of the possible quantum computations, the one that maximizes the present value payoff function with all the other players’ strategies held fixed, which is in agreement with Nash’s notion of equilibrium.
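
Continuing the toy example, a sketch (again illustrative, with assumed numbers) of the coarse-grained pricing of (14)–(17): under the decoherence condition (16), each player's pricing matrix is diagonal, and its diagonal entries are obtained by summing the capitalized probabilities $|\langle f_{n\cdots 21}|Q\rangle|^2$ over the other players' choices:

```python
import numpy as np

# Illustrative sketch: the capitalized amplitudes <f|Q> from the previous
# example, arranged on a grid indexed by (player 1 choice, player 2 choice).
D = 0.86
q = np.array([[0.6, 0.4],
              [0.5, 0.3]]) / np.sqrt(D)

def pricing_diagonal(q_grid, player):
    """Diagonal of player i's Arrow-Debreu pricing matrix D_i(|Q>)."""
    # Coarse-graining: sum |<f|Q>|^2 over every axis except the player's own.
    other_axes = tuple(ax for ax in range(q_grid.ndim) if ax != player)
    return np.sum(np.abs(q_grid) ** 2, axis=other_axes)

for i in range(q.ndim):
    prices = pricing_diagonal(q, player=i)
    print(f"player {i + 1} capitalized pure-strategy prices: {prices}")
# Each player would then choose, holding the others fixed, the pure strategy
# whose payoff weights, valued at these prices, maximize the present value
# payoff; this is the Nash-consistent choice described in the text.
```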


Probability Space Intertwines Random Walks – Thought of the Day 144.0


Many deliberations of stochasticity start with “let (Ω, F, P) be a probability space”. One can actually follow such discussions without having the slightest idea what Ω is and who lives inside. So, what is “(Ω, F, P)” and why do we need it? Indeed, for many users of probability and statistics, a random variable X is synonymous with its probability distribution μX, and all computations such as sums, expectations, etc., done on random variables amount to analytical operations such as integrations, Fourier transforms, convolutions, etc., done on their distributions. For defining such operations, you do not need a probability space. Isn’t this all there is to it?

One can in fact compute quite a lot of things without using probability spaces in an essential way. However, the notions of probability space and random variable are central in modern probability theory, so it is important to understand why and when these concepts are relevant.

From a modelling perspective, the starting point is a set of observations taking values in some set E (think for instance of numerical measurement, E = R) for which we would like to build a stochastic model. We would like to represent such observations x1, . . . , xn as samples drawn from a random variable X defined on some probability space (Ω, F, P). It is important to see that the only natural ingredient here is the set E where the random variables will take their values: the set of events Ω is not given a priori and there are many different ways to construct a probability space (Ω, F, P) for modelling the same set of observations.

Sometimes it is natural to identify Ω with E, i.e., to identify the randomness ω with its observed effect. For example, if we consider the outcome of a dice rolling experiment as an integer-valued random variable X, we can define the set of events to be precisely the set of possible outcomes: Ω = {1, 2, 3, 4, 5, 6}. In this case, X(ω) = ω: the outcome of the randomness is identified with the randomness itself. This choice of Ω is called the canonical space for the random variable X. In this case the random variable X is simply the identity map X(ω) = ω and the probability measure P is formally the same as the distribution of X. Note that here X is a one-to-one map: given the outcome of X one knows which scenario has happened, so any other random variable Y is completely determined by the observation of X. Therefore, using the canonical construction for the random variable X, we cannot define, on the same probability space, another random variable which is independent of X: X will be the sole source of randomness for all other variables in the model. This also shows that, although the canonical construction is the simplest way to construct a probability space for representing a given random variable, it forces us to identify this particular random variable with the “source of randomness” in the model. Therefore, when we want to deal with models with a sufficiently rich structure, we need to distinguish Ω – the set of scenarios of randomness – from E, the set of values of our random variables.

Let us give an example where it is natural to distinguish the source of randomness from the random variable itself. For instance, if one is modelling the market value of a stock at some date T in the future as a random variable S1, one may consider that the stock value is affected by many factors such as external news, market supply and demand, economic indicators, etc., summed up in some abstract variable ω, which may not even have a numerical representation: it corresponds to a scenario for the future evolution of the market. S1(ω) is then the stock value if the market scenario which occurs is given by ω. If the only interesting quantity in the model is the stock price then one can always label the scenario ω by the value of the stock price S1(ω), which amounts to identifying all scenarios where the stock S1 takes the same value and using the canonical construction. However if one considers a richer model where there are now other stocks S2, S3, . . . involved, it is more natural to distinguish the scenario ω from the random variables S1(ω), S2(ω),… whose values are observed in these scenarios but may not completely pin them down: knowing S1(ω), S2(ω),… one does not necessarily know which scenario has happened. In this way one reserves the possibility of adding more random variables later on without changing the probability space.

This has the following important consequence: the probabilistic description of a random variable X can be reduced to the knowledge of its distribution μX only in the case where the random variable X is the only source of randomness. In this case, a stochastic model can be built using a canonical construction for X. In all other cases – as soon as we are concerned with a second random variable which is not a deterministic function of X – the underlying probability measure P contains more information on X than just its distribution. In particular, it contains all the information about the dependence of the random variable X with respect to all other random variables in the model: specifying P means specifying the joint distributions of all random variables constructed on Ω. For instance, knowing the distributions μX, μY of two variables X and Y does not allow one to compute their covariance or joint moments. Only in the case where all random variables involved are mutually independent can one reduce all computations to operations on their distributions. This is the case covered in most introductory texts on probability, which explains why one can go quite far, for example in the study of random walks, without formalizing the notion of probability space.
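
A small numerical illustration of the last point (not from the original text): two pairs of random variables with the same marginal distributions but different joint laws have different covariances, so the marginals μX, μY alone cannot determine joint quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two constructions with the same marginals: X and Y both standard normal.
x = rng.standard_normal(100_000)

# Construction 1: Y independent of X.
y_indep = rng.standard_normal(100_000)

# Construction 2: Y a deterministic function of X (perfectly dependent).
y_dep = -x

# Same marginal distributions, different joint laws, different covariances:
print("cov, independent case:", np.cov(x, y_indep)[0, 1])  # close to 0
print("cov, dependent case:  ", np.cov(x, y_dep)[0, 1])    # close to -1
```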

Global Significance of Chinese Investments. My Deliberations in Mumbai (04/03/2018)


What are fitted values in statistics?

These are the values of an output variable that have been predicted by a model fitted to a set of data. A statistical model is generally an equation whose graph includes or approximates a majority of the data points in a given data set. Fitted values are generated by extending the model of past known data points in order to predict unknown values. These are also called predicted values.
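
A minimal sketch (with made-up data) of fitted values from an ordinary least-squares line:

```python
import numpy as np

# Illustrative data: fit a simple linear model y ≈ a*x + b by least squares.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

a, b = np.polyfit(x, y, deg=1)   # estimated slope and intercept
fitted = a * x + b               # fitted (predicted) values at the observed x
residuals = y - fitted           # observed minus fitted

print(fitted)
print(residuals)
```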

What are outliers in statistics?

These are observation points that are distant from other observations. They may arise due to variability in the measurement, may indicate experimental errors, or may arise from a heavy-tailed distribution.
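
As a rough illustration (made-up data, and only one of many possible rules), points lying far outside the interquartile range are often flagged as outliers:

```python
import numpy as np

# Illustrative sketch: flag outliers with the common 1.5 * IQR rule of thumb.
data = np.array([9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 10.1, 14.7])

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = data[(data < lower) | (data > upper)]
print("outliers:", outliers)   # 14.7 is flagged
```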

What is LBS (Locational Banking Statistics)?

The locational banking statistics gather quarterly data on international financial claims and liabilities of bank offices in the reporting countries. Total positions are broken down by currency, by sector (bank and non-bank), by country of residence of the counterparty, and by nationality of reporting banks. Both domestically-owned and foreign-owned banking offices in the reporting countries record their positions on a gross (unconsolidated) basis, including those vis-à-vis own affiliates in other countries. This is consistent with the residency principle of national accounts, balance of payments and external debt statistics.

What is CEIC?

Census and Economic Information Centre

What are spillover effects?

These refer to the impact that seemingly unrelated events in one nation can have on the economies of other nations. Since 2009, China has emerged as a major source of spillover effects. This is because Chinese manufacturers have driven much of the global commodity demand growth since 2000. With China now being the second largest economy in the world, the number of countries that experience spillover effects from a Chinese slowdown is significant. A Chinese slowdown has a palpable impact on worldwide trade in metals, energy, grains and other commodities.

How does China deal with its Non-Performing Assets?


China adopted a four-point strategy to address the problems. The first was to reduce risks by strengthening banks and spearheading reforms of the state-owned enterprises (SOEs) by reducing their level of debt. The Chinese ensured that the nationalized banks were strengthened by raising disclosure standards across the board.

The second important measure was enacting laws that allowed the creation of asset management companies, equity participation and most importantly, asset-based securitization. The “securitization” approach is being taken by the Chinese to handle even their current NPA issue and is reportedly being piloted by a handful of large banks with specific emphasis on domestic investors. According to the International Monetary Fund (IMF), this is a prudent and preferred strategy since it gets assets off the balance sheets quickly and allows banks to receive cash which could be used for lending.

The third key measure that the Chinese took was to ensure that the government had the financial loss on debt “discounted”, and that debt-equity swaps were allowed where a growth opportunity existed. The term “debt-equity swap” (or “debt-equity conversion”) means the conversion of a heavily indebted or financially distressed company’s debt into equity, or the acquisition by a company’s creditors of shares in that company paid for by the value of their loans to the company. Or, to put it more simply, debt-equity swaps transfer bank loans from the liabilities section of company balance sheets to common stock or additional paid-in capital in the shareholders’ equity section.

Let us imagine a company, as on the left-hand side of the figure below, with assets of 500, bank loans of 300, miscellaneous debt of 200, common stock of 50 and a carry-forward loss of 50. By converting 100 of its debt into equity (transferring 50 to common stock and 50 to additional paid-in capital), thereby improving the balance sheet position, and by depleting the additional paid-in capital (or using the net income from the following year) to absorb the carry-forward loss, as on the right-hand side of the figure, the company escapes insolvency. The former creditors become shareholders, suddenly acquiring 50% of the voting shares and control of the company.

[Figure: company balance sheet before and after converting 100 of debt into equity]
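
A small arithmetic sketch of the worked example above; the figures are those given in the text, while the before/after layout is a reconstruction:

```python
# Debt-equity swap example from the text: convert 100 of bank loans into
# equity (50 to common stock, 50 to additional paid-in capital), then use
# the added paid-in capital to absorb the carry-forward loss.
before = {
    "assets": 500,
    "bank_loans": 300,
    "misc_debt": 200,
    "common_stock": 50,
    "paid_in_capital": 0,
    "carry_forward_loss": -50,
}

after = dict(before)
after["bank_loans"] -= 100          # 100 of debt converted...
after["common_stock"] += 50         # ...into common stock
after["paid_in_capital"] += 50      # ...and additional paid-in capital
after["paid_in_capital"] -= 50      # paid-in capital absorbs the loss
after["carry_forward_loss"] += 50

for state in (before, after):
    equity = (state["common_stock"] + state["paid_in_capital"]
              + state["carry_forward_loss"])
    liabilities = state["bank_loans"] + state["misc_debt"]
    assert state["assets"] == liabilities + equity  # balance sheet balances
    print(f"liabilities={liabilities}, equity={equity}")
```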

The first benefit that results from this is the improvement in the company’s finances produced by the reduction in debt. The second benefit (from the change in control) is that the creditors become committed to reorganizing the company, and the scope for moral hazard by the management is limited. Another benefit is one peculiar to equity: a return (i.e., repayment) in the form of an increase in enterprise value in the future. In other words, the fact that the creditors stand to make a return on their original investment if the reorganization is successful and the value of the business rises means that, like the debtor company, they have more to gain from this than from simply writing off their loans. If the reorganization is not successful, the equity may, of course, prove worthless.

The fourth measure they took was producing incentives like tax breaks, exemption from administrative fees and transparent evaluation norms. These strategic measures ensured the Chinese were on top of the NPA issue in the early 2000s, when it was far larger than it is today. The noteworthy thing is that they were indeed successful in reducing NPAs. How is this relevant to India, and how can we address the NPA issue more effectively?

For now, capital controls and the paying down of foreign currency loans imply that there are few channels through which a foreign-induced debt sell-off could trigger a collapse in asset prices. Despite concerns in 2016 over capital outflow, China’s foreign exchange reserves have stabilised.

But there is a long-term cost. China is now more vulnerable to capital outflow. Errors and omissions on its national accounts remain large, suggesting persistent unrecorded capital outflows. This loss of capital should act as a salutary reminder to those who believe that China can take the lead on globalisation or provide the investment or currency business to fuel things like a post-Brexit economy.

The Chinese government’s focus on debt management will mean tighter controls on speculative international investments. It will also provide a stern test of China’s centrally planned financial system for the foreseeable future.


The Statistical Physics of Stock Markets. Thought of the Day 143.0

[Video: Order Routing Animation]

The externalist view argues that we can make sense of, and profit from, stock markets’ behavior, or at least a few crucial properties of it, by crunching numbers and looking for patterns and regularities in certain sets of data. The notion of data, hence, is a key element in such an understanding, and the quantitative side of the problem is prominent, even if this does not mean that a qualitative analysis is ignored. The point here is that the outside view maintains that it provides a better understanding than the internalist view. To this end, it endorses a functional perspective on finance and stock markets in particular.

The basic idea of the externalist view is that there are general properties and behaviors of stock markets that can be detected and studied through a mathematical lens, and that they do not depend so much on contextual or domain-specific factors. The point at stake here is that financial systems can be studied and approached at different scales, and it is virtually impossible to produce all the equations describing, at a micro level, all the objects of the system and their relations. So, in response, this view focuses on those properties that allow us to get an understanding of the behavior of the system at a global level without having to produce a detailed conceptual and mathematical account of the inner ‘machinery’ of the system. Hence the two roads: the first is to embrace an emergentist view of stock markets, that is, a specific metaphysical, ontological, and methodological thesis, while the second is to embrace a heuristic view, that is, the idea that the choice to focus on those properties that are tractable by mathematical models is a pure problem-solving option.

A typical view of the externalist approach is the one provided, for instance, by statistical physics. In describing collective behavior, this discipline neglects all the conceptual and mathematical intricacies deriving from a detailed account of the inner, individual, micro-level functioning of a system. Concepts such as stochastic dynamics, self-similarity, correlations (both short- and long-range), and scaling are tools to this end. Econophysics is a stock example in this sense: it employs methods taken from mathematics and mathematical physics in order to detect and forecast the driving forces of stock markets and their critical events, such as bubbles, crashes and their tipping points. In this respect, markets are not ‘dark boxes’: you can see their characteristics from the outside, or better, you can see specific dynamics that shape the trends of stock markets deeply and for a long time. Moreover, these dynamics are complex in the technical sense. This means that this class of behavior is such as to encompass timescales, ontology, types of agents, ecologies, regulations, laws, etc., and can be detected, even if not strictly predicted. We can focus on the stock markets as a whole, or on a few of their critical events, looking at the data of prices (or other indexes) and ignoring all the other details and factors, since they will be absorbed into these global dynamics. So this view provides a look at stock markets on which not only do they not appear as an unintelligible casino where wild gamblers face each other, but which shows the reasons for, and the properties of, a system that serves mostly as a means of fluid transactions that enable and ease the functioning of free markets.

Moreover, the study of complex systems theory and that of stock markets seem to offer mutual benefits. On one side, complex systems theory seems to offer a key to understanding and breaking through some of the most salient properties of stock markets. On the other side, stock markets seem to provide a ‘stress test’ for complexity theory. Didier Sornette expresses how the analogies between stock markets and phase transitions, statistical mechanics, nonlinear dynamics, and disordered systems mold the view from outside:

Take our personal life. We are not really interested in knowing in advance at what time we will go to a given store or drive to a highway. We are much more interested in forecasting the major bifurcations ahead of us, involving the few important things, like health, love, and work, that count for our happiness. Similarly, predicting the detailed evolution of complex systems has no real value, and the fact that we are taught that it is out of reach from a fundamental point of view does not exclude the more interesting possibility of predicting phases of evolutions of complex systems that really count, like the extreme events. It turns out that most complex systems in natural and social sciences do exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their posterior evolution. Such extreme events express more than anything else the underlying “forces” usually hidden by almost perfect balance and thus provide the potential for a better scientific understanding of complex systems.

Phase transitions, critical points and extreme events seem to be so pervasive in stock markets that they are the crucial concepts to explain and, where possible, foresee. And complexity theory provides us with a fruitful reading key to understand their dynamics, namely their generation, growth and occurrence. Such a reading key proposes a clear-cut interpretation of them, which can be explained again by means of an analogy with physics, precisely with the unstable position of an object. Complexity theory suggests that critical or extreme events occurring at large scale are the outcome of interactions occurring at smaller scales. In the case of stock markets, this means that, unlike many approaches that attempt to account for crashes by searching for ‘mechanisms’ that work at very short time scales, complexity theory indicates that crashes have causes that date back months or years before them. On this reading, it is the increasing inner interaction between the agents inside the markets that builds up the unstable dynamics (typically the financial bubbles) that eventually end in a critical event, the crash. But here the specific, final step that triggers the critical event, the collapse of prices, is not the key to understanding it: a crash occurs because the market is in an unstable phase, and any small interference or event may trigger it. The bottom line: the trigger can be virtually any event external to the markets. The real cause of the crash is the market’s overall unstable position; the proximate ‘cause’ is secondary and accidental. Or, in other words, a crash is fundamentally endogenous in nature, whilst an exogenous, external shock is simply its occasional triggering factor. The instability is built up by a cooperative behavior among traders, who imitate each other (in this sense it is an endogenous process) and contribute to forming and reinforcing trends that converge up to a critical point.

The main advantage of this approach is that the system (the market) would anticipate the crash by releasing precursory fingerprints observable in the stock market prices: the market prices contain information on impending crashes and this implies that:

if the traders were to learn how to decipher and use this information, they would act on it and on the knowledge that others act on it; nevertheless, the crashes would still probably happen. Our results suggest a weaker form of the “weak efficient market hypothesis”, according to which the market prices contain, in addition to the information generally available to all, subtle information formed by the global market that most or all individual traders have not yet learned to decipher and use. Instead of the usual interpretation of the efficient market hypothesis in which traders extract and consciously incorporate (by their action) all information contained in the market prices, we propose that the market as a whole can exhibit “emergent” behavior not shared by any of its constituents.

In a nutshell, the critical events emerge in a self-organized and cooperative fashion as the macro result of the internal and micro interactions of the traders, their imitation and mirroring.


BASEL III: The Deflationary Symbiotic Alliance Between Governments and Banking Sector. Thought of the Day 139.0


The Bank for International Settlements (BIS) is steering the banks to deal with government debt, since governments have been running large deficits to deal with the catastrophe of the BASEL 2-inspired collapse of mortgage-backed securities. The deficits range anywhere between 3 and 7 per cent of GDP, and in some cases even higher. These deficits were being used to create a floor under growth by stimulating the economy and bailing out financial institutions that got carried away by the wholesale funding of real estate. And this is precisely what BASEL 2 promulgated, i.e. encouraging financial institutions to hold mortgage-backed securities as investments.

In come the BASEL 3 rules, which implore that banks must be in compliance with these regulations. But who gets to decide these regulations? Actually, banks do, since they then come on board for discussions with the governments, and such negotiations are catered to bail banks out with government deficits in order to oil the engine of economic growth. The logic here underlines the fact that governments can continue to find a godown of sorts for their deficits, while the banks can buy government debt without any capital commitment and make a good spread without the risk, thus mutually serving the interests of both parties involved. Moreover, for the government, the process is political, as no government would find it acceptable to look on objectively and let a bubble deflate, because any process of deleveraging would cause the banks to offset their lending orgy, which is detrimental to the engineered economic growth. Importantly, without these deficits, the financial system could go down the deflationary spiral, which might turn out to be a difficult proposition to recover from if there isn’t any complicity in rhyme and reason accorded to this particular dysfunctional and symbiotic relationship. So, what’s the implication of all this? The more government debt banks hold, the less overall capital they need. And who says so? BASEL 3.

But the mesh just seems to be building up here. In the same way that banks engineered counterfeit AAA-backed securities that were in fact an improbable financial hoax, how can countries whose government debt/GDP ratios are to the tune of 90 – 120 per cent get a Standard & Poor’s rating of double-A? They have these ratings because they belong to an apical club that gives its members exclusive rights to a high rating even if they are irresponsible with their issuing of debt. Well, is it that simple? Yes and no. Yes, as above; and no is merely clothing itself in a bit of economic jargon, in that these are the countries where government debt can be held without any capital against it. In other words, if a debt cannot be held, it cannot be issued, and that is the reason why countries are striving to issue debts that have a zero weighting.

Let us take snippets across the gradations of BASEL 1, 2 and 3. In BASEL 1, the unintended consequence was that banks were all buying equity in cross-owned companies. When the unwinding happened, equity just fell apart, since the beginning of any financial crisis is tailored to smash bank equities first. That’s the first wound to rationality. In BASEL 2, banks were told to hold as much AAA-rated paper as they wanted with no capital against it. What happened if these ratings were downgraded? It would trigger a tsunami cutting first through pension and insurance schemes, forcing them to sell their paper and pile up huge losses meant to be absorbed by capital, which doesn’t exist against these papers. So whatever gets sold is politically cushioned and buffered by the governments, for the risks cannot be allowed to get any denser, as that explosion would sound the catastrophic death knell for the economy. BASEL 3 doesn’t really help, even if it mandates holding a concentrated portfolio of government debt without any capital against it, for the absorption of losses, should a crisis hit, would have to be exhumed through government bail-outs in scenarios where government debts are a century plus. So, are the banks in stability, or given to more instability, via BASEL 3? The incentive to hold ever more government securities increases bank exposure to sovereign bonds, adding to existing exposure to government securities via repurchase transactions, investments and trading inventories. A ratings downgrade results in a fall in the value of bonds, triggering losses. Banks would then face calls for additional collateral, which would drain liquidity, and which would then require additional capital by way of compensation. Where would this capital come from, if not for the governments to source it? One way out would be recapitalization through government debt. On the other hand, the markets are required to hedge against the large holdings of government securities, and so short positions in stocks, currencies and insurance companies are all made to stare in the face of volatility that rips through them, the net result of which is falling liquidity. So, this vicious cycle would continue to cycle its way through any downgrades. And that’s why the deflationary symbiotic alliance between the governments and the banking sector isn’t anything more than high-fatigue tolerance….

Conjuncted: Balance of Payments in a Dirty Float System, or Why Central Banks Find It Ineligible to Conduct Independent Monetary Policies? Thought of the Day



If the rate of interest is partly a monetary phenomenon, money will have real effects working through variations in investment expenditure and the capital stock. Secondly, if there are unemployed resources, the impact of increases in the money supply will first be on output, and not on prices. It was, indeed, Keynes’s view expressed in his General Theory that throughout history the propensity to save has been greater than the propensity to invest, and that pervasive uncertainty and the desire for liquidity has in general kept the rate of interest too high. Given the prevailing economic conditions of the 1930s when Keynes was writing, it was no accident that he should have devoted part of the General Theory to a defence of mercantilism as containing important germs of truth:

What I want is to do justice to schools of thought which the classicals have treated as imbeciles for the last hundred years and, above all, to show that I am not really being so great an innovator, except as against the classical school, but have important predecessors, and am returning to an age-long tradition of common sense.

The mercantilists recognised, like Keynes, that the rate of interest is determined by monetary conditions, and that it could be too high to secure full employment, and in relation to the needs of growth. As Keynes put it in the General Theory:

mercantilist thought never supposed as later economists did [for example, Ricardo, and even Alfred Marshall] that there was a self-adjusting tendency by which the rate of interest would be established at the appropriate level [for full employment].

It was David Ricardo, in his The Principles of Political Economy and Taxation, who accepted and developed Say’s law of markets that supply creates its own demand, and who for the first time expounded the theory of comparative advantage, which laid the early foundations for orthodox trade and growth theory that has prevailed ever since. Ricardian trade theory, however, is real theory relating to the reallocation of real resources through trade which ignores the monetary aspects of trade; that is, the balance between exports and imports as trade takes place. In other words, it ignores the balance of payments effects of trade that arises as a result of trade specialization, and the feedback effects that the balance of payments can have on the real economy. Moreover, continuous full employment is assumed because supply creates its own demand through variations in the real rate of interest. These aspects question the prevalence of Ricardian theory in orthodox trade and growth theory to a large extent in today’s scenario. But in relation to trade, as Keynes put it:

free trade assumes that if you throw men out of work in one direction you re-employ them in another. As soon as that link in the chain is broken the whole of the free trade argument breaks down.

In other words, the real income gains from specialization may be offset by the real income losses from unemployment. Now, suppose that payments deficits arise in the process of international specialization and the freeing of trade, and the rate of interest has to be raised to attract foreign capital inflows to finance them. Or suppose deficits cannot be financed and income has to be deflated to reduce imports. The balance of payments consequences of trade may offset the real income gains from trade.

This raises the question of why the orthodoxy ignores the balance of payments. There are several reasons, both old and new, that all relate to the balance of payments as a self-adjusting process, or simply as a mirror image of autonomous capital flows, with no income adjustment implied. Until the First World War, the mechanism was the gold standard. The balance of payments was supposed to be self-equilibrating because countries in surplus, accumulating gold, would lose competitiveness through rising prices (Hume’s quantity theory of money), and countries in deficit, losing gold, would gain competitiveness through falling prices. The balance of payments was assumed effectively to look after itself through relative price adjustments, without any change in income or output. After the external gold standard collapsed in 1931, the theory of flexible exchange rates was developed, and it was shown that if the real exchange rate is flexible, and the so-called Marshall–Lerner condition is satisfied (i.e. the sum of the price elasticities of demand for exports and imports is greater than unity), the balance of payments will equilibrate; again, without income adjustment.
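
As a purely illustrative aside (not part of the original argument), a tiny numerical check of the Marshall–Lerner condition just mentioned: starting from balanced trade, a devaluation improves the trade balance only if the sum of the export and import price elasticities of demand exceeds unity:

```python
# Illustrative sketch of the Marshall-Lerner condition: starting from
# balanced trade, a small devaluation improves the trade balance
# (to first order) only if eta_x + eta_m > 1, where eta_x and eta_m are the
# price elasticities of demand for exports and imports.
def trade_balance_response(eta_x: float, eta_m: float) -> float:
    """Sign of the trade-balance change per unit devaluation, balanced-trade case."""
    return eta_x + eta_m - 1.0

for eta_x, eta_m in [(0.3, 0.4), (0.9, 0.8)]:
    effect = trade_balance_response(eta_x, eta_m)
    verdict = "improves" if effect > 0 else "worsens"
    print(f"eta_x={eta_x}, eta_m={eta_m}: devaluation {verdict} the balance")
```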

In modern theory, balance of payments deficits are assumed to be inherently temporary as the outcome of inter-temporal decisions by private agents concerning consumption. Deficits are the outcome of rational decisions to consume now and pay later. Deficits are merely a form of consumption smoothing, and present no difficulty for countries. And then there is the Panglossian view that the current account of the balance of payments is of no consequence at all because it simply reflects the desire of foreigners to invest in a country. Current account deficits should be seen as a sign of economic success, not as a weakness.

It is not difficult to question whether the balance of payments looks after itself, or whether it has no consequences for long-run growth. As far as the old gold standard mechanism is concerned, instead of the price levels of deficit and surplus countries moving in opposite directions, there was a tendency in the nineteenth century for the price levels of countries to move together in the same direction. In practice, it was not movements in relative prices that equilibrated the balance of payments but expenditure and output changes associated with interest rate differentials. Interest rates rose in deficit countries, which deflated demand and output, and fell in surplus countries, stimulating demand.

On the question of flexible exchange rates as an equilibrating device, a distinction first needs to be made between the nominal exchange rate and the real exchange rate. It is easy for countries to adjust the nominal rate, but not so easy to adjust the real rate, because competitors may “price to market” or retaliate, and domestic prices may rise with a nominal devaluation. Secondly, the Marshall–Lerner condition then has to be satisfied for the balance of payments to equilibrate. This may not be the case in the short run, or because of the nature of the goods exported and imported by a particular country. The international evidence over the almost half a century since the breakdown of the Bretton Woods fixed exchange rate system suggests that exchange rate changes are not an efficient balance of payments adjustment weapon. Currencies appreciate and depreciate and still massive global imbalances of payments remain.

On the inter-temporal substitution effect, it is wrong to give the impression that inter-temporal shifts in consumption behaviour do not have real effects, particularly if interest rates have to rise to finance deficits caused by more consumption in the present if countries do not want their exchange rate to depreciate. On the view that deficits are a sign of success, an important distinction needs to be made between types of capital inflows. If the capital flows are autonomous, such as foreign direct investment, the argument is plausible, but if they are “accommodating” in the form of loans from the banking system or the sale of securities to foreign governments and international organizations, the probable need to raise interest rates will again have real effects by reducing investment and output domestically.

Austrian School of Economics: The Praxeological Synthetic. Thought of the Day 135.0


Within Austrian economics, the a priori stance has dominated a tradition running from Carl Menger to Murray Rothbard. The idea here is that the basic structures of economy are entrenched in the more basic structures of human action as such. Nowhere is this more evident than in the work of Ludwig von Mises – his so-called ‘praxeology’, which rests on the fundamental axiom that individual human beings act, that is, on the primordial fact that individuals engage in conscious actions toward chosen goals, is built from the idea that all basic laws of economy can be derived apriorically from one premiss: the concept of human action. Of course, this concept is no simple concept, containing within itself purpose, product, time, scarcity of resources, etc. – so it would be fairer to say that economics lies as the implication of the basic schema of human action as such.

Even if the Austrian economists’ conception of the a priori is decidedly objectivist and anti-subjectivist, it is important to remark on their insistence on subjectivity within their ontological domain. The Austrian tradition is famous precisely for its emphasis on the role of subjectivity in economy. From Carl Menger onwards, the Austrians protest against the mainstream economic assumption that the economic agent in the market is fully rational, knows his own preferences in detail, has constant preferences over time, has access to all prices for a given commodity at a given moment, etc. Thus, von Mises’ famous criticism of socialist planned economy is built on this idea: the system of ever-changing prices in the market constitutes a dispersed knowledge about the conditions of resource allocation which it is a priori impossible for any single agent – let alone any central planner’s office – to possess. Thus, their conception of the objective a priori laws of the economic domain had, perhaps surprisingly, the implication that they warned against a too objectivist conception of economy, one not taking into account the limits of economic rationality stemming from the general limitations of the capacities of real subjects. Their ensuing liberalism is thus built on a priori conclusions about the relative unpredictability of economics, founded on the role played by subjective intentionality. For the same reason, Hayek ended up with a distinction between simple and complex processes, respectively, cutting across all empirical disciplines, where only the former permit precise, predictive, quantitative calculi based on mathematical modeling, while the latter permit only the recognition of patterns (which may also be mathematically modeled, to be sure, but without quantitative predictability). It is of paramount importance, though, to distinguish this emphasis on the ineradicable role of subjectivity in certain regional domains from Kantian-like ideas about the foundational role of subjectivity in the construction of knowledge as such. The Austrians are as much subjectivists in the former respect as they are objectivists in the latter. In the history of economics, the Austrians occupy a middle position, being against historicism on the one hand and against positivism on the other. Against the former, they insist that the a priori structures of economy transgress history, which does not possess the power to form institutions at random but only as constrained by a priori structures. And against the latter, they insist that the mere accumulation of empirical data subject to induction will never in itself give rise to the formation of theoretical insights. Structures of intelligible concepts are in all cases necessary for any understanding of empirical regularities – in so far, the Austrian a priori approach is tantamount to a non-skepticist version of the doctrine of ‘theory-ladenness’ of observations.

A late descendant of the Austrian tradition after its emigration to the Anglo-Saxon world (von Mises, Hayek, and Schumpeter were such emigrés) was the anarcho-liberal economist Murray Rothbard, and it is the inspiration from him which allows Barry Smith to articulate the principles underlying the Austrians as ‘fallibilistic apriorism’. Rothbard characterizes in a brief paper what he calls ‘Extreme Apriorism’ as follows:

there are two basic differences between the positivists’ model science of physics on the one hand, and the sciences dealing with human action on the other: the former permits experimental verification of the consequences of hypotheses, which the latter do not (or only to a limited degree, we may add); the former admits of no possibility of testing the premisses of hypotheses (like: what is gravity?), while the latter permit a rational investigation of the premisses of hypotheses (like: what is human action?). This state of affairs makes it possible for economics to derive its basic laws with absolute – a priori – certainty: in addition to the fundamental axiom – the existence of human action – only two empirical postulates are needed: ‘(1) the most fundamental variety of resources, both natural and human. From this follows directly the division of labor, the market, etc.; (2) less important, that leisure is a consumer good’. On this basis, it may, e.g., be inferred ‘that every firm aims always at maximizing its psychic profit’.

Rothbard brings up this example so as to counter traditional economists who would claim that the following proposition could be added as a corollary: ‘that every firm aims always at maximizing its money profit’. This cannot be inferred and is, according to Rothbard, an economic prejudice – the manager may, e.g., prefer for nepotistic reasons to employ his stupid brother even if that decreases the firm’s financial profit possibilities. This is an example of how the Austrians refute the basic premiss of absolute rationality in terms of maximal profit seeking. Given this basis, other immediate implications are:

the means-ends relationship, the time-structure of production, time-preference, the law of diminishing marginal utility, the law of optimum returns, etc.

Rothbard quotes Mises as seeing the fundamental axiom as a ‘Law of Thought’ – while he himself sees this as a much too Kantian way of expressing it, preferring instead the simple Aristotelian/Thomist idea of a ‘Law of Reality’. Rothbard furthermore insists that this doctrine is not inherently political – in order to arrive at the Austrians’ average liberalist political orientation, a preference for certain types of ends must be added to the a priori theory (such as the preference for life over death, abundance over poverty, etc.). This also displays the radicality of the Austrian approach: nothing is assumed about the content of human ends – this is why they will never subscribe to theories of Man as an economically rational agent or Man as a necessary economic egotist. All different ends meet and compete on the market – including both the desire for profit at one end and idealist, utopian, or altruist goals at the other. The principal interest in these features of economic theory is the high degree of awareness of the difference between the – extreme – synthetic a priori theory developed, on the one hand, and its incarnation in concrete empirical cases and their limiting conditions on the other.