Equilibrium Market Prices are Unique – Convexity and Concavity Utility Functions on a Linear Case. Note Quote + Didactics.


Consider a market consisting of a set B of buyers and a set A of divisible goods. Assume |A| = n and |B| = n′. We are given for each buyer i the amount ei of money she possesses and for each good j the amount bj of this good. In addition, we are given the utility functions of the buyers. Our critical assumption is that these functions are linear. Let uij denote the utility derived by i on obtaining a unit amount of good j. Thus if the buyer i is given xij units of good j, for 1 ≤ j ≤ n, then the happiness she derives is

∑j=1n uijxij —– (1)

Prices p1, . . . , pn of the goods are said to be market clearing prices if, after each buyer is assigned an optimal basket of goods relative to these prices, there is no surplus or deficiency of any of the goods. So, is it possible to compute such prices in polynomial time?

First observe that without loss of generality, we may assume that each bj is unit – by scaling the uij’s appropriately. The uij’s and ei’s are in general rational; by scaling appropriately, they may be assumed to be integral. We make the mild assumption that each good has a potential buyer, i.e., a buyer who derives nonzero utility from this good. Under this assumption, market clearing prices do exist.

It turns out that equilibrium allocations for Fisher’s linear case are captured as optimal solutions to a remarkable convex program, the Eisenberg–Gale convex program.

A convex program whose optimal solution is an equilibrium allocation must have as constraints the packing constraints on the xij’s. Furthermore, its objective function, which attempts to maximize utilities derived, should satisfy the following:

  1. If the utilities of any buyer are scaled by a constant, the optimal allocation remains unchanged.
  2. If the money of a buyer b is split among two new buyers whose utility functions are the same as that of b, then the sum of the optimal allocations of the new buyers should be an optimal allocation for b.

The money weighted geometric mean of buyers’ utilities satisfies both these conditions:

max (∏i∈B uiei)1/∑i ei —– (2)

Since the exponent 1/∑i ei is a constant, the following objective function is equivalent:

max ∏i∈B uiei —– (3)

Its log is used in the Eisenberg–Gale convex program:

maximize ∑i=1n′ ei log ui

subject to

ui = ∑j=1nuijxij ∀ i ∈ B

∑i=1n′ xij ≤ 1 ∀ j ∈ A

xij ≥ 0 ∀ i ∈ B, j ∈ A —– (4)

where xij is the amount of good j allocated to buyer i. Interpret the Lagrangian variables, say pj’s, corresponding to the second set of constraints (the packing constraints) as prices of the goods. The optimal xij’s and pj’s must satisfy the following (KKT) conditions:

    1. ∀ j ∈ A : pj ≥ 0
    2. ∀ j ∈ A : pj > 0 ⇒ ∑i∈B xij = 1
    3. ∀ i ∈ B, j ∈ A : uij/pj ≤ (∑j∈A uijxij)/ei
    4. ∀ i ∈ B, j ∈ A : xij > 0 ⇒ uij/pj = (∑j∈A uijxij)/ei

From these conditions, one can derive that an optimal solution to convex program (4) must satisfy the market clearing conditions.
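
As a concrete illustration, program (4) can be handed directly to a convex solver. The following minimal sketch (assuming the cvxpy modelling library and a conic solver that supports the exponential cone; the market size, utilities U and endowments e are made-up) solves the Eisenberg–Gale program for a small random Fisher market and reads the equilibrium prices pj off the dual variables of the packing constraints:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_buyers, n_goods = 4, 3
U = rng.uniform(1, 10, size=(n_buyers, n_goods))   # u_ij: utility of buyer i per unit of good j
e = rng.uniform(1, 5, size=n_buyers)                # e_i: money endowments

x = cp.Variable((n_buyers, n_goods), nonneg=True)   # x_ij: amount of good j given to buyer i
u = cp.sum(cp.multiply(U, x), axis=1)               # u_i = sum_j u_ij x_ij
packing = cp.sum(x, axis=0) <= 1                    # each good has unit supply

# Eisenberg-Gale objective: maximize sum_i e_i log u_i
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(e, cp.log(u)))), [packing])
prob.solve()

prices = packing.dual_value                         # Lagrange multipliers p_j, read as prices
print("equilibrium prices:", np.round(prices, 4))
print("money spent per buyer:", np.round(x.value @ prices, 4))
print("endowments:           ", np.round(e, 4))
```

With positive utilities, the printed per-buyer spending should reproduce the endowments e, which is the market-clearing property derived above.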

For the linear case of Fisher’s model:

  1. If each good has a potential buyer, equilibrium exists.
  2. The set of equilibrium allocations is convex.
  3. Equilibrium utilities and prices are unique.
  4. If all uij’s and ei’s are rational, then equilibrium allocations and prices are also rational. Moreover, they can be written using polynomially many bits in the length of the instance.

Corresponding to good j there is a buyer i such that uij > 0. By the third condition as stated above,

pj ≥ eiuij/∑juijxij > 0

By the second condition, ∑i∈B xij = 1, implying that prices of all goods are positive and all goods are fully sold. The third and fourth conditions imply that if buyer i gets good j, then j must be among the goods that give buyer i maximum utility per unit money spent at current prices. Hence each buyer gets only a bundle consisting of her most desired goods, i.e., an optimal bundle.

The fourth condition is equivalent to

∀ i ∈ B, j ∈ A : eiuijxij/∑j∈Auijxij = pjxij

Summing over all j

∀ i ∈ B : ei ∑j∈A uijxij/∑j∈A uijxij = ∑j∈A pjxij

⇒ ∀ i ∈ B : ei = ∑jpjxij

Hence the money of each buyer is fully spent, completing the proof that a market equilibrium exists. Since each equilibrium allocation is an optimal solution to the Eisenberg–Gale convex program, the set of equilibrium allocations must form a convex set. Since log is a strictly concave function, if there is more than one equilibrium, the utility derived by each buyer must be the same in all equilibria. This fact, together with the fourth condition, implies that the equilibrium prices are unique.


Game’s Degeneracy Running Proportional to Polytope’s Redundancy.

For a given set of vertices V ⊆ RK a Polytope P can be defined as the following set of points:

P = {∑i=1|V|λivi ∈ RK | ∑i=1|V|λi = 1; λi ≥ 0; vi ∈ V}


A polytope can equivalently be described as an intersection of half spaces, each bounded by a hyperplane that separates the space into two distinct regions. For a matrix M ∈ Rm×n and a vector b ∈ Rm, the polytope P is then defined as the set of points

P = {x ∈ Rn | Mx ≤ b}

Switching over to a two-player game (A, B), with A, B ∈ Rm×n>0 (both payoff matrices positive), the row/column best response polytopes P and Q are defined by:

P = {x ∈ Rm | x ≥ 0; xB ≤ 1}

Q = {y ∈ Rn | Ay ≤ 1; y ≥ 0}

The polytope P corresponds to the set of points which, considered as scaled row strategies, impose an upper bound of one on the payoff to each pure strategy of the column player playing against them.

An affine combination of points z1,…, zk in some Euclidean space is of the form ∑i=1k λizi, where λ1, …, λk are reals with ∑i=1k λi = 1. It is called a convex combination if λi ≥ 0 ∀ i. A set of points is convex if it is closed under forming convex combinations. Given points are affinely independent if none of these points is an affine combination of the others. A convex set has dimension d iff it has d + 1, but no more, affinely independent points.

A polyhedron P in Rd is a set {z ∈ Rd | Cz ≤ q} for some matrix C and vector q. It is called full-dimensional if it has dimension d. It is called a polytope if it is bounded. A face of P is a set {z ∈ P | cz = q0} for some c ∈ Rd, q0 ∈ R, such that the inequality cz ≤ q0 holds for all z in P. A vertex of P is the unique element of a zero-dimensional face of P. An edge of P is a one-dimensional face of P. A facet of a d-dimensional polyhedron P is a face of dimension d − 1. It can be shown that any nonempty face F of P can be obtained by turning some of the inequalities defining P into equalities, which are then called binding inequalities. That is, F = {z ∈ P | ciz = qi, i ∈ I}, where ciz ≤ qi for i ∈ I are some of the rows in Cz ≤ q. A facet is characterized by a single binding inequality which is irredundant; i.e., the inequality cannot be omitted without changing the polyhedron. A d-dimensional polyhedron P is called simple if no point belongs to more than d facets of P, which is true if there are no special dependencies between the facet-defining inequalities. The “best response polyhedron” of a player is the set of that player’s mixed strategies together with the “upper envelope” of expected payoffs (and any larger payoffs) to the other player.

Nondegeneracy of a bimatrix game (A, B) can be stated in terms of the polytopes P and Q as follows: no point in P has more than m labels, and no point in Q has more than n labels. (If x ∈ P has support of size k and L is the set of labels of x, then |L ∩ M| = m − k, so |L| > m implies x has more than k best responses in L ∩ N.) Then P and Q are simple polytopes, because a point of P, say, that is on more than m facets would have more than m labels. Even if P and Q are simple polytopes, the game can be degenerate if the description of a polytope is redundant in the sense that some inequality can be omitted, but nevertheless is sometimes binding. This occurs if a player has a pure strategy that is weakly dominated by, or payoff equivalent to, some other mixed strategy. Non-simple polytopes or redundant inequalities of this kind do not occur for “generic” payoffs; this illustrates the assumption of nondegeneracy from a geometric viewpoint. (A strictly dominated strategy may occur generically, but it defines a redundant inequality that is never binding, so this does not lead to a degenerate game.) Because the game is nondegenerate, only vertices of P can have m labels, and only vertices of Q can have n labels. Otherwise, a point of P with m labels that is not a vertex would lie on a higher-dimensional face, and a vertex of that face, which is a vertex of P, would have additional labels. Consequently, only vertices of P and Q have to be inspected as possible equilibrium strategies. Algorithmically: the input is a nondegenerate bimatrix game, the output is the set of Nash equilibria of the game, and the method is, for each vertex x of P − {0} and each vertex y of Q − {0}, if (x, y) is completely labeled, to output the Nash equilibrium (x · 1/(1ᵀx), y · 1/(1ᵀy)).
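
The vertex-inspection method just described can be sketched directly. The following toy implementation (illustrative only: it brute-forces the vertices of P and Q from their binding inequalities, so it is meant only for small nondegenerate games with positive payoffs) collects the labels of each vertex and reports the completely labeled pairs, rescaled to mixed strategies, as Nash equilibria:

```python
import numpy as np
from itertools import combinations

def polytope_vertices(C, q, dim, tol=1e-9):
    """Vertices of {z : C z <= q} together with the set of binding-row labels at each vertex."""
    verts = []
    for rows in combinations(range(len(C)), dim):
        Csub = C[list(rows)]
        if abs(np.linalg.det(Csub)) < tol:
            continue
        z = np.linalg.solve(Csub, q[list(rows)])
        if np.all(C @ z <= q + tol):                       # keep only feasible intersection points
            labels = {k for k in range(len(C)) if abs(C[k] @ z - q[k]) < tol}
            if not any(np.allclose(z, v) for v, _ in verts):
                verts.append((z, labels))
    return verts

def nash_equilibria(A, B):
    """Completely labeled vertex pairs of the best response polytopes P and Q."""
    m, n = A.shape
    # P = {x >= 0, xB <= 1}: rows -x_i <= 0 carry label i, rows (B^T x)_j <= 1 carry label m+j
    CP = np.vstack([-np.eye(m), B.T]); qP = np.concatenate([np.zeros(m), np.ones(n)])
    # Q = {Ay <= 1, y >= 0}: rows (Ay)_i <= 1 carry label i, rows -y_j <= 0 carry label m+j
    CQ = np.vstack([A, -np.eye(n)]); qQ = np.concatenate([np.ones(m), np.zeros(n)])
    all_labels = set(range(m + n))
    eqs = []
    for x, Lx in polytope_vertices(CP, qP, m):
        if np.allclose(x, 0):
            continue                                       # skip the origin of P
        for y, Ly in polytope_vertices(CQ, qQ, n):
            if not np.allclose(y, 0) and Lx | Ly == all_labels:
                eqs.append((x / x.sum(), y / y.sum()))     # rescale to mixed strategies
    return eqs

# Example: a 2x2 coordination game with positive payoffs
A = np.array([[3.0, 1.0], [1.0, 3.0]]); B = np.array([[3.0, 1.0], [1.0, 3.0]])
for x, y in nash_equilibria(A, B):
    print(np.round(x, 3), np.round(y, 3))
```

For the coordination game above this prints the two pure equilibria and the mixed equilibrium (0.5, 0.5) against (0.5, 0.5).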

Symmetrical – Asymmetrical Dialectics Within Catastrophical Dynamics. Thought of the Day 148.0


Catastrophe theory has been developed as a deterministic theory for systems that may respond to continuous changes in control variables by a discontinuous change from one equilibrium state to another. A key idea is that the system under study is driven towards an equilibrium state. The behavior of the dynamical systems under study is completely determined by a so-called potential function, which depends on behavioral and control variables. The behavioral, or state, variable describes the state of the system, while the control variables determine the behavior of the system. The dynamics under catastrophe models can become extremely complex, and according to the classification theory of Thom, there are seven different families based on the number of control and dependent variables.

Let us suppose that the process yt evolves over t = 1,…, T as

dyt = −(dV(yt; α, β)/dyt) dt —– (1)

where V(yt; α, β) is the potential function describing the dynamics of the state variable yt, controlled by the parameters α and β determining the system. When the right-hand side of (1) equals zero, −dV(yt; α, β)/dyt = 0, the system is in equilibrium. If the system is at a non-equilibrium point, it will move back to its equilibrium, where the potential function takes its minimum value with respect to yt. While the concept of a potential function is very general, i.e. it can be quadratic, yielding equilibrium on a simple flat response surface, one of the most applied potential functions in the behavioral sciences, the cusp potential function, is defined as

−V(yt; α, β) = −(1/4)yt4 + (1/2)βyt2 + αyt —– (2)

with equilibria at

−dV(yt; α, β)/dyt = −yt3 + βyt + α —– (3)

being equal to zero. The two dimensions of the control space, α and β, further depend on realizations from i = 1 . . . , n independent variables xi,t. Thus it is convenient to think about them as functions

αx = α0 + α1x1,t +…+ αnxn,t —– (4)

βx = β0 + β1x1,t +…+ βnxn,t —– (5)

The control functions αx and βx are called the normal and splitting factors, or the asymmetry and bifurcation factors, respectively, and they determine the predicted values of yt given xi,t. This means that for each combination of values of the independent variables there may be up to three predicted values of the state variable, given by the roots of

−dV(yt; αx, βx)/dyt = −yt3 + βxyt + αx = 0 —– (6)

This equation has one solution if

δx = (1/4)αx2 − (1/27)βx3 —– (7)

is greater than zero, δx > 0, and three solutions if δx < 0. This construction can serve as a statistic for bimodality, one of the catastrophe flags. The set of values for which the discriminant is equal to zero, δx = 0, is the bifurcation set, which determines the set of singularity points in the system. In the case of three roots, the central root is called an “anti-prediction” and is the least probable state of the system. Inside the bifurcation set, when δx < 0, the surface predicts two possible values of the state variable, which means that the state variable is bimodal in this case.
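
As a quick numerical check of (6) and (7) (a sketch only; the parameter values are arbitrary), one can count the real roots of the cubic and compare them with the sign of δx:

```python
import numpy as np

def cusp_equilibria(alpha, beta):
    """Real roots of -y^3 + beta*y + alpha = 0 and the Cardano discriminant (7)."""
    roots = np.roots([-1.0, 0.0, beta, alpha])          # coefficients of -y^3 + 0*y^2 + beta*y + alpha
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    delta = 0.25 * alpha**2 - beta**3 / 27.0             # delta_x of equation (7)
    return real, delta

for alpha, beta in [(0.5, -1.0), (0.0, 3.0), (0.1, 3.0)]:
    eq, delta = cusp_equilibria(alpha, beta)
    print(f"alpha={alpha:4.1f} beta={beta:4.1f}  delta={delta:+.3f}  equilibria={np.round(eq, 3)}")
# delta > 0: one equilibrium (unimodal regime); delta < 0: three equilibria,
# the middle root being the 'anti-prediction' inside the bifurcation set.
```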


Most systems in the behavioral sciences are subject to noise stemming from measurement errors or from the inherently stochastic nature of the system under study. Thus, for real-world applications, it is necessary to add non-deterministic behavior to the system. As catastrophe theory has primarily been developed to describe deterministic systems, it may not be obvious how to extend the theory to stochastic systems. An important bridge is provided by Itô stochastic differential equations, which establish a link between the potential function of a deterministic catastrophe system and the stationary probability density function of the corresponding stochastic process. Adding a stochastic Gaussian white noise term to the system gives

dyt = −(dV(yt; αx, βx)/dyt) dt + σyt dWt —– (8)

where −dV(yt; αx, βx)/dyt is the deterministic term, or drift function, representing the equilibrium state of the cusp catastrophe, σyt is the diffusion function and Wt is a Wiener process. When the diffusion function is constant, σyt = σ, and the measurement scale is not nonlinearly transformed, the stochastic potential function is proportional to the deterministic potential function, and the probability distribution function corresponding to the solution of (8) converges to the probability distribution function of a limiting stationary stochastic process, as the dynamics of yt are assumed to be much faster than changes in xi,t. The probability density that describes the distribution of the system’s states at any t is then

fs(y|x) = ψ exp((−1/4)y4 + (βx/2)y2 + αxy)/σ —– (9)

The constant ψ normalizes the probability distribution function so that its integral over the entire range equals one. As the bifurcation factor βx changes from negative to positive, fs(y|x) changes its shape from unimodal to bimodal. On the other hand, αx causes asymmetry in fs(y|x).
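
A small sketch of (9) (illustrative only; the normalising constant ψ is obtained by numerical quadrature, σ is set to 1, and the trailing /σ of (9) is read as dividing the whole exponent) makes the unimodal-to-bimodal transition visible as βx crosses zero:

```python
import numpy as np
from scipy.integrate import quad

def cusp_density(alpha_x, beta_x, sigma=1.0):
    """Stationary density (9) of the stochastic cusp, normalised numerically."""
    unnorm = lambda y: np.exp((-0.25 * y**4 + 0.5 * beta_x * y**2 + alpha_x * y) / sigma)
    psi = 1.0 / quad(unnorm, -np.inf, np.inf)[0]
    return lambda y: psi * unnorm(y)

y = np.linspace(-3, 3, 7)
for beta_x in (-1.0, 0.0, 2.0):                 # bifurcation factor moving from negative to positive
    f = cusp_density(alpha_x=0.0, beta_x=beta_x)
    print(f"beta_x={beta_x:+.1f}  f(y) on a coarse grid:", np.round(f(y), 3))
# For beta_x <= 0 the density is unimodal around 0; for beta_x > 0 it becomes bimodal.
# A nonzero alpha_x skews the density toward one of the modes.
```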

Game Theory and Finite Strategies: Nash Equilibrium Takes Quantum Computations to Optimality.


Finite games of strategy, within the framework of noncooperative quantum game theory, can be approached from finite chain categories, where by a finite chain category is understood a category C(n;N) that is generated by n objects and N morphic chains, called primitive chains, linking the objects in a specific order, such that there is a single labelling. C(n;N) is, thus, generated by N primitive chains of the form:

x0 →f1 x1 →f2 x2 → … xn-1 →fn xn —– (1)

A finite chain category is interpreted as a finite game category as follows: to each morphism in a chain xi-1 →fi xi, there corresponds a strategy played by a player that occupies position i; in this way, a chain corresponds to a sequence of strategic choices available to the players. A quantum formal theory, for a finite game category C(n;N), is defined as a formal structure such that each morphic fundament fi of the morphic relation xi-1 →fi xi is a tuple of the form:

fi := (Hi, Pi, Pˆfi) —– (2)

where Hi is the i-th player’s Hilbert space, Pi is a complete set of projectors onto a basis that spans the Hilbert space, and Pˆfi ∈ Pi. This structure is interpreted as follows: from the strategic Hilbert space Hi, given the pure strategies’ projectors Pi, the player chooses to play Pˆfi .

From the morphic fundament (2), an assumption has to be made on composition in the finite category; we assume the following tensor product composition operation:

fj ◦ fi = fji —– (3)

fji = (Hji = Hj ⊗ Hi, Pji = Pj ⊗ Pi, Pˆfji = Pˆfj ⊗ Pˆfi) —– (4)

From here, a morphism for a game choice path could be introduced as:

x0 →fn…21 xn —– (5)

fn…21 = (HG = ⊗i=n1 Hi, PG = ⊗i=n1 Pi, Pˆfn…21 = ⊗i=n1 Pˆfi) —– (6)

in this way, the choices along the chain of players are completely encoded in the tensor product projectors Pˆfn…21. There are N = ∏i=1n dim(Hi) such morphisms, a number that coincides with the number of primitive chains in the category C(n;N).

Each projector can be addressed as a strategic marker of a game path, and leads to the matrix form of an Arrow-Debreu security; therefore, we call it a game Arrow-Debreu projector. While, in traditional financial economics, the Arrow-Debreu securities pay one unit of numeraire per state of nature, in the present game setting they pay one unit of payoff per game path at the beginning of the game. However far this analogy may be taken, it must be handled with some care, since these are not securities; rather, they represent, projectively, strategic choice chains in the game, so that the price of a projector Pˆfn…21 (the Arrow-Debreu price) is the price of a strategic choice and, therefore, the result of the strategic evaluation of the game by the different players.

Now, let |Ψ⟩ be a ket vector in the game’s Hilbert space HG, such that:

|Ψ⟩ = ∑fn…21 ψ(fn…21)|fn…21⟩ —– (7)

where ψ(fn…21) is the Arrow-Debreu price amplitude, with the condition:

∑fn…21 |ψ(fn…21)|2 = D —– (8)

for D > 0; then, the |ψ(fn…21)|2 correspond to the Arrow-Debreu prices for the game paths fn…21 and D is the discount factor in riskless borrowing, defining an economic scale for temporal connections between one unit of payoff now and one unit of payoff at the end of the game, such that one unit of payoff now can be capitalized to the end of the game (when the decision takes place) through multiplication by 1/D, while one unit of payoff at the end of the game can be discounted to the beginning of the game through multiplication by D.

In this case, the unit operator 1ˆ = ∑fn…21 Pˆfn…21 has a profile similar to that of a bond in standard financial economics, with ⟨Ψ|1ˆ|Ψ⟩ = D. On the other hand, the general payoff system, for each player, can be addressed from an operator expansion:

πiˆ = ∑fn…21 πi (fn…21) Pˆfn…21 —– (9)

where each weight πi(fn…21) corresponds to quantities associated with each Arrow-Debreu projector that can be interpreted as similar to the quantities of each Arrow-Debreu security for a general asset. Multiplying each weight by the corresponding Arrow-Debreu price, one obtains the payoff value for each alternative such that the total payoff for the player at the end of the game is given by:

⟨Ψ|πiˆ|Ψ⟩/D = ∑fn…21 πi(fn…21) |ψ(fn…21)|2/D —– (10)

We can discount the total payoff to the beginning of the game using the discount factor D, leading to the present value payoff for the player:

PVi = ⟨Ψ|πiˆ|Ψ⟩ = D ∑fn…21 πi(fn…21) |ψ(fn…21)|2/D —– (11)

where πi(fn…21) represents quantities, while the ratio |ψ(fn…21)|2/D represents the future value at the decision moment of the quantum Arrow-Debreu prices (capitalized quantum Arrow-Debreu prices). Introducing the ket

|Q⟩ ∈ HG, such that:

|Q⟩ = 1/√D |Ψ⟩ —– (12)

then, |Q⟩ is a normalized ket for which the price amplitudes are expressed in terms of their future value. Replacing in (11), we have:

PVi = D ⟨Q|πˆi|Q⟩ —– (13)

In the quantum game setting, the capitalized Arrow-Debreu price amplitudes ⟨fn…21|Q⟩ become quantum strategic configurations, resulting from the strategic cognition of the players with respect to the game. Given |Q⟩, each player’s strategic valuation of each pure strategy can be obtained by introducing the projector chains:

Cˆfi = ∑fn…i+1fi-1…1 Pˆfn…i+1 ⊗ Pˆfi ⊗ Pˆfi-1…1 —– (14)

with ∑fi Cˆfi = 1ˆ. For each alternative choice of player i, the chain sums over all of the other choice paths for the rest of the players; such chains are called coarse-grained chains in the decoherent histories approach to quantum mechanics. Following this approach, one may introduce the pricing functional from the expression for the decoherence functional:

D(fi, gi : |Q⟩) = ⟨Q| Cˆfi Cˆgi|Q⟩ —– (15)

we then have, for each player,

D (fi, gi : |Q⟩) = 0, ∀ fi ≠ gi —– (16)

this is the usual quantum mechanics condition for additivity of measure (also known as the decoherence condition), which means that the capitalized prices for two different alternative choices of player i are additive. Then, we can work with the pricing functional D(fi, fi : |Q⟩) as giving, for each player, an Arrow-Debreu capitalized price associated with the pure strategy expressed by fi. Given that (16) is satisfied, each player’s quantum Arrow-Debreu pricing matrix, defined analogously to the decoherence matrix from the decoherent histories approach, is a diagonal matrix and can be expanded as a linear combination of the projectors for each player’s pure strategies as follows:

Di(|Q⟩) = ∑fi D(fi, fi : |Q⟩) Pˆfi —– (17)

which has the mathematical expression of a mixed strategy. Thus, each player chooses from all of the possible quantum computations, the one that maximizes the present value payoff function with all the other strategies held fixed, which is in agreement with Nash.
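
To tie the pieces together numerically, the following sketch (entirely illustrative: a two-player game with two pure strategies each, an arbitrarily chosen normalised |Q⟩, made-up payoff weights πi and discount factor D) builds the tensor-product Arrow-Debreu projectors of (4) and (6), checks that the capitalised prices |⟨f|Q⟩|2 sum to one, verifies the decoherence condition (16) for the coarse-grained chains (14), and evaluates the present-value payoff (13) for one player:

```python
import numpy as np

def proj(dim, k):
    """Projector |k><k| onto basis state k of a dim-dimensional strategic Hilbert space."""
    P = np.zeros((dim, dim)); P[k, k] = 1.0
    return P

d1, d2 = 2, 2                                  # two players, two pure strategies each
D = 0.9                                        # discount factor (illustrative)

# Game Arrow-Debreu projectors P_{f2 f1} = P_{f2} (x) P_{f1}  (cf. (4), (6))
paths = [(f2, f1) for f2 in range(d2) for f1 in range(d1)]
P_AD = {fp: np.kron(proj(d2, fp[0]), proj(d1, fp[1])) for fp in paths}

# A normalised strategic configuration |Q>, price amplitudes in future-value terms (12)
Q = np.array([0.7, 0.1, 0.5, np.sqrt(1 - 0.7**2 - 0.1**2 - 0.5**2)])

# Capitalised Arrow-Debreu prices |<f|Q>|^2 sum to one, since the projectors resolve the identity
cap_prices = {fp: Q @ P_AD[fp] @ Q for fp in paths}
assert np.isclose(sum(cap_prices.values()), 1.0)

# Coarse-grained chains C_{f1} for player 1 sum over player 2's choices (cf. (14))
C1 = {f1: sum(np.kron(proj(d2, f2), proj(d1, f1)) for f2 in range(d2)) for f1 in range(d1)}
# Decoherence / pricing functional (15)-(16): off-diagonal terms vanish for projector chains
for f1 in range(d1):
    for g1 in range(d1):
        if f1 != g1:
            assert np.isclose(Q @ C1[f1] @ C1[g1] @ Q, 0.0)

# Payoff operator (9) for player 1 with illustrative payoff weights, and PV via (13)
pi1 = {(0, 0): 3.0, (0, 1): 0.0, (1, 0): 5.0, (1, 1): 1.0}
pi1_hat = sum(pi1[fp] * P_AD[fp] for fp in paths)
PV1 = D * (Q @ pi1_hat @ Q)
print("capitalised AD prices:", {fp: round(v, 3) for fp, v in cap_prices.items()})
print("present-value payoff of player 1:", round(PV1, 3))
```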

Adjacency of the Possible: Teleology of Autocatalysis. Thought of the Day 140.0


Given a network of catalyzed chemical reactions, a (sub)set R of such reactions is called:

  1. Reflexively autocatalytic (RA) if every reaction in R is catalyzed by at least one molecule involved in any of the reactions in R;
  2. F-generated (F) if every reactant in R can be constructed from a small “food set” F by successive applications of reactions from R;
  3. Reflexively autocatalytic and F-generated (RAF) if it is both RA and F.

The food set F contains molecules that are assumed to be freely available in the environment. Thus, an RAF set formally captures the notion of “catalytic closure”, i.e., a self-sustaining set supported by a steady supply of (simple) molecules from some food set….
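
These three conditions translate almost directly into the iterative reduction used in the RAF literature: repeatedly compute what the food set can generate and discard reactions that are not generated from, or not catalysed by, molecules available in that closure. A minimal sketch (assuming a made-up toy encoding of reactions as (reactants, products, catalysts) tuples; molecule names are invented) is:

```python
def closure(food, reactions):
    """Molecules constructible from the food set by repeatedly firing reactions
    whose reactants are all already available (the F-generated condition)."""
    available = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= available and not set(products) <= available:
                available |= set(products)
                changed = True
    return available

def max_raf(food, reactions):
    """Iteratively discard reactions that are not F-generated or not catalysed by
    an available molecule; the fixed point is the maximal RAF subset (possibly empty)."""
    R = list(reactions)
    while True:
        available = closure(food, R)
        kept = [r for r in R
                if set(r[0]) <= available and any(c in available for c in r[2])]
        if len(kept) == len(R):
            return kept
        R = kept

# Toy example: food set {a, b}; 'ab' catalyses its own production and that of 'abb'.
food = {"a", "b"}
reactions = [(("a", "b"), ("ab",), ("ab",)),        # a + b -> ab, catalysed by ab
             (("ab", "b"), ("abb",), ("ab",)),      # ab + b -> abb, catalysed by ab
             (("abb", "a"), ("x",), ("zzz",))]      # never catalysed: dropped from the RAF
print(max_raf(food, reactions))                     # the first two reactions form an RAF
```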

Stuart Kauffman begins with the Darwinian idea of the origin of life in a biological ‘primordial soup’ of organic chemicals and investigates the possibility of one chemical substance catalyzing the reaction of two others, forming new reagents in the soup. Such catalyses may, of course, form chains, so that one reagent catalyzes the formation of another catalyzing another, etc., and self-sustaining loops of reaction chains are an evident possibility in the appropriate chemical environment. A statistical analysis would reveal that such catalytic reactions may form interdependent networks when the rate of catalyzed reactions per molecule approaches one, creating a self-organizing chemical cycle which he calls an ‘autocatalytic set’. When the rate of catalyses per reagent is low, only small local reaction chains form, but as the rate approaches one, the reaction chains in the soup suddenly ‘freeze’ so that what was a group of chains or islands in the soup now connects into one large interdependent network, constituting an ‘autocatalytic set’. Such an interdependent reaction network constitutes the core of the body definition unfolding in Kauffman, and its cyclic character forms the basic precondition for self-sustainment. An ‘autonomous agent’ is an autocatalytic set able to reproduce and to undertake at least one thermodynamic work cycle.

This definition implies two things: reproduction possibility, and the appearance of completely new, interdependent goals in work cycles. The latter idea requires the ability of the autocatalytic set to save energy in order to spend it in its own self-organization, in its search for reagents necessary to uphold the network. These goals evidently introduce a – restricted, to be sure – teleology defined simply by the survival of the autocatalytic set itself: actions supporting this have a local teleological character. Thus, the autocatalytic set may, as it evolves, enlarge its cyclic network by recruiting new subcycles supporting and enhancing it in a developing structure of subcycles and sub-sub-cycles. 

Kauffman proposes that the concept of ‘autonomous agent’ implies a whole new cluster of interdependent concepts. Thus, the autonomy of the agent is defined by ‘catalytic closure’ (any reaction in the network demanding catalysis will get it) which is a genuine Gestalt property in the molecular system as a whole – and thus not in any way derivable from the chemistry of single chemical reactions alone.

Kauffman’s definitions on the basis of speculative chemistry thus entail not only the Kantian cyclic structure, but also the primitive perception and action phases of Uexküll’s functional circle. Thus, Kauffman’s definition of the organism in terms of an ‘autonomous agent’ basically builds on an Uexküllian intuition, namely the idea that the most basic property in a body is metabolism: the constrained, organizing processing of high-energy chemical material and the correlated perception and action performed to localize and utilize it – all of this constituting a metabolic cycle coordinating the organism’s in- and outside, defining teleological action. Perception and action phases are so to speak the extension of the cyclical structure of the closed catalytical set to encompass parts of its surroundings, so that the circle of metabolism may only be completed by means of successful perception and action parts.

The evolution of autonomous agents is taken as the empirical basis for the hypothesis of a general thermodynamic regularity based on non-ergodicity: the Big Bang universe (and, consequently, the biosphere) is not at equilibrium and will not reach equilibrium during the lifetime of the universe. This gives rise to Kauffman’s idea of the ‘adjacent possible’. At a given point in evolution, one can define the set of chemical substances which do not exist in the universe – but which are at a distance of only one chemical reaction from a substance already existing in the universe. Biological evolution has, evidently, led to an enormous growth of types of organic macromolecules, and new such substances come into being every day. Maybe there is a sort of chemical potential leading from the actually realized substances into the adjacent possible which is in some sense driving the evolution? In any case, Kauffman advances the hypothesis that the biosphere as such is supercritical in the sense that there is, in general, more than one reaction catalyzed by each reagent. Cells, in order not to be destroyed by this chemical storm, must be internally subcritical (even if close to the critical boundary). But if the biosphere as such is, in fact, supercritical, then this distinction seemingly a priori necessitates the existence of a boundary of the agent, protecting it against the environment.

Econophysics: Financial White Noise Switch. Thought of the Day 115.0


What is the cause of large market fluctuations? Some economists blame irrationality behind the fat-tail distribution. Some economists observed that social psychology might create market fads and panics, which can be modeled by collective behavior in statistical mechanics. For example, a bi-modular distribution was discovered from empirical data in option prices. One possible mechanism of polarized behavior is collective action, studied in physics and social psychology. A sudden regime switch or phase transition may occur between uni-modular and bi-modular distributions when a field parameter changes across some threshold. The Ising model in equilibrium statistical mechanics was borrowed to study social psychology. Its phase transition from uni-modular to bi-modular distribution describes statistical features when a stable society turns into a divided society. The problem with the Ising model is that its key parameter, the social temperature, has no operational definition in a social system. A better alternative parameter is the intensity of social interaction in collective action.

A difficult issue in business cycle theory is how to explain the recurrent features of business cycles that are widely observed in macro and financial indexes. The problem is that business cycles are not strictly periodic and not truly random. Their correlations are not short, like those of a random walk, and they have multiple frequencies that change over time. Therefore, all kinds of mathematical models have been tried in business cycle theory, including deterministic, stochastic, linear and nonlinear models. We outline economic models in terms of their base function, including white noise with short correlations, persistent cycles with long correlations, and color chaos with erratic amplitude and a narrow frequency band, like a biological clock.

 


The steady state of probability distribution function in the Ising Model of Collective Behavior with h = 0 (without central propaganda field). a. Uni-modular distribution with low social stress (k = 0). Moderate stable behavior with weak interaction and high social temperature. b. Marginal distribution at the phase transition with medium social stress (k = 2). Behavioral phase transition occurs between stable and unstable society induced by collective behavior. c. Bi-modular distribution with high social stress (k = 2.5). The society splits into two opposing groups under low social temperature and strong social interactions in unstable society. 

Deterministic models are used by Keynesian economists for the endogenous mechanism of business cycles, as in the accelerator-multiplier model. Stochastic models are exemplified by the Frisch model of noise-driven cycles, which attributes business fluctuations to external shocks as their driving force. Since the 1980s, the discovery of economic chaos and the application of statistical mechanics have provided more advanced models for describing business cycles. Graphically,


The steady state of probability distribution function in socio-psychological model of collective choice. Here, “a” is the independent parameter; “b” is the interaction parameter. a Centered distribution with b < a (denoted by short dashed curve). It happens when independent decision rooted in individualistic orientation overcomes social pressure through mutual communication. b Horizontal flat distribution with b = a (denoted by long dashed line). Marginal case when individualistic orientation balances the social pressure. c Polarized distribution with b > a (denoted by solid line). It occurs when social pressure through mutual communication is stronger than independent judgment. 


Numerical autocorrelations from time series generated by random noise and a harmonic wave. The solid line is white noise. The broken line is a sine wave with period P = 1.

Linear harmonic cycles with a unique frequency have been introduced in business cycle theory. The auto-correlations from a harmonic cycle and from white noise are shown in the figure above. The auto-correlation function of a harmonic cycle is a cosine wave. The amplitude of the cosine wave is slightly decayed because of the limited number of data points in the numerical experiment. The auto-correlations of a random series form an erratic series with a rapid decay from one to residual fluctuations in the numerical calculation. The auto-regressive (AR) model in discrete time combines white noise terms to simulate the short-term auto-correlations found in empirical data.
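
The contrast is easy to reproduce numerically (a sketch only; the series length and sine period are arbitrary choices): the sample autocorrelation of a harmonic cycle remains a cosine, while that of white noise collapses to residual fluctuations after lag zero.

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation function of a 1-D series up to max_lag."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf[:max_lag + 1] / acf[0]

rng = np.random.default_rng(1)
T, P = 2000, 50                                   # series length, sine period (arbitrary)
t = np.arange(T)
noise = rng.standard_normal(T)                    # white noise
wave = np.sin(2 * np.pi * t / P)                  # harmonic cycle

lags = [0, 12, 25, 50, 100]
print("white noise ACF:", np.round(autocorr(noise, 100)[lags], 2))   # ~1 at lag 0, then near 0
print("harmonic ACF:   ", np.round(autocorr(wave, 100)[lags], 2))    # a slowly decaying cosine
```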

Deterministic models of chaos can be classified into white chaos and color chaos. White chaos is generated by nonlinear difference equations in discrete time, such as the one-dimensional logistic map and the two-dimensional Hénon map. Its autocorrelations and power spectra look like white noise. Its correlation dimension can be less than one. The white chaos model is simple in mathematical analysis but rarely used in empirical analysis, since it needs an intrinsic time unit.

Color chaos is generated by nonlinear differential equations in continuous time, such as the three-dimensional Lorenz model and one-dimensional delay-differential models in biology and economics. Its autocorrelations look like a decayed cosine wave, and its power spectrum resembles a combination of harmonic cycles and white noise. The correlation dimension is between one and two for 3D differential equations, and varies for delay-differential equations.


History shows the remarkable resilience of a market that experienced a series of wars and crises. The related issue is why the economy can recover from severe damage and from states far from equilibrium. Mathematically speaking, we may examine regime stability under parameter change. One major weakness of the linear oscillator model is that the regime of periodic cycles is fragile, or only marginally stable, under changing parameters. Only a nonlinear oscillator model is capable of generating resilient cycles within a finite area under changing parameters. The typical example of a linear model is the Samuelson model of the multiplier-accelerator. Linear stochastic models have a similar problem to linear deterministic models. For example, the so-called unit-root solution occurs only at the borderline of the unit root. If a small parameter change leads across the unit circle, the stochastic solution will fall into a damped (inside the unit circle) or explosive (outside the unit circle) solution.
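
The resilience point can be illustrated with a toy comparison (a sketch only, taking the van der Pol equation as a stand-in nonlinear oscillator; parameter values are arbitrary): a small perturbation of the damping makes the linear cycle decay or explode, while the nonlinear oscillator settles back onto a limit cycle of roughly the same amplitude.

```python
import numpy as np
from scipy.integrate import solve_ivp

def linear(t, z, damping):
    """Damped linear oscillator: x'' + damping*x' + x = 0."""
    x, v = z
    return [v, -x - damping * v]

def van_der_pol(t, z, mu):
    """Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0 (a nonlinear limit cycle)."""
    x, v = z
    return [v, mu * (1 - x**2) * v - x]

t_span, t_eval = (0, 100), np.linspace(60, 100, 4000)   # look at the long-run behaviour
for eps in (-0.05, 0.0, 0.05):                          # small parameter perturbations
    lin = solve_ivp(linear, t_span, [1.0, 0.0], args=(eps,), t_eval=t_eval, rtol=1e-8)
    vdp = solve_ivp(van_der_pol, t_span, [1.0, 0.0], args=(1.0 + eps,), t_eval=t_eval, rtol=1e-8)
    print(f"perturbation {eps:+.2f}: linear amplitude ~ {np.max(np.abs(lin.y[0])):6.2f}, "
          f"van der Pol amplitude ~ {np.max(np.abs(vdp.y[0])):5.2f}")
# The linear cycle is only marginal (eps = 0); it decays or explodes once eps != 0,
# while the van der Pol amplitude stays close to 2 for all three parameter values.
```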

Velocity of Money


The most basic difference between the demand theory of money and the exchange theory of money lies in the understanding of the quantity equation

M . v = P . Y —– (1)

Here M is the money supply, P is the price level and Y is real output; in addition, v is the (constant) velocity of money. The demand theory understands (1) as reflecting the needs of the economic individual for money, not only the meaning of exchange. Under the assumption of liquidity preference, the demand theory introduces the nominal interest rate into the demand function for money, thus exhibiting more economic pictures than the traditional quantity theory does. Let us, however, concentrate on the economic movement through a linearization of the exchange theory, emphasizing the exchange-medium function of money.

Let us assume that the central bank provides a very small supply M of money, which implies that the value PY of products manufactured by the producer cannot be realized through a single transaction. The producer has to suspend the transaction until the purchasers possess money at hand again, which will elevate transaction costs and may even result in the bankruptcy of the producer. Will the producer, then, simply do nothing and wait for bankruptcy?

In reality, producers would rather adjust the sales value, by raising or lowering the price or the quantity of product, in an attempt to realize a maximal sales value M, than hold a stock of products and subject the sale to the limit set by the velocity of money. In other words, the producer adjusts price or real output to control the velocity of money, since the velocity of money can influence the realization of the product value.

Every time money changes hands, a transaction is completed; thus the numerous turnovers of money across individuals during a given period of time constitute a macroeconomic exchange ∑i piYi. If the prices pi can be replaced by an average price P, then we can rewrite the value of exchange as ∑i piYi = P · Y. In a real economy, the producer will manage to bring P · Y as close to the money supply M as possible by adjusting real output or its price.

For example, when a retailer comes to a strange community to sell her commodities, she always prefers to set a price by trial and error. If she finds that a higher price can still promote the sales amount, then she will continue raising the price until the sales amount hardly changes; on the other hand, if she confirms that a lower price creates a larger sales amount, then she will decrease the price of the commodity. Her pricing strategy depends on the price elasticity of demand for the commodity. However, the maximal value of the sales amount is determined by how much money the community can supply; thus the retailer’s pricing will bring her sales close to this maximal sales value, namely the money available for consumption in the community. This explains why the same commodity can always be sold at a higher price in a rich area.

Equation (1) is not an identity but an equilibrium state of the exchange process in an economic system. Evidently, the difference M − P · Y between the supply of money and the present sales value provides a vacancy for elevating the sales value; in other words, the supply of money plays the role of a carrying capacity for sales value. We assume that the vacancy is in direct proportion to the rate of increase of the sales value, and then derive a dynamical quantity equation

M(t) - P(t) . Y(t)  =  k . d[P(t) . Y(t)]/d(t) —– (2)

Here k is a positive constant that expresses the characteristic time with which the vacancy is filled. This is a speculated basic dynamical quantity equation of exchange by money. In reality, the money supply M(t) can usually be taken as given; (2) is then actually an evolution equation for the sales value P(t)Y(t), which can uniquely determine an evolving path of the price.
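
Reading (2) as an ordinary differential equation for the sales value S(t) = P(t)·Y(t), a short sketch (illustrative only; the money-supply path M(t), the constant k and the initial sales value are made up) shows how the sales value relaxes toward the money supply with characteristic time k, and how a jump in M(t) opens a temporary vacancy that is then filled:

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 2.0                                        # characteristic time of filling the vacancy
M = lambda t: 100.0 + 20.0 * (t > 5)           # money supply: a one-off increase at t = 5

def sales_value(t, S):
    """Dynamical quantity equation (2): M(t) - S(t) = k * dS/dt, with S = P*Y."""
    return [(M(t) - S[0]) / k]

sol = solve_ivp(sales_value, (0, 20), [80.0], t_eval=np.linspace(0, 20, 9), max_step=0.05)
for t, S in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.1f}   M = {M(t):6.1f}   P*Y = {S:6.1f}")
# P*Y converges toward 100, then, after the money-supply jump, relaxes toward 120;
# the temporary gap M - P*Y is the 'vacancy' that drives the adjustment of price or output.
```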

The role of money in (2) can be seen as follows: money is only a medium of commodity exchange, just like chopsticks for eating or soap for washing. All needs for money are, or will be, in order to carry out commodity exchange. The behavior of economic individuals holding money implies a potential exchange in the future, whether for speculation or for the preservation of wealth, but it cannot directly determine the present price, because every realistic price always comes from commodity exchange: no exchange, no price. In other words, what we are concerned with is not the reason for money generation, but the form of money generation; namely, we are concerned with money generation as a function of time rather than as a function of income or the interest rate. The potential needs for money, which one can explain with various reasons, cannot contribute to the price as long as the money does not participate in the exchange; thus the money supply not used for exchange does not occur in (2).

On the other hand, the change in money supply would result in a temporary vacancy of sales value, although sales value will also be achieved through exchanging with the new money supply at the next moment, since the price or sales volume may change. For example, a group of residents spend M(t) to buy houses of P(t)Y(t) through the loan at time t, evidently M(t) = P(t)Y(t). At time t+1, another group of residents spend M(t+1) to buy houses of P(t+1)Y(t+1) through the loan, and M(t+1) = P(t+1)Y(t+1). Thus, we can consider M(t+1) – M(t) as increase in money supply, and this increase can cause a temporary vacancy of sales value M(t+1) – P(t)Y(t). It is this vacancy that encourages sellers to try to maximize sales through adjusting the price by trial and error and also real estate developers to increase or decrease their housing production. Ultimately, new prices and production are produced and the exchange is completed at the level of M(t+1) = P(t+1)Y(t+1). In reality, the gap between M(t+1) and M(t) is often much smaller than the vacancy M(t+1) – P(t)Y(t), therefore we can approximately consider M(t+1) as M(t) if the money supply function M(t) is continuous and smooth.

However, it is necessary to emphasize that (2) is not a generation equation of demand function P(Y), which means (2) is a unique equation of determination of price (path), since, from the perspective of monetary exchange theory, the evolution of price depends only on money supply and production and arises from commodity exchange rather than relationship between supply and demand of products in the traditional economics where the meaning of the exchange is not obvious. In addition, velocity of money is not contained in this dynamical quantity equation, but its significance PY/M will be endogenously exhibited by the system.

Orthodoxy of the Neoclassical Synthesis: Minsky’s Capitalism Without Capitalists, Capital Assets, and Financial Markets


During the very years when orthodoxy turned Keynesianism on its head, extolling Reaganomics and Thatcherism as adequate for achieving stabilisation in the epoch of global capitalism, Minsky (Stabilizing an Unstable Economy) pointed to the destabilising consequences of this approach. The view that instability is the result of the internal processes of a capitalist economy, he wrote, stands in sharp contrast to neoclassical theory, whether Keynesian or monetarist, which holds that instability is due to events that are outside the working of the economy. The neoclassical synthesis and the Keynes theories are different because the focus of the neoclassical synthesis is on how a decentralized market economy achieves coherence and coordination in production and distribution, whereas the focus of the Keynes theory is upon the capital development of an economy. The neoclassical synthesis emphasizes equilibrium and equilibrating tendencies, whereas Keynes’s theory revolves around bankers and businessmen making deals on Wall Street. The neoclassical synthesis ignores the capitalist nature of the economy, a fact that the Keynes theory is always aware of.

Minsky here identifies the main flaw of the neoclassical synthesis, which is that it ignores the capitalist nature of the economy, while authentic Keynesianism proceeds from precisely this nature. Minsky lays bare the preconceived approach of orthodoxy, which has mainstream economics concentrating all its focus on an equilibrium which is called upon to confirm the orthodox belief in the stability of capitalism. At the same time, orthodoxy fails to devote sufficient attention to the speculation in the area of finance and banking that is the precise cause of the instability of the capitalist economy.

Elsewhere, Minsky stresses still more firmly that from the theory of Keynes, the neoclassical standard included in its arsenal only those earlier-mentioned elements which could be interpreted as confirming its preconceived position that capitalism was so perfect that it could not have innate flaws. In this connection Minsky writes:

Whereas Keynes in The General Theory proposed that economists look at the economy in quite a different way from the way they had, only those parts of The General Theory that could be readily integrated into the old way of looking at things survive in today’s standard theory. What was lost was a view of an economy always in transit because it accumulates in response to disequilibrating forces that are internal to the economy. As a result of the way accumulation takes place in a capitalist economy, Keynes’s theory showed that success in operating the economy can only be transitory; instability is an inherent and inescapable flaw of capitalism.

The view that survived is that a number of special things went wrong, which led the economy into the Great Depression. In this view, apt policy can assure that it cannot happen again. The standard theory of the 1950s and 1960s seemed to assert that if policy were apt, then full employment at stable prices could be attained and sustained. The existence of internally disruptive forces was ignored; the neoclassical synthesis became the economics of capitalism without capitalists, capital assets, and financial markets. As a result, very little of Keynes has survived today in standard economics.

Here, resting on Keynes’s analysis, one can find the central idea of Minsky’s book: the innate instability of capitalism, which in time will lead the system to a new Great Depression. This forecast has now been brilliantly confirmed, but previously there were few who accepted it. Economic science was orchestrated by proponents of neoclassical orthodoxy under the direction of Nobel prizewinners, authors of popular economics textbooks, and other authorities recognized by the mainstream. These people argued that the main problems which capitalism had encountered in earlier times had already been overcome, and that before it lay a direct, sunny road to an even better future.

Robed in complex theoretical constructs, and underpinned by an abundance of mathematical formulae, these ideas of a cloudless future for capitalism interpreted the economic situation, it then seemed, in thoroughly convincing fashion. These analyses were balm for the souls of the people who had come to believe that capitalism had attained perfection. In this respect, capitalism has come to bear an uncanny resemblance to communism. There is, however, something beyond the preconceptions and prejudices innate to people in all social systems, and that is the reality of historical and economic development. This provides a filter for our ideas, and over time makes it easier to separate truth from error. The present financial and economic crisis is an example of such reality. While the mainstream was still euphoric about the future of capitalism, the post-Keynesians saw the approaching outlines of a new Great Depression. The fate of Post Keynesianism will depend very heavily on the future development of the world capitalist economy. If the business cycle has indeed been abolished (this time), so that stable, non-inflationary growth continues indefinitely under something approximating to the present neoclassical (or pseudo-monetarist) policy consensus, then there is unlikely to be a significant market for Post Keynesian ideas. Things would be very different in the event of a new Great Depression, to think one last time in terms of extreme possibilities. If it happened again, to quote Hyman Minsky, the appeal of both a radical interventionist programme and the analysis from which it was derived would be very greatly enhanced.

Neoclassical orthodoxy, that is, today’s mainstream economic thinking proceeds from the position that capitalism is so good and perfect that an alternative to it does not and cannot exist. Post-Keynesianism takes a different standpoint. Unlike Marxism it is not so revolutionary a theory as to call for a complete rejection of capitalism. At the same time, it does not consider capitalism so perfect that there is nothing in it that needs to be changed. To the contrary, Post-Keynesianism maintains that capitalism has definite flaws, and requires changes of such scope as to allow alternative ways of running the economy to be fully effective. To the prejudices of the mainstream, post-Keynesianism counterposes an approach based on an objective analysis of the real situation. Its economic and philosophical approach – the methodology of critical realism – has been developed accordingly, the methodological import of which helps post-Keynesianism answer a broad range of questions, providing an alternative both to market fundamentalism, and to bureaucratic centralism within a planned economy. This is the source of its attraction for us….

Accelerated Capital as an Anathema to the Principles of Communicative Action. A Note Quote on the Reciprocity of Capital and Ethicality of Financial Economics


Markowitz portfolio theory explicitly observes that portfolio managers are not (expected) utility maximisers, as they diversify, and offers the hypothesis that a desire for reward is tempered by a fear of uncertainty. The model concludes that all investors should hold the same risky portfolio; their individual risk-reward objectives are satisfied by the weighting of this ‘index portfolio’ relative to riskless cash in the bank, i.e., by a point on the capital market line. The slope of the Capital Market Line is the market price of risk, which is an important parameter in arbitrage arguments.
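
As a reminder of the mechanics behind that statement (a sketch only, with made-up expected returns, covariances and risk-free rate), the tangency ‘index portfolio’ and the slope of the Capital Market Line, i.e., the market price of risk, can be computed as follows:

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.10])              # expected returns (illustrative)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])           # covariance matrix (illustrative)
rf = 0.03                                      # riskless rate

# Tangency portfolio: w proportional to Sigma^{-1} (mu - rf * 1), normalised to sum to one
w = np.linalg.solve(cov, mu - rf)
w = w / w.sum()

port_ret = w @ mu
port_vol = np.sqrt(w @ cov @ w)
market_price_of_risk = (port_ret - rf) / port_vol     # slope of the Capital Market Line

print("index portfolio weights:", np.round(w, 3))
print("CML slope (Sharpe ratio of the index portfolio):", round(market_price_of_risk, 3))
# Every investor holds this same portfolio mixed with cash; risk appetite only selects
# the point on the line rf + market_price_of_risk * sigma.
```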

Merton had initially attempted to provide an alternative to Markowitz based on utility maximisation employing stochastic calculus. He was only able to resolve the problem by employing the hedging arguments of Black and Scholes, and in doing so built a model that was based on the absence of arbitrage, free of turpe-lucrum. The prescriptive statement “it should not be possible to make sure profits” is explicit in the Efficient Markets Hypothesis and in employing an Arrow security in the context of the Law of One Price. Based on these observations, we conjecture that the whole paradigm of financial economics is built on the principle of balanced reciprocity. In order to explore this conjecture we shall examine the relationship between commerce and themes in Pragmatic philosophy. Specifically, we highlight Robert Brandom’s (Making It Explicit: Reasoning, Representing, and Discursive Commitment) position that there is a pragmatist conception of norms – a notion of primitive correctnesses of performance implicit in practice that precede and are presupposed by their explicit formulation in rules and principles.

The ‘primitive correctnesses’ of commercial practices were recognised by Aristotle when he investigated the nature of Justice in the context of commerce, and then by Olivi when he looked favourably on merchants. This is exhibited in the doux-commerce thesis; compare Fourcade and Healey’s contemporary description of the thesis, that commerce teaches ethics mainly through its communicative dimension, that is, by promoting conversations among equals and exchange between strangers, with Putnam’s description of Habermas’ communicative action as based on the norm of sincerity, the norm of truth-telling, and the norm of asserting only what is rationally warranted …[and] is contrasted with manipulation (Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays).

There are practices (that should be) implicit in commerce that make it an exemplar of communicative action. A further expression of markets as centres of communication, manifested in the Asian description of a market, brings to mind Donald Davidson’s (Subjective, Intersubjective, Objective) argument that knowledge is not the product of bipartite conversations but of a tripartite relationship between two speakers and their shared environment. Replacing the negotiation between market agents with an algorithm that delivers a theoretical price replaces ‘knowledge’, generated through communication, with dogma. The problem with the performativity that Donald MacKenzie (An Engine, Not a Camera: How Financial Models Shape Markets) is concerned with is one of monism. In employing pricing algorithms, the markets cannot perform to something that comes close to ‘true belief’, which can only be identified through communication between sapient humans. This is an almost trivial observation to (successful) market participants, but difficult to appreciate by spectators who seek to attain ‘objective’ knowledge of markets from a distance. The relevance to financial crises lies in the position that ‘true belief’ is about establishing coherence through myriad triangulations centred on an asset, rather than relying on a theoretical model.

Shifting gears now: unless the martingale measure is a by-product of a hedging approach, the price given by such martingale measures is not related to the cost of a hedging strategy, and therefore the meaning of such ‘prices’ is not clear. If the hedging argument cannot be employed, as in the markets studied by Cont and Tankov (Financial Modelling with Jump Processes), there is no conceptual framework supporting the prices obtained from the Fundamental Theorem of Asset Pricing. This lack of meaning can be interpreted as a consequence of the strict fact/value dichotomy in contemporary mathematics that came with the eclipse of Poincaré’s Intuitionism by Hilbert’s Formalism and Bourbaki’s Rationalism. The practical problem of supporting the social norms of market exchange has been replaced by a theoretical problem of developing formal models of markets. These models then legitimate the actions of agents in the market without having to make reference to explicitly normative values.

The Efficient Market Hypothesis is based on the axiom that the market price is determined by the balance between supply and demand, so that an increase in trading facilitates the convergence to equilibrium. If this axiom is replaced by the axiom of reciprocity, the justification for speculative activity in support of efficient markets disappears. In fact, the axiom of reciprocity would de-legitimise ‘true’ arbitrage opportunities, as being unfair. This would not necessarily make the activities of actual market arbitrageurs illicit, since there are rarely strategies that are without the risk of a loss; however, it would place more emphasis on the risks of speculation and inhibit the hubris that has been associated with the prelude to the recent Crisis. These points raise the question of the legitimacy of speculation in the markets. In an attempt to understand this issue, Gabrielle and Reuven Brenner identify three types of market participant. ‘Investors’ are preoccupied with future scarcity and so defer income. Because uncertainty exposes the investor to the risk of loss, investors wish to minimise uncertainty at the cost of potential profits; this is the basis of classical investment theory. ‘Gamblers’ will bet on an outcome taking odds that have been agreed on by society, such as with a sporting bet or in a casino, and relate to de Moivre’s and Montmort’s ‘taming of chance’. ‘Speculators’ bet on a mis-calculation of the odds quoted by society, and the reason why speculators are regarded as socially questionable is that they have opinions that are explicitly at odds with the consensus: they are practitioners who rebel against a theoretical ‘Truth’. This is captured in Arjun Appadurai’s argument that the leading agents in modern finance “believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control . . . [they] are not those who wish to ‘tame chance’ but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]”.

In the context of Pragmatism, financial speculators embody pluralism, a concept essential to Pragmatic thinking and an antidote to the problem of radical uncertainty. Appadurai was motivated to study finance by Marcel Mauss’ essay Le Don (The Gift), exploring the moral force behind reciprocity in primitive and archaic societies, and goes on to say that the contemporary financial speculator is “betting on the obligation of return”, and that this is the fundamental axiom of contemporary finance. David Graeber (Debt: The First 5,000 Years) also recognises the fundamental position reciprocity has in finance, but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores uncertainty in an analysis that ends with the conclusion that “we don’t ‘all’ have to pay our debts”. In advocating that reciprocity need not be honoured, Graeber is not just challenging contemporary capitalism but also the foundations of the civitas, based on equality and reciprocity. The origins of Graeber’s argument are in the first half of the nineteenth century. In 1836 John Stuart Mill defined political economy as being concerned with [man] solely as a being who desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end.

In Principles of Political Economy With Some of Their Applications to Social Philosophy, Mill defended Thomas Malthus’ An Essay on the Principle of Population, which focused on scarcity. Mill was writing at a time when Europe was struck by the Cholera pandemic of 1829–1851 and the famines of 1845–1851 and while Lord Tennyson was describing nature as “red in tooth and claw”. At this time, society’s fear of uncertainty seems to have been replaced by a fear of scarcity, and these standards of objectivity dominated economic thought through the twentieth century. Almost a hundred years after Mill, Lionel Robbins defined economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses”. Dichotomies emerge in the aftermath of the Cartesian revolution that aims to remove doubt from philosophy. Theory and practice, subject and object, facts and values, means and ends are all separated. In this environment ex cathedra norms, in particular utility (profit) maximisation, encroach on commercial practice.

In order to set boundaries on commercial behaviour motivated by profit maximisation, particularly when market uncertainty returned after the Nixon shock of 1971, society imposes regulations on practice. As a consequence, two competing ethics, functional Consequential ethics guiding market practices and regulatory Deontological ethics attempting stabilise the system, vie for supremacy. It is in this debilitating competition between two essentially theoretical ethical frameworks that we offer an explanation for the Financial Crisis of 2007-2009: profit maximisation, not speculation, is destabilising in the presence of radical uncertainty and regulation cannot keep up with motivated profit maximisers who can justify their actions through abstract mathematical models that bare little resemblance to actual markets. An implication of reorienting financial economics to focus on the markets as centres of ‘communicative action’ is that markets could become self-regulating, in the same way that the legal or medical spheres are self-regulated through professions. This is not a ‘libertarian’ argument based on freeing the Consequential ethic from a Deontological brake. Rather it argues that being a market participant entails restricting norms on the agent such as sincerity and truth telling that support knowledge creation, of asset prices, within a broader objective of social cohesion. This immediately calls into question the legitimacy of algorithmic/high- frequency trading that seems an anathema in regard to the principles of communicative action.