Man Proposes OR Woman Proposes – “Matches” Happen Algorithmically and are Economically Walrasian Rather than Keynesian. Note Quote & Didactics.

Consider a set M of men and a set W of women. Each m ∈ M has a strict preference ordering over the elements of W, and each w ∈ W has a strict preference ordering over the men. Denote the preference ordering of an agent i by ≻i, so that x ≻i y means that agent i ranks x above y. A marriage, or matching, is an assignment of men to women such that each man is assigned to at most one woman and vice versa. But what if an agent decides to remain single? This can be modelled in two ways: either a man or a woman may be matched with themselves, or, for each man and each woman, there is a dummy partner in W or M that corresponds to being single. Under this construction we can safely assume |M| = |W|. There is another impediment, however, whereby a subversion of sorts is possible: a group of agents could simply opt out of the matching exercise. In such a scenario it becomes necessary to define a blocking pair. A matching is called unstable if there are two men m, m’ and two women w, w’ such that

  1. m is matched to w
  2. m’ is matched to w’, and
  3. w’ ≻m w and m ≻w’ m’

then, the pair (m, w’) is a blocking pair. Any matching without a blocking pair is called stable.
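This definition can be checked mechanically. The sketch below (Python; the dictionary-based preference encoding and the tiny two-agent profile are illustrative assumptions, not from the text) scans for a blocking pair:

```python
def find_blocking_pair(matching, men_prefs, women_prefs):
    """Return a blocking pair (m, w) for `matching`, or None if it is stable.

    matching: dict mapping each man to his assigned woman.
    men_prefs / women_prefs: dicts mapping each agent to a list of the
    opposite side, ranked from most to least preferred (strict orders).
    """
    husband = {w: m for m, w in matching.items()}
    for m, prefs in men_prefs.items():
        rank_of_partner = prefs.index(matching[m])
        # every woman m strictly prefers to his current partner
        for w in prefs[:rank_of_partner]:
            # does w also strictly prefer m to her current partner?
            if women_prefs[w].index(m) < women_prefs[w].index(husband[w]):
                return (m, w)
    return None

# a tiny illustrative profile (assumed, not from the text)
men_prefs = {'m1': ['w1', 'w2'], 'm2': ['w1', 'w2']}
women_prefs = {'w1': ['m2', 'm1'], 'w2': ['m2', 'm1']}
```

In this profile the matching {(m1, w1), (m2, w2)} is blocked by (m2, w1), since m2 and w1 each prefer the other to their assigned partner, while {(m1, w2), (m2, w1)} admits no blocking pair.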

Now, given the preferences of men and women, is it always possible to find a stable matching? It is, and the tool used to show this is Gale and Shapley’s deferred acceptance algorithm.

So, after a brief detour, let us concentrate on the male-proposal version.

First, each man proposes to his top-ranked choice. Next, each woman who has received at least two proposals keeps (tentatively) her top-ranked proposal and rejects the rest. Then, each man who has been rejected proposes to his top-ranked choice among the women who have not rejected him. Again each woman who has at least two proposals (including ones from previous rounds) keeps her top-ranked proposal and rejects the rest. The process repeats until no man has a woman to propose to or each woman has at most one proposal. At this point the algorithm terminates and each man is assigned to a woman who has not rejected his proposal. No man is assigned to more than one woman. Since each woman is allowed to keep only one proposal at any stage, no woman is assigned to more than one man. Therefore the algorithm terminates in a matching.
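The rounds just described can be written out directly. Below is a minimal sketch in Python (the dictionary preference encoding, and the assumption of equal-sized sides with complete preference lists, are mine, not the text’s):

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Male-propose deferred acceptance (Gale-Shapley).

    men_prefs[m] and women_prefs[w] are lists ranked best-first.
    Returns a dict mapping each man to his assigned woman.
    """
    next_choice = {m: 0 for m in men_prefs}   # index of m's next proposal
    engaged_to = {}                           # woman -> man she tentatively keeps
    free_men = list(men_prefs)
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                 # first proposal: keep it tentatively
        elif women_prefs[w].index(m) < women_prefs[w].index(engaged_to[w]):
            free_men.append(engaged_to[w])    # w trades up and rejects her old man
            engaged_to[w] = m
        else:
            free_men.append(m)                # w rejects the new proposal
    return {m: w for w, m in engaged_to.items()}

# tiny illustrative profile (an assumption, not the figure's)
men = {'m1': ['w1', 'w2'], 'm2': ['w1', 'w2']}
women = {'w1': ['m2', 'm1'], 'w2': ['m2', 'm1']}
```

Here both men propose to w1 first; she keeps m2, the rejected m1 proposes to w2, and the algorithm terminates with the matching {(m1, w2), (m2, w1)}.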

[Figure: the preference orderings of the three men and three women used in the example below.]

Consider the matching {(m1, w1), (m2, w2), (m3, w3)}. This is an unstable matching, since (m1, w2) is a blocking pair. The matching {(m1, w1), (m3, w2), (m2, w3)}, however, is stable. Now, following the preferences in the figure above, m1 proposes to w2, m2 to w1, and m3 to w1. At the end of this round, w1 is the only woman to have received two proposals, one from m3 and the other from m2. Since she ranks m3 above m2, she keeps m3 and rejects m2. Since m2 is the only man to have been rejected, he is the only one to propose again in the second round. This time he proposes to w3. Now each woman has at most one proposal and the algorithm terminates with the matching {(m1, w2), (m2, w3), (m3, w1)}.

The male propose algorithm terminates in a stable matching.

Suppose not. Then ∃ a blocking pair (m1, w1) with m1 matched to w2, say, and w1 matched to m2. Since (m1, w1) is blocking and w1 ≻m1 w2, in the proposal algorithm, m1 would have proposed to w1 before w2. Since m1 was not matched with w1 by the algorithm, it must be because w1 received a proposal from a man that she ranked higher than m1. Since the algorithm matches her to m2 it follows that m2 ≻w1 m1. This contradicts the fact that (m1, w1) is a blocking pair.

Even when the women propose, the outcome is still a stable matching. The difference is in which one: the stable matching generated when the women propose may differ from the one generated when the men propose. This implies that although a stable matching is guaranteed to exist, there may be more than one. Is there then any reason to prefer one to the other? There is:

Denote a matching by μ. The woman assigned to man m in the matching μ is denoted μ(m). Similarly, μ(w) is the man assigned to woman w. A matching μ is male-optimal if there is no stable matching ν such that ν(m) ≻m μ(m) or ν(m) = μ(m) ∀ m, with ν(j) ≻j μ(j) for at least one j ∈ M. Female-optimality is defined symmetrically.

The stable matching produced by the (male-proposal) Deferred Acceptance Algorithm is male-optimal.

Let μ be the matching returned by the male-propose algorithm. Suppose μ is not male optimal. Then, there is a stable matching ν such that ν(m) ≻m μ(m) or ν(m) = μ(m) ∀ m with ν(j) ≻j μ(j) for at least one j ∈ M. Therefore, in the application of the proposal algorithm, there must be an iteration where some man j proposes to ν(j) before μ(j) since ν(j) ≻j μ(j) and is rejected by woman ν(j). Consider the first such iteration. Since woman ν(j) rejects j she must have received a proposal from a man i she prefers to man j. Since this is the first iteration at which a male is rejected by his partner under ν, it follows that man i ranks woman ν(j) higher than ν(i). Summarizing, i ≻ν(j) j and ν(j) ≻i ν(i) implying that ν is not stable, a contradiction.

Now, the obvious question is whether this stable matching is optimal w.r.t. both men and women. The answer this time around is no. The example above exhibits two stable matchings, one of them male-optimal and the other female-optimal. At least one woman is strictly better off under the female-optimal matching than under the male-optimal one, and no woman is worse off; from the men’s point of view, the symmetric conclusion holds. A stable marriage is immune to a pair of agents opting out of the matching. We could ask for more: that no subset of agents has an incentive to opt out of the matching. Formally, a matching μ′ dominates a matching μ if there is a set S ⊂ M ∪ W such that for all m, w ∈ S, both (i) μ′(m), μ′(w) ∈ S and (ii) μ′(m) ≻m μ(m) and μ′(w) ≻w μ(w). Stability is the special case of this dominance condition in which attention is restricted to sets S consisting of a single couple. The set of undominated matchings is called the core of the matching game.
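For small markets, the full set of stable matchings, and hence the male- and female-optimal ones, can be enumerated by brute force. A sketch (Python; the 2×2 profile is an illustrative assumption chosen to have two stable matchings):

```python
from itertools import permutations

def is_stable(mu, men_prefs, women_prefs):
    """True iff matching mu (man -> woman) admits no blocking pair."""
    husband = {w: m for m, w in mu.items()}
    for m, prefs in men_prefs.items():
        for w in prefs[:prefs.index(mu[m])]:      # women m prefers to mu(m)
            if women_prefs[w].index(m) < women_prefs[w].index(husband[w]):
                return False                       # (m, w) blocks mu
    return True

def stable_matchings(men_prefs, women_prefs):
    """Enumerate all stable matchings by checking every assignment."""
    men, women = list(men_prefs), list(women_prefs)
    return [dict(zip(men, p)) for p in permutations(women)
            if is_stable(dict(zip(men, p)), men_prefs, women_prefs)]

# illustrative profile with two stable matchings
men = {'m1': ['w1', 'w2'], 'm2': ['w2', 'w1']}
women = {'w1': ['m2', 'm1'], 'w2': ['m1', 'm2']}
S = stable_matchings(men, women)
```

Under the first of the two stable matchings, {(m1, w1), (m2, w2)}, every man gets his top choice; under the second, {(m1, w2), (m2, w1)}, every woman does, illustrating that the male- and female-optimal stable matchings generally differ.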

The direct mechanism associated with the male propose algorithm is strategy-proof for the males.

Suppose not. Then there is a profile of preferences π = (≻m1 , ≻m2 , . . . , ≻mn) for the men, such that man m1, say, can misreport his preferences and obtain a better match. To express this formally, let μ be the stable matching obtained by applying the male proposal algorithm to the profile π. Suppose that m1 reports the preference ordering ≻ instead. Let ν be the stable matching that results when the male-proposal algorithm is applied to the profile π1 = (≻, ≻m2 , . . . , ≻mn). For a contradiction, suppose ν(m1) ≻m1 μ(m1). For notational convenience let a ≽m b mean that a ≻m b or a = b.

First we show that m1 can achieve the same effect by choosing an ordering ≻̄ where woman ν(m1) is ranked first. Let π2 = (≻̄ , ≻m2 , . . . , ≻mn). Knowing that ν is stable w.r.t the profile π1 we show that it is stable with respect to the profile π2. Suppose not. Then under the profile π2 there must be a pair (m, w) that blocks ν. Since ν assigns to m1 its top choice with respect to π2, m1 cannot be part of this blocking pair. Now the preferences of all agents other than m1 are the same in π1 and π2. Therefore, if (m, w) blocks ν w.r.t the profile π2, it must block ν w.r.t the profile π1, contradicting the fact that ν is a stable matching under π1.

Let λ be the male propose stable matching for the profile π2. ν is a stable matching w.r.t the profile π2. As λ is male optimal w.r.t the profile π2, it follows that λ(m1) = ν(m1).

Thus we may assume that ν(m1) is the top-ranked woman in the ordering ≻. Now we show that the set B = {mj : μ(mj) ≻mj ν(mj)} is empty. This means that all men, not just m1, are no worse off under ν than under μ. Since ν is stable w.r.t the original profile π, this contradicts the male-optimality of μ.

Suppose B ≠ ∅. When the male proposal algorithm is applied to the profile π1, each mj ∈ B is rejected by his match under μ, i.e., by μ(mj). Consider the first iteration of the proposal algorithm at which some mj is rejected by μ(mj). This means that woman μ(mj) has a proposal from a man mk that she ranks higher, i.e., mk ≻μ(mj) mj. Since mk was not matched to μ(mj) under μ, it must be that μ(mk) ≻mk μ(mj). Hence mk ∈ B, otherwise μ(mj) ≽mk ν(mk) ≽mk μ(mk) ≻mk μ(mj), which is a contradiction. Since mk ∈ B and mk has proposed to μ(mj) by the time mj proposes, mk must have been rejected by μ(mk) prior to mj being rejected, contradicting our choice of mj.

The mechanism associated with the male propose algorithm is not strategy-proof for the females. Let us see how this is the case by way of an example. The male propose algorithm returns the matching {(m1, w2), (m2, w3), (m3, w1)}. In the course of the algorithm the only woman who receives at least two proposals is w1. She receives proposals from m2 and m3. She rejects m2, who goes on to propose to w3, and the algorithm terminates. Notice that w1 is matched with her second choice. Suppose now that she had rejected m3 instead. Then m3 would have gone on to propose to w2. Woman w2 now has a choice between m1 and m3. She would keep m3 and reject m1, who would go on to propose to w1. Woman w1 would keep m1 over m2 and, in the final matching, be paired with her first-ranked choice.
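The manipulation can be replayed in code. Since the figure is not reproduced here, the preference lists below are a reconstruction consistent with the narrative (an assumption): w1 truly ranks m1 > m3 > m2, but by reporting m1 > m2 > m3 she rejects m3 in the first round and ends up with m1.

```python
def male_propose(men_prefs, women_prefs):
    # compact male-propose deferred acceptance; lists are ranked best-first
    nxt = {m: 0 for m in men_prefs}
    keep = {}                                   # woman -> tentatively kept man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][nxt[m]]
        nxt[m] += 1
        if w not in keep:
            keep[w] = m
        elif women_prefs[w].index(m) < women_prefs[w].index(keep[w]):
            free.append(keep[w])                # w upgrades, rejects old proposal
            keep[w] = m
        else:
            free.append(m)                      # w rejects the new proposal
    return {m: w for w, m in keep.items()}

# reconstructed preferences (assumed, consistent with the example in the text)
men = {'m1': ['w2', 'w1', 'w3'],
       'm2': ['w1', 'w3', 'w2'],
       'm3': ['w1', 'w2', 'w3']}
truthful = {'w1': ['m1', 'm3', 'm2'],
            'w2': ['m3', 'm1', 'm2'],
            'w3': ['m2', 'm1', 'm3']}
lie = dict(truthful, w1=['m1', 'm2', 'm3'])     # w1 demotes m3 below m2

honest = male_propose(men, truthful)   # w1 ends with m3, her second choice
gamed = male_propose(men, lie)         # w1 ends with m1, her first choice
```

Under truthful reporting w1 is matched with m3; with the misreport she is matched with m1, whom she truly ranks first. Truthful reporting is thus not a dominant strategy for the women.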

Transposing this onto economic theory, it fits neatly with Walrasian equilibrium. Walras’ law states that excess supply in one market must be matched by excess demand in another, so that the two balance out; an examined market must therefore be in equilibrium if all other markets are in equilibrium. This contrasts with Keynesianism, which assumes that a single market can be out of balance without a “matching” imbalance elsewhere. Moreover, Walrasian equilibria are the solutions of a fixed point problem. In the cases when they can be computed efficiently, it is because the set of Walrasian equilibria can be described by a set of convex inequalities. The same can be said of stable matchings/marriages: the set of stable matchings is the set of fixed points of a nondecreasing function defined on a lattice.


Fascism’s Incognito – Conjuncted

“Being asked to define fascism is probably the scariest moment for any expert of fascism,” Montague said.
Brecht’s circular circuitry is here.
Allow me to make cross-sectional (both historical and geographical) references. I start with Mussolini, who spoke of the uses fascism could be put to by stating that capitalism throws itself into the protection of the state when it is in crisis, and he illustrated this point by referring to the Great Depression as a failure of laissez-faire capitalism, one creating an opportunity for the fascist state to provide an alternative. This points to the fact that fascism springs to life economically in the event of capitalism’s deterioration. To highlight this point, let me take recourse to Samir Amin, who calls the fascist choice for managing a capitalist society in crisis a categorical rejection of democracy, despite that stage having been reached democratically. The masses are subjected to values of submission to a unity of socio-economic, political and/or religious ideological discourses. This is one reason why I see fascism not as a derivative category of capitalism, in the sense of the former being a historic phase of the latter, but rather as a coterminous tendency lying dormant, waiting for capitalism to deteriorate so that fascism can detonate. But whether fascism and capitalism are related in multiple ways is as good a question as how socialism is related to fascism, albeit a categorically different one.
It is imperative for me to add by way of what I perceive as financial capitalism and bureaucracy and where exactly art gets sandwiched in between the two, for more than anything else, I would firmly believe in Brecht as continuing the artistic practices of Marxian sociology and political-economy.
Financial capitalism combined with impersonal bureaucracy has inverted the traditional schematic, forcing us to live in a totalitarian system of financial governance divorced from democratic polity. It’s not even fascism in the older sense of the term, a collusion of state and corporate power, since the political is bankrupt and has become a mediatainment system of control and a buffer against the fact of plutocracies. The state will remain only as long as police systems are needed to fend off people claiming rights to their rights. Politicians are dramaturgists and media personalities rather than workers in law. If one were to just study the literature and paintings of the last 3-4 decades, it is fathomable where it is all going. The arts still continue to speak what we do not want to hear. Most of our academics are idiots clinging on to the ideological culture of the left that has put on its blinkers and has only one enemy, which is the right (whatever the hell that is). Instead of moving outside their straitjackets and embracing the world of the present, they still seem to be ensconced in 19th century utopianism, the only addition to their arsenal being the dramatic effects of mass media. Remember Thomas Pynchon of Gravity’s Rainbow fame (I prefer calling him the illegitimate cousin of James Joyce for his craftiness and smoothly sailing contrite plots: there goes the first of my paroxysms!!), who likened the system of techno-politics to an extension of our inhuman core, at best autonomous, intelligent and ever willing to exist outside the control of politics altogether. This befits the operational closure, echoing time and time again that technology isn’t an alien thing, but rather a manifestation of our inhuman core, a mutation of our shared fragments sieved together in ungodly ways. This is alien technology in gratitude.
We have never been natural, and purportedly so, having built defence systems against the natural both intrinsically and extrinsically. Take, for example, Civilisation, the most artificial construct of all, which humans busied themselves building and now busy themselves upholding. What is it? A Human Security System staving off the entropy of existence through the self-perpetuation of a cultural complex of temporal immortalisation, if nothing else, and vulnerable to editions by scores of pundits laying claim to a larger schema often overlooked out of parochiality. Haven’t we become accustomed to hibernating in an artificial time, now exposed by inhabiting the infosphere, creating dividualities by reckoning with the data we intake, partake of and output? Isn’t analysing the part/whole dividuality really what scores our worthiness? I know the answer is yes, but it merely refuses to jump off the tongue. Democracies have made us indolent, with extremities ever flirting with electronic knowledge waiting to be turned to digital ash when confronted with an existential threat to our locus standi.
We always think of a secret cabal conspiring to dehumanise us. But we forget the impersonality of the dataverse, the infosphere, the carnival we simply cannot avoid being a part of. Our mistaken beliefs lie in reductionism, and this is a serious detriment to causes created ex nihilo, for a fight is inevitably diluted if we treat as insignificant the global meshwork of complex systems of economics and control, which far outstrip our ability to pin them down with a critical apparatus. This apparatus needs to be different from one based on criticism, for the latter is prone to sciolist tendencies. Maybe one needs to admit allegiance to the perils of our position and go along in a Socratic irony, before turning against the admittance at opportune times. The right deserves tackling through Socratic irony, lest taking offence become platitudinous. Let us not forget that the modern state is nothing but a PR firm to keep the children asleep and unthinking, believing in the dramaturgy of the political as real. And this is where Brecht comes right back in, for he considered the creation of bureaucracies an affliction not just of fascist states, but even of communist ones. The digression above is just a reality check on how complex capitalism has become, and with it its derivatives of fascism, as these are intertwined within bureaucratic spaces. Even in his heyday, Brecht took a deviation from his culinary-as-ever epic theatre to found a new form he called the learning-play, one that resembled his political seminars modeled on the rejection of bureaucratic elitism in partisan politics, where theorists and functionaries issued directives and controlled activities on behalf of the masses to the point of the latter’s submission to the former.
This point is highlighted not just for fascist states, but equally well for socialist/communist regimes reiterating the fact that fascism is potent enough to develop in societies other than capitalistic ones.
Moving on: mentions of democracy as bourgeois democracy, made in the same breath as the claim that there is equality only for holders of capital, are turning platitudinous. Structurally, yes, this is what it seems like, but reality goes a bit deeper and thereafter fissures, depending on whether capital is indeed what it is generally perceived to be, or whether there is more to it than meets the eye. I quip this to confront two theorists of equality with one another: Piketty and Sally Goerner. Piketty misses a great opportunity to tie the “r > g” idea (after-tax returns on capital r > growth rate of economy g) to the “limits to growth”. With a careful look at history, there are several quite important choice points along the path from the initial hope to the inevitable distressing end he describes, sees, and regrets. It is growth that seduces us into so foolishly believing we can maintain “g > r”, despite the very clear and hard evidence that it keeps failing. The real “central contradiction of capitalism”, then, is that it promises “g > r”, and we inevitably find that the promise is only temporary. Growth is actually nature’s universal start-up process, used to initially build every life, including the lives of every business and every society. Nature begins building things with growth. She is then also happy to destroy them with more of the same: those lives that began with healthy growth but make the fateful choice of continuing to devote their resources to driving their internal and external strains to the breaking point, trying to make g > r perpetual. It can’t be. So the secret to the puzzle seems to be: once growth has taken you from “g > r” to the broken promise of “r > g”, you have missed the real opportunity it presented.
Sally Goerner writes about how systems need to find new ways to grow through a process of rising intricacy that literally reorganizes the system into a higher level of complexity. Systems that fail to do that collapse. So smart growth is possible (a cell divides into multiple cells that then form an organ of higher complexity and greater intricacy through working cooperatively). Such smart growth is regenerative in that it manifests new potential. How different that feels from the conventional scaling up of a business, often at the expense of intricacy (in order to achieve so-called economies of scale). Leaps of complexity do satisfy growing demands for productivity, but only temporarily, as continually rising demands for productivity inevitably require ever bigger leaps of complexity. Reorganizing the system by adopting ever higher levels of intricacy eventually makes things ever more unmanageable, naturally becoming organizationally unstable, to collapse for that reason. So seeking a rise in productivity in exchange for a rising risk of disorderly collapse is like jumping out of the frying pan right into the fire! As a path to system longevity, then, it is tempting but risky, indeed appearing to be regenerative temporarily, until the same impossible challenge of keeping up with ever increasing demands for new productivity drives the system to abandon the next level of complexity too! The more intricacy (tight, small-scale weave) grows horizontally, the more unmanageable it becomes. That’s why all sorts of systems develop what we would call hierarchical structures. Here, however, hierarchical structures serve primarily as connective tissue that helps coordinate, facilitate and communicate across scales. One of the reasons human societies are falling apart is because many of our hierarchical structures no longer serve this connective-tissue role, but rather fuel processes of draining and self-destruction by creating sinks where refuse could be regenerated.
Capitalism in its present financial form is precisely this sink, whereas capitalism wedded to fascism as a historical alliance doesn’t fit the purpose, proving once more that the collateral damage would be lent out to fascist states if that were to be the case, and it would indeed materialize that way.
That democracy is bourgeois democracy is an idea associated with the Swedish political theorist Göran Therborn, who as recently as the 2016 US elections proved his point by questioning the whole edifice of the inclusive-exclusive aspects of democracy, when he said,
Even if capitalist markets do have an inclusive aspect, open to exchange with anyone…as long as it is profitable, capitalism as a whole is predominantly and inherently a system of social exclusion, dividing people by property and excluding the non-profitable. A system of this kind is, of course, incapable of allowing the capabilities of all humankind to be realized. And currently the system looks well fortified, even though new critical currents are hitting against it.
Democracy did take on a positive meaning, and ironically enough, it was through the rise of nation-states and the consolidation of popular sovereignty championed by the west that it met its two most vociferous challenges, communism and fascism, of which the latter was a reactionary response to the discontents of capitalist modernity. Its radicality lay in racism and populism. A degree of deference toward the privileged and propertied, rather than radical opposition as in populism, went along with elite concessions affecting the welfare, social security, and improvement of the working masses. Unfettered market dynamics were countered, even in the programs of moderate and conservative parties, by the use of state power to curtail their most malign effects. It was only in the works of Hayek that such interventions began to represent the road to serfdom, paving the way to modern-day right-wing economies in which the state has absolutely no role to play as regards market fundamentals and dynamics. The counter to bourgeois democracy was rooted in social democratic movements, and still is: one based on negotiation, compromise, give and take, and a grudgingly given respect for the other (whether ideological or individual). The point, again, is just to reiterate that fascism, in my opinion, is not to be seen as the nakedest form of capitalism, but is generally seen floundering on the shoals of an economic slowdown or a crisis of stagflation.
On ideal categories, I am not a Weberian at heart. I am a bit ambiguous or even ambivalent to the role of social science as a discipline that could draft a resolution to ideal types and interactions between those generating efficacies of real life. Though, it does form one aspect of it. My ontologies would lie in classificatory and constructive forms from more logical grounds that leave ample room for deviations and order-disorder dichotomies. Complexity is basically an offspring of entropy.
And here is where my student-days of philosophical pessimism surface, or were they ever dead, as the real way out is a dark path through the world we too long pretended did not exist.

Financial Entanglement and Complexity Theory. An Adumbration on Financial Crisis.


The complex-system approach in finance can be described through the concept of entanglement. The concept of entanglement bears the same features as the definition of a complex system given by a group of physicists working in finance (Stanley et al.). As they defined it, in a complex system everything depends upon everything else. Just so in finance: the notion of entanglement is a statement acknowledging the interdependence of all the counterparties in financial markets, including financial and non-financial corporations, the government and the central bank. How can entanglement be identified empirically? Stanley et al. formulated the process of scientific study in finance as a search for patterns. Such a search, going on under the auspices of “econophysics”, could exemplify a thorough analysis of a complex and unstructured assemblage of actual data, finalized in the discovery and experimental validation of an appropriate pattern. On the other side of the spectrum, some patterns underlying the actual processes might be discovered by synthesizing a vast amount of historical and anecdotal information through appropriate reasoning and logical deliberation. The Austrian School of economic thought, which in its extreme form rejects the application of any formalized system, or modeling of any kind, could be viewed as an example. A logical question follows from this comparison: does there exist any intermediate way of searching for regular patterns in finance and economics?

Importantly, patterns could be discovered by developing rather simple models of money and debt interrelationships. Debt cycles have been studied extensively by many schools of economic thought (Akerlof and Shiller, Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism). The modern financial system worked by spreading risk, promoting economic efficiency and providing cheap capital. It had been formed over the years as bull markets in shares and bonds originated in the early 1990s. These markets were propelled by an abundance of money, falling interest rates and new information technology. Financial markets, by combining debt and derivatives, could originate and distribute huge quantities of risky structured products and sell them to different investors. Meanwhile, financial-sector debt, only a tenth of the size of non-financial-sector debt in 1980, became half as big by the beginning of the credit crunch in 2007. As liquidity grew, banks could buy more assets, borrow more against them, and watch their value rise. By 2007 financial services were making 40% of America’s corporate profits while employing only 5% of its private-sector workers. Thanks to cheap money, banks could take on more debt and, by designing complex structured products, make their investments more profitable and more risky. Securitization, by facilitating the emergence of the “shadow banking” system, simultaneously foments bubbles in different segments of the global financial market.

Yet over the past decade this system, or a big part of it, began to lose touch with its ultimate purpose: to reallocate scarce resources in accordance with social priorities. Instead of writing, managing and trading claims on future cashflows for the rest of the economy, finance became increasingly a game for fees and speculation. Due to disastrously lax regulation, investment banks did not lay aside enough capital in case something went wrong, and, as the crisis began in the middle of 2007, credit markets started to freeze up. Qualitatively, after the spectacular Lehman Brothers disaster in September 2008, the laminar flows of financial activity came to an end. Banks began to suffer losses on their holdings of toxic securities and were reluctant to lend to one another, which led to shortages of funding. The squeeze had already surfaced in late 2007, when Northern Rock, a British mortgage lender, experienced a bank run that started in the money markets. All of a sudden, liquidity was in short supply, debt was unwound, and investors were forced to sell and write down assets. For several years, up to now, market counterparties have no longer trusted each other. As Walter Bagehot, an authority on bank runs, once wrote:

Every banker knows that if he has to prove that he is worthy of credit, however good may be his arguments, in fact his credit is gone.

In an entangled financial system, his axiom should be stretched to the whole market, and that means, precisely, financial meltdown, or crisis. The most fascinating feature of the post-crisis era in financial markets has been the continuation of a ubiquitous liquidity expansion. To fight the market squeeze, all the major central banks greatly expanded their balance sheets, which rose, roughly, from about 10 percent to 25-30 percent of GDP for the respective economies. For several years after the credit crunch of 2007-09, central banks bought trillions of dollars of toxic and government debt, thereby increasing money issuance without precedent in modern history. Paradoxically, this enormous credit expansion, though accelerating for several years, has been accompanied by a stagnating and depressed real economy. Yet, until now, central bankers have been worried mainly about downside risks and threats of price deflation. Otherwise, the hectic financial activity that goes on alongside unbounded credit expansion could be transformed by herding into an autocatalytic process that, if subject to the accumulation of new debt, might drive the entire system to total collapse. From a financial point of view, this systemic collapse appears to be a natural result of unbounded credit expansion ‘supported’ by zero real resources. Since the wealth of investors, as a whole, becomes nothing but ‘fool’s gold’, the financial process becomes a singular one, and the entire system collapses. In particular, three phases of investors’ behavior (hedge finance, speculation, and the Ponzi game) can be easily identified as a sequence of sub-cycles that unwind ultimately in the total collapse.

Hedging. Part 2. The Best Strategy to Hedge a Bond is to Short a Bond of the Same Maturity.


Definition:

Li = P Pi ∫tT dx ∫tTi dx′ σ(t, x) σ(t, x′) D(x, x′; t, TFR)

Mij = Pi Pj ∫tTi dx ∫tTj dx′ σ(t, x) σ(t, x′) D(x, x′; t, TFR)

These definitions allow the residual variance in

P2 ∫tT dx ∫tT dx′ σ(t, x) σ(t, x′) D(x, x′; t, TFR) + 2P ∑i=1N Δi Pi ∫tT dx ∫tTi dx′ σ(t, x) σ(t, x′) D(x, x′; t, TFR) + ∑i=1N ∑j=1N Δi Δj Pi Pj ∫tTi dx ∫tTj dx′ σ(t, x) σ(t, x′) D(x, x′; t, TFR)

to be succinctly expressed as

P2 ∫tT dx ∫tT dx′ σ(t, x) σ(t, x′) D(x, x′; t, TFR) + 2 ∑i=1N Δi Li + ∑i=1N ∑j=1N Δi Δj Mij —– (1)

Theorem: Hedge parameter for bond in the field theory model equals

Δi = -∑j=1N Lj Mij-1

and represents the optimal amounts of P(t, Ti) to include in the hedge portfolio when hedging P(t, T). The theorem is proved by differentiating equation (1) with respect to Δi and solving for Δi. The corollary below is proved by substituting the result of the theorem back into equation (1).

Corollary: Residual variance, the variance of the hedged portfolio equals

V = P2 ∫tT dx ∫tT dx′ σ(t, x) σ(t, x′) D(x, x′; t, TFR) − ∑i=1N ∑j=1N Li Mij−1 Lj

which declines monotonically as N increases. The residual variance in the corollary enables the effectiveness of the hedge portfolio to be evaluated; the corollary is therefore the basis for studying the impact of including different bonds in the hedged portfolio. For N = 1, the hedge parameter in the theorem reduces to

Δ_1 = −(P/P_1) (∫_t^T dx ∫_t^{T_1} dx′ σ(t, x) σ(t, x′) D(x, x′; t, TFR)) / (∫_t^{T_1} dx ∫_t^{T_1} dx′ σ(t, x) σ(t, x′) D(x, x′; t, TFR)) —– (2)
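As a numerical sanity check of the theorem, one can discretise the double integrals on a grid and solve the linear system Δ = −M⁻¹L directly. The sketch below is not from the text: the exponential volatility function, the propagator, the maturities and the unit bond prices are all hypothetical placeholders chosen only to make the computation concrete.

```python
import numpy as np

# Hypothetical inputs: a toy volatility surface and propagator.
sigma0, lam, t, T, T_fr = 0.01, 0.1, 0.0, 5.0, 1.0
maturities = np.array([2.0, 4.0, 6.0])    # hedge bond maturities T_i
P = 1.0                                   # bond prices set to 1 for illustration
P_i = np.ones_like(maturities)

def sigma(x):
    return sigma0 * np.exp(-lam * (x - t))

def D(x, xp):
    return np.exp(-np.abs(x - xp) / T_fr)

def cov_integral(Ta, Tb, n=200):
    """Midpoint-rule approximation of ∫_t^Ta dx ∫_t^Tb dx' σ(t,x) σ(t,x') D(x,x')."""
    dxa, dxb = (Ta - t) / n, (Tb - t) / n
    xs = t + (np.arange(n) + 0.5) * dxa
    ys = t + (np.arange(n) + 0.5) * dxb
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    return np.sum(sigma(X) * sigma(Y) * D(X, Y)) * dxa * dxb

L = np.array([P * P_i[i] * cov_integral(T, Ti) for i, Ti in enumerate(maturities)])
M = np.array([[P_i[i] * P_i[j] * cov_integral(Ti, Tj)
               for j, Tj in enumerate(maturities)]
              for i, Ti in enumerate(maturities)])

delta = -np.linalg.solve(M, L)                               # theorem: Δ = -M⁻¹ L
V = P**2 * cov_integral(T, T) - L @ np.linalg.solve(M, L)    # corollary
print(delta, V)
```

With a positive-definite propagator the residual variance V stays non-negative and is strictly smaller than the unhedged variance P² C(T, T), mirroring the corollary.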

What’s a Market Password Anyway? Towards Defining a Financial Market Random Sequence. Note Quote.

From the point of view of cryptanalysis, the algorithmic view based on frequency analysis may be taken as a hacker approach to the financial market. While the goal is clearly to find a sort of password unveiling the rules governing the price changes, what we claim is that the password may not be immune to a frequency analysis attack, because it is not the result of a true random process but rather the consequence of the application of a set of (mostly simple) rules. Yet that doesn’t mean one can crack the market once and for all, since for our system to find the said password it would have to outperform the unfolding processes affecting the market – which, as Wolfram’s PCE suggests, would require at least the same computational sophistication as the market itself, with at least one variable modelling the information being assimilated into prices by the market at any given moment. In other words, the market password is partially safe not because of the complexity of the password itself but because it reacts to the cracking method.
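A toy version of such a frequency-analysis ‘attack’ can be sketched in a few lines: encode price moves as U/D symbols and compare n-gram frequencies of a fair coin against a hypothetical rule-based (momentum) market. The generating rule here is invented purely for illustration.

```python
from collections import Counter
import random

def ngram_freqs(seq, n=3):
    """Relative frequencies of overlapping n-grams in a symbol sequence."""
    counts = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

random.seed(0)
fair = [random.choice("UD") for _ in range(20000)]      # 'true' coin flips

# A toy rule-based market: it tends to continue the previous move (momentum).
ruled = ["U"]
for _ in range(19999):
    ruled.append(ruled[-1] if random.random() < 0.7
                 else ("D" if ruled[-1] == "U" else "U"))

spread = lambda f: max(f.values()) - min(f.values())
print(spread(ngram_freqs(fair)), spread(ngram_freqs(ruled)))
```

The rule-based sequence betrays itself through a much larger spread of trigram frequencies, which is exactly the signature a frequency-analysis attack looks for.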

Figure 6: Extracting a normal distribution from the market distribution to expose the long tail.

Whichever kind of financial instrument one looks at, the sequences of prices at successive times show some overall trends and varying amounts of apparent randomness. However, despite the fact that there is no contingent necessity of true randomness behind the market, it can certainly look that way to anyone ignoring the generative processes, anyone unable to see what other, non-random signals may be driving market movements.

Von Mises’ approach to the definition of a random sequence, which seemed at the time of its formulation to be quite problematic, contained some of the basics of the modern approach adopted by Per Martin-Löf. It is during this time that the Keynesian kind of induction may have been resorted to as a starting point for Solomonoff’s seminal work (1 and 2) on algorithmic probability.

Per Martin-Löf gave the first suitable definition of a random sequence. Intuitively, an algorithmically random sequence (or random sequence) is an infinite sequence of binary digits that appears random to any algorithm. This contrasts with the idea of randomness in probability. In that theory, no particular element of a sample space can be said to be random. Martin-Löf randomness has since been shown to admit several equivalent characterisations in terms of compression, statistical tests, and gambling strategies.

The predictive aim of economics is profoundly related to the concepts of prediction and betting. Imagine a random walk that goes up, down, left or right by one step, each direction with the same probability. Such a walk is a martingale: the conditional expectation of the next position equals the current position, and if the expected time at which the walk ends is finite, the expected stopping position is equal to the initial position. Because the chances of going up, down, left or right are the same, one ends up close to one’s starting position, if not exactly at that position. In economics, this can be translated into a trader’s experience: the conditional expected assets of a trader are equal to his present assets if a sequence of events is truly random.
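The martingale property of such a walk can be checked empirically; a minimal simulation (the step set and sample sizes are arbitrary choices):

```python
import random

random.seed(42)
steps = [(0, 1), (0, -1), (-1, 0), (1, 0)]   # up, down, left, right
n_walks, n_steps = 5000, 100

finals = []
for _ in range(n_walks):
    x = y = 0
    for _ in range(n_steps):
        dx, dy = random.choice(steps)
        x += dx
        y += dy
    finals.append((x, y))

# The average final position stays close to the starting point (0, 0).
mean_x = sum(f[0] for f in finals) / n_walks
mean_y = sum(f[1] for f in finals) / n_walks
print(mean_x, mean_y)
```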

If market price differences accumulated in a narrow normal distribution, rounding would produce sequences of 0 differences only. The mean and the standard deviation of the market distribution are used to create a normal distribution, which is then subtracted from the market distribution.
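The procedure can be sketched as follows, with a heavy-tailed Student-t sample standing in for market price differences (an assumption for illustration; the text does not specify the data):

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.standard_t(df=3, size=100000)   # heavy-tailed stand-in for market moves

# Fit a normal with the sample mean and standard deviation, then subtract it
# from the binned empirical density.
mu, sd = returns.mean(), returns.std()
edges = np.linspace(-10, 10, 81)
hist, _ = np.histogram(returns, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
normal = np.exp(-0.5 * ((centers - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

residual = hist - normal
tail = np.abs(centers) > 5
print(residual[tail].sum())
```

The positive residual mass far from the mean is the long tail that the fitted normal cannot account for.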

Schnorr provided another equivalent definition in terms of martingales. The martingale characterisation of randomness says that no betting strategy implementable by any computer (even in the weak sense of constructive strategies, which are not necessarily computable) can make money betting on a random sequence. In a true random memoryless market, no betting strategy can improve the expected winnings, nor can any option cover the risks in the long term.
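A minimal illustration of the martingale characterisation: a simple computable betting strategy (majority-of-history, an arbitrary choice) makes nothing on a fair memoryless sequence but profits as soon as the sequence is biased.

```python
import random

def bet_majority(seq, stake=1.0):
    """Fixed-stake bettor that predicts the majority bit seen so far."""
    capital, ones = 0.0, 0
    for i, b in enumerate(seq):
        guess = 1 if ones > i - ones else 0    # majority of history (ties -> 0)
        capital += stake if guess == b else -stake
        ones += b
    return capital

random.seed(7)
fair = [random.randint(0, 1) for _ in range(100000)]
biased = [1 if random.random() < 0.6 else 0 for _ in range(100000)]
print(bet_majority(fair), bet_majority(biased))
```

On the fair sequence the bettor's capital stays near zero in expectation; on the biased sequence the same strategy extracts steady winnings, so the sequence cannot be random in Schnorr's sense.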

Over the last few decades, several systems have shifted towards ever greater levels of complexity and information density. The result has been a shift towards Paretian outcomes, particularly within any event that contains a high percentage of informational content. For instance, when one plots the frequency rank of words contained in a large corpus of text data against the number of occurrences or actual frequencies, Zipf showed that one obtains a power-law distribution.
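The rank-frequency power law is easy to reproduce by sampling from a Zipf(≈1) law over a hypothetical vocabulary and fitting a line on log-log axes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Sample tokens from a Zipf-like law over a 1000-word vocabulary.
ranks = np.arange(1, 1001)
probs = 1.0 / ranks
probs /= probs.sum()
tokens = rng.choice(ranks, size=200000, p=probs)

# Empirical rank-frequency curve: counts sorted in descending order.
counts = np.sort(np.bincount(tokens)[1:])[::-1]
counts = counts[counts > 0]
r = np.arange(1, len(counts) + 1)

# A straight line on log-log axes <=> power law; the slope estimates the exponent.
slope, _ = np.polyfit(np.log(r[:200]), np.log(counts[:200]), 1)
print(slope)
```

The fitted slope comes out close to −1, the classic Zipf exponent.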

Departures from normality could be accounted for by the algorithmic component acting in the market, as is consonant with some empirical observations and common assumptions in economics, such as rule-based markets and agents.

Top-down Causation in Financial Markets. Note Quote.


Regulators attempt to act on a financial market through the intelligent and reasonable formulation of rules. For example, changing the market micro-structure at the lowest level in the hierarchy can change the way that asset prices assimilate changes in information variables Zk,t or θi,m,t. Similarly, changes in accounting rules could change the meaning and behaviour of bottom-up information variables θi,m,t, and changes in economic policy and policy implementation can change the meaning of top-down information variables Zk,t and influence shared risk factors rp,t.

In hierarchical analysis, theories and plans may be embodied in a symbolic system to build effective and robust models to be used for detecting deeper dependencies and emergent phenomena. Mechanisms for the transmission of information, and asymmetric information itself, have impacts on market quality. Thus, Regulators can impact the activity and success of all the other actors, either directly or indirectly through knock-on effects. Examples include the following: Investor behaviour could change the goal selection of Traders; changes in the latter could in turn impact variables coupled to Traders’ activity in such a way that Profiteers are able to benefit from changes in liquidity or use leverage as a means to achieve profit targets and overcome noise.

Idealistically, Regulators may aim at increasing productivity, managing inflation, reducing unemployment and eliminating malfeasance. However, the circumvention of rules, usually in the name of innovation or by claims of greater insight into optimality, is as much a part of a complex system in which participants can respond to rules. Tax arbitrages are examples of actions which manipulate reporting to reduce levies paid to a profit-facilitating system. In regulatory arbitrage, rules may be followed technically, but nevertheless exploit relevant new information which has not been accounted for in system rules. Such activities are consistent with goals of profiteering but are not necessarily in agreement with the longer-term optimality of reliable and fair markets.

Rulers, i.e. agencies which control populations more generally, also impact markets and economies. Examples of top-down causation here include segregation of workers and differential assignment of economic rights to market participants, as in the evolution of local miners’ rights in South Africa in the late 1800s and the national Natives Land Act of 1913 in South Africa, and international agreements such as the Bretton Woods system, the Marshall Plan of 1948, the lifting of the gold standard in 1973, and the regulation of capital allocations and capital flows between individual and aggregated participants. Ideas on target-based goal selection are already in circulation in the literature on applications of viability theory and stochastic control in economics. Such approaches provide alternatives to the Laplacian ideal of attaining perfect prediction by offering analysable future expectations to regulators and rulers.

Is Indian GDP data turning a little too Chinese? Why to be Askance @ India’s Growth Figures?


India defied expectations on Tuesday to retain the title of the world’s fastest growing major economy, despite the pain caused by Prime Minister Narendra Modi’s shock crackdown on cash.

Annual gross domestic product (GDP) growth for the October-December period came in at 7.0 per cent, a tad slower than 7.4 per cent in the previous quarter but much faster than the 6.4 per cent expansion forecast by economists in a Reuters poll. Economists are scratching their heads; it almost seems the economy was untouched by demonetisation. Now, you are one of the strongest defenders of demonetisation. Would you agree that the economy was almost left untouched by demonetisation? Some pain was warranted, was it not?

Shaktikanta Das: As we have explained earlier, we have to go by real statistics. When the Q2 figures – the second-quarter figures for the current year – and the advance estimates were released, at that time also we had explained that we have to go by real statistics and not by anecdotal evidence.

Being the fastest-growing large economy in the world is India’s destiny, and even the most poorly conceived economic policy imaginable can’t stop destiny….To say the data is startling is an understatement. The IMF had predicted that India would grow at around 6 percent in the half-year after “demonetisation,” as it’s called. Most independent economists forecast GDP growth would come in somewhere between 6 and 7 percent. Those economists naturally assumed that withdrawing 86 percent of the country’s currency and reducing access to bank accounts would dampen private consumption.  

Yet if one believes the government’s numbers, taking away most of India’s cash overnight didn’t hurt private spending at all. In fact, private consumption rose by 10.1 percent over the quarter. That’s the highest growth in spending in over five years, and it came at a time when consumer confidence was falling sharply. 

My take on the statistics:
Well, this is a simple tweaking of the equations that differentiate the growth curve. In short, we have all been a part of exams where 9/10 is different from 99/100, even if just one number distances the actual score from the maximum one could score. On similar lines, the crimes of growth are factored in on growth year/base year. This is mathematical jugglery narrowed in on political ends. Whichever way one looks at the data, some of the indicators are still found lagging the composite growth, thereby dumbing down the economists when the growth curve mandates a pattern recognition.
GDP, when calculated at Factor Cost is related with GDP at Market Price, and written as an equation of the form,
GDP (FC) = GDP (MP) – indirect taxes + subsidies
While, Gross Value Added,
GVA (basic prices) = GVA (factor cost) + production taxes – production subsidies
Stamp duties and property taxes make up the production taxes, whereas labour, capital and investment subsidies make up the production subsidies. Why is this done? To inflate GDP after it starts representing the GDP of a country in terms of total GVA, i.e. without discounting for depreciation. Moreover, GDP at market price adds taxes on products and services to, and deducts subsidies from, GDP at factor cost. The sum total of the GVA in various economic activities is called the GDP at factor cost. With a change in method and a subsequent change in base year, India has increased, or rather expanded, its manufacturing base in the sense of capturing it. This has also enabled the country to include informal sectors, which hitherto had not found their true manifestation. This is mere adherence to standards that have become internationalized.
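The two identities can be made concrete with hypothetical figures (the numbers below are invented purely for illustration):

```python
# Hypothetical figures (in billions) to illustrate the identities.
gdp_mp = 150.0          # GDP at market prices
indirect_taxes = 20.0   # indirect (product) taxes collected
subsidies = 5.0         # subsidies paid out

# GDP (FC) = GDP (MP) - indirect taxes + subsidies
gdp_fc = gdp_mp - indirect_taxes + subsidies
print(gdp_fc)   # 135.0

# GVA at basic prices adds net production taxes to GVA at factor cost.
gva_fc = 130.0
production_taxes, production_subsidies = 8.0, 2.0
gva_basic = gva_fc + production_taxes - production_subsidies
print(gva_basic)   # 136.0
```

Shifting from a factor-cost to a market-price headline simply moves the net-tax wedge (here 15 billion) into the measured aggregate, which is the arithmetic behind the rebasing argument above.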
Now, what happens in India’s case is the subsidies part, which has been the fixed denominator for our GDP, unlike in most of the developed world, or even the developing economies. So, our GDP hitherto had largely been GDP (FC). After rearranging the equation above, GDP (FC) would have the subsidies part subtracted, yielding GDP (MP), thus changing the base completely and showing a large share of the economy as growing, rather than the dismal one predicted in the wake of demonetization. This has been in effect since 2012, implying that whatever happens after demonetization, the growth period would project only redundant figures. Slip that into the quarterly period, and yes, the new base would indicate a growing economy, as used by the WB/IMF to forecast India growing faster than China. So, there is nothing really dastardly an act here, but more about how to integrate the parts into the composite to yell at the world: we are growing.

Nobel Prize in Economics and Crimino(logy)/(genic). How Contracts Work? Note Quote.

d40838e2-8ed2-11e6-af59-7ad1937f51f2_1280x720-777x437

How has the Swedish Central Bank’s committee that awards prizes in Economics in honor of Nobel responded to the field’s abject failures regarding the recent financial crisis and the Great Recession?  A lesser group would display humility, acknowledge its failures, and promise a fundamental rethink of the field.  Neoclassical economists, however, are made of sterner stuff.  The committee’s response is to praise the discipline for its theoretical advances and proposed policies related to finance, regulation, and corporate governance. Oliver Hart and Bengt Holmström exemplify this pattern.

The economics prize is a bit different. It was created by Sweden’s Central Bank in 1969, nearly 70 years later. The award’s real name is the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel.” It was not established by Nobel, but supposedly in memory of Nobel. It’s a ruse and a PR trick, and I mean that literally. And it was done completely against the wishes of the Nobel family.

Sweden’s Central Bank quietly snuck it in with all the other Nobel Prizes to give free-market economics for the 1% credibility. One of the Federal Reserve banks explained it succinctly, “Few realize, especially outside of economists, that the prize in economics is not an “official” Nobel. . . . The award for economics came almost 70 years later—bootstrapped to the Nobel in 1968 as a bit of a marketing ploy to celebrate the Bank of Sweden’s 300th anniversary.” Yes, you read that right: “a marketing ploy.”

“The Economics Prize has nestled itself in and is awarded as if it were a Nobel Prize. But it’s a PR coup by economists to improve their reputation,” Nobel’s great-great-nephew Peter Nobel told AFP in 2005, adding that “It’s most often awarded to stock market speculators …. There is nothing to indicate that [Alfred Nobel] would have wanted such a prize.”

Members of the Nobel family are among the harshest, most persistent critics of the economics prize, and members of the family have repeatedly called for the prize to be abolished or renamed. In 2001, on the 100th anniversary of the Nobel Prizes, four family members published a letter in the Swedish paper Svenska Dagbladet, arguing that the economics prize degrades and cheapens the real Nobel Prizes. They aren’t the only ones.

Scientists never had much respect for the new economic Nobel prize. In fact, a scientist who headed Nixon’s Science Advisory Committee in 1969, was shocked to learn that economists were even allowed on stage to accept their award with the real Nobel laureates. He was incredulous: “You mean they sat on the platform with you?”

Why economics? To answer that question we have to go back to Sweden in the 1960s.

Around the time the prize was created, Sweden’s banking and business interests were busy trying to ram through various so-called “free-market” economic reforms. Their big objective at the time was to loosen political oversight and control over the country’s central bank. According to Philip Mirowski, a professor at the University of Notre Dame who specializes in the history of economics, the

Bank of Sweden was trying to become more independent of democratic accountability in the late 60s, and there was a big political dispute in Sweden as to whether the bank could have effective political independence. In order to support that position, the bank needed to claim that it had a kind of scientific credibility that was not grounded in political support.

Promoters of central bank independence couched their arguments in the obscure language of the neoclassical economic theory of market efficiency. The problem was that few people in Sweden took their neoclassical babble very seriously; most saw the plan for central bank independence for what it was: an attempt to transfer control over economic matters from a democratically elected government into the hands of big business interests, giving them a free hand in running Sweden’s economy without pesky interference from labor unions, voters and elected officials.

For the first few years, the Swedish Central Bank Prize in Economics went to fairly mainstream and maybe even semi-respectable economists. But after establishing the award as credible and serious, the prizes took a hard turn to the right. Over the next decade, the prize was awarded to the most fanatical supporters of theories that concentrated wealth among the top 1% of industrialized society. At the time of the prizes, neoclassical economics was not fully accepted by the media and political establishment. But the Nobel Prize changed all that. What started as a project to help the Bank of Sweden achieve political independence ended up boosting the credibility of the most regressive strains of free-market economics, and paving the way for widespread acceptance of libertarian ideology.

The Swedish Riksbank awarded this year’s Nobel prize for economic sciences to Oliver Hart, a British economist at Harvard University, and Bengt Holmstrom, a Finnish economist at MIT, for their work improving our understanding of how and why contracts work, and when they can be made to work better.

Their work focuses attention on the necessity of trade-offs in setting contract terms; it is yet another in a series of recent prizes which explores the unavoidable imperfections in many critical markets. Mr Holmstrom’s analyses of insurance contracts describe the inevitable trade-off between the completeness of an insurance contract and the extent to which that contract encourages moral hazard. From an insurance perspective, the co-payments that patients must sometimes make when receiving treatment are a waste; it would be better for people to be able to insure fully. Yet because insurers cannot know that all patients are receiving only the treatment they need and no more, they employ co-payments as a way to lean against the problem of moral hazard: that some people will choose to use much more health care than they need when the pool of all those being insured picks up the bill.

A common and important thread in work by Messrs Hart and Holmstrom is the role of power in planning co-operative ventures. Individuals or firms with the ability to hold up arrangements – by withholding their service or the use of a resource they own – wield economic power. That power allows them to capture more of the value generated by a co-operative effort, and potentially to sink it entirely, even if the venture would yield big gains for all participants and society as a whole. Contracts exist to shape power relationships. In some cases, they are there to limit the exercise of hold-up power so that a venture can go forward. In others, they are intended to create or protect certain power relationships in order to encourage good behaviour: workers or firms with the right to exit a relationship, for instance, force other parties to that relationship to take their interests into account. The broader lesson – that power matters – is one economics too often neglects.
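The co-payment trade-off can be caricatured numerically. Everything below is a stylised toy, not the Laureates’ model: over-use is simply assumed to fall linearly as the co-pay rises, so the insurer’s expected outlay and the risk left with the patient move in opposite directions.

```python
# Toy moral-hazard illustration; all numbers and functional forms are hypothetical.
def expected_costs(copay_rate, treatment_cost=1000.0, p_illness=0.1):
    # Stylised moral hazard: unneeded use falls linearly as the co-pay rises.
    p_overuse = 0.5 * (1.0 - copay_rate)
    expected_use = p_illness * (1.0 + p_overuse)     # needed + unneeded care
    insurer = expected_use * treatment_cost * (1.0 - copay_rate)
    patient_risk = expected_use * treatment_cost * copay_rate  # risk left with patient
    return insurer, patient_risk

for c in (0.0, 0.2, 0.5):
    print(c, expected_costs(c))
```

Full insurance (zero co-pay) leaves the patient with no risk but maximises both over-use and the insurer’s bill; raising the co-pay trims the bill at the price of shifting risk back onto the patient, which is the trade-off described above.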

The theory holds that the contracting costs between economic units are shaped by the nature of the interaction between them. These costs are not operational costs, such as commission fees or transportation costs. Instead, they stem from the lack of clarity and enforceability of the terms of the interaction and each unit’s dependence on the interaction. And, in the words of today’s prize winners, they cause contracts to be incomplete. 

Difficulties in Negotiating a Transaction

Difficulties in Monitoring an Ongoing Transaction

Difficulties in Enforcing an Agreement

When managers spot these sorts of problems on the horizon, a deal that potentially will create value may not get done because the contract is bound to be incomplete. The danger is that the contract will not specify how to resolve conflicts in the future. This is because the agreement between the parties does not cover all contingencies, all issues, or all possible states of the world. To govern a partnership successfully, then, you need to manage the gaps in the contract. Traditional management techniques call for command and control in these situations, to respond quickly and decisively to new conditions. But this solution is missing from typical partnerships, most of which are characterized by a sharing of control. It may be a formal joint venture with shared ownership or a looser arrangement whereby one party controls certain parts of the joint project and the other party controls others. So, each partner’s control in these combinations is also incomplete.

Neoclassical economic dogma holds that money is the “high power” incentive.  Normal humans know that this is preposterous.  The highest-power incentives are rarely monetary.  People give up their lives for others.  Some of them do so nominally for “duty, honor, country,” but actually because of the effects of “small unit cohesion.” A second neoclassical dogma is ignoring fraud and predation.  The 2016 prizes show how, despite their knowledge of the falsity of the implicit assumption, neoclassical economists repeatedly ignore the manner in which CEOs shape perverse incentives, rendering the Laureates’ compensation and governance policies criminogenic.  A third neoclassical dogma is, implicitly, to assume that perverse incentives do not influence CEOs and those they suborn.  Holmström and Steven N. Kaplan’s article about corporate governance in light of the Enron-era frauds unintentionally displayed this third neoclassical dogma about incentives. The fourth dogma is that regulation cannot succeed because it lacks “high power” incentives. Criminologists’ understanding of incentives, and of how CEOs set and pervert incentives, is far more sophisticated than neoclassical economists’ myths about incentives.  Criminologists supply the content of how predatory CEOs “rig the system.”  Criminologists agree that perverse financial incentives are important contributors to white-collar crime.

 

Hyman Minsky, Karl Polanyi, Deleterious Markets and if there is any Alt-Right to them? Apparently No.

Karl Polanyi often highlighted the perils of the market: left to their own devices, markets are enough to cause attrition to social relations and the social fabric. The social consequences of financial instability, however, can be understood only by employing a Polanyian perspective on how processes of commodification and market expansion jeopardize social institutions. For someone like Hyman Minsky, equilibrium and stability are elusive conditions in markets with debt contracts. His financial instability hypothesis suggests that capitalist economies lead, through their own dynamics, to “the development over historical time of liability structures that cannot be validated by market-determined cash flows or asset values”. According to Minsky, a stable period generates optimistic expectations. Increased confidence and positive expectations of future income streams cause economic actors to decrease margins of safety in their investment decisions. This feeds a surge in economic activity and profits, which turns into a boom as investments are financed by higher degrees of indebtedness. As the economic boom matures, an increasing number of financial intermediaries and firms switch from hedge finance to speculative and Ponzi finance. Minsky argued that economists, misreading Keynes, downplay the role of financial institutions. In particular, he argued that financial innovation can create economic euphoria for a while before destabilizing the economy and hurling it into crises rivaling the Great Depression. Minsky’s insights are evident in the effects of innovations in mortgages and mortgage securities. Actors using speculative and Ponzi finance are vulnerable to macroeconomic volatility and interest rate fluctuations. A boom ends when movements in short-term and long-term interest rates render the liability structures of speculative and Ponzi finance unsustainable.
The likelihood of a financial crisis (as opposed to a business cycle) depends on the preponderance of speculative and Ponzi finance in the economy under question.
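The three financing postures can be expressed as a simple classification of expected cash flow against debt service; the thresholds follow Minsky’s standard definitions, while the function name and the numbers are illustrative:

```python
# Minimal sketch: classify a borrowing unit by whether expected cash flow
# covers its debt service (terminology from Minsky; numbers hypothetical).
def minsky_regime(cash_flow, interest_due, principal_due):
    if cash_flow >= interest_due + principal_due:
        return "hedge"          # income validates the whole liability structure
    if cash_flow >= interest_due:
        return "speculative"    # interest is covered; principal must be rolled over
    return "ponzi"              # even interest requires new borrowing or asset sales

print(minsky_regime(120, 30, 60))   # hedge
print(minsky_regime(80, 30, 60))    # speculative
print(minsky_regime(20, 30, 60))    # ponzi
```

On this reading, a boom shows up as a drift of units from the first branch into the second and third as margins of safety shrink.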


Minsky regularly criticized economists for failing to grasp Keynes’s ideas. In his book Stabilizing an Unstable Economy Minsky argued that while economists assimilated some of Keynes’s insights into standard economic theory, they failed to grasp the connection between the financial and real sectors. Specifically, he argued that finance is missing from macroeconomic theory, with its focus on capital structure, asset-liability management, agency theory, and contracts. He wrote:

Keynes’s theory revolves around bankers and businessmen making deals on Wall Street … One of the peculiarities of the neoclassical theory that preceded Keynes and the neoclassical synthesis that now predominates economic theory is that neither allows the activities that take place on Wall Street to have any significant impact upon the coordination or lack of coordination of the economy…

Minsky’s work on financial crises builds on Keynes’s insights, using terms such as “euphoric economy”, and “unrealistic euphoric expectations with respect to costs, markets, and their development over time”. Yet Minsky considered the issues of rational prices and market efficiency as only the tip of an iceberg. His broad framework addresses issues related to the lending practices by financial institutions, central bank policy, fiscal policy, the efficacy of financial market regulation, employment policy, and income distribution. Financial institutions, such as banks, become increasingly innovative in their use of financial products when the business cycle expands, boosting their leverage and funding projects with ever increasing risk. Minsky’s words on financial innovation are striking, as if foretelling the recent crisis.

Over an expansion, new financial instruments and new ways of financing activity develop. Typically, defects of the new ways and the new institutions are revealed when the crunch comes.

Commercial banks sponsored conduits to finance long-term assets through special purpose entities such as structured investment vehicles (SIVs), something similar to the Indian version of Special Purpose Vehicles (SPVs). These were off balance sheet entities, subjecting them to lower regulatory capital requirements. Special purpose entities used commercial paper to raise funds they then used to buy mortgages and mortgage securities. In effect, banks relied on Minsky-type speculative and Ponzi financing, borrowing short-term and using these borrowed funds to buy long-term assets. Wrote Minsky,

The standard analysis of banking has led to a game that is played by central banks, henceforth to be called the authorities, and profit-seeking banks. In this game, the authorities impose interest rates and reserve regulations and operate in money markets to get what they consider to be the right amount of money, and the banks invent and innovate in order to circumvent the authorities. The authorities may constrain the rate of growth of the reserve base, but the banking and financial structure determines the efficacy of reserves…This is an unfair game. The entrepreneurs of the banking community have much more at stake than the bureaucrats of the central banks. In the postwar period, the initiative has been with the banking community, and the authorities have been “surprised” by changes in the way financial markets operate. The profit-seeking bankers almost always win their game with the authorities, but, in winning, the banking community destabilizes the economy; the true losers are those who are hurt by unemployment and inflation.


Combining Hyman Minsky’s insights on financial fragility with a Polanyian focus on commodification offers a distinct perspective on the causes and consequences of the foreclosure crisis. First, following Polanyi, we should expect to find commodity fiction applied to arenas of social life previously isolated from markets to be at the heart of the recent financial crisis. Second, following Minsky, the transformations caused by novel uses of commodity fiction should be among the primary causes of financial fragility. Finally, in line with a Polanyian focus on the effects of the supply-demand-price mechanism, the price fluctuations caused by financial fragility should disrupt existing social relations and institutions in a significant manner. So, how does this all filter down to the alt-right? Right-wing libertarianism is basically impossible. The “free” market as we know it today needs the state to be implemented – without even reading Polanyi, you know, for example, that without the force of the state you just can’t have private property or all the legal arrangements that underpin property, labour and money. So it wouldn’t work anyway. Polanyi’s point is that if we want democracy to survive, we need to beware of financial overlords and their ideological allies peddling free-market utopias. And if democracy so much as smells of stability, then stability is destabilizing, as Minsky would have had it, thus corroborating the cross-purposes between the two thinkers in question, at least to the point of a beginning.