The Mathematics of Political Policies and Economic Decisions – Dictatorial (Extremists) Versus Democratic. Note Quote. Part 1 Didactics.

[Figure omitted. Caption: if a strategy-proof, onto rule does not pick x∗ when it is the median of the peaks and the ym's, a contradiction is reached using preferences with peaks at piL and piH.]

Let us take a look at single-peaked preferences over one-dimensional policy spaces. This domain can be used to model political policies, economic decisions, location problems, or any allocation problem in which a single point must be chosen from an interval. The key assumption is that each agent's preferences have a single most-preferred point in the interval, and that preferences are "decreasing" as one moves away from that peak.

Formally, the allocation space (or policy space) is the unit interval A = [0, 1]. An outcome is a single point x ∈ A. Each agent i ∈ N has a preference ordering ≽i, which is a weak order over the outcomes in [0, 1]. The preference relation ≽i is single-peaked if ∃ a point pi ∈ A (the peak of ≽i) such that ∀ x ∈ A\{pi} and all λ ∈ [0,1), (λx +(1−λ)pi) ≻i x. Let R denote the class of single-peaked preferences.
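
To make the definition concrete, here is a minimal Python sketch (not from the original text) of a *symmetric* single-peaked preference, in which one outcome is weakly preferred to another exactly when it lies no farther from the peak. Symmetry is an extra simplifying assumption: the definition above also allows orderings that fall off at different rates on the two sides of the peak.

```python
# A minimal encoding of a symmetric single-peaked preference on [0, 1].
# Symmetry (distance from the peak decides everything) is a simplifying
# assumption for illustration; the general definition allows any weak
# order that falls off monotonically on each side of the peak.

class SymmetricSinglePeaked:
    def __init__(self, peak: float):
        assert 0.0 <= peak <= 1.0
        self.peak = peak

    def weakly_prefers(self, x: float, y: float) -> bool:
        # x >=_i y  iff  x is no farther from the peak than y.
        return abs(x - self.peak) <= abs(y - self.peak)

pref = SymmetricSinglePeaked(0.4)
assert pref.weakly_prefers(0.4, 0.9)   # the peak beats everything
assert pref.weakly_prefers(0.3, 0.6)   # closer to 0.4 is better
```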

We denote the peaks of preference relations ≽i, ≽′i, ≽j, etc., respectively by pi, pi′, pj, etc. Denote a profile (n-tuple) of preferences as ≽ ∈ Rn.

One can imagine this model as representing a political decision such as an income tax rate, another political issue with conservative/liberal extremes, the location of a public facility on a road, or even something as simple as a group of people deciding on the temperature setting for a shared office. In each case the agents have an ideal policy in mind and would prefer the decision to be made as close as possible to this “peak.”

A rule f : Rn → A assigns an outcome f(≽) to each preference profile ≽. A rule is strategy-proof if reporting his preferences truthfully is a dominant strategy for each agent whenever the rule is used to choose a point.
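
As a hedged illustration of the definition, the following brute-force check searches a finite grid for a profitable misreport. It is a sketch under two simplifying assumptions: the rule reads only the reported peaks, and preferences are symmetric single-peaked, so "better" means "closer to the true peak". The helper name is_strategy_proof and the grid are hypothetical, reused in later sketches.

```python
# Brute-force strategy-proofness check over a finite grid of peaks.
# Simplifying assumptions: the rule depends only on reported peaks,
# and preferences are symmetric single-peaked, so agent i prefers
# whichever outcome is closer to his true peak.

import itertools

def is_strategy_proof(rule, grid, n):
    """rule maps an n-tuple of reported peaks to an outcome in [0, 1]."""
    for peaks in itertools.product(grid, repeat=n):
        truthful = rule(peaks)
        for i, p in enumerate(peaks):
            for lie in grid:
                outcome = rule(peaks[:i] + (lie,) + peaks[i + 1:])
                if abs(outcome - p) < abs(truthful - p):
                    return False          # agent i gains by misreporting
    return True

grid = [i / 10 for i in range(11)]
# A dictatorial rule (always agent 0's peak) passes the check:
assert is_strategy_proof(lambda peaks: peaks[0], grid, n=2)
```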

Our purpose then is to see that this class of problems admits a rich family of strategy-proof rules whose ranges include more than two alternatives. In fact, the family of such rules remains rich even when one restricts attention to rules that satisfy the following condition.

We say that a rule f is onto if ∀ x ∈ A ∃ ≽ ∈ Rn such that f(≽) = x. An onto rule cannot preclude an outcome from being chosen ex ante. Imposing this condition is not without loss of generality: it genuinely restricts the class of strategy-proof rules. For instance, fix two points x, y ∈ [0, 1] and consider a rule that chooses whichever of the two points is preferred to the other by a majority of agents (and where x is chosen in case of a tie). Such a rule is strategy-proof, but not onto. Similar strategy-proof rules can even break ties between x and y by using preference information about other points x′, y′, . . ., in [0, 1], even though x′, etc., are not in the range of the rule.

The onto condition is even weaker than what is called unanimity, which requires that whenever all agents’ preferences have the same peak (pi = pj ∀ i, j), the rule must choose that common peak as the outcome. In turn, unanimity is weaker than Pareto-optimality: ∀ ≽ ∈ Rn, ∃ no point x ∈ [0, 1] such that x ≻i f(≽) ∀ i ∈ N.

As it turns out, these three requirements are all equivalent among strategy-proof rules. Suppose f is strategy-proof. Then f is onto iff it is unanimous iff it is Pareto-optimal.

It is clear that Pareto-optimality implies the other two conditions. Suppose f is strategy-proof and onto. Fix x ∈ [0, 1] and let ≽ ∈ Rn be such that f(≽) = x. Consider any “unanimous” profile ≽′ ∈ Rn such that pi′ = x for each i ∈ N. By strategy-proofness, f(≽′1, ≽2, …, ≽n) = x, otherwise agent 1 could manipulate f. Repeating this argument, f(≽′1, ≽′2, ≽3, …, ≽n) = x, …, f(≽′) = x. That is, f is unanimous.

In order to derive a contradiction, suppose that f is not Pareto-optimal at some profile ≽ ∈ Rn. This implies that either (i) f(≽) < pi ∀ i ∈ N or (ii) f(≽) > pi ∀ i ∈ N . Without loss of generality, assume (i) holds. Furthermore, assume that the agents are labeled so that p1 ≤ p2 ≤ ··· ≤ pn.

If p1 = pn then unanimity is violated, completing the proof. Otherwise, let j ∈ N be such that p1 = pj < pj+1; that is, j < n agents have the minimum peak. ∀ i > j, let ≽′i be a preference relation such that both pi′ = p1 and f(≽) ≽′i pi.

Let xn = f(≽1,…, ≽n−1, ≽′n). By strategy-proofness, xn ∈ [f(≽), pn], otherwise agent n (with preference ≽′n) could manipulate f by reporting preference ≽n. Similarly, xn ∉ (f(≽), pn], otherwise agent n (with preference ≽n) could manipulate f by reporting preference ≽′n. Therefore xn = f(≽).

Repeating this argument as each i > j replaces ≽i with ≽′i, we have f(≽1,…, ≽j, ≽′j+1,…, ≽′n) = f(≽) < p1, even though every agent's peak in this final profile is p1. Since a strategy-proof, onto rule must be unanimous, this is a contradiction.

The central strategy-proof rule on this domain is the simple median-voter rule. Suppose that the number of agents n is odd. Then the rule that picks the median of the agents’ peaks (the pi's) is a strategy-proof rule.

It is easy to see why this rule is strategy-proof: if an agent’s peak pi lies below the median peak, then he can change the median only by reporting a preference relation whose peak lies above the true median. The effect of this misreport is for the rule to choose a point even further away from pi, making the agent worse off. A symmetric argument handles the case in which the peak is above the median. Finally, an agent cannot profitably misreport his preferences if his peak is the median one to begin with.

More generally, for any number of agents n and any positive integer k ≤ n, the rule that picks the kth highest peak is strategy-proof for precisely the same reasons. An agent can only move the kth peak further from his own. The median happens to be the case where k = (n + 1)/2.
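
In code, these order-statistic rules are one-liners. The sketch below (again assuming peaks-only reports and symmetric preferences) can be fed to the is_strategy_proof helper and grid from the earlier sketch; the median is recovered as the k = (n + 1)/2 case for odd n.

```python
import statistics

def kth_statistic_rule(k):
    """The rule that picks the k-th highest reported peak."""
    def rule(peaks):
        return sorted(peaks, reverse=True)[k - 1]
    return rule

def median_rule(peaks):
    # For odd n, the median is the k-th statistic with k = (n + 1) / 2.
    return statistics.median(peaks)

grid = [i / 10 for i in range(11)]
assert is_strategy_proof(median_rule, grid, n=3)            # helper above
assert is_strategy_proof(kth_statistic_rule(2), grid, n=3)  # k-th statistic
```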

The strategy-proofness of such rules stands in contrast to the incentive properties of rules that choose average-type statistics. Consider the rule that chooses the average of the n agents’ peaks. Any agent whose peak pi ∈ (0, 1) differs from the average can manipulate the rule by reporting preferences with a more extreme peak (closer to 0 or 1) than his true peak, dragging the average toward pi.
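
A worked numeric example (invented numbers) makes the manipulation concrete: with true peaks 0.2, 0.5 and 0.8, the average is 0.5; if the first agent reports a peak of 0 instead of 0.2, the average drops to about 0.433, which is closer to his true peak.

```python
def mean_rule(peaks):
    return sum(peaks) / len(peaks)

honest = mean_rule((0.2, 0.5, 0.8))            # 0.5
lying  = mean_rule((0.0, 0.5, 0.8))            # ~0.433: agent 0 reports 0.0
assert abs(lying - 0.2) < abs(honest - 0.2)    # misreporting pays off
```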

This would also hold for any weighted average of the agents’ peaks, with one exception: if a rule allocates all of the weight to one agent, then the resulting rule simply picks that agent’s peak always. Such a dictatorial rule is strategy-proof and onto.

In addition to favorable incentive properties, rules based on order statistics require little information to be computed. Technically, a rule requires agents to report an entire preference ordering over [0, 1]. The rules described above, however, only require agents to report their most preferred point, i.e., a single number. In fact, under the onto assumption, this informational property is a consequence of the strategy-proofness requirement; that is, all strategy-proof and onto rules can be computed solely from information about the agents’ peaks.

Let us generalize the class of “kth-statistic rules.” Fix a set of points y1, y2, …, yn−1 ∈ A and consider the rule that, for any profile of preferences ≽, chooses the median of the 2n − 1 points consisting of the n agents’ peaks and the n − 1 fixed points ym. This rule differs from the previous ones in that, for some choices of y and some profiles of preferences, it may choose a point that is not the peak of any agent’s preferences. Yet such a rule is strategy-proof.
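
A sketch of such a rule, assuming as before that only peaks are reported: the fixed points ym act as "phantom" peaks thrown into the median.

```python
import statistics

def generalized_median(ys):
    """Median of the n reported peaks plus the n - 1 fixed points ys."""
    def rule(peaks):
        assert len(ys) == len(peaks) - 1
        return statistics.median(list(peaks) + list(ys))
    return rule

rule = generalized_median([0.3, 0.7])   # n = 3, two fixed points
print(rule((0.0, 0.0, 1.0)))            # 0.3 -- not any agent's peak
print(rule((0.9, 0.95, 1.0)))           # 0.9 -- an actual peak
```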

Such rules constitute the entire class of strategy-proof and onto rules that treat agents symmetrically. To formalize this latter requirement, we call a rule anonymous if for any ≽ ∈ Rn and any permutation ≽′ of ≽, f(≽′) = f(≽). This requirement captures the idea that the agents’ names play no role in the behavior of a rule. Dictatorial rules are examples of rules that are strategy-proof and onto, but not anonymous.

A rule f is strategy-proof, onto, and anonymous iff ∃ y1, y2,…, yn−1 ∈ [0,1] such that ∀ ≽ ∈ Rn,

f(≽) = med{p1, p2,…, pn, y1, y2,…, yn−1} —– (1)

Suppose f is strategy-proof, onto, and anonymous. We make extensive use of the two (extreme) preference relations that have peaks at 0 and 1 respectively. Since preference relations are ordinal, there is only one single-peaked preference relation with a peak at 0 and only one with a peak at 1. Denote these two preference relations by ≽0i and ≽1i respectively.

For any 1 ≤ m ≤ n − 1, let ym denote the outcome of f when m agents have preference relation ≽1i and the remainder have ≽0i:

ym = f(≽01,…, ≽0n−m, ≽1n−m+1,…, ≽1n)

By anonymity, the order of the arguments of f is irrelevant; if precisely m agents have preference relation ≽1i and the rest have ≽0i then the outcome is ym.

Now fix a profile of preferences ≽ ∈ Rn with peaks p1, …, pn; by anonymity we may assume without loss of generality that pi ≤ pi+1 for each i ≤ n − 1. We claim that

f(≽) = x∗ ≡ med{p1,…, pn, y1,…, yn−1}.

Suppose first that the median is one of the fixed points, i.e., x∗ = ym for some m. Since x∗ is the median of 2n − 1 points, the monotone ordering of the peaks and the ym's implies pn−m ≤ x∗ = ym ≤ pn−m+1. By assumption,

x∗ = ym = f(≽01,…, ≽0n−m, ≽1n−m+1,…, ≽1n) —– (2)

Let x1 = f(≽1, ≽02,…, ≽0n−m, ≽1n−m+1,…, ≽1n). Strategy-proofness implies x1 ≥ x∗, otherwise agent 1 with preference ≽01 could manipulate f. Similarly, since p1 ≤ ym, we cannot have x1 > x∗, otherwise agent 1 with preference ≽1 could manipulate f. Hence x1 = x∗. Repeating this argument for all i ≤ n − m, x∗ = f(≽1,…,≽n−m, ≽1n−m+1,…, ≽1n). The symmetric argument for all i > n−m implies

f(≽1,…, ≽n) = x∗ —– (3)

The remaining case is that the median is an agent's peak: ym < x∗ < ym+1 for some m. (The cases where x∗ < y1 and x∗ > yn−1 are similar, with the conventions y0 = 0 and yn = 1.) In this case, since the agents’ peaks are in increasing order, we have x∗ = pn−m. If

f(≽01,…, ≽0n−m−1, ≽n−m, ≽1n−m+1,…, ≽1n) = x∗ = pn−m —– (4)

then, analogous to the way (2) implied (3), repeated applications of strategy-proofness (to the n−1 agents other than i = n−m) would imply f(≽1,…, ≽n) = x∗, and the proof would be finished.

Thus, the parameters (the ym's) can be thought of as the rule’s degree of compromise when agents have extremist preferences. If m agents prefer the highest possible outcome (1) while n − m prefer the lowest (0), which point should be chosen? A pure median rule would pick whichever extreme (0 or 1) contains the most peaks. The generalized rules, by contrast, may choose an intermediate point (ym) as a compromise, and the degree of compromise (which ym) can depend on how divided the agents’ opinions are (the size of m).

The anonymity requirement is a natural one in situations where agents are to be treated as equals. If one does not require this, however, the class of strategy-proof rules becomes even larger. Alongside the dictatorial rules, which always choose a predetermined agent’s peak, there are less extreme violations of anonymity: the full class of strategy-proof, onto rules allows agents to be treated with varying degrees of asymmetry.

Fascism’s Incognito – Brechtian Circular Circuitry. Note Quote.

Carefully looking at the Brechtian article and unstitching it, herein lies the gist (this is reproduced from an email exchange and is hence quite basic in its arguments!!):

1. When Brecht talks of acceding to the capitulation of Capitalism, in that, being a historic phase and new and old at the same time, this nakedest manifestation of Capitalism is attributed to relationality, which is driven by functionalist propositions and consists of non-linear, reversible schemas existing independently of the specific contents that are inserted as variables. This may sound a bit philosophical, but it is the driving force behind Brecht’s understanding of Capitalism and is perfectly corroborated in his famous dictum, “Reality as such has slipped into the domain of the functional.” This dictum underlines what is new and what is old at the same time.
2. Sometime in the 1930s, Brecht’s writings corroborated the linkages between Capitalism and Fascism, when the victories of European fascism prompted consideration of the relationship between collective violence and regressive social configurations. At its heart, his corpus of the period was a defining moment for finance capital, an elaborate systemic treatment of economic transactions within the literary narrative, with fascistic overtones. It is here that the capitalist is consummate par excellence, motivated by rational calculus (Ayn Rand rings the bells!!!). Eschewing the narrative desire of the traditional dramatic novel, Brecht compels the reader without any recourse to emotional intensity and catharsis, capturing attention instead via the phlegmatic and sublimated pleasures of logical analysis, riddle-solving, and remainderless bookkeeping. This coming together of financial capital with the rise of European Fascism, despite leading to barbaric times in due course, brought forth the progeny of the corporation merging with the state, incorporating social functions into integrated networks of production and consumption. What Brecht registers as barbaric was incidentally penned in this tumultuous era, when capital was evolving from Fordist norms into Corporations, atrophying the human dimension in the process. This fact extrapolates into contemporary times, when capital has been financialized to the extent of artificial intelligences, HFTs and algorithmic decision-making, just to sound a parallel to Nature 2.0.
But, before digressing a bit too far, where is Brecht lost in the history of class consciousness here? With capital evolving exponentially, even if there is little or no class consciousness in the proletariat, there will come a realization that exploitation is widespread. This is the fecund ground on which nationalist and fascist rhetoric seeds into a full-grown tree, inciting xenophobias infused with radicalization (this happened historically in Italy and in Germany, and is getting replicated on micro-to-macro scales contemporarily). But what Brecht fails to come to terms with is the whole logic of the fascists against the capitalist. Fascists struggled with the capitalist question within their own circles (a far-fetched parallel drawn here as regards India is the right ideologue’s opposition to FDI, for instance). Historically speaking, and during the times when Bertolt was actively writing, there were more working-class members, many with anti-capitalist leanings, in the Italian fascist ranks than in any other group. In Nazi Germany, close to 30 per cent of the stormtroopers minimally identified or sympathized with communism; the rest looked up to fascism as an alternative stronger than socialism/communism in its militancy. The intellectual and moral (“moral” might be a strikethrough term here, but in any case…) tonic was provided by the bourgeois liberals, who opposed fascism from their capitalist bent. All in all, Brecht could have been prescient to say the most, but was too ensconced, to say the least, in Marxist paradigms to analyze this suturing of ideological interests. That fascism ejected itself from complete subservience to Capitalism, at least historically, is evident from the trajectory of the revolutionary syndicalist Edmondo Rossoni, who was extremely critical of internationalism and spearheaded Italian fascist unions that far outnumbered Italian fascist party membership. Failure to recognize this fractious relationship between Fascism and Capitalism jettisons the credibility of the Brechtian piece linked.
3. Althusser once remarked that Brecht’s work displays two distinct forms of temporality that fail to achieve any mutual integration, that have no relation with one another, and that, despite coexisting and interconnecting, never meet. The essay linked above is a prime example of Althusser’s remark. What Brecht achieves is a demonstration of the incongruous temporalities of capital and the human (of Capitalism and Barbarism/Fascism respectively), but he is unable to fit these incongruities into a jigsaw puzzle the size of Capitalism, not just in his active days, but even as regards the very question of his being prescient for contemporary times, as mentioned in point 2 of this response. Brecht’s reconstruction of the genealogy of Capitalism in tandem with Fascism parses out the link in a commoditized linear history (a fallacy even within the Marxian notion of history as the history of class consciousness, in my opinion), ending up trapped in tautological circles, since the human mind falls short of comprehending the paradoxical fact that Capitalism always seems so good at presupposing itself.
It is for these reasons that I opine Brecht has a circular circuitry.

Serge Galam’s Sociophysics: A Physicist’s Modeling of Psycho-political Phenomena

The Trump phenomenon is argued to depart from the current populist rise in Europe. According to a model of opinion dynamics from sociophysics, the machinery of Trump’s amazing success obeys well-defined counter-intuitive rules; his success was therefore in principle predictable from the start. The model uses local majority rule arguments and obeys a threshold dynamics. The associated tipping points are found to depend on the leading collective beliefs, cognitive biases and prejudices of the social group that undertakes the public debate. And here comes the open sesame of the Trump campaign, which develops along two successive steps. At first, a Trump statement produces a majority of voters against him. But at the same time, according to the model, the shocking character of the statement modifies the prejudice balance. If the prejudice is present among voters, even in frozen form, the tipping point is lowered to Trump’s benefit. Nevertheless, although the tipping point has been lowered by the activation of frozen prejudices, it remains instrumental to preserve enough support from openly prejudiced people to stay above the threshold.
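
A minimal simulation in the spirit of Galam's local-majority-rule dynamics (all parameter values invented for illustration; Galam's published models vary group sizes and calibrate the bias differently): opinions update through random discussion groups whose ties are broken by a frozen prejudice with probability k, so a sufficiently strong bias lowers the tipping point and lets an initial minority win the debate.

```python
import random

def update(p_a, k, groups=100_000, size=4):
    """One cycle of local majority rule; returns the new fraction
    holding opinion A. Ties in even-sized groups break toward A with
    probability k -- the 'frozen prejudice' channel described above."""
    wins = 0
    for _ in range(groups):
        a = sum(random.random() < p_a for _ in range(size))
        if a > size - a or (a == size - a and random.random() < k):
            wins += 1
    return wins / groups

p = 0.45                              # opinion A starts as the minority
for step in range(1, 7):
    p = update(p, k=0.8)              # prejudice strongly favors A
    print(f"cycle {step}: {p:.3f}")   # crosses 1/2 within a few cycles
```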

Serge Galam – Sociophysics: A Physicist’s Modeling of Psycho-political Phenomena

 

Knowledge Limited for Dummies….Didactics.

header_Pipes

Bertrand Russell, with Alfred North Whitehead, aimed in the Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics in which every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion, when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (hence his flight to the Institute for Advanced Study at Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, a statement like: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its converse is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that imbed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality: the combined precision with which an electron’s location and momentum can be measured is bounded, in that the product of the two uncertainties cannot fall below a fixed value (Δx · Δp ≥ ħ/2). The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength, the greater the amount of energy that hits the electron, and the greater the energy hitting the electron, the greater the impact on its velocity.

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, propagated a slight error into the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
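
The effect is easy to reenact (a toy sketch, not Lorenz's original twelve-variable weather model): integrate the familiar Lorenz-63 equations with a crude Euler scheme, restart a second copy from the same state rounded to three decimals, as with Lorenz's printout, and watch the two runs part ways.

```python
def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude Euler step of the Lorenz-63 system.
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
for _ in range(500):                      # settle onto the attractor
    a = step(a)
b = tuple(round(v, 3) for v in a)         # "reentered from the printout"

for n in range(1, 2001):
    a, b = step(a), step(b)
    if n % 500 == 0:
        print(n, abs(a[0] - b[0]))        # gap grows by orders of magnitude
```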

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
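
The flavor of such an exercise can be sketched in a few lines (all coefficients invented for illustration, far simpler than the institute's actual artificial-market models): trend followers chase the last price move, value traders lean against deviations from a perceived fundamental value, and liquidity traders arrive as noise; the resulting price path wanders, overshoots, and reverts in a qualitatively market-like way.

```python
import random

value = 100.0                              # perceived fundamental value
prev, price = 100.0, 100.0
path = []
for t in range(1000):
    momentum  = 0.95 * (price - prev)      # trend followers chase returns
    reversion = 0.02 * (value - price)     # value traders lean against price
    noise     = random.gauss(0.0, 0.5)     # liquidity-driven random trades
    prev, price = price, price + momentum + reversion + noise
    path.append(price)

print(f"min {min(path):.1f}, max {max(path):.1f}")  # boom-bust excursions
```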

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever set of equations might be divined to govern the financial world, they are neither simple nor deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course even of deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Why the Political needs the Pervert? Thought of the Day 102.1


Thus perverts’ desire does not have the opportunity to be organized around finding a fantasmatic solution to the real of sexual difference. The classical scenario of Oedipal dynamics, with its share of lies, make believe, and sexual theories, is not accessible to them. This is why they will search desperately to access symbolic castration that could bring solace to their misery. — Judith Feher-Gurewich (Jean-Michel Rabaté – The Cambridge Companion to Lacan)

Nonetheless, it is contradictory to see the extra-ordinary’s goal as the reinsertion of castration when, in fact, there is nothing in his perverse scenarios that incarcerates him in misery. It is more a fantasmatic solution to the deciphering of the enigma of sexual difference, precisely by veiling difference. The extra-ordinary wishes to maintain this veiling, inasmuch as his jouissance is derived this way. Even if the extra-ordinary’s efforts to infinitize jouissance are eventually sealed by castration, this is more a side effect of the “perverse” act. In the end, desire always reinscribes itself. Symbolic guilt is inserted into the extra-ordinary’s world through castration, not because the latter relieves him, but because his fantasy has failed. This failure is what creates the misery of the pervert, as of any other subject.

His main target is centred on filling the Other with jouissance. However, it is not something he produces, but rather something he unlocks. The pervert unleashes a jouissance already present in the Other, by eradicating the primacy of the phallic signifier and revealing the Other’s jouissance (the emptiness, the feminine). The neurotic’s anxiety concerns the preservation of desire through the duplication of castration, whereas the pervert’s anxiety emerges from the reverse condition: the question of how to extract jouissance from the object without it falling. He does not want to let the object fall, not for fear of castration, but out of the wish to retain jouissance. Inexorably, the nagging question of how to obstruct desire from returning to its initial place grips the pervert because, together with desire, the lack in the Other returns, restoring and maintaining his desiring status instead of his enjoying status. Without doubt, these are fantasmatic relations that sustain “perverse” desire for jouissance and, at the same time, impose a safe distance from the horror of the Thing’s return.

Anxiety intervenes as the mediating term between desire and jouissance. The desiring subject seeks jouissance, but not in its pure form. Jouissance has to be related to the Other, to occupy a space within the Other of signification, to be put into words. This is what phallic jouissance, the jouissance of the idiot, aims at. The idiocy of it lies in its vain and limited character, since jouissance always fails signification and only a residue is left behind. The remainder is the object a, which perpetuates the desire of the subject. But the object is desired as absent. Coming too close to it, one finds this absence occupied by a real presence. In that case, the object has to fall, like the phallus in its exhausted stage, in order to maintain the desiring status of the subject. The moment desire returns, the object falls, or, better, the moment the object falls, desire returns.

While the subject is engaged in an impossible task (that of inscribing jouissance in the place of the Other) she draws closer to the object. The closer she gets, the more anxiety surfaces, alerting the subject to the presence of a real Other, a primitive pre-symbolic being. In the case of the pervert, things are somehow different. It is not so much the inscription of jouissance in the Other that troubles him, but rather the erasure of desire from the field of the Other and its return to a state of unconstrained enjoyment. So, for the pervert, it is essential that the object maintains its potency, not in the service of desire but in the service of jouissance. The anxiety of the extra-ordinary becomes an erotic signal that calls the Other to abandon the locus of desire and indulge in jouissance. But, eventually, desire puts an end to it.

It is not that the extra-ordinary aims at castration so as to let loose some of his anxiety. As an integral part of sexual jouissance, the extra-ordinary does not want to give up anxiety, which is what the neurotic, inversely, does with his symptom. The Other’s anxiety, the exposition of its truth, requests the confinement of the jouissance operating in perversion. Castration has to be imposed because of the contaminating nature of the object’s jouissance. The more it maintains its omnipotent character, the more it threatens the Other’s consistency, as provided by desire. The extra-ordinary dramatizes the staging of castration. It is not an actual event, as the phallus does not belong to the order of the cosmic world. Nonetheless, politics and power locate the phallus in the imaginary realm. Emblems of patriarchal power are handed from one authority figure to the next, propelling the replication of the same power mechanism and concealing the absence of the phallus.

The social and the political world needs the “pervert” in order to redefine and reinscribe the imaginary boundaries of its morality and, hence, since the patriarchal orientation of the majority is taken as a gnomon, enhance the existing moral code. This reflects the underlying imaginary dynamics of what social constructionism has long described: the exception of the pervert makes the rule for the “normality” of the present moral, social, political, and cultural organization of the world. As long as the pervert remains outside of this world, safety from the perilous obscenity and odiousness of real jouissance is ensured. Concomitantly, this translates into further distance from desire and its permanent endurance, something that nourishes guilt, as was previously argued. As if guilt suggested a privileged moral state, power uses it as an essential demagogic tool in order to secure its good and further vilify the “pervert”, who also experiences guilt for “betraying” desire, not in the sense of staying away from jouissance, but of failing to fully consummate it.

Licence to Violence. Thought of the Day 101.0


Every form of violence against fellow human beings is a problematic proposition for the overwhelming majority of people. With the exception of small minorities of individuals who are either morally indifferent to violence or categorically opposed to it, whatever the circumstances, the rest of the population operates in a context of ‘cognitive dissonance’.

This state of mind is determined by fundamental conflicts between what is psychologically desirable, practically feasible, pragmatically expedient and morally justifiable. Violence against ‘contestant others’ may be (or may have become, depending on the circumstances) desirable to a number of people. Yet, the desirability of a life without others is usually offset by the much more profound notion of moral inadmissibility of the violent action per se, by a belief that such a prospect is impossible, by a fear of the consequences of such an action or by a combination of all these concerns.

With regard to the desirability of a violent encounter with ‘others’, nationalism, nation-statism and racialism had already made a significant contribution, accentuating the psychological distance between the national community and its particular ‘others’, often dehumanizing or delegitimizing them and fermenting negative passion against them. An act of physical elimination, however, requires much more than the mere desirability of violence or its outcomes. It is not just linked to a result but also to the action itself that involves a particular repugnant (violent) method. Therefore, authorization of violence and participation in its discharge require a negotiation of the state of cognitive dissonance, whereby desirability and expediency outweigh (even marginally or in ad hoc circumstances) the moral, legal and political impediments to violence or trivialize the problematic nature of the means used to achieve the desired goal.

The leap from abstract intention or desire to strong targeted passion and finally to concrete violent action presupposes a convincing resolution of the inner personal tension underpinning the state of cognitive dissonance. For genocide to take place, and for ordinary individuals to become active participants, this dissonance has to be first escalated by rendering the option of elimination more desirable or accessible. Then it has to be resolved one way or another by making the individual feel their actions are broadly consistent with their overall worldview. Cognitive dissonance may result either in the abandonment of the proposed action as irreconcilable with one’s ethical outlook or in the endorsement of the action through a process of changing the parameters of the dissonance itself: by endorsing new definitions of what is acceptable in the given circumstances, by ‘relativizing’ the problematic nature of the action in the light of expected outcomes or by altogether evading the dissonant mindset.

Cognitive dissonance, therefore, revolves around a tension between three main considerations: the psychological desirability, practical feasibility and moral admissibility of the action. Only a very small minority of people do not experience such tensions – either because they axiomatically reject any form of violence or because they do not see violence itself as problematic.

The majority usually find themselves pulled in different directions by each of these three considerations. They may distrust, fear or even despise ‘others’, but have fatalistically accepted the condition of coexistence, unable to conceive of a different scenario. They may long for a life without particular (or all) ‘others’, but perceive this condition as utopian, choosing instead to adapt to the awkward realities of living side by side. Alternatively, they may strongly desire the prospect of somehow ridding themselves of ‘others’, but nevertheless refrain from any violent action against them, either because they fear sanctions/reprisals or because they consider this course of action inadmissible in spite of the ostensible desirability of its effects.

In negotiating such tensions, the notion of external, authoritative licence is crucial in turning dissonance from an impediment into an incentive to unbound freedom of passion, behaviour and action. Licence is not a positive, normative freedom to act, but an ‘authorized transgression’, a special dispensation that creates a new, temporary and exceptional domain of diminished accountability. Its element of permissibility refers to particular circumstances of time and space, as well as goals and limits. Every licence redefines what is permissible in an expanded way, but it does not do so irreversibly or without caveats – conventional or new. Every new domain of licence constitutes a new moral order that is synonymous with the removal of sanctions and of accountability.

Whether authorized from above or claimed spontaneously in the absence of authority, licence makes sense only because of the awareness of the taboo nature of what it entails. However, its nature, scope and targets are determined by the authorization or by the circumstances that generated it. Like violence, it is not blind but is linked to predispositions and specific opportunities – there and then. As a form of special dispensation – exceptional in its devices, goals and particular targets – licence involves the conditional suspension of those hindrances that usually keep the exercise of sovereign violence at bay and prevent full decontestation. By removing, cancelling out or weakening constraints, it enables individuals and groups to accept the desirability of a violent scenario – even if the latter contradicts generic cultural understandings of defensible or just behaviour.

Licence may facilitate the acceptance of a particular course of violent action against a particular ‘other’ in a particular setting by strengthening the scenario’s relative desirability and/or by reducing the force of inhibiting factors and, little by little, through precedent and repetition, it may also redefine the moral universe of an individual or community by rendering previously taboo feelings and actions less troubling and more admissible. Thus, licence can be both an ad hoc dispensation and a long-term strategy for preparing a group for a new form of moral conduct they would previously consider unacceptable or problematic.

Perverse Ideologies. Thought of the Day 100.0


Žižek (Fantasy as a Political Category: A Lacanian Approach) says,

What we are thus arguing is not simply that ideology permeates also the alleged extra-ideological strata of everyday life, but that this materialization of ideology in the external materiality renders visible inherent antagonisms that the explicit formulation of ideology cannot afford to acknowledge. It is as if an ideological edifice, in order to function “normally,” must obey a kind of “imp of perversity” and articulate its inherent antagonism in the externality of its material existence.

In this fashion, Žižek recognizes an element of perversity in all ideologies, as a prerequisite for their “normal” functioning. This is because all ideologies disguise lack, and thus desire, through disavowal. They know that lack is there, but at the same time they believe it is eliminated: there is an object that takes over lack, namely the Good each ideology endorses through imaginary means. If we generalize Žižek’s suggestion, we can either see all ideological relations as mediated by a perverse liaison, or see perversion as a condition that simply helps subjects relate to each other when signification fails and they are confronted with the everlasting question of sexual difference, the non-representable dimension. Ideology, then, is just one solution that makes use of the perverse strategy when dealing with Difference. In any case, it is not pathological and cannot be characterized mainly by the role of disavowal. Instead of the père-vers (a Lacanian neologism that condenses “perversion” and “vers le père”, referring to the search for a jouissance that does not abolish the division of the subject, her desire; in this respect, the père-vers is typical of both neurosis and perversion, where the Name-of-the-Father is not foreclosed and complete jouissance therefore remains unobtainable), a sexuality that searches not for absolute jouissance but for a jouissance related to desire, the political question is more pertinent to the père-versus, so to say: anything that goes against the recognition of the desire of the Other. Any attempt to disguise lack for instrumental purposes is a père-versus tactic.

To the extent that this external materialization of ideology is subjected to fantasmatic processes, it divulges nothing more than the perversity that organizes all social and political relations, far from the sexual pathology associated with the pervert. The Other of power, this fictional Other that any ideology fabricates, is the One who disavows the discontinuities of the normative chain of society. Expressed through the signifiers used by leadership, this Other knows very well the cul-de-sac of the fictional view of society as a unified body, but still believes that unity is possible, substantiating this ideal.

The ideological Other disregards the impossibility of bridging Difference; it therefore meets the perversion that it wants to associate with the extra-ordinary. Disengaged from pathology, disavowal can be stated differently, as a prompt that says: “let’s pretend!” Pretend as if a universal harmony, good, and unity were feasible. Symbolic Difference is replaced with imaginary difference, which nourishes antagonism and hostility by fictionalizing an external threat that jeopardizes the unity of the social body. Thus, the fantasy of the obscene extra-ordinary, who offends the conformist norm, is itself a perverse fantasy. The Other knows very well that the pervert constitutes no threat, but still requires his punishment, moral reformation, or treatment.