Man Proposes OR Woman Proposes – “Matches” Happen Algorithmically and are Economically Walrasian Rather than Keynesian. Note Quote & Didactics.

Consider a set M of men and a set W of women. Each m ∈ M has a strict preference ordering over the elements of W and each w ∈ W has a strict preference ordering over men. Let us denote the preference ordering of an agent i by ≻i, so that x ≻i y means that agent i ranks x above y. A marriage, or matching, is an assignment of men to women such that each man is assigned to at most one woman and vice versa. But what if an agent decides to remain single? This can be modeled in two ways: either a man or a woman is matched with himself or herself, or for each man or woman there is a dummy woman or man in the set W or M that corresponds to being single. Under this construction we can safely assume |M| = |W|. There is another impediment here, however, whereby a subversion of sorts is possible, in that a group of agents could simply opt out of the matching exercise. In such a scenario it becomes necessary to define a blocking pair. Accordingly, a matching is called unstable if there are two men m, m’ and two women w, w’ such that

  1. m is matched to w
  2. m’ is matched to w’, and
  3. w’ ≻m w and m ≻w’ m’

then the pair (m, w’) is a blocking pair. Any matching without a blocking pair is called stable.
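To make the definition concrete, here is a minimal Python sketch of a blocking-pair check. The dictionary representation and function names are assumptions of the sketch; complete preference lists and a one-to-one matching with no singles are also assumed.

```python
from itertools import product

def blocking_pairs(matching, men_prefs, women_prefs):
    """Return all blocking pairs (m, w) of a one-to-one matching.

    matching    : dict man -> woman
    men_prefs   : dict man -> list of women, best first
    women_prefs : dict woman -> list of men, best first
    """
    wife = matching
    husband = {w: m for m, w in matching.items()}
    pairs = []
    for m, w in product(men_prefs, women_prefs):
        if wife[m] == w:
            continue
        # m prefers w to his current partner and w prefers m to hers
        m_prefers = men_prefs[m].index(w) < men_prefs[m].index(wife[m])
        w_prefers = women_prefs[w].index(m) < women_prefs[w].index(husband[w])
        if m_prefers and w_prefers:
            pairs.append((m, w))
    return pairs

def is_stable(matching, men_prefs, women_prefs):
    return not blocking_pairs(matching, men_prefs, women_prefs)
```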

Now, given the preferences of men and women, is it always possible to find a stable matching? The answer is provided by Gale and Shapley’s deferred acceptance algorithm.

So, after a brief detour, let us concentrate on the male-proposal version.

First, each man proposes to his top-ranked choice. Next, each woman who has received at least two proposals keeps (tentatively) her top-ranked proposal and rejects the rest. Then, each man who has been rejected proposes to his top-ranked choice among the women who have not rejected him. Again each woman who has at least two proposals (including ones from previous rounds) keeps her top-ranked proposal and rejects the rest. The process repeats until no man has a woman to propose to or each woman has at most one proposal. At this point the algorithm terminates and each man is assigned to a woman who has not rejected his proposal. No man is assigned to more than one woman. Since each woman is allowed to keep only one proposal at any stage, no woman is assigned to more than one man. Therefore the algorithm terminates in a matching.
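As a didactic aid, here is a minimal Python sketch of the male-propose deferred acceptance algorithm. The data structures and the function name are assumptions of the sketch; men are processed one proposal at a time rather than in simultaneous rounds, which yields the same matching; and complete preference lists with |M| = |W| are assumed.

```python
def male_propose_da(men_prefs, women_prefs):
    """Gale-Shapley deferred acceptance, male-propose version.

    men_prefs   : dict man -> list of women, best first
    women_prefs : dict woman -> list of men, best first
    Returns a dict man -> woman.
    """
    rank = {w: {m: i for i, m in enumerate(prefs)}   # lower index = more preferred
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}          # next woman each man will propose to
    engaged = {}                                     # woman -> man she tentatively keeps
    free_men = list(men_prefs)

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]             # best woman m has not yet tried
        next_choice[m] += 1
        current = engaged.get(w)
        if current is None:
            engaged[w] = m                           # w tentatively keeps her first proposal
        elif rank[w][m] < rank[w][current]:
            engaged[w] = m                           # w trades up and rejects her current man
            free_men.append(current)
        else:
            free_men.append(m)                       # w rejects m; he will propose again
    return {m: w for w, m in engaged.items()}
```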

[Figure: preference orderings of the three men m1, m2, m3 and the three women w1, w2, w3]

Consider the matching {(m1, w1), (m2, w2), (m3, w3)}. This is an unstable matching since (m1, w2) is a blocking pair. The matching {(m1, w1), (m3, w2), (m2, w3)}, however, is stable. Now, reading off the figure above, m1 proposes to w2, m2 to w1, and m3 to w1. At the end of this round, w1 is the only woman to have received two proposals, one from m3 and the other from m2. Since she ranks m3 above m2, she keeps m3 and rejects m2. Since m2 is the only man to have been rejected, he is the only one to propose again in the second round. This time he proposes to w3. Now each woman has only one proposal and the algorithm terminates with the matching {(m1, w2), (m2, w3), (m3, w1)}.

The male propose algorithm terminates in a stable matching.

Suppose not. Then ∃ a blocking pair (m1, w1) with m1 matched to w2, say, and w1 matched to m2. Since (m1, w1) is blocking and w1 ≻m1 w2, in the proposal algorithm, m1 would have proposed to w1 before w2. Since m1 was not matched with w1 by the algorithm, it must be because w1 received a proposal from a man that she ranked higher than m1. Since the algorithm matches her to m2 it follows that m2 ≻w1 m1. This contradicts the fact that (m1, w1) is a blocking pair.

Even when the women propose, the outcome is still a stable matching. The difference is that the stable matching so produced may differ from the one generated when the men propose. This also implies that although a stable matching is guaranteed to exist, there can be more than one. Is there then any reason to prefer one to the other? There is:

Denote a matching by μ. The woman assigned to man m in the matching μ is denoted μ(m). Similarly, μ(w) is the man assigned to woman w. A matching μ is male-optimal if there is no stable matching ν such that ν(m) ≻m μ(m) or ν(m) = μ(m) ∀ m with ν(j) ≻j μ(j) for at least one j ∈ M. Similarly for the female-optimality.

The stable matching produced by the (male-proposal) Deferred Acceptance Algorithm is male-optimal.

Let μ be the matching returned by the male-propose algorithm. Suppose μ is not male optimal. Then, there is a stable matching ν such that ν(m) ≻m μ(m) or ν(m) = μ(m) ∀ m with ν(j) ≻j μ(j) for at least one j ∈ M. Therefore, in the application of the proposal algorithm, there must be an iteration where some man j proposes to ν(j) before μ(j) since ν(j) ≻j μ(j) and is rejected by woman ν(j). Consider the first such iteration. Since woman ν(j) rejects j she must have received a proposal from a man i she prefers to man j. Since this is the first iteration at which a male is rejected by his partner under ν, it follows that man i ranks woman ν(j) higher than ν(i). Summarizing, i ≻ν(j) j and ν(j) ≻i ν(i) implying that ν is not stable, a contradiction.

Now, the obvious question is whether a stable matching can be optimal with respect to both men and women. The answer is no. From the example above there are two stable matchings, one of them male-optimal and the other female-optimal. Under the female-optimal matching at least one woman is strictly better off than under the male-optimal one, and no woman is worse off; from the men’s point of view the symmetric conclusion holds. A stable marriage is immune to a pair of agents opting out of the matching. We could ask further that no subset of agents should have an incentive to opt out of the matching. Formally, a matching μ′ dominates a matching μ if there is a set S ⊂ M ∪ W such that for all m, w ∈ S, both (i) μ′(m), μ′(w) ∈ S and (ii) μ′(m) ≻m μ(m) and μ′(w) ≻w μ(w). Stability is a special case of this dominance condition when we restrict attention to sets S consisting of a single couple. The set of undominated matchings is called the core of the matching game.

The direct mechanism associated with the male propose algorithm is strategy-proof for the males.

Suppose not. Then there is a profile of preferences π = (≻m1 , ≻m2 , . . . , ≻mn) for the men, such that man m1, say, can misreport his preferences and obtain a better match. To express this formally, let μ be the stable matching obtained by applying the male proposal algorithm to the profile π. Suppose that m1 reports the preference ordering ≻ instead. Let ν be the stable matching that results when the male-proposal algorithm is applied to the profile π1 = (≻, ≻m2 , . . . , ≻mn). For a contradiction, suppose ν(m1) ≻m1 μ(m1). For notational convenience let a ≽m b mean that a ≻m b or a = b.

First we show that m1 can achieve the same effect by choosing an ordering ≻̄ where woman ν(m1) is ranked first. Let π2 = (≻̄ , ≻m2 , . . . , ≻mn). Knowing that ν is stable w.r.t the profile π1 we show that it is stable with respect to the profile π2. Suppose not. Then under the profile π2 there must be a pair (m, w) that blocks ν. Since ν assigns to m1 its top choice with respect to π2, m1 cannot be part of this blocking pair. Now the preferences of all agents other than m1 are the same in π1 and π2. Therefore, if (m, w) blocks ν w.r.t the profile π2, it must block ν w.r.t the profile π1, contradicting the fact that ν is a stable matching under π1.

Let λ be the male propose stable matching for the profile π2. We have seen that ν is a stable matching w.r.t the profile π2. As λ is male-optimal w.r.t the profile π2 and ν(m1) is m1’s top-ranked woman under ≻̄, it follows that λ(m1) = ν(m1).

Therefore, we may assume that ν(m1) is the top-ranked woman in the ordering ≻. Now we show that the set B = {mj : μ(mj) ≻mj ν(mj)} is empty. This means that all men, not just m1, are no worse off under ν compared to μ. Since ν is stable w.r.t the original profile π, this contradicts the male optimality of μ.

Suppose B ≠ ∅. Therefore, when the male proposal algorithm is applied to the profile π1, each mj ∈ B is rejected by his match under μ, i.e., μ(mj). Consider the first iteration of the proposal algorithm where some mj is rejected by μ(mj). This means that woman μ(mj) has a proposal from a man mk that she ranks higher, i.e., mk ≻μ(mj) mj. Since mk was not matched to μ(mj) under μ it must be that μ(mk) ≻mk μ(mj). Hence mk ∈ B, otherwise μ(mj) ≽mk ν(mk) ≽mk μ(mk) ≻mk μ(mj), which is a contradiction. Since mk ∈ B and mk has proposed to μ(mj) at the time man mj proposes, it means that mk must have been rejected by μ(mk) prior to mj being rejected, contradicting our choice of mj.

The mechanism associated with the male propose algorithm is not strategy-proof for the females. Let us see how this is the case by way of an example. The male propose algorithm returns the matching {(m1, w2), (m2, w3), (m3, w1)}. In the course of the algorithm the only woman who receives at least two proposals is w1. She receives proposals from m2 and m3. She rejects m2, who goes on to propose to w3, and the algorithm terminates. Notice that w1 is matched with her second choice. Suppose now that she had rejected m3 instead. Then m3 would have gone on to propose to w2. Woman w2 now has a choice between m1 and m3. She would keep m3 and reject m1, who would go on to propose to w1. Woman w1 would keep m1 over m2 and in the final matching be paired with her first-ranked choice.
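To see this manipulation run end to end, here is a hypothetical preference profile consistent with the example above (the full profile itself is an assumption, since the figure only pins down part of it), fed to the male_propose_da sketch given earlier.

```python
men = {"m1": ["w2", "w1", "w3"],
       "m2": ["w1", "w3", "w2"],
       "m3": ["w1", "w2", "w3"]}
women = {"w1": ["m1", "m3", "m2"],
         "w2": ["m3", "m1", "m2"],
         "w3": ["m2", "m1", "m3"]}

print(sorted(male_propose_da(men, women).items()))
# [('m1', 'w2'), ('m2', 'w3'), ('m3', 'w1')]  -> w1 ends up with m3, her second choice

# w1 misreports, pushing m3 to the bottom so that he is the one she rejects
women_lie = dict(women, w1=["m1", "m2", "m3"])
print(sorted(male_propose_da(men, women_lie).items()))
# [('m1', 'w1'), ('m2', 'w3'), ('m3', 'w2')]  -> w1 now ends up with m1, her true first choice
```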

Transposing this onto economic theory, the construction fits neatly into the Walrasian equilibrium. Walras’ law is the proposition that excess supply in one market must be matched by excess demand in another market so that the two balance out. It asserts that any one market must be in equilibrium if all other markets are in equilibrium, and this contrasts with Keynesianism, which assumes that it is possible for just one market to be out of balance without a “matching” imbalance elsewhere. Moreover, Walrasian equilibria are the solutions of a fixed point problem. In the cases when they can be computed efficiently, it is because the set of Walrasian equilibria can be described by a set of convex inequalities. The same can be said of stable matchings/marriages: the set of stable matchings is the set of fixed points of a nondecreasing function defined on a lattice.


AI learns to solve a Rubik’s Cube in 1.2 seconds

DeepCubeA uses a neural network (which apes how the human mind processes information) along with machine learning techniques, in which an AI system learns by detecting patterns and theorizing with little human input. It adopts a reinforcement learning approach, by which it learned “how to solve increasingly difficult states in reverse from the goal state without any specific domain knowledge.”

Engadget

ABSTRACT

 

Recently, Approximate Policy Iteration (API) algorithms have achieved superhuman proficiency in two-player zero-sum games such as Go, Chess, and Shogi without human data. These API algorithms iterate between two policies: a slow policy (tree search), and a fast policy (a neural network). In these two-player games, a reward is always received at the end of the game. However, the Rubik’s Cube has only a single solved state, and episodes are not guaranteed to terminate. This poses a major problem for these API algorithms since they rely on the reward received at the end of the game. We introduce Autodidactic Iteration: an API algorithm that overcomes the problem of sparse rewards by training on a distribution of states that allows the reward to propagate from the goal state to states farther away. Autodidactic Iteration is able to learn how to solve the Rubik’s Cube without relying on human data. Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves — less than or equal to solvers that employ human domain knowledge.

 

SOLVING THE RUBIK’S CUBE WITH APPROXIMATE POLICY ITERATION

Albert Camus Reads Richard K. Morgan: Unsaid Existential Absurdism

Humanity has spread to the stars. We set out like ancient seafarers to explore the limitless ocean of space. But no matter how far we venture into the unknown, the worst monsters are those we bring with us. – Takeshi Kovacs

What I purport to do in this paper is pick up two sci-fi works of Richard Morgan. The first is Altered Carbon (adapted as a Netflix series), the first of the Takeshi Kovacs trilogy and at times a grisly tale of switching bodies to gain immortality, a tale of transhumanism, whether by means of enhanced biology, technology, or biotechnology, and of posthumanism. The second is Market Forces, a brutal journey into the heart of corporatized conflict investment by way of conscience elimination. Thereafter a conflation with Camus’ absurdity unravels the very paradoxical ambiguity underlying absurdism as a human condition. The paradoxical ambiguity is a result of Camus’ ambivalence towards the neo-Platonist conception of the ultimate unifying principle, while accepting Plotinus’ principled pattern, but rejecting its culmination.

Richard Morgan’s is a parody, a commentary, or even an epic fantasy overcharged almost to the point of absurdity and bordering on extropianism. If at all there is a semblance of optimism in the future as a result of Moore’s Law of dense hardware realizable through computational extravagance, it is spectacularly offset by the complexities of software codes, resulting in a disconnect that Morgan brilliantly transposes on to a society in a dystopian ethic. This offsetting disconnect between the physical and mental, between the tangible and the intangible, is the existential angst writ large on the societal, maneuvered by the powers that be.

Morgan’s Altered Carbon won’t be a deflection from William Gibson’s cyberpunk, or at places even Philip K Dick’s Do Androids Dream of Electric Sheep?, which has inspired the cult classic Ridley Scott’s Blade Runner, wherein the interface between man and machine is coalescing (sleeves as called in the novel), while the singularity pundits are making hay. But, what if the very technological exponent is used against the progenitors, a point that defines much of Artificial Intelligence ethics today? What if the human mind is now digitized, uploaded and downloaded as a mere file, and transferred across platforms (by way of needlecast transmitting DHF, individual digital human freight) rendering the hardware disposable, and at the same time the software as a data vulnerable to the vagaries of the networked age? These aren’t questions keeping the ethic at stake alone, but rather a reformatting of humanity off the leash. This forever changes the concept of morality and of death as we know it, for now anyone with adequate resources (note the excess of capitalism here) can technically extend their life for as long as they desire by reserving themselves into cloned organics or by taking a leaf off Orwell’s Government to archive citizen records in perpetual storage. Between the publication in 2002 and now, the fiction in science fiction as a genre has indeed gotten blurred, and what has been the Cartesian devil in mind-body duality leverages the technological metempsychosis of consciousness in bringing forth a new perception on morality.

Imagine the needle of the moral compass behaving most erratically, ranging from extreme apathy to moderate conscience in consideration of the economics of collateral damage, with the narrative wrenching through senses, thoughts and emotions before settling down into a dystopian plot dense with politics, societal disparity, corruption, abuse of wealth and power, and repressively conservative justice. If extreme violence is distasteful in Altered Carbon, the spectacle is countered by the fact that human bodies and memories are informational commodities as digitized freight and cortical stacks, busted and mangled physical shells already having access to a sleeve to reincarnate and rehabilitate on to, opening up new vistas of philosophical dispositions and artificially intelligent deliberation on the ethics of fast-disappearing human-machine interface.

If the Personal is Political, Altered Carbon results in a concussion of overloaded themes of cyberpunk tropes and is indicative of Morgan’s political takes, a conclusion only to be commissioned upon reading his later works. This detective melange heavily slithers through the human condition both light and dark without succumbing to the derivatives of high-tech and low-life and keeping the potentials of speculative fiction to explorations. The suffusive metaphysics of longevity, multiplicity of souls and spiritual tentacles meeting its adversary in Catholicism paints a believable futuristic vision on the canvas of science-fiction spectra.

Market Forces, on the other hand, is where cyberpunk-style sci-fi is suddenly replaced with a corporatized economy of profit lines via the cogency of conflict investment. The world is in a state of dysphoria with diplomatic lines having given way to negotiations with violence, and contracts won on Ronin-esque car duels shifting the battlefield from the cyberspace of Altered Carbon to the more terrestrial grounds. Directly importing from Gordon Gekko’s “Greed is Good”, corporates enhance their share of GDP via legal funding of foreign wars. The limits of the philosophy of liberal politics are stretched on analogizing the widening gap between the rich and the marginalized in the backdrop of crime-ravaged, not-so-futuristic London. Security is rarefied according to economic stratifications, and surveillance by the rich reaches absurd levels of sophistication in the absence of sousveillance by the marginalized.

Enter Chris Faulkner, the protagonist defined by conscience that starts to wither away when confronted with taking hard and decisive actions for his firm, Shorn Associates, in the face of brutality of power dynamics. The intent is real-life testosterone absolutism maximizing the tenets of western capitalism in an ostentatious exhibition of masculinity and competition. The obvious collateral damage is fissuring of familial and societal values born as a result of conscience. Market Forces has certain parallels from the past, in the writings of Robert Sheckley, the American sci-fi author, who would take an element of society and extrapolate on its inherent violence to the extent of the absurd sliding into satire. It’s this sliding wherein lies the question of the beyond, the inevitability of an endowment of aggression defining, or rather questioning the purpose of the hitherto given legacy of societal ethic.

With no dearth of violence, the dystopian future stagnates into dysphoria characterized by law and apparatus at the mercy of corporations, which transcend the Government constitutionally alongside rapacious capitalism. A capitalism that is so rampant that it transforms the hero into an anti-hero in the unfolding tension between interest and sympathy, disgust and repulsion. The perfectly achievable Market Forces is a realization round the corner seeking birth between the hallucinogenic madness of speculations and hyperreality hinging on the philosophy of free-markets taken to its logical ends in the direction of an unpleasant future. The reductio ad absurdum of neoliberalism is an environment of feral brutality masked with the thinnest veneer of corporate civilization, and is the speculation that portrays the world where all power equates violence. This violence is manifested in aggression in a road rage death match against competitors every time there is a bid for a tender. What goes slightly overboard, and in a pretty colloquial usage of absurdity, is why any competition would entail the best of staff on such idiotic suicide missions?

Camus’ absurdity is born in The Myth of Sisyphus, and continues well into The Rebel, but is barely able to free itself from the clutches of triviality. This might appear to be a bold claim, but the efficacy is to be tested through Camus’ intellectual indebtedness to Plotinus, the Neo-Platonist thinker. Plotinus supplemented the One and Many idea of Plato with gradations of explanatory orders, for only then a coalescing of explanations with reality was conceivable. This coalescing converges into the absolute unity, the One, the necessarily metaphysical ground. Now, Camus accepts Plotinus in the steganographic, but strips the Absolute of its metaphysics. A major strand of absurdity for Camus stems from his dictum, “to understand is, above all, to unify”, and the absence of such a unifying principle vindicates absurdity. Herein, one is confronted with the first of paradoxes, in that, if the Absolute is rejected, why then is there in Camus a nostalgia for unity? The reason is peculiarly caught between his version of empiricism and monism. His empiricism gives accord to comprehensibility of ordinary experiences by way of language and meaning, while anything transcending the same is meaninglessness and hinges on Plotinus’ Absolute for comprehensibility, thus making him sound a monist. Added to this contradiction is the face of the Christian God that would appear if the Absolute were not rejected, which would then have warranted a clash between good and evil in the face of the paradox of the existence of the latter when God was invested with qualities of the former. Invoking modernism’s core dictum, Camus then questions spontaneity in the presence of the Absolute by calling to attention scholastic perplexity.

Having rejected the Absolute, Camus takes the absurd condition as a fact. If one were to carefully tread The Myth of Sisyphus, it works thusly: If a man removes himself, he destroys the situation and hence the absurd condition. Since, the absurd condition is taken as a fact, one who destroys himself denies this fact. But he who denies this fact puts himself in opposition to what is, Truth. To oppose the Truth, recognizing it to be true, is to contradict oneself. Recognizing a truth, one ought to preserve it rather than deny it. Therefore, it follows that one ought not to commit metaphysical suicide in the face of the meaningless universe. This is a major paradox in his thought, where the evaluative absurdity is deemed to be preserved starting from the premise that man and the universe juxtaposed together is absurdity itself. So, what we have here is a logical cul-de-sac. But, what is of cardinal import is the retention of life in mediating between the man and universe as absurdity in polarities. If this were confronting the absurd in life, eschatology is another confrontation with the absurd, an absolute that needs to be opposed, a doctrine that becomes a further sense of the absurd, an ethic of the creation of the absolute rule in a drama of man as a struggle against death.

It is this conjecture that builds up in The Rebel, death as an antagonist subjected to rebellion. The absurdity of death lies across our desire for immortality, the inexplicability of it, and negating and denying the only meaningful existence known. Contradictorily, death would not be absurd if immortality were possible, and existence as is known isn’t the only meaningful existence that there is. Camus is prone to a meshwork logic here, for his thought fluctuates between viewing death as an absolute evil and also as a liberator, because of which it lends legitimacy to freedom. For, it isn’t the case that Camus is unaware of the double bind of his logic, and admittedly he ejects himself out of this quandary by deliberating on death not as a transcendental phenomenon, but as an ordinary lived-experience. If the Myth of Sisyphus holds murder and suicide in an absurdist position by denying the transcendent source of value, The Rebel revels in antagonisms with Nihilism, be it either in the sense of nothing is prohibited, or the absolutist nihilism of “permit all” with a fulcrum on the Absolute. The Rebel epitomizes the intellectual impotency of nihilism. But due credit for the logical progression of Camus is mandated here, for any utopia contains the seed of nihilism, in that, any acceptance of an Absolute other than life ultimately leads to tyranny. If this were to be one strand in the essay, the other is exposited in terms of an unrelenting and absolute opposition to death. Consequently, The Rebel, which is the embodiment of Camus’ ethic cannot kill. To avoid any danger of absolutism in the name of some positive good or value, the absolute value becomes oppositional to death, and hence the Rebel’s ethic is one of ceaseless rebellion, opposition and conflict.

Now, with a not very exhaustive treatment to Camus’ notion of absurdity as there is more than meets the eye in his corpus, let us turn to conflation with Richard Morgan and justify our thesis that we set out with. We shall bring this about by a series of observations.

If antagonism to death is the hallmark of rebellion, then Altered Carbon with its special hard-drives called “Stacks” installed in the brainstem immortalizes consciousness to be ported across humans across spacetimes. Needlecasting, the process by which human consciousness in the format of data gets teleported, suffers disorientation across human hardwares, if it could even be called that. Interestingly, this disorientation aggrandizes the receiver conflict-ready, a theme that runs continuously in Market Forces as well as in Altered Carbon. The state of being conflict- and combat-ready is erecting armies to quash down rebellions. To prevent immortality from getting exploited in the hands of the privileged, these armies are trained to withstand torture, drudgery, while at the same time heightening their perception via steganography. But where the plot goes haywire for Camus’ rebel is that Richard Morgan’s can neutralize and eliminate. That’s the first observation.

On to the second, which deals with transhumanism. A particular character, Kovacs’ partner Kristen Ortega, has a neo-Catholic family that’s split over God’s view of resurrecting a loved one. The split is a result of choosing religious coding, a neo-Catholic semblance being that the dead cannot be brought back to life. In these cases, Altered Carbon pushes past its Blade Runner fetish and reflexive cynicism to find something human. But, when the larger world is so thin, it’s hard to put something like neo-Catholicism in a larger context. Characters have had centuries to get used to the idea of stacks, begging the larger question: why are many still blindsided by their existence? And why do so few people, including the sour Meths, seem to be doing anything interesting with technology? Now Camus’ man is confronted with his absurd and meaningless existence, which will be extinguished by death. There are two choices to consider here: either he can live inauthentically, implying hiding from truth, the fact that life is meaningless, and accepting the standards and values of the crowd, and in the process escaping the inner misery and despair that results from an honest appraisal of facts. Or, he can take the authentic choice and live heroically, implying facing the truth, life’s futility, and temporarily, submitting to despair which is a necessary consequence, but which, if it does not lead to suicide, will eventually purify him. Despair will drive him out of himself and away from trivialities, and by it he will be impelled to commit himself to a life of dramatic choices. This is ingrained in the intellectual idea of neo-Catholicism, with Camus’ allusion that only the use of the Will can cause a man truly to be. Both Takeshi Kovacs in Altered Carbon and Chris Faulkner in Market Forces amply epitomize this neo-Catholicism, albeit not directly, but rather, as an existential angst in the form of an intrusion.

Now for the third observation. The truth in Altered Carbon is an excavation of the self, more than searching data and tweaking it into information. It admonishes to keep going no matter whichever direction, a scalar over the vector territorialization in order to decrypt that which seems hidden, an exercise in futility. Allow me to quote Morgan in full,

You are still young and stupid. Human life has no value. Haven’t you learned that yet, Takeshi, with all you’ve seen? It has no value, intrinsic to itself. Machines cost money to build. Raw materials cost money to extract. But people? You can always get some more people. They reproduce like cancer cells, whether you want them or not. They are abundant, Takeshi. Why should they be valuable? Do you know that it costs us less to recruit and use up a real snuff whore than it does to set up and run the virtual equivalent format? Real human flesh is cheaper than a machine. It’s the axiomatic truth of our times.

In full consciousness and setting aside the impropriety above, Morgan’s prejudicing the machine over human flesh extricates essentialism, mirroring the Camusian take on the meaning of life as inessential, but for the burning problem of suicide. This is a direct import from Nietzsche, for whom illusion (the arts, Remember Wagner!) lends credibility to life and resolves despair to some extent, whereas for Camus, despair is only coming to terms with this absurd condition, by way of machination in the full knowhow of the condition’s futility and pointlessness. This fact is most brilliantly corroborated in Morgan’s dictum about how constant repetition can even make the most obvious truths irritating enough to disagree with (Woken Furies).

To conclude: Imagine the real world extending into the fictive milieu, or its mirror image, the fictive world territorializing the real leaving it to portend such an intercourse consequent to an existential angst. Such an imagination now moves along the coordinates of hyperreality, where it collaterally damages meaning in a violent burst of EX/IM-plosion. This violent burst disturbs the idealized truth overridden by a hallucinogenic madness prompting iniquities calibrated for an unpleasant future. This invading dissonant realism slithers through the science fiction before culminating in the human characteristics of expediency. Such expediencies abhor fixation to being in the world built on deluded principles, where absurdity is not only a human condition, but an affliction of the transhuman and posthuman condition as well. Only the latter is not necessarily a peep into the future, which it might very well be, but rather a disturbing look into the present-day topographies, which for Camus was acquiescing to predicament, and for Richard Morgan a search for the culpable.

Albert Camus Reads Richard Morgan: Unsaid Existential Absurdism…(Abstract/Blurb)

For the upcoming conference on “The Intellectual Geography of Albert Camus” on the 3rd of May, 2019, at the Alliance Française, New Delhi. Watch this space..

Imagine the real world extending into the fictive milieu, or its mirror image, the fictive world territorializing the real leaving it to portend such an intercourse consequent to an existential angst. Such an imagination now moves along the coordinates of hyperreality, where it collaterally damages meaning in a violent burst of EX/IM-plosion. This violent burst disturbs the idealized truth overridden by a hallucinogenic madness prompting iniquities calibrated for an unpleasant future. This invading dissonant realism slithers through the science fiction of Richard Morgan before it culminates in human characteristics of expediency. Such expediencies abhor fixation to being in the world built on deluded principles, which in my reading is Camus’ recommendation of confrontation with the absurd. This paper attempts to unravel the hyperreal as congruent on the absurd in a fictitious landscape of “existentialism meets the intensity of a relatable yet cold future”. 

———————–

What I purport to do in this paper is pick up two sci-fi works of Richard Morgan, the first of which also happens to be the first of the Takeshi Kovacs Trilogy, Altered Carbon, while the second is Market Forces,  a brutal journey into the heart of conflict investment by way of conscience elimination. Thereafter a conflation with Camus’ absurdity unravels the very paradoxical ambiguity underlying absurdism as a human condition. The paradoxical ambiguity is as a result of Camus’ ambivalence towards the neo-Platonist conception of the ultimate unifying principle, while accepting Plotinus’ principled pattern or steganography, but rejecting its culmination. 

Richard Morgan’s is a parody, a commentary, or even an epic fantasy overcharged almost to the point of absurdity and bordering on extropianism. If at all there is a semblance of optimism in the future as a result of Moore’s Law of dense hardware realizable through computational extravagance, it is spectacularly offset by complexities of software codes resulting in a disconnect that Morgan brilliantly transposes on to a society in a dystopian ethic underlining his plot pattern recognitions. This offsetting disconnect between the physical and mental, between the tangible and the intangible is the existential angst writ large on the societal maneuvered by the powers that be…..to be continued

The Mathematics of Political Policies and Economic Decisions – Dictatorial (Extremists) Versus Democratic. Note Quote. Part 1 Didactics.


If a strategy-proof, onto rule does not pick x∗ when it is the median of peaks and ym‘s, then a contradiction is reached using preferences with peaks at piL and piH.

Let us take a look at single-peaked preferences over one-dimensional policy spaces. This domain can be used to model political policies, economic decisions, location problems, or any allocation problem where a single point must be chosen in an interval. The key assumption is that agents’ preferences are assumed to have a single most-preferred point in the interval, and that preferences are “decreasing” as one moves away from that peak.

Formally, the allocation space (or policy space) is the unit interval A = [0, 1]. An outcome is a single point x ∈ A. Each agent i ∈ N has a preference ordering ≽i, which is a weak order over the outcomes in [0, 1]. The preference relation ≽i is single-peaked if ∃ a point pi ∈ A (the peak of ≽i) such that ∀ x ∈ A\{pi} and all λ ∈ [0,1), (λx +(1−λ)pi) ≻i x. Let R denote the class of single-peaked preferences.

We denote the peaks of preference relations ≽i, ≽′i, ≽j, etc., respectively by pi, pi′, pj, etc. Denote a profile (n-tuple) of preferences as ≽ ∈ Rn.

One can imagine this model as representing a political decision such as an income tax rate, another political issue with conservative/liberal extremes, the location of a public facility on a road, or even something as simple as a group of people deciding on the temperature setting for a shared office. Here, the agents have an ideal preferred policy in mind, and would prefer that a decision be made as close as possible to this “peak.”

A rule f : Rn → A assigns an outcome f(≽) to any preference profile ≽. A rule is strategy-proof if it is a dominant strategy for each agent to report his preferences truthfully when the rule is being used to choose a point.

Our purpose then is to see that this class of problems admits a rich family of strategy-proof rules whose ranges include more than two alternatives. In fact, the family of such rules remains rich even when one restricts attention to rules that satisfy the following condition.

We say that a rule f is onto if ∀ x ∈ A ∃ ≽ ∈ Rn such that f(≽) = x. An onto rule cannot preclude an outcome from being chosen ex ante. It is not without loss of generality to impose this condition. For instance, fix two points x, y ∈ [0, 1] and consider a rule that chooses whichever of the two points is preferred to the other by a majority of agents (and where x is chosen in case of a tie). Such a rule is strategy-proof, but not onto. Similar strategy-proof rules can even break ties between x and y by using preference information about other points x′, y′, . . ., in [0, 1], even though x′, etc., are not in the range of the rule.

The onto condition is even weaker than what is called unanimity, which requires that whenever all agents’ preferences have the same peak (pi = pj ∀ i, j), the rule must choose that location as the outcome. In turn, unanimity is weaker than Pareto-optimality: ∀ ≽ ∈ Rn, ∃ no point x ∈ [0, 1] such that x ≻i f(≽) ∀ i ∈ N.

As it turns out, these three requirements are all equivalent among strategy-proof rules. Suppose f is strategy-proof. Then f is onto iff it is unanimous iff it is Pareto-optimal.

It is clear that Pareto-optimality implies the other two conditions. Suppose f is strategy-proof and onto. Fix x ∈ [0, 1] and let ≽ ∈ Rn be such that f(≽) = x. Consider any “unanimous” profile ≽′ ∈ Rn such that pi′ = x for each i ∈ N. By strategy-proofness, f (≽′1, ≽2, . . . , ≽n) = x, otherwise agent 1 could manipulate f. Repeating this argument, f (≽′1, ≽′2, ≽3, . . . , ≽n) = x, . . . ,f(≽′) = x. That is, f is unanimous.

In order to derive a contradiction, suppose that f is not Pareto-optimal at some profile ≽ ∈ Rn. This implies that either (i) f(≽) < pi ∀ i ∈ N or (ii) f(≽) > pi ∀ i ∈ N . Without loss of generality, assume (i) holds. Furthermore, assume that the agents are labeled so that p1 ≤ p2 ≤ ··· ≤ pn.

If p1 = pn then unanimity is violated, completing the proof. Otherwise, let j ∈ N be such that p1 = pj < pj+1; that is, j < n agents have the minimum peak. ∀ i > j, let ≽′i be a preference relation such that both pi′ = p1 and f(≽) ≽′i pi.

Let xn = f(≽1,…, ≽n−1, ≽′n). By strategy-proofness, xn ∈ [f(≽), pn], otherwise agent n (with preference ≽′n) could manipulate f by reporting preference ≽n. Similarly, xn ∉ (f(≽), pn], otherwise agent n (with preference ≽n) could manipulate f by reporting preference ≽′n. Therefore xn = f(≽).

Repeating this argument as each i > j replaces ≽i with ≽′i, we have f(≽1,…, ≽j, ≽′j+1,…, ≽′n) = f(≽), which contradicts unanimity. Since a strategy-proof, onto rule must be unanimous, this is a contradiction.

The central strategy-proof rule on this domain is the simple median-voter rule. Suppose that the number of agents n is odd. Then the rule that picks the median of the agents’ peaks (pi ’s) is a strategy-proof rule.

It is easy to see why this rule is strategy-proof : If an agent’s peak pi lies below the median peak, then he can change the median only by reporting a preference relation whose peak lies above the true median. The effect of this misreport is for the rule to choose a point even further away from pi, making the agent worse off. A symmetric argument handles the case in which the peak is above the median. Finally, an agent cannot profitably misreport his preferences if his peak is the median one to begin with.

More generally, for any number of agents n and any positive integer k ≤ n, the rule that picks the kth highest peak is strategy-proof for precisely the same reasons. An agent can only move the kth peak further from his own. The median happens to be the case where k = (n + 1)/2.
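A minimal Python sketch of these order-statistic rules follows; the function names are assumptions of the sketch, and only the reported peaks (not full orderings) are taken as input, which, as noted below, is all the information such rules need.

```python
def kth_statistic_rule(peaks, k):
    """Choose the k-th highest reported peak (k = 1 picks the maximum)."""
    return sorted(peaks, reverse=True)[k - 1]

def median_rule(peaks):
    """Median of the reported peaks; assumes an odd number of agents."""
    return kth_statistic_rule(peaks, (len(peaks) + 1) // 2)
```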

The strategy-proofness of such rules stands in contrast to the incentives properties of rules that choose average-type statistics. Consider the rule that chooses the average of the n agents’ peaks. Any agent with peak pi ∈ (0, 1) that is not equal to the average can manipulate the rule by reporting preferences with a more extreme peak (closer to 0 or 1) than his true peak.
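A small numerical illustration of this contrast, reusing median_rule from the sketch above; the particular peaks are hypothetical, and we assume the agent's disutility grows with distance from her peak.

```python
peaks = [0.2, 0.5, 0.9]                       # agent 1's true peak is 0.2

mean_truth = sum(peaks) / len(peaks)          # 0.533...
mean_lie = sum([0.0, 0.5, 0.9]) / 3           # 0.466..., closer to 0.2: lying pays

median_truth = median_rule(peaks)             # 0.5
median_lie = median_rule([0.0, 0.5, 0.9])     # still 0.5: lying changes nothing

print(mean_truth, mean_lie, median_truth, median_lie)
```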

This would also hold for any weighted average of the agents’ peaks, with one exception. If a rule allocated all of the weight to one agent, then the resulting rule simply picks that agent’s peak always. Such a dictatorial rule is strategy-proof and onto.

In addition to favorable incentives properties, rules based on order statistics have the property that they require little information to be computed. Technically a rule requires agents to report an entire preference ordering over [0, 1]. The rules described above, however, only require agents to report their most preferred point, i.e., a single number. In fact, under the onto assumption, this informational property is a consequence of the strategy-proofness requirement; that is, all strategy-proof and onto rules have the property that they can be computed solely from information about the agents’ peaks.

Let us generalize the class of “kth-statistic rules”. Consider a fixed set of points y1, y2, . . . , yn−1 ∈ A. Consider the rule that, for any profile of preferences ≽, chooses the median of the 2n − 1 points consisting of the n agents’ peaks and the n − 1 points of y. This differs in that, for some choices of y and some profiles of preferences, the rule may choose a point that is not the peak of any agent’s preferences. Yet, such a rule is strategy-proof.
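A minimal sketch of such a generalized median rule, with the fixed points y1, . . ., yn−1 passed in as “phantom” peaks; the function name and the use of the statistics module are assumptions of the sketch.

```python
import statistics

def phantom_median_rule(peaks, phantoms):
    """Median of the n reported peaks together with n-1 fixed phantom points."""
    assert len(phantoms) == len(peaks) - 1
    return statistics.median(peaks + phantoms)   # 2n-1 values, so the median is unique

# Phantoms all 0 recover the minimum-peak rule; all 1 recover the maximum-peak rule.
print(phantom_median_rule([0.2, 0.5, 0.9], [0.3, 0.3]))   # 0.3, not any agent's peak
```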

Such rules compose the entire class of strategy-proof and onto rules that treat agents symmetrically. To formalize this latter requirement, we call a rule anonymous if for any ≽ ∈ Rn and any permutation ≽′ of ≽, f(≽′) = f (≽). This requirement captures the idea that the agents’ names play no role in the behavior of a rule. Dictatorial rules are examples of rules that are strategy-proof and onto, but not anonymous.

A rule f is strategy-proof, onto, and anonymous iff ∃ y1, y2,…, yn−1 ∈ [0,1] such that ∀ ≽ ∈ Rn,

f(≽) = med{p1, p2,…, pn, y1, y2,…, yn−1} —– (1)

Suppose f is strategy-proof, onto, and anonymous. We make extensive use of the two (extreme) preference relations that have peaks at 0 and 1 respectively. Since preference relations are ordinal, there is only one preference relation with a peak at 0 and only one with a peak at 1. Denote these two preference relations by ≽0i and ≽1i respectively.

For any 1 ≤ m ≤ n − 1, let ym denote the outcome of f when m agents have preference relation ≽1i and the remainder have ≽0i:

ym = f(≽01,…, ≽0n−m, ≽1n−m+1,…, ≽1n)

By anonymity, the order of the arguments of f is irrelevant; if precisely m agents have preference relation ≽1i and the rest have ≽0i then the outcome is ym.

Fix a profile of preferences ≽ ∈ Rn with peaks p1, . . ., pn and assume, without loss of generality (by anonymity), that pi ≤ pi+1 for each i ≤ n−1. We claim that

f(≽) = x∗ ≡ med{p1,…, pn, y1,…, yn−1}.

First, suppose the median is one of the ym’s, i.e., x∗ = ym for some m. By monotonicity of the peaks and the ym’s, since x∗ is the median of 2n−1 points this implies pn−m ≤ x∗ = ym ≤ pn−m+1. By assumption,

x∗ = ym = f(≽01,…, ≽0n−m, ≽1n−m+1,…, ≽1n) —– (2)

Let x1 = f(≽1, ≽02,…, ≽0n−m, ≽1n−m+1,…, ≽1n). Strategy-proofness implies x1 ≥ x∗, otherwise agent 1 with preference ≽01 could manipulate f. Similarly, since p1 ≤ ym, we cannot have x1 > x∗, otherwise agent 1 with preference ≽1 could manipulate f. Hence x1 = x∗. Repeating this argument for all i ≤ n − m, x∗ = f(≽1,…,≽n−m, ≽1n−m+1,…, ≽1n). The symmetric argument for all i > n−m implies

f(≽1,…, ≽n) = x∗ —– (3)

If the median is an agent’s peak, then the remaining case is that ym < x∗ < ym+1 for some m. (The cases where x∗ < y1 and x∗ > yn−1 are similar, denoting y0 = 0 and yn = 1). In this case, since the agents’ peaks are in increasing order, we have x∗ = pn−m. If

f(≽01,…, ≽0n−m−1, ≽n−m, ≽1n−m+1,…, ≽1n) = x∗ = pn−m —– (4)

then, analogous to the way (2) implied (3), repeated applications of strategy-proofness (to the n−1 agents other than i = n−m) would imply f(≽1,…, ≽n) = x∗, and the proof would be finished.

Thus, the parameters (ym‘s) can be thought of as the rule’s degree of compromise when agents have extremist preferences. If m agents prefer the highest possible outcome (1), while n − m prefer the lowest (0), then which point should be chosen? A true median rule would pick whichever extreme (0 or 1) contains the most peaks. On the other hand, the other rules may choose intermediate points (ym) as a compromise. The degree of compromise (which ym) can depend on the degree to which the agents’ opinions are divided (the size of m).

The anonymity requirement is a natural one in situations where agents are to be treated as equals. If one does not require this, however, the class of strategy-proof rules becomes even larger. Alongside the dictatorial rules, each of which always chooses a predetermined agent’s peak, there are less extreme violations of anonymity: the full class of strategy-proof, onto rules allows agents to be treated with varying degrees of asymmetry.

Malicious Machine Learnings? Privacy Preservation and Computational Correctness Across Parties. Note Quote/Didactics.


Multi-Party Computation deals with the following problem: There are n ≥ 2 parties P1, . . ., Pn where party Pi holds input ti, 1 ≤ i ≤ n, and they wish to compute together a function s = f(t1, . . . , tn) on their inputs. The goal is that each party will learn the output of the function, s, yet with the restriction that Pi will not learn any additional information about the input of the other parties aside from what can be deduced from the pair (ti, s). Clearly it is the secrecy restriction that adds complexity to the problem, as without it each party could announce its input to all other parties, and each party would locally compute the value of the function. Thus, the goal of Multi-Party Computation is to achieve the following two properties at the same time: correctness of the computation and privacy preservation of the inputs.
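As a toy illustration of the problem (not of the general constructions discussed below), here is a minimal Python sketch of the simplest special case: n honest-but-curious parties computing the sum of their inputs via additive secret sharing. The modulus, function names, and the use of a single script to simulate all parties are assumptions of the sketch.

```python
import secrets

Q = 2**61 - 1            # public modulus; all arithmetic is done mod Q

def share(t, n):
    """Split a private input t into n additive shares that sum to t mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((t - sum(shares)) % Q)
    return shares

def secure_sum(inputs):
    """Simulate the protocol: party i sends its j-th share to party j;
    each party publishes only the sum of the shares it received."""
    n = len(inputs)
    all_shares = [share(t, n) for t in inputs]
    partial = [sum(all_shares[i][j] for i in range(n)) % Q for j in range(n)]
    return sum(partial) % Q

print(secure_sum([3, 14, 15, 9]))   # 41; no single party ever sees another party's input
```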

The following two generalizations are often useful:

(i) Probabilistic functions. Here the value of the function depends on some random string r chosen according to some distribution: s = f (t1, . . . , tn; r). An example of this is the coin-flipping functionality, which takes no inputs, and outputs an unbiased random bit. It is crucial that the value r is not controlled by any of the parties, but is somehow jointly generated during the computation.

(ii) Multioutput functions. It is not mandatory that there be a single output of the function. More generally there could be a unique output for each party, i.e., (s1, . . . , sn) = f(t1,…, tn). In this case, only party Pi learns the output si, and no other party learns any information about the other parties’ input and outputs aside from what can be derived from its own input and output.

One of the most interesting aspects of Multi-Party Computation is to reach the objective of computing the function value, but under the assumption that some of the parties may deviate from the protocol. In cryptography, the parties are usually divided into two types: honest and faulty. An honest party follows the protocol without any deviation. Otherwise, the party is considered to be faulty. The faulty behavior can exemplify itself in a wide range of possibilities. The most benign faulty behavior is where the parties follow the protocol, yet try to learn as much as possible about the inputs of the other parties. These parties are called honest-but-curious (or semihonest). At the other end of the spectrum, the parties may deviate from the prescribed protocol in any way that they desire, with the goal of either influencing the computed output value in some way, or of learning as much as possible about the inputs of the other parties. These parties are called malicious.

We envision an adversary A, who controls all the faulty parties and can coordinate their actions. Thus, in a sense we assume that the faulty parties are working together and can exert the most knowledge and influence over the computation out of this collusion. The adversary can corrupt any number of parties out of the n participating parties. Yet, in order to be able to achieve a solution to the problem, in many cases we would need to limit the number of corrupted parties. This limit is called the threshold k, indicating that the protocol remains secure as long as the number of corrupted parties is at most k.

Assume that there exists a trusted party who privately receives the inputs of all the participating parties, calculates the output value s, and then transmits this value to each one of the parties. This process clearly computes the correct output of f, and also does not enable the participating parties to learn any additional information about the inputs of others. We call this model the ideal model. The security of Multi-Party Computation then states that a protocol is secure if its execution satisfies the following: (1) the honest parties compute the same (correct) outputs as they would in the ideal model; and (2) the protocol does not expose more information than a comparable execution with the trusted party, in the ideal model.

Intuitively, the adversary’s interaction with the parties (on a vector of inputs) in the protocol generates a transcript. This transcript is a random variable that includes the outputs of all the honest parties, which is needed to ensure correctness, and the output of the adversary A. The latter output, without loss of generality, includes all the information that the adversary learned, including its inputs, private state, all the messages sent by the honest parties to A, and, depending on the model, maybe even include more information, such as public messages that the honest parties exchanged. If we show that exactly the same transcript distribution can be generated when interacting with the trusted party in the ideal model, then we are guaranteed that no information is leaked from the computation via the execution of the protocol, as we know that the ideal process does not expose any information about the inputs. More formally,

Let f be a function on n inputs and let π be a protocol that computes the function f. Given an adversary A, which controls some set of parties, we define REALA,π(t) to be the sequence of outputs of honest parties resulting from the execution of π on input vector t under the attack of A, in addition to the output of A. Similarly, given an adversary A′ which controls a set of parties, we define IDEALA′,f(t) to be the sequence of outputs of honest parties computed by the trusted party in the ideal model on input vector t, in addition to the output of A′. We say that π securely computes f if, for every adversary A as above, ∃ an adversary A′, which controls the same parties in the ideal model, such that, on any input vector t, we have that the distribution of REALA,π(t) is “indistinguishable” from the distribution of IDEALA′,f(t).

Intuitively, the task of the ideal adversary A′ is to generate (almost) the same output as A generates in the real execution or the real model. Thus, the attacker A′ is often called the simulator of A. The transcript value generated in the ideal model, IDEALA′,f(t), also includes the outputs of the honest parties (even though we do not give these outputs to A′), which we know were correctly computed by the trusted party. Thus, the real transcript REALA,π(t) should also include correct outputs of the honest parties in the real model.

We assumed that every party Pi has an input ti, which it enters into the computation. However, if Pi is faulty, nothing stops Pi from changing ti into some ti′. Thus, the notion of a “correct” input is defined only for honest parties. However, the “effective” input of a faulty party Pi could be defined as the value ti′ that the simulator A′ gives to the trusted party in the ideal model. Indeed, since the outputs of honest parties look the same in both models, for all effective purposes Pi must have “contributed” the same input ti′ in the real model.

Another possible misbehavior of Pi, even in the ideal model, might be a refusal to give any input at all to the trusted party. This can be handled in a variety of ways, ranging from aborting the entire computation to simply assigning ti some “default value.” For concreteness, we assume that the domain of f includes a special symbol ⊥ indicating this refusal to give the input, so that it is well defined how f should be computed on such missing inputs. What this requires is that in any real protocol we detect when a party does not enter its input and deal with it exactly in the same manner as if the party would input ⊥ in the ideal model.

As regards security, it is implicitly assumed that all honest parties receive the output of the computation. This is achieved by stating that IDEALA′,f(t) includes the outputs of all honest parties. We therefore say that the protocol guarantees output delivery. A more relaxed property than output delivery is fairness. If fairness is achieved, then this means that if at least one (even faulty) party learns its outputs, then all (honest) parties eventually do too. A bit more formally, we allow the ideal model adversary A′ to instruct the trusted party not to compute any of the outputs. In this case, in the ideal model either all the parties learn the output, or none do. Since A’s transcript is indistinguishable from A′’s, this guarantees that the same fairness guarantee must hold in the real model as well.

A further relaxation of the definition of security is to provide only correctness and privacy. This means that faulty parties can learn their outputs, and prevent the honest parties from learning theirs. Yet, at the same time the protocol will still guarantee that (1) if an honest party receives an output, then this is the correct value, and (2) the privacy of the inputs and outputs of the honest parties is preserved.

The basic security notions are universal and model-independent. However, specific implementations crucially depend on spelling out precisely the model where the computation will be carried out. In particular, the following issues must be specified:

  1. The faulty parties could be honest-but-curious or malicious, and there is usually an upper bound k on the number of parties that the adversary can corrupt.
  2. We distinguish between the computational setting and the information-theoretic setting. In the latter, the adversary is unlimited in its computing power. Thus, the term “indistinguishable” is formalized by requiring the two transcript distributions to be either identical (so-called perfect security) or, at least, statistically close in their variation distance (so-called statistical security). In the computational setting, on the other hand, the power of the adversary (as well as that of the honest parties) is restricted. A bit more precisely, the Multi-Party Computation problem is parameterized by the security parameter λ, in which case (a) all the computation and communication shall be done in time polynomial in λ; and (b) the misbehavior strategies of the faulty parties are also restricted to be run in time polynomial in λ. Furthermore, the term “indistinguishability” is formalized by computational indistinguishability: two distribution ensembles {Xλ}λ and {Yλ}λ are said to be computationally indistinguishable, if for any polynomial-time distinguisher D, the quantity ε, defined as |Pr[D(Xλ) = 1] − Pr[D(Yλ) = 1]|, is a “negligible” function of λ. This means that for any j > 0 and all sufficiently large λ, ε eventually becomes smaller than λ^(−j). This modeling helps us to build secure Multi-Party Computational protocols depending on plausible computational assumptions, such as the hardness of factoring large integers.
  3. The two common communication assumptions are the existence of a secure channel and the existence of a broadcast channel. Secure channels assume that every pair of parties Pi and Pj is connected via an authenticated, private channel. A broadcast channel is a channel with the following properties: if a party Pi (honest or faulty) broadcasts a message m, then m is correctly received by all the parties (who are also sure the message came from Pi). In particular, if an honest party receives m, then it knows that every other honest party also received m. A different communication assumption is the existence of envelopes. An envelope guarantees the following properties: a value m can be stored inside the envelope, it will be held without exposure for a given period of time, and then the value m will be revealed without modification. A ballot box is an enhancement of the envelope setting that also provides a random shuffling mechanism of the envelopes. These are, of course, idealized assumptions that allow for a clean description of a protocol, as they separate the communication issues from the computational ones. These idealized assumptions may be realized by physical mechanisms, but in some settings such mechanisms may not be available. Then it is important to address the question of whether and under what circumstances we can remove a given communication assumption. For example, we know that the assumption of a secure channel can be substituted with a protocol, but under the introduction of a computational assumption and a public key infrastructure.

Equilibrium Market Prices are Unique – Convexity and Concavity Utility Functions on a Linear Case. Note Quote + Didactics.


Consider a market consisting of a set B of buyers and a set A of divisible goods. Assume |A| = n and |B| = n′. We are given for each buyer i the amount ei of money she possesses and for each good j the amount bj of this good. In addition, we are given the utility functions of the buyers. Our critical assumption is that these functions are linear. Let uij denote the utility derived by i on obtaining a unit amount of good j. Thus if the buyer i is given xij units of good j, for 1 ≤ j ≤ n, then the happiness she derives is

∑_{j=1}^{n} u_{ij} x_{ij} —— (1)

Prices p1, . . . , pn of the goods are said to be market clearing prices if, after each buyer is assigned an optimal basket of goods relative to these prices, there is no surplus or deficiency of any of the goods. So, is it possible to compute such prices in polynomial time?

First observe that without loss of generality, we may assume that each b_j is unit – by scaling the u_{ij}’s appropriately. The u_{ij}’s and e_i’s are in general rational; by scaling appropriately, they may be assumed to be integral. We make the mild assumption that each good has a potential buyer, i.e., a buyer who derives nonzero utility from this good. Under this assumption, market clearing prices do exist.

It turns out that equilibrium allocations for Fisher’s linear case are captured as optimal solutions to a remarkable convex program, the Eisenberg–Gale convex program.

A convex program whose optimal solution is an equilibrium allocation must have as constraints the packing constraints on the xij’s. Furthermore, its objective function, which attempts to maximize utilities derived, should satisfy the following:

  1. If the utilities of any buyer are scaled by a constant, the optimal allocation remains unchanged.
  2. If the money of a buyer b is split among two new buyers whose utility functions are the same as that of b, then the sum of the optimal allocations of the new buyers should be an optimal allocation for b.

The money-weighted geometric mean of buyers’ utilities satisfies both these conditions:

max (∏_{i∈B} u_i^{e_i})^{1/∑_{i∈B} e_i} —– (2)

Since ∑_{i∈B} e_i is a fixed constant, raising the objective to this power does not change the maximizers, so the following objective function is equivalent:

max ∏_{i∈B} u_i^{e_i} —– (3)

Its log is used in the Eisenberg–Gale convex program:

maximize ∑_{i=1}^{n′} e_i log u_i

subject to

u_i = ∑_{j=1}^{n} u_{ij} x_{ij} ∀ i ∈ B

∑_{i=1}^{n′} x_{ij} ≤ 1 ∀ j ∈ A

x_{ij} ≥ 0 ∀ i ∈ B, j ∈ A —– (4)

where x_{ij} is the amount of good j allocated to buyer i. Interpret the Lagrangian variables p_j corresponding to the second set of constraints as the prices of the goods. Optimal solutions for the x_{ij}’s and p_j’s must satisfy the following:

    1. ∀ j ∈ A : p_j ≥ 0
    2. ∀ j ∈ A : p_j > 0 ⇒ ∑_{i∈B} x_{ij} = 1
    3. ∀ i ∈ B, j ∈ A : u_{ij}/p_j ≤ (∑_{j∈A} u_{ij} x_{ij})/e_i
    4. ∀ i ∈ B, j ∈ A : x_{ij} > 0 ⇒ u_{ij}/p_j = (∑_{j∈A} u_{ij} x_{ij})/e_i

From these conditions, one can derive that an optimal solution to convex program (4) must satisfy the market clearing conditions.

For the linear case of Fisher’s model:

  1. If each good has a potential buyer, equilibrium exists.
  2. The set of equilibrium allocations is convex.
  3. Equilibrium utilities and prices are unique.
  4. If all uij’s and ei’s are rational, then equilibrium allocations and prices are also rational. Moreover, they can be written using polynomially many bits in the length of the instance.

Corresponding to good j there is a buyer i such that u_{ij} > 0. By the third condition as stated above,

p_j ≥ e_i u_{ij} / ∑_{j∈A} u_{ij} x_{ij} > 0

By the second condition, ∑_{i∈B} x_{ij} = 1, implying that prices of all goods are positive and all goods are fully sold. The third and fourth conditions imply that if buyer i gets good j, then j must be among the goods that give buyer i maximum utility per unit of money spent at current prices. Hence each buyer gets only a bundle consisting of her most desired goods, i.e., an optimal bundle.

The fourth condition is equivalent to

∀ i ∈ B, j ∈ A : e_i u_{ij} x_{ij} / ∑_{j∈A} u_{ij} x_{ij} = p_j x_{ij}

Summing over all j

∀ i ∈ B : e_i (∑_{j∈A} u_{ij} x_{ij}) / (∑_{j∈A} u_{ij} x_{ij}) = ∑_{j∈A} p_j x_{ij}

⇒ ∀ i ∈ B : e_i = ∑_{j∈A} p_j x_{ij}

Hence the money of each buyer is fully spent, completing the proof that a market equilibrium exists. Since each equilibrium allocation is an optimal solution to the Eisenberg-Gale convex program, the set of equilibrium allocations must form a convex set. Since log is a strictly concave function, even if there is more than one equilibrium allocation, the utility derived by each buyer must be the same in all of them. This fact, together with the fourth condition, gives that the equilibrium prices are unique.
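As a didactic aside, the Eisenberg-Gale program (4) can be solved numerically for a toy Fisher market. The sketch below is only illustrative: the utility matrix and budgets are made-up numbers, scipy’s general-purpose SLSQP solver stands in for the polynomial-time combinatorial algorithms, and the equilibrium prices are recovered from condition 4 above.

```python
# Illustrative sketch: solve the Eisenberg-Gale program for a tiny Fisher market,
# then read equilibrium prices off condition 4. Every good has unit supply.
import numpy as np
from scipy.optimize import minimize

u = np.array([[2.0, 1.0],     # buyer 0's utility per unit of goods 0 and 1
              [1.0, 3.0]])    # buyer 1's utility per unit of goods 0 and 1
e = np.array([1.0, 1.0])      # money held by each buyer
n_buyers, n_goods = u.shape

def neg_objective(x_flat):
    x = x_flat.reshape(n_buyers, n_goods)
    utils = (u * x).sum(axis=1)                # u_i = sum_j u_ij x_ij
    return -np.sum(e * np.log(utils + 1e-12))  # maximize sum_i e_i log u_i

constraints = [{'type': 'ineq',                # sum_i x_ij <= 1 for every good j
                'fun': lambda x_flat, j=j: 1.0 - x_flat.reshape(n_buyers, n_goods)[:, j].sum()}
               for j in range(n_goods)]
bounds = [(0.0, 1.0)] * (n_buyers * n_goods)   # x_ij >= 0 (and trivially <= supply)

res = minimize(neg_objective, x0=np.full(n_buyers * n_goods, 1.0 / n_buyers),
               bounds=bounds, constraints=constraints, method='SLSQP')
x = res.x.reshape(n_buyers, n_goods)
utils = (u * x).sum(axis=1)

# Condition 4: if x_ij > 0 then p_j = e_i u_ij / u_i(x); recover p_j from any buyer of good j.
prices = np.array([max((e[i] * u[i, j] / utils[i] for i in range(n_buyers) if x[i, j] > 1e-6),
                       default=0.0)
                   for j in range(n_goods)])
print("equilibrium allocation:\n", x.round(3))
print("equilibrium prices:", prices.round(3))
```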

Conjuncted: Integer Pivoting as a Polynomial-Time Algorithm


The Lemke-Howson Algorithm follows the edges of a polyhedron, which is implemented algebraically by pivoting, as used by the simplex algorithm for solving a linear program. Let us see whether there is an efficient implementation that avoids numerical errors by storing integers of arbitrary precision. The constraints defining the polyhedron are thereby represented as linear equations with nonnegative slack variables. For the polytopes P and Q in

P = {x ∈ R^M | x ≥ 0, Bx ≤ 1},

Q = {y ∈ R^N | Ay ≤ 1, y ≥ 0}

these slack variables are nonnegative vectors s ∈ R^N and r ∈ R^M so that x ∈ P and y ∈ Q iff

Bx + s = 1, r + Ay = 1 —– (1)

and

x ≥ 0, s ≥ 0, r ≥ 0, y ≥ 0 —— (2)

A binding inequality corresponds to a zero slack variable. The pair (x, y) is completely labeled iff x_i r_i = 0 ∀ i ∈ M and y_j s_j = 0 ∀ j ∈ N, which by (2) can be written as the orthogonality condition

xᵀr = 0, yᵀs = 0

A basic solution to (1) is given by n basic (linearly independent) columns of Bx + s = 1 and m basic columns of r + Ay = 1, where the nonbasic variables that correspond to the m respectively n other (nonbasic) columns are set to zero, so that the basic variables are uniquely determined. A basic feasible solution also fulfills (2), and defines a vertex x of P and y of Q. The labels of such a vertex are given by the respective nonbasic columns.

Pivoting is a change of the basis where a nonbasic variable enters and a basic variable leaves the set of basic variables, while preserving feasibility (2).

Integer pivoting always maintains an integer matrix (or “tableau”) of coefficients of a system of linear equations that is equivalent to the original system Bx + s = 1, in the form

CBx + Cs = C1 —– (3)

In (3), C is the inverse of the basis matrix given by the basic columns of the original system, multiplied by the determinant of the basis matrix. The matrix C is given by the (integer) cofactors of the basis matrix; the cofactor of a matrix entry is the determinant of the matrix when the row and column of that entry are deleted. Because each entry has a bounded number of digits (by at most a factor of n log n compared to the original matrix entries), integer pivoting is a polynomial-time algorithm. It is also superior to using fractions of integers or rational numbers, because their cancellation requires greatest common divisor computations that take the bulk of computation time.
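A minimal sketch of one such integer pivoting step (in the Bareiss style just described, with exact division by the previous pivot) might look as follows; the function and the example system are illustrative, not taken from any particular implementation.

```python
# One integer pivoting step: every new entry is divided exactly by the pivot
# element of the previous step, so the tableau stays integral throughout.
def integer_pivot(tableau, pivot_row, pivot_col, prev_pivot=1):
    piv = tableau[pivot_row][pivot_col]
    new_tableau = []
    for i, row in enumerate(tableau):
        if i == pivot_row:
            new_tableau.append(list(row))          # the pivot row is carried over unchanged
        else:
            new_tableau.append([(piv * row[j] - row[pivot_col] * tableau[pivot_row][j]) // prev_pivot
                                for j in range(len(row))])   # exact integer division
    return new_tableau, piv                        # piv becomes prev_pivot for the next step

# Example: pivot the system 2x + y + s1 = 4, x + 3y + s2 = 5 on x (row 0, column 0).
T = [[2, 1, 1, 0, 4],
     [1, 3, 0, 1, 5]]
T, d = integer_pivot(T, 0, 0)
print(T)   # [[2, 1, 1, 0, 4], [0, 5, -1, 2, 6]] – all entries remain integers
```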

Lemke-Howson Algorithm – Symmetric Game with Symmetric Or NonSymmetric Equilibria. Note Quote.

The Lemke-Howson Algorithm (LHA) computes a sample mixed-strategy Nash equilibrium of a bimatrix game by complementary pivoting, a variant of Lemke’s algorithm for linear complementarity problems (LCPs). The LHA not only provides an elementary proof for the existence of equilibrium points, but also an efficient computational method for finding at least one equilibrium point. The LHA follows a path (called an LH path) of vertex pairs (x, y) of P × Q, for the polytopes P and Q,

P = {x ∈ R^M | x ≥ 0, Bx ≤ 1},

Q = {y ∈ R^N | Ay ≤ 1, y ≥ 0}

that starts at (0, 0) and ends at a Nash equilibrium. An LH path alternately follows edges of P and Q, keeping the vertex in the other polytope fixed. Because the game is nondegenerate, a vertex of P is given by m labels, and a vertex of Q is given by n labels. An edge of P is defined by m−1 labels.

[Figure: a pair of labeled best response polytopes P (vertices 0, a, b, c, d) and Q (vertices 0, p, q, r, s), with labels 1, …, 5 marking the facets]

For example, in the above figure, the edge defined by labels 1 and 3 joins the vertices 0 and c. Dropping a label l of a vertex x of P, say, means traversing the unique edge that has all the labels of x except for l. For example, dropping label 2 of the vertex 0 of P in the figure gives the edge, defined by labels 1 and 3, that joins 0 to vertex c. The endpoint of the edge has a new label, which is said to be picked up, for example, label 5 is picked up at vertex c.

The LHA starts from (0, 0) in P × Q. This is called the artificial equilibrium, which is a completely labeled vertex pair because every pure strategy has probability zero. It does not represent a Nash equilibrium of the game because the zero vector cannot be rescaled to a mixed strategy vector. An initial free choice of the LHA is a pure strategy k of a player (any label in M ∪ N ), called the missing label. Starting with (x, y) = (0, 0), label k is dropped. At the endpoint of the corresponding edge (of P if k ∈ M, of Q if k ∈ N), the new label that is picked up is duplicate because it was present in the other polytope. That duplicate label is then dropped in the other polytope, picking up a new label. If the newly picked label is the missing label, the algorithm terminates and has found a Nash equilibrium. Otherwise, the algorithm repeats by dropping the duplicate label in the other polytope, and continues in this fashion.

Input: Nondegenerate bimatrix game.

Output: One Nash equilibrium of the game.

Method: Choose k ∈ M ∪ N , called the missing label. Let (x, y) = (0, 0) ∈ P × Q. Drop label k (from x in P if k ∈ M, from y in Q if k ∈ N).

Loop: Call the new vertex pair (x, y). Let l be the label that is picked up. If l = k, terminate with Nash equilibrium (x, y) (rescaled as mixed strategy pair). Otherwise, drop l in the other polytope and repeat.

The LHA terminates, and finds a Nash equilibrium, because P × Q has only finitely many vertex pairs. The next vertex pair on the path is always unique; hence a given vertex pair cannot be revisited, because that would mean there had been a second way to proceed from it when it was first reached.
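For experimentation, the algorithm is available in standard software. The hedged sketch below assumes the nashpy library’s Game.lemke_howson interface (which takes the initially dropped, i.e. missing, label); the 3 × 2 payoff matrices are illustrative, not taken from the text.

```python
# Run the LHA once for each possible missing label (0-indexed in nashpy).
import numpy as np
import nashpy as nash   # assumed third-party library

A = np.array([[3, 3], [2, 5], [0, 6]])
B = np.array([[3, 2], [2, 6], [3, 1]])
game = nash.Game(A, B)

for k in range(A.shape[0] + A.shape[1]):      # every label in M ∪ N
    x, y = game.lemke_howson(initial_dropped_label=k)
    print(f"missing label {k}: x = {x}, y = {y}")
```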

What we seem to have done is describe the LH path for missing label k by means of alternating edges between two polytopes. In fact, it is a path on the product polytope P × Q, given by the set of pairs (x, y) of P × Q that are k-almost completely labeled, meaning that every label in M ∪ N − {k} appears as a label of either x or y. In the above figure for k = 2, the vertex pairs on the path are (0, 0), (c, 0), (c, p), (d, p), (d, q).

For a fixed missing label k, the k-almost completely labeled vertices and edges of the product polytope P × Q form a graph of degree 1 or 2. Clearly, such a graph consists of disjoint paths and cycles. The endpoints of the paths are completely labeled. They are the Nash equilibria of the game and the artificial equilibrium (0, 0).

There is a corollary to this: a nondegenerate bimatrix game has an odd number of Nash equilibria. The LHA can start at any Nash equilibrium, not just the artificial equilibrium. In the figure with missing label 2, starting the algorithm at the Nash equilibrium (d, q) would just generate the known LH path backward to (0, 0). When started at the Nash equilibrium (a, s), the LH path for the missing label 2 gives the vertex pair (b, s), where label 5 is duplicate, and then the equilibrium (b, r). This path cannot go back to (0, 0) because the path leading to (0, 0) starts at (d, q). This gives the three Nash equilibria of the game as endpoints of the two LH paths for missing label 2. These three equilibria can also be found by the LHA by varying the missing label.

However, some Nash equilibria can remain elusive to the LHA. An example is the following symmetric 3 × 3 game with

A = B = [3 × 3 payoff matrix shown in the original post’s image, not reproduced here]

Every Nash equilibrium (x, y) of this game is symmetric, i.e., x = y, where x is (0, 0, 1), (1/2, 1/4, 1/4), or (3/4, 1/4, 0). Only the first of these is found by the LHA, for any missing label; because the game is symmetric, it suffices to consider the missing labels 1, 2, 3. (A symmetric game remains unchanged when the players are exchanged; a symmetric game always has a symmetric equilibrium, but may also have nonsymmetric equilibria, which obviously come in pairs.)

Game’s Degeneracy Running Proportional to Polytope’s Redundancy.

For a given set of vertices V ⊆ R^K, a polytope P can be defined as the following set of points:

P = {∑_{i=1}^{|V|} λ_i v_i ∈ R^K | ∑_{i=1}^{|V|} λ_i = 1; λ_i ≥ 0; v_i ∈ V}


A polytope can equivalently be described as an intersection of half-spaces, each bounded by a hyperplane that separates the space into two regions. In this description, for a matrix M ∈ R^{m×n} and a vector b ∈ R^m, the polytope P is defined as the set of points

P = {x ∈ R^n | Mx ≤ b}

Switching over to a two-player game with positive payoff matrices (A, B) ∈ (R^{m×n}_{>0})², the row and column players’ best response polytopes P and Q are defined by:

P = {x ∈ R^m | x ≥ 0; xB ≤ 1}

Q = {y ∈ R^n | Ay ≤ 1; y ≥ 0}

The polytope P corresponds to the set of (unnormalized) row strategies x for which the column player’s expected payoff against x is at most 1 for every one of his pure strategies; Q is defined analogously with the roles of the players exchanged.

An affine combination of points z1,….zk in some Euclidean space is of the form ∑i=1kλizi, where λ1, …, λk are reals with ∑i=1kλi= 1. It is called a convex combination, if λ≥ 0 ∀ i. A set of points is convex if it is closed under forming convex combinations. Given points are affinely independent if none of these points are an affine combination of the others. A convex set has dimension d iff it has d + 1, but no more, affinely independent points.
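As a small didactic check, affine independence can be tested numerically: points z1, …, zk are affinely independent iff the differences z2 − z1, …, zk − z1 are linearly independent. A minimal sketch with illustrative points:

```python
# Points are affinely independent iff the rank of the difference matrix
# (each point minus the first) equals (number of points - 1).
import numpy as np

def affinely_independent(points):
    pts = np.asarray(points, dtype=float)
    return np.linalg.matrix_rank(pts[1:] - pts[0]) == len(pts) - 1

print(affinely_independent([[0, 0], [1, 0], [0, 1]]))   # True: vertices of a triangle in R^2
print(affinely_independent([[0, 0], [1, 1], [2, 2]]))   # False: three collinear points
```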

A polyhedron P in R^d is a set {z ∈ R^d | Cz ≤ q} for some matrix C and vector q. It is called full-dimensional if it has dimension d. It is called a polytope if it is bounded. A face of P is a set {z ∈ P | cz = q_0} for some c ∈ R^d, q_0 ∈ R, such that the inequality cz ≤ q_0 holds for all z in P. A vertex of P is the unique element of a zero-dimensional face of P. An edge of P is a one-dimensional face of P. A facet of a d-dimensional polyhedron P is a face of dimension d − 1. It can be shown that any nonempty face F of P can be obtained by turning some of the inequalities defining P into equalities, which are then called binding inequalities. That is, F = {z ∈ P | c_i z = q_i, i ∈ I}, where c_i z ≤ q_i for i ∈ I are some of the rows in Cz ≤ q. A facet is characterized by a single binding inequality which is irredundant; i.e., the inequality cannot be omitted without changing the polyhedron. A d-dimensional polyhedron P is called simple if no point belongs to more than d facets of P, which is true if there are no special dependencies between the facet-defining inequalities. The “best response polyhedron” of a player is the set of that player’s mixed strategies together with the “upper envelope” of expected payoffs (and any larger payoffs) to the other player.

Nondegeneracy of a bimatrix game (A, B) can be stated in terms of the polytopes P and Q: no point in P has more than m labels, and no point in Q has more than n labels. (If x ∈ P has support of size k and L is the set of labels of x, then |L ∩ M| = m − k, so |L| > m implies x has more than k best responses in L ∩ N.) Then P and Q are simple polytopes, because a point of P, say, that is on more than m facets would have more than m labels. Even if P and Q are simple polytopes, the game can be degenerate if the description of a polytope is redundant in the sense that some inequality can be omitted, but nevertheless is sometimes binding. This occurs if a player has a pure strategy that is weakly dominated by, or payoff equivalent to, some other mixed strategy. Non-simple polytopes or redundant inequalities of this kind do not occur for “generic” payoffs; this illustrates the assumption of nondegeneracy from a geometric viewpoint. (A strictly dominated strategy may occur generically, but it defines a redundant inequality that is never binding, so this does not lead to a degenerate game.)

Because the game is nondegenerate, only vertices of P can have m labels, and only vertices of Q can have n labels. Otherwise, a point of P with m labels that is not a vertex would be on a higher-dimensional face, and a vertex of that face, which is a vertex of P, would have additional labels. Consequently, only vertices of P and Q have to be inspected as possible equilibrium strategies. Algorithmically, the input is a nondegenerate bimatrix game and the output is the set of Nash equilibria of the game: for each vertex x of P − {0} and each vertex y of Q − {0}, if (x, y) is completely labeled, output the Nash equilibrium (x · 1/(1ᵀx), y · 1/(1ᵀy)), i.e., x and y rescaled to mixed strategies, as in the sketch below.
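A brute-force didactic sketch of this vertex-enumeration method follows; it is only practical for tiny games, and the payoff matrices are illustrative. Vertices are found by solving every choice of d binding inequalities, labels are read off from the binding constraints, and every completely labeled pair other than (0, 0) is printed as a Nash equilibrium after rescaling.

```python
# Enumerate completely labeled vertex pairs of P = {x >= 0, xB <= 1} and
# Q = {y >= 0, Ay <= 1} for a small nondegenerate bimatrix game (A, B).
from itertools import combinations
import numpy as np

def vertices(C):
    """Vertices of {z in R^d | z >= 0, Cz <= 1}: solve every choice of d binding constraints."""
    k, d = C.shape
    rows = np.vstack([np.eye(d), C])                  # constraint rows: z_i = 0 and (Cz)_j = 1
    rhs = np.concatenate([np.zeros(d), np.ones(k)])
    verts, seen = [], set()
    for binding in combinations(range(d + k), d):
        M_, b_ = rows[list(binding)], rhs[list(binding)]
        if abs(np.linalg.det(M_)) < 1e-9:
            continue
        z = np.linalg.solve(M_, b_)
        if np.all(z >= -1e-9) and np.all(C @ z <= 1 + 1e-9):   # feasibility check
            key = tuple(np.round(z, 9))
            if key not in seen:
                seen.add(key)
                verts.append(np.clip(z, 0, None))
    return verts

def labels(z, C, own, other):
    """own[i] if z_i = 0 (binding nonnegativity), other[j] if (Cz)_j = 1 (binding payoff bound)."""
    lab = {own[i] for i in range(len(z)) if z[i] < 1e-9}
    lab |= {other[j] for j in range(C.shape[0]) if abs((C @ z)[j] - 1) < 1e-9}
    return lab

A = np.array([[3.0, 3.0], [2.0, 5.0], [0.0, 6.0]])   # illustrative 3 x 2 bimatrix game
B = np.array([[3.0, 2.0], [2.0, 6.0], [3.0, 1.0]])
m, n = A.shape
M_lab, N_lab = list(range(m)), list(range(m, m + n))

P_verts, Q_verts = vertices(B.T), vertices(A)         # xB <= 1  <=>  B^T x <= 1
for x in P_verts:
    for y in Q_verts:
        if x.sum() < 1e-9 or y.sum() < 1e-9:
            continue                                   # skip the artificial equilibrium (0, 0)
        if labels(x, B.T, M_lab, N_lab) | labels(y, A, N_lab, M_lab) >= set(M_lab + N_lab):
            print("Nash equilibrium:", x / x.sum(), y / y.sum())
```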