The Mathematics of Political Policies and Economic Decisions – Dictatorial (Extremists) Versus Democratic. Note Quote. Part 1 Didactics.

Figure: if a strategy-proof, onto rule does not pick x∗ when it is the median of the peaks and the ym’s, a contradiction is reached using preferences with peaks at piL and piH.

Let us take a look at single-peaked preferences over one-dimensional policy spaces. This domain can be used to model political policies, economic decisions, location problems, or any allocation problem where a single point must be chosen in an interval. The key assumption is that each agent’s preferences have a single most-preferred point in the interval, and that preferences are “decreasing” as one moves away from that peak.

Formally, the allocation space (or policy space) is the unit interval A = [0, 1]. An outcome is a single point x ∈ A. Each agent i ∈ N has a preference ordering ≽i, which is a weak order over the outcomes in [0, 1]. The preference relation ≽i is single-peaked if ∃ a point pi ∈ A (the peak of ≽i) such that ∀ x ∈ A\{pi} and all λ ∈ [0,1), (λx +(1−λ)pi) ≻i x. Let R denote the class of single-peaked preferences.

We denote the peaks of preference relations ≽i, ≽′i, ≽j, etc., respectively by pi, pi′, pj, etc. Denote a profile (n-tuple) of preferences as ≽ ∈ Rn.

One can imagine this model as representing a political decision such as an income tax rate, another political issue with conservative/liberal extremes, the location of a public facility on a road, or even something as simple as a group of people deciding on the temperature setting for a shared office. Here, the agents have an ideal preferred policy in mind, and would prefer that a decision be made as close as possible to this “peak.”

A rule f : Rn → A assigns an outcome f(≽) to any preference profile ≽. A rule is strategy-proof if it is a dominant strategy for each agent to report his preferences truthfully when the rule is being used to choose a point.

Our purpose then is to see that this class of problems admits a rich family of strategy-proof rules whose ranges include more than two alternatives. In fact, the family of such rules remains rich even when one restricts attention to rules that satisfy the following condition.

We say that a rule f is onto if ∀ x ∈ A ∃ ≽ ∈ Rn such that f(≽) = x. An onto rule cannot preclude an outcome from being chosen ex ante. Imposing this condition is a genuine restriction. For instance, fix two points x, y ∈ [0, 1] and consider a rule that chooses whichever of the two points is preferred to the other by a majority of agents (with x chosen in case of a tie). Such a rule is strategy-proof, but not onto. Similar strategy-proof rules can even break ties between x and y by using preference information about other points x′, y′, . . ., in [0, 1], even though x′, etc., are not in the range of the rule.

The onto condition is even weaker than what is called unanimity, which requires that whenever all agents’ preferences have the same peak (pi = pj ∀ i, j), the rule must choose that location as the outcome. In turn, unanimity is weaker than Pareto-optimality: ∀ ≽ ∈ Rn, ∃ no point x ∈ [0, 1] such that x ≻i f(≽) ∀ i ∈ N.

As it turns out, these three requirements are all equivalent among strategy-proof rules. Suppose f is strategy-proof. Then f is onto iff it is unanimous iff it is Pareto-optimal.

It is clear that Pareto-optimality implies the other two conditions. Suppose f is strategy-proof and onto. Fix x ∈ [0, 1] and let ≽ ∈ Rn be such that f(≽) = x. Consider any “unanimous” profile ≽′ ∈ Rn such that pi′ = x for each i ∈ N. By strategy-proofness, f (≽′1, ≽2, . . . , ≽n) = x, otherwise agent 1 could manipulate f. Repeating this argument, f (≽′1, ≽′2, ≽3, . . . , ≽n) = x, . . . ,f(≽′) = x. That is, f is unanimous.

In order to derive a contradiction, suppose that f is not Pareto-optimal at some profile ≽ ∈ Rn. This implies that either (i) f(≽) < pi ∀ i ∈ N or (ii) f(≽) > pi ∀ i ∈ N . Without loss of generality, assume (i) holds. Furthermore, assume that the agents are labeled so that p1 ≤ p2 ≤ ··· ≤ pn.

If p1 = pn then unanimity is violated, completing the proof. Otherwise, let j ∈ N be such that p1 = pj < pj+1; that is, j < n agents have the minimum peak. ∀ i > j, let ≽′i be a preference relation such that both pi′ = p1 and f(≽) ≽′i pi.

Let xn = f(≽1,…, ≽n−1, ≽′n). By strategy-proofness, xn ∈ [f(≽), pn], otherwise agent n (with preference ≽′n) could manipulate f by reporting preference ≽n. Similarly, xn ∉ (f(≽), pn], otherwise agent n (with preference ≽n) could manipulate f by reporting preference ≽′n. Therefore xn = f(≽).

Repeating this argument as each i > j replaces ≽i with ≽′i, we have f(≽1,…, ≽j, ≽′j+1,…, ≽′n) = f(≽), which contradicts unanimity. Since a strategy-proof, onto rule must be unanimous, this is a contradiction.

The central strategy-proof rule on this domain is the simple median-voter rule. Suppose that the number of agents n is odd. Then the rule that picks the median of the agents’ peaks (pi ’s) is a strategy-proof rule.

It is easy to see why this rule is strategy-proof: If an agent’s peak pi lies below the median peak, then he can change the median only by reporting a preference relation whose peak lies above the true median. The effect of this misreport is for the rule to choose a point even further away from pi, making the agent worse off. A symmetric argument handles the case in which the peak is above the median. Finally, an agent cannot profitably misreport his preferences if his peak is the median one to begin with.

More generally, for any number of agents n and any positive integer k ≤ n, the rule that picks the kth highest peak is strategy-proof for precisely the same reasons. An agent can only move the kth peak further from his own. The median happens to be the case where k = (n + 1)/2.
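
Since such rules depend only on the reported peaks, they are easy to state in code. A minimal sketch in Python (the function names are illustrative, not from any library):

```python
def kth_statistic_rule(peaks, k):
    """Choose the k-th highest of the reported peaks (k = 1 is the maximum)."""
    return sorted(peaks, reverse=True)[k - 1]

def median_rule(peaks):
    """For odd n, the median peak is the k-th statistic with k = (n + 1)/2."""
    n = len(peaks)
    assert n % 2 == 1, "the median-of-peaks rule as stated assumes n odd"
    return kth_statistic_rule(peaks, (n + 1) // 2)

print(median_rule([0.2, 0.9, 0.4]))  # 0.4
```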

The strategy-proofness of such rules stands in contrast to the incentives properties of rules that choose average-type statistics. Consider the rule that chooses the average of the n agents’ peaks. Any agent with peak pi ∈ (0, 1) that is not equal to the average can manipulate the rule by reporting preferences with a more extreme peak (closer to 0 or 1) than his true peak.
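
To make the contrast concrete, here is a toy check (peaks chosen arbitrarily) that an extremist misreport drags the mean toward the manipulator’s peak but leaves the median unchanged:

```python
peaks = [0.2, 0.4, 0.9]  # true peaks; the manipulator's peak is 0.2

mean = lambda ps: sum(ps) / len(ps)
print(mean(peaks))            # 0.5
print(mean([0.0, 0.4, 0.9]))  # ~0.433, closer to 0.2: the misreport pays

median = lambda ps: sorted(ps)[len(ps) // 2]
print(median(peaks))            # 0.4
print(median([0.0, 0.4, 0.9]))  # 0.4: the same misreport gains nothing
```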

This would also hold for any weighted average of the agents’ peaks, with one exception. If a rule allocated all of the weight to one agent, then the resulting rule simply picks that agent’s peak always. Such a dictatorial rule is strategy-proof and onto.

In addition to favorable incentives properties, rules based on order statistics have the property that they require little information to be computed. Technically, a rule requires agents to report an entire preference ordering over [0, 1]; these rules, however, only require agents to report their most preferred point, i.e., a single number. In fact, under the onto assumption, this informational property is a consequence of the strategy-proofness requirement; that is, all strategy-proof and onto rules can be computed solely from information about the agents’ peaks.

Let us generalize the class of “kth-statistic rules”. Fix a set of points y1, y2, . . . , yn−1 ∈ A and consider the rule that, for any profile of preferences ≽, chooses the median of the 2n − 1 points consisting of the n agents’ peaks and the n − 1 fixed points of y. This differs from the kth-statistic rules in that, for some choices of y and some profiles of preferences, the rule may choose a point that is not the peak of any agent’s preferences. Yet such a rule is strategy-proof.

Such rules compose the entire class of strategy-proof and onto rules that treat agents symmetrically. To formalize this latter requirement, we call a rule anonymous if for any ≽ ∈ Rn and any permutation ≽′ of ≽, f(≽′) = f(≽). This requirement captures the idea that the agents’ names play no role in the behavior of a rule. Dictatorial rules are examples of rules that are strategy-proof and onto, but not anonymous.

A rule f is strategy-proof, onto, and anonymous iff ∃ y1, y2,…, yn−1 ∈ [0,1] such that ∀ ≽ ∈ Rn,

f(≽) = med{p1, p2,…, pn, y1, y2,…, yn−1} —– (1)
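
A minimal sketch of rule (1), with hypothetical phantom points y; note that the chosen point need not be any agent’s peak:

```python
def generalized_median_rule(peaks, phantoms):
    """Rule (1): the median of the n peaks and n - 1 fixed points y1,...,yn-1."""
    assert len(phantoms) == len(peaks) - 1
    pool = sorted(list(peaks) + list(phantoms))  # 2n - 1 points in total
    return pool[len(pool) // 2]                  # their median

# n = 3 agents with phantoms y = (0.3, 0.7): the outcome is a phantom, not a peak.
print(generalized_median_rule([0.1, 0.15, 0.9], [0.3, 0.7]))  # 0.3
```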

Suppose f is strategy-proof, onto, and anonymous. We make extensive use of the two (extreme) preference relations that have peaks at 0 and 1 respectively. Since preference relations are ordinal, there is only one preference relation with a peak at 0 and only one with a peak at 1. Denote these two preference relations by ≽0i and ≽1i respectively.

For any 1 ≤ m ≤ n − 1, let ym denote the outcome of f when m agents have preference relation ≽1i and the remainder have ≽0i:

ym = f(≽01,…, ≽0n−m, ≽1n−m+1,…, ≽1n)

By anonymity, the order of the arguments of f is irrelevant; if precisely m agents have preference relation ≽1i and the rest have ≽0i then the outcome is ym.

Fix a profile of preferences ≽ ∈ Rn with peaks p1, . . ., pn. Without loss of generality (by anonymity), assume that pi ≤ pi+1 for each i ≤ n−1. We claim that

f(≽) = x∗ ≡ med{p1,…, pn, y1,…, yn−1}.

Suppose first that the median is some ym, i.e., x∗ = ym for some m. By monotonicity of the peaks and the ym‘s, since x∗ is the median of 2n−1 points, this implies pn−m ≤ x∗ = ym ≤ pn−m+1. By construction,

x∗ = ym = f(≽01,…, ≽0n−m, ≽1n−m+1,…, ≽1n) —– (2)

Let x1 = f(≽1, ≽02,…, ≽0n−m, ≽1n−m+1,…, ≽1n). Strategy-proofness implies x1 ≥ x∗, otherwise agent 1 with preference ≽01 could manipulate f. Similarly, since p1 ≤ ym, we cannot have x1 > x∗, otherwise agent 1 with preference ≽1 could manipulate f. Hence x1 = x∗. Repeating this argument for all i ≤ n − m, x∗ = f(≽1,…,≽n−m, ≽1n−m+1,…, ≽1n). The symmetric argument for all i > n−m implies

f(≽1,…, ≽n) = x∗ —– (3)

The remaining case is that the median is an agent’s peak: ym < x∗ < ym+1 for some m. (The cases where x∗ < y1 and x∗ > yn−1 are similar, denoting y0 = 0 and yn = 1.) In this case, since the agents’ peaks are in increasing order, we have x∗ = pn−m. If

f(≽01,…, ≽0n−m−1, ≽n−m, ≽1n−m+1,…, ≽1n) = x∗ = pn−m —– (4)

then, analogous to the way (2) implied (3), repeated applications of strategy-proofness (to the n−1 agents other than i = n−m) would imply f(≽1,…, ≽n) = x∗, and the proof would be finished.

Thus, the parameters (ym‘s) can be thought of as the rule’s degree of compromise when agents have extremist preferences. If m agents prefer the highest possible outcome (1), while n − m prefer the lowest (0), then which point should be chosen? A true median rule would pick whichever extreme (0 or 1) contains the most peaks. On the other hand, the other rules may choose intermediate points (ym) as a compromise. The degree of compromise (which ym) can depend on the degree to which the agents’ opinions are divided (the size of m).

The anonymity requirement is a natural one in situations where agents are to be treated as equals. If one does not require this, however, the class of strategy-proof rules becomes even larger. Alongside the dictatorial rules, each of which always chooses a predetermined agent’s peak, there are less extreme violations of anonymity: the full class of strategy-proof, onto rules allows agents to be treated with varying degrees of asymmetry.


Minimum Support Price (MSP) for Farmers – Ruminations for the Grassroots.


Minimum Support Price (MSP) is a form of insurance given by the Government of India to protect farmers and agricultural workers against any sharp fall in farm prices. MSP is a policy instrument at the disposal of the government and is introduced, based on the recommendations of the Commission for Agricultural Costs and Prices (CACP), generally at the beginning of the sowing season. The major objective of MSP is protecting and supporting farmers during bumper production periods by procuring food grains for public distribution. There are two ways in which an effective MSP can be implemented, viz. procurement of commodities at the announced price, and deficiency payments that compensate farmers for the difference between the MSP and the prices actually received by them.

With the agrarian crisis looming large, policies need to emphasize measures that can bring forth immediate results. These results could be achieved through the components of price and non-price factors. Non-price factors are long-term oriented and rely on market reforms, institutional reforms and innovations in technology in order to bring an upward drift in the growth and income brackets of the farmers. Price factors are short-term oriented and necessitate an immediate upward drift in remunerative prices for farm produce. It is within the ambit of price factors that MSP stands. The government notifies MSP for 23 commodities and FRP (fair and remunerative price) for sugarcane. These crops cover about 84% of the total area under cultivation across all the seasons of a year. About 5% of the area is under fodder crops, which is not amenable to MSP intervention. According to this arithmetic, close to 90% of the total cultivated area is amenable to MSP intervention, leaving only a small segment of producers outside the ambit of price benefits, were the MSP to be fully implemented.

So, how exactly does the CACP determine the Minimum Support Price (MSP)? CACP takes the following factors under consideration while determining the MSP:

  1. Cost of cultivation per hectare and structure of costs across various regions in the country and the changes therein.
  2. Cost of production per quintal across various regions of the country and the changes therein.
  3. Prices of various inputs and the changes therein.
  4. Market prices of products and the changes therein.
  5. Prices of commodities sold by the farmers and of those purchased by them and the changes therein.
  6. Supply-related information like area, yield and production, imports, exports and domestic availability, and stocks with the Government/public agencies or industry.
  7. Demand-related information, which includes the total and per capita consumption, trends and capacity of the processing industry.
  8. Prices in the international markets and the changes therein.
  9. Prices of the derivatives of the farm products such as sugar, jaggery, jute, edible and non-edible oils, cotton yarns and changes therein.
  10. Cost of processing of agricultural products and the changes therein.
  11. Cost of marketing and services, storage, transportation, processing, taxes/fees, and margins retained by market functionaries, and
  12. Macroeconomic variables such as general level of prices, consumer price indices and those reflecting monetary and fiscal factors.

As can be seen, this is an extensive set of parameters that the Commission relies on for calculating the Minimum Support Price (MSP). But then the question is: where does the Commission get access to this data set? The data is generally gathered from agricultural scientists, farmer leaders, social workers, central ministries, the Food Corporation of India (FCI), the National Agricultural Cooperative Marketing Federation of India (NAFED), the Cotton Corporation of India (CCI), the Jute Corporation of India, traders’ organizations and research institutes. The Commission then calculates the MSP and sends it to the Central Government for approval, which in turn sends it to the states for their suggestions. Once the states have given their nods, the Cabinet Committee on Economic Affairs subscribes to these figures, which are then released on the CACP portal.

During the first year of the UPA-1 Government at the centre, in 2004, a National Commission on Farmers (NCF) was formed with M S Swaminathan as its Chairman. One of the major objectives of the Commission was to make farm commodities cost-competitive and profitable. To achieve this task, a three-tiered structure for calculating the farming cost was devised, viz. A2, A2+FL and C2. A2 is the actual paid-out cost, while A2+FL is the actual paid-out cost plus the imputed value of family labour, where imputing is assigning a value to something by inference from the value of the products or processes to which it contributes. C2 is the comprehensive cost, including imputed rent and interest on owned land and capital. It is evident that C2 > A2+FL > A2.

The Commission for Agricultural Costs and Prices (CACP), while recommending prices, takes into account all important factors including costs of production, changes in input prices, input/output parity, trends in market prices, inter-crop price parity, the demand and supply situation, parity between prices paid and prices received by the farmers, etc. In fixing the support prices, CACP relies on the cost concept, which covers all items of expense of cultivation including the imputed value of the inputs owned by the farmers, such as the rental value of owned land and interest on fixed capital. Some of the important cost concepts are C2 and C3:

C3: C2 + 10% of C2 to account for managerial remuneration to the farmer.

The Swaminathan Commission Report categorically states that farmers should get an MSP that is 50% higher than the comprehensive cost of production. This cost + 50% formula came from the Swaminathan Commission, and it had categorically stated that the cost of production is the comprehensive cost of production, which is C2 and not A2+FL. C2 includes all actual expenses in cash and kind incurred in production by the actual owner + rent paid for leased land + imputed value of family labour + interest on the value of owned capital assets (excluding land) + rental value of the owned land (net of land revenue). Costs of production are calculated both on a per-quintal and a per-hectare basis. Since cost variations are large across states, CACP recommends that MSP should be considered on the basis of C2. However, increases in MSP have been so substantial in the case of paddy and wheat that in most of the states, MSPs are way above not only C2, but C3 as well.
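
The arithmetic of the two formulas is simple enough to spell out. A toy sketch with made-up per-quintal cost figures (assumptions, not CACP data):

```python
# Hypothetical per-quintal costs, purely to illustrate the two formulas.
A2 = 800.0                 # actual paid-out costs (assumed)
FL = 200.0                 # imputed value of family labour (assumed)
rent_and_interest = 300.0  # imputed rent on owned land + interest on capital (assumed)

A2_FL = A2 + FL                  # 1000.0
C2 = A2_FL + rent_and_interest   # 1300.0, so C2 > A2+FL > A2
C3 = 1.10 * C2                   # 1430.0: C2 plus 10% managerial remuneration

msp_announced = 1.5 * A2_FL      # cost + 50% computed on A2+FL (government practice)
msp_demanded  = 1.5 * C2         # cost + 50% computed on C2 (Swaminathan reading)
print(msp_demanded - msp_announced)  # 450.0 per quintal: the contested gap
```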


This is where the political economy of MSP stares back at the hapless farmers. Though 23 crops are to be notified on MSP, not more than 3 are actually ensured. The Indian farm sector is also plagued by low-scale production restricted by small-sized holdings, which ensures that the margin over cost within the prevailing system generates at best a low income for the farmers. This is precisely the point of convergence of reasons why the farmers have been demanding effective implementation of MSP, by keeping the MSP 50% higher than the costs incurred. Farmers and farmers’ organizations have demanded that the MSP be increased to cost of production + 50%, since for them, cost of production has meant C2 and not A2+FL. At present, the CACP adds A2 and FL to determine the MSP. The Government then adds 50% of the value obtained by adding A2 and FL only to fix the MSP, thus ignoring C2. What the farmers and farmers’ organizations have been demanding is an addition of 50% to C2 to fix the MSP, which is sadly missing the whole point of Governmental announcements. This difference between what the farmers want and what the government gives is a reason behind so much unrest as regards support prices to the farmers.

Ramesh Chand, who is currently serving in the NITI Aayog, is still a voice of reason over and above what the Government has been implementing by way of sops. Chand has also recommended that the interest on working capital should be given for the whole season against the existing half-season, and the actual rental value prevailing in the village should be considered without a ceiling on the rent. Moreover, post-harvest costs, cleaning, grading, drying, packaging, marketing and transportation should be included. C2 should be hiked by 10% to account for the risk premium and managerial charges.

According to Ramesh Chand of NITI Aayog, there is an urgent need to take into account the market clearance price in recommending the MSP. This would reflect both the demand and supply sides. When the MSP is fixed with reference to demand-side factors, the need for government intervention to implement MSPs would be reduced to situations where the markets are not competitive or where private trade turns exploitative. However, where a deficiency price payment mechanism operates, or where an MSP is declared but the purchase does not materialize, the Government should compensate the farmers for the difference between the MSP and the lower market price. Such a mechanism has been implemented in Madhya Pradesh under the name of Bhavantar Bhugtan Yojana (BBY), where the Government, rather than accepting its poor track record in procuring directly from the farmers, has been compensating the farmers with direct cash transfers when market prices fall below the MSP. The scheme has had its downsides, with long delays in payments and heavy transaction costs. There is also a glut in supply, with the markets getting flooded with low-quality grains, which then depress the already low crop prices. Unless his and M S Swaminathan’s recommendations are taken seriously, the solution to the agrarian crisis is heading towards a capitalist catastrophe. And why does one say that?

In order to negotiate the price deficiency mechanism towards resolution, the Government is left with another option in the form of procurement. But here is a paradox. The Government clearly does not have the bandwidth to first create a system and then manage the procurement of the crops for which the MSP has been announced, which now number 20. If a dead-end is reached here, the likelihood of the Government turning towards private markets cannot be ruled out. And once that turn is taken, these markets would become vulnerable to the whims and fancies of local politicians, who would normally have influencing powers in their functioning, thus taking the system on their discretionary rides.

There obviously are certain questions that demand an answer, and these fall within the ambit of policy making. For instance, is there a provision in the budget to increase the ambit of farmers who are covered by the MSP? Secondly, calculations of MSP involve private costs and benefits, and thus exhibit only one side of the story. For an exhaustive understanding, social costs and benefits must also be incorporated. With a focus primarily on private costs and benefits, socially wasteful production and specialization is encouraged, like paddy production in north India, with attendant consequences to which we have become grim witnesses. Whether this double-bind will ever be overcome is a policy matter, and at the moment what is being witnessed is policy paralysis and a lack of political will, translating only into embanking the vote bank. That’s a pity!

Malicious Machine Learnings? Privacy Preservation and Computational Correctness Across Parties. Note Quote/Didactics.


Multi-Party Computation deals with the following problem: There are n ≥ 2 parties P1, . . ., Pn where party Pi holds input ti, 1 ≤ i ≤ n, and they wish to compute together a function s = f(t1, . . . , tn) on their inputs. The goal is that each party will learn the output of the function, s, yet with the restriction that Pi will not learn any additional information about the inputs of the other parties aside from what can be deduced from the pair (ti, s). Clearly it is the secrecy restriction that adds complexity to the problem, as without it each party could announce its input to all other parties, and each party would locally compute the value of the function. Thus, the goal of Multi-Party Computation is to achieve the following two properties at the same time: correctness of the computation and privacy preservation of the inputs.
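
As a toy illustration of these two properties, consider the sum function f(t1, …, tn) = t1 + … + tn computed via additive secret sharing among honest-but-curious parties. A minimal sketch, not a complete protocol; the modulus and message flow are illustrative assumptions:

```python
import random

q = 2**61 - 1  # public modulus, assumed agreed upon in advance

def share(t, n):
    """Split t into n additive shares, uniform mod q, summing to t mod q."""
    parts = [random.randrange(q) for _ in range(n - 1)]
    parts.append((t - sum(parts)) % q)
    return parts

def mpc_sum(inputs):
    """Correctness: the output equals sum(inputs) mod q.
    Privacy: any n-1 shares of a given t_i are uniformly random, so no
    coalition of n-1 semi-honest parties learns t_i beyond the output."""
    n = len(inputs)
    all_shares = [share(t, n) for t in inputs]          # P_i shares its t_i
    partials = [sum(all_shares[i][j] for i in range(n)) % q
                for j in range(n)]                      # P_j adds what it received
    return sum(partials) % q                            # partial sums are public

print(mpc_sum([10, 20, 12]))  # 42
```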

The following two generalizations are often useful:

(i) Probabilistic functions. Here the value of the function depends on some random string r chosen according to some distribution: s = f (t1, . . . , tn; r). An example of this is the coin-flipping functionality, which takes no inputs, and outputs an unbiased random bit. It is crucial that the value r is not controlled by any of the parties, but is somehow jointly generated during the computation.

(ii) Multioutput functions. It is not mandatory that there be a single output of the function. More generally there could be a unique output for each party, i.e., (s1, . . . , sn) = f(t1,…, tn). In this case, only party Pi learns the output si, and no other party learns any information about the other parties’ input and outputs aside from what can be derived from its own input and output.

One of the most interesting aspects of Multi-Party Computation is to reach the objective of computing the function value, but under the assumption that some of the parties may deviate from the protocol. In cryptography, the parties are usually divided into two types: honest and faulty. An honest party follows the protocol without any deviation. Otherwise, the party is considered to be faulty. The faulty behavior can manifest itself in a wide range of possibilities. The most benign faulty behavior is where the parties follow the protocol, yet try to learn as much as possible about the inputs of the other parties. These parties are called honest-but-curious (or semihonest). At the other end of the spectrum, the parties may deviate from the prescribed protocol in any way that they desire, with the goal of either influencing the computed output value in some way, or of learning as much as possible about the inputs of the other parties. These parties are called malicious.

We envision an adversary A, who controls all the faulty parties and can coordinate their actions. Thus, in a sense we assume that the faulty parties are working together and can exert the most knowledge and influence over the computation out of this collusion. The adversary can corrupt any number of parties out of the n participating parties. Yet, in order to be able to achieve a solution to the problem, in many cases we would need to limit the number of corrupted parties. This limit is called the threshold k, indicating that the protocol remains secure as long as the number of corrupted parties is at most k.

Assume that there exists a trusted party who privately receives the inputs of all the participating parties, calculates the output value s, and then transmits this value to each one of the parties. This process clearly computes the correct output of f, and also does not enable the participating parties to learn any additional information about the inputs of others. We call this model the ideal model. The security of Multi-Party Computation then states that a protocol is secure if its execution satisfies the following: (1) the honest parties compute the same (correct) outputs as they would in the ideal model; and (2) the protocol does not expose more information than a comparable execution with the trusted party, in the ideal model.

Intuitively, the adversary’s interaction with the parties (on a vector of inputs) in the protocol generates a transcript. This transcript is a random variable that includes the outputs of all the honest parties, which is needed to ensure correctness, and the output of the adversary A. The latter output, without loss of generality, includes all the information that the adversary learned, including its inputs, private state, all the messages sent by the honest parties to A, and, depending on the model, may even include more information, such as public messages that the honest parties exchanged. If we show that exactly the same transcript distribution can be generated when interacting with the trusted party in the ideal model, then we are guaranteed that no information is leaked from the computation via the execution of the protocol, as we know that the ideal process does not expose any information about the inputs. More formally,

Let f be a function on n inputs and let π be a protocol that computes the function f. Given an adversary A, which controls some set of parties, we define REALA,π(t) to be the sequence of outputs of honest parties resulting from the execution of π on input vector t under the attack of A, in addition to the output of A. Similarly, given an adversary A′ which controls a set of parties, we define IDEALA′,f(t) to be the sequence of outputs of honest parties computed by the trusted party in the ideal model on input vector t, in addition to the output of A′. We say that π securely computes f if, for every adversary A as above, ∃ an adversary A′, which controls the same parties in the ideal model, such that, on any input vector t, we have that the distribution of REALA,π(t) is “indistinguishable” from the distribution of IDEALA′,f(t).

Intuitively, the task of the ideal adversary A′ is to generate (almost) the same output as A generates in the real execution or the real model. Thus, the attacker A′ is often called the simulator of A. The transcript value generated in the ideal model, IDEALA′,f(t), also includes the outputs of the honest parties (even though we do not give these outputs to A′), which we know were correctly computed by the trusted party. Thus, the real transcript REALA,π(t) should also include correct outputs of the honest parties in the real model.

We assumed that every party Pi has an input ti, which it enters into the computation. However, if Pi is faulty, nothing stops Pi from changing ti into some ti′. Thus, the notion of a “correct” input is defined only for honest parties. However, the “effective” input of a faulty party Pi could be defined as the value ti′ that the simulator A′ gives to the trusted party in the ideal model. Indeed, since the outputs of honest parties look the same in both models, for all effective purposes Pi must have “contributed” the same input ti′ in the real model.

Another possible misbehavior of Pi, even in the ideal model, might be a refusal to give any input at all to the trusted party. This can be handled in a variety of ways, ranging from aborting the entire computation to simply assigning ti some “default value.” For concreteness, we assume that the domain of f includes a special symbol ⊥ indicating this refusal to give the input, so that it is well defined how f should be computed on such missing inputs. What this requires is that in any real protocol we detect when a party does not enter its input and deal with it exactly in the same manner as if the party would input ⊥ in the ideal model.

As regards security, it is implicitly assumed that all honest parties receive the output of the computation. This is achieved by stating that IDEALA′,f(t) includes the outputs of all honest parties. We therefore say that such a protocol guarantees output delivery. A more relaxed property than output delivery is fairness. If fairness is achieved, then this means that if at least one (even faulty) party learns its outputs, then all (honest) parties eventually do too. A bit more formally, we allow the ideal model adversary A′ to instruct the trusted party not to compute any of the outputs. In this case, in the ideal model either all the parties learn the output, or none do. Since A’s transcript is indistinguishable from A′’s, this guarantees that the same fairness property must hold in the real model as well.

A further relaxation of the definition of security is to provide only correctness and privacy. This means that faulty parties can learn their outputs, and prevent the honest parties from learning theirs. Yet, at the same time the protocol will still guarantee that (1) if an honest party receives an output, then this is the correct value, and (2) the privacy of the inputs and outputs of the honest parties is preserved.

The basic security notions are universal and model-independent. However, specific implementations crucially depend on spelling out precisely the model where the computation will be carried out. In particular, the following issues must be specified:

  1. The faulty parties could be honest-but-curious or malicious, and there is usually an upper bound k on the number of parties that the adversary can corrupt.
  2. Distinguishing between the computational setting and the information-theoretic setting: in the latter, the adversary is unlimited in its computing powers. Thus, the term “indistinguishable” is formalized by requiring the two transcript distributions to be either identical (so-called perfect security) or, at least, statistically close in their variation distance (so-called statistical security). On the other hand, in the computational setting, the power of the adversary (as well as that of the honest parties) is restricted. A bit more precisely, the Multi-Party Computation problem is parameterized by the security parameter λ, in which case (a) all the computation and communication shall be done in time polynomial in λ; and (b) the misbehavior strategies of the faulty parties are also restricted to run in time polynomial in λ. Furthermore, the term “indistinguishability” is formalized by computational indistinguishability: two distribution ensembles {Xλ}λ and {Yλ}λ are said to be computationally indistinguishable if, for any polynomial-time distinguisher D, the quantity ε, defined as |Pr[D(Xλ) = 1] − Pr[D(Yλ) = 1]|, is a “negligible” function of λ. This means that for any j > 0 and all sufficiently large λ, ε is smaller than λ^(−j). This modeling helps us to build secure Multi-Party Computation protocols based on plausible computational assumptions, such as the hardness of factoring large integers.
  3. The two common communication assumptions are the existence of a secure channel and the existence of a broadcast channel. Secure channels assume that every pair of parties Pi and Pj is connected via an authenticated, private channel. A broadcast channel is a channel with the following properties: if a party Pi (honest or faulty) broadcasts a message m, then m is correctly received by all the parties (who are also sure the message came from Pi). In particular, if an honest party receives m, then it knows that every other honest party also received m. A different communication assumption is the existence of envelopes. An envelope guarantees the following properties: a value m can be stored inside the envelope, it will be held without exposure for a given period of time, and then the value m will be revealed without modification. A ballot box is an enhancement of the envelope setting that also provides a random shuffling mechanism of the envelopes. These are, of course, idealized assumptions that allow for a clean description of a protocol, as they separate the communication issues from the computational ones. These idealized assumptions may be realized by physical mechanisms, but in some settings such mechanisms may not be available. Then it is important to address the question of whether and under what circumstances we can remove a given communication assumption. For example, we know that the assumption of a secure channel can be substituted with a protocol, but under the introduction of a computational assumption and a public key infrastructure.

Equilibrium Market Prices are Unique – Convexity and Concavity Utility Functions on a Linear Case. Note Quote + Didactics.


Consider a market consisting of a set B of buyers and a set A of divisible goods. Assume |A| = n and |B| = n′. We are given for each buyer i the amount ei of money she possesses and for each good j the amount bj of this good. In addition, we are given the utility functions of the buyers. Our critical assumption is that these functions are linear. Let uij denote the utility derived by i on obtaining a unit amount of good j. Thus if the buyer i is given xij units of good j, for 1 ≤ j ≤ n, then the happiness she derives is

∑j=1n uijxij —— (1)

Prices p1, . . . , pn of the goods are said to be market clearing prices if, after each buyer is assigned an optimal basket of goods relative to these prices, there is no surplus or deficiency of any of the goods. So, is it possible to compute such prices in polynomial time?

First observe that without loss of generality, we may assume that each bj is unit – by scaling the uij’s appropriately. The uij’s and ei’s are in general rational; by scaling appropriately, they may be assumed to be integral. We make the mild assumption that each good has a potential buyer, i.e., a buyer who derives nonzero utility from it. Under this assumption, market clearing prices do exist.

It turns out that equilibrium allocations for Fisher’s linear case are captured as optimal solutions to a remarkable convex program, the Eisenberg–Gale convex program.

A convex program whose optimal solution is an equilibrium allocation must have as constraints the packing constraints on the xij’s. Furthermore, its objective function, which attempts to maximize utilities derived, should satisfy the following:

  1. If the utilities of any buyer are scaled by a constant, the optimal allocation remains unchanged.
  2. If the money of a buyer b is split among two new buyers whose utility functions are the same as that of b, then the sum of the optimal allocations of the new buyers should be an optimal allocation for b.

The money-weighted geometric mean of buyers’ utilities satisfies both these conditions:

max (∏i∈B ui^ei)^(1/∑i ei) —– (2)

Then the following objective function is equivalent:

max ∏i∈B ui^ei —– (3)

Its log is used in the Eisenberg–Gale convex program:

maximize ∑i=1n’ ei log ui

subject to

ui = ∑j=1n uijxij ∀ i ∈ B

i=1n’ xij ≤ 1 ∀ j ∈ A

xij ≥ 0 ∀ i ∈ B, j ∈ A —– (4)

where xij is the amount of good j allocated to buyer i. Interpret Lagrangian variables, say pj’s, corresponding to the second set of conditions as prices of goods. Optimal solutions to xij’s and pj’s must satisfy the following:

    1. ∀ j ∈ A : pj ≥ 0
    2. ∀ j ∈ A : pj > 0 ⇒ ∑i∈B xij = 1
    3. ∀ i ∈ B, j ∈ A : uij/pj ≤ (∑j∈A uijxij)/ei
    4. ∀ i ∈ B, j ∈ A : xij > 0 ⇒ uij/pj = (∑j∈A uijxij)/ei

From these conditions, one can derive that an optimal solution to convex program (4) must satisfy the market clearing conditions.
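
As a numerical sanity check, here is a minimal sketch of program (4) in cvxpy on made-up data (the utility matrix U and money vector e are assumptions); the prices pj are read off as the duals of the unit-supply constraints:

```python
import cvxpy as cp
import numpy as np

U = np.array([[2.0, 1.0],    # buyer 1's per-unit utilities for goods 1, 2
              [1.0, 3.0]])   # buyer 2's per-unit utilities
e = np.array([100.0, 60.0])  # buyers' money endowments
nb, ng = U.shape

x = cp.Variable((nb, ng), nonneg=True)  # x[i, j]: amount of good j given to buyer i
u = cp.sum(cp.multiply(U, x), axis=1)   # u_i = sum_j u_ij x_ij
supply = cp.sum(x, axis=0) <= 1         # unit amount of each good
prob = cp.Problem(cp.Maximize(e @ cp.log(u)), [supply])
prob.solve()

p = supply.dual_value                   # Lagrange multipliers = market prices
print(np.round(x.value, 3))             # ~identity: each buyer gets her best good
print(np.round(p, 3))                   # ~[100, 60]
print(np.round(x.value @ p, 3))         # ~e: each buyer's money is fully spent
```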

For the linear case of Fisher’s model:

  1. If each good has a potential buyer, equilibrium exists.
  2. The set of equilibrium allocations is convex.
  3. Equilibrium utilities and prices are unique.
  4. If all uij’s and ei’s are rational, then equilibrium allocations and prices are also rational. Moreover, they can be written using polynomially many bits in the length of the instance.

Corresponding to good j there is a buyer i such that uij > 0. By the third condition as stated above,

pj ≥ ei uij/∑j uijxij > 0

By the second condition, ∑i∈B xij = 1, implying that prices of all goods are positive and all goods are fully sold. The third and fourth conditions imply that if buyer i gets good j, then j must be among the goods that give buyer i maximum utility per unit of money spent at current prices. Hence each buyer gets only a bundle consisting of her most desired goods, i.e., an optimal bundle.

The fourth condition is equivalent to

∀ i ∈ B, j ∈ A : ei uijxij/∑j∈A uijxij = pjxij

Summing over all j,

∀ i ∈ B : ei ∑j uijxij/∑j∈A uijxij = ∑j pjxij

⇒ ∀ i ∈ B : ei = ∑j pjxij

Hence the money of each buyer is fully spent, completing the proof that a market equilibrium exists. Since each equilibrium allocation is an optimal solution to the Eisenberg-Gale convex program, the set of equilibrium allocations must form a convex set. Since log is a strictly concave function, if there is more than one equilibrium, the utility derived by each buyer must be the same in all equilibria. This fact, together with the fourth condition, gives that the equilibrium prices are unique.

Blue Economy – Sagarmala Financial Engineering: Yet Another Dig. Skeletal Sketch of an Upcoming Talk in Somnath, Gujarat.


Authorized Share Capital (ASC) in the case of Sagarmala happens to be INR 1000 crore, the maximum share capital that Sagarmala Development Company Limited (SDCL) is authorized to issue under its articles of incorporation. This ASC is open, in that the share capital isn’t fully used, and there is ample room for future issuance of additional stock in order to raise capital quickly as and when demand arises. SDCL can increase the authorized capital at any time with shareholders’ approval, paying an additional fee to the Registrar of Companies (RoC).

Capital Budgeting: the business determines and evaluates potential large expenditures/investments. Capital budgeting is generally a long-term venture, and is a process that SDCL would use (and uses) to identify which capital projects would create the biggest returns compared with the funds invested in the project. The system of ranking helps establish a potential return in the future, such that SDCL management can choose where to invest first and most. Let us simply call it the first-and-most principle of budgeting. Blue Economy, which instantiates itself via Sagarmala in India, has options to choose from as regards its capital budgeting, viz.

  1. Throughput analysis – This defines the main motives behind a project, where all the costs are operating costs, and the main emphasis is on maximizing profits passing through a bottleneck. A speculative example for Sagarmala is the marking of the Western Shipping Corridor for container traffic, which poses a livelihood threat to traditional fishermen. Throughput is an alternative to traditional cost accounting, but is neither accounting nor costing, since it is focused on cash flows. It does not allocate fixed costs to products and services sold or provided, and treats direct labour as a fixed expense. Decisions are based on three critical monetary variables: throughput, investment (or inventory) and operating expenses. Mathematically, throughput is defined as revenue minus totally variable expenses, the cost of raw materials or services incurred to produce the products sold or services delivered: T = R – TVE.
  2. Net Present Value (NPV) – this is the value of all future cash flows, positive and negative, over the entire life of an investment, discounted to the present. NPV forms part of an intrinsic valuation, and is employed for valuing a business, an investment security, a capital project, a new venture, a cost-reduction programme and almost anything else involving cash flows.

NPV = z1/(1 + r) + z2/(1 + r)2 – X

      , where z1 is the cash flow at time 1, z2 the cash flow at time 2, r the discount rate, and X the purchase price, or initial investment. NPV takes into account the timing of each cash flow, which can have a large impact on the present value of an investment. It is always better to have cash inflows sooner and cash outflows later. This is one aspect where SDCL might encounter a bottleneck and thereby take recourse to throughput analysis. Importantly, NPV deliberates on revolving funds.

  3. Internal Rate of Return (IRR) – this is the interest rate at which the NPV of all cash flows becomes zero. IRR quantifies the attractiveness of an investment: if the IRR of a new project exceeds the company’s required rate of return, then investment in that project is desirable; otherwise the project stands in need of rejection. IRR escapes analytical derivation, and must be found via mathematical trial and error. Interestingly, business spreadsheets automate these calculations. Mathematically, IRR satisfies:

0 = P0 + P1/(1 + IRR) + P2/(1 + IRR)2 + …. + Pn/(1 + IRR)n

, where P0, P1,…, Pn are the cash flows in periods 0, 1, …, n.

With venture capital and private equity likely in Sagarmala, accompanied by multiple cash investments over the life-cycle of the project, IRR could come in handy for an IPO.

  4. Discounted Cash Flow (DCF) – this calculates the present value of an investment’s future cash flows in order to arrive at a current fair-value estimate for the investment. Mathematically,

DCF =  CF1/(1 + r) + CF2/(1 + r)2 + CF3/(1 + r)3 + … + CFn/(1 + r)n

, where CF1, …, CFn are the cash flows in the respective n periods, and r is the discount rate of return.

DCF accounts for the fact that money received today can be invested today, while money we have to wait for cannot. DCF accounts for the time value of money and provides an estimate of what we should spend today to have an investment worth a certain amount of money at a specific point in the future.

  5. Payback period – mathematically, this is defined as:

Payback Period = Investment required/Annual Project Cash flow

The payback period falls in the year (plus a number of months) in which the cumulative cash flow turns positive. Though seemingly useful, the payback period does not consider the time value of money, and is quite inept at handling projects with uneven cash flows.
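
A compact sketch of three of these metrics on hypothetical cash flows (the helper functions and the numbers are illustrative, not SDCL data):

```python
def npv(rate, cashflows):
    """NPV of cashflows[0..n], where cashflows[0] is the (negative) outlay at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """The rate at which NPV = 0, by bisection (assumes one sign change on [lo, hi])."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def payback_period(cashflows):
    """First period in which the cumulative (undiscounted) cash flow turns positive."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # the outlay is never recovered

flows = [-100.0, 30.0, 40.0, 50.0, 20.0]  # initial outlay followed by inflows
print(round(npv(0.10, flows), 2))   # 11.56
print(round(irr(flows), 4))         # ~0.1532
print(payback_period(flows))        # 3
```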

As a recap:

Sagarmala is a 3-tier SPV structure


Private Players/PPPs OR EPCs/Turnkey – the latter are used for projects with high social impact or low IRR. 

Expenses incurred for project development will be treated as part of the equity contribution by SDCL; in case SDCL does not hold any equity, or the expenses incurred exceed the stake of SDCL, the SPV will reimburse SDCL. Divestment possibilities cannot be ruled out in order to recoup capital for future projects.

Conjuncted: Integer Pivoting as a Polynomial-Time Algorithm


The Lemke-Howson algorithm follows the edges of a polyhedron, which is implemented algebraically by pivoting, as used by the simplex algorithm for solving a linear program. Let us see if there is an efficient implementation that avoids numerical errors by storing integers of arbitrary precision. The constraints defining the polyhedron are thereby represented as linear equations with nonnegative slack variables. For the polytopes P and Q in

P = {x ∈ RM| x ≥ 0, Bx ≤ 1},

Q = {y ∈ RN |Ay ≤ 1, y ≥ 0}

these slack variables are nonnegative vectors s ∈ RN and r ∈ RM so that x ∈ P and y ∈ Q iff

Bx + s = 1, r + Ay = 1 —– (1)

and

x ≥ 0, s ≥ 0, r ≥ 0, y ≥ 0 —— (2)

A binding inequality corresponds to a zero slack variable. The pair (x, y) is completely labeled iff xiri = 0 ∀ i ∈ M and yjsj = 0 ∀ j ∈ N, which by (2) can be written as the orthogonality condition

x⊤r = 0, y⊤s = 0

A basic solution to (1) is given by n basic (linearly independent) columns of Bx + s = 1 and m basic columns of r + Ay = 1, where the nonbasic variables that correspond to the m respectively n other (nonbasic) columns are set to zero, so that the basic variables are uniquely determined. A basic feasible solution also fulfills (2), and defines a vertex x of P and y of Q. The labels of such a vertex are given by the respective nonbasic columns.

Pivoting is a change of the basis where a nonbasic variable enters and a basic variable leaves the set of basic variables, while preserving feasibility (2).

Integer pivoting always maintains an integer matrix (or “tableau”) of coefficients of a system of linear equations that is equivalent to the original system Bx + s = 1, in the form

CBx + Cs = C1 —– (3)

In (3), C is the inverse of the basis matrix given by the basic columns of the original system, multiplied by the determinant of the basis matrix. The matrix C is given by the (integer) cofactors of the basis matrix; the cofactor of a matrix entry is the determinant of the matrix when the row and column of that element are deleted. When each entry has a bounded number of digits (by at most a factor of n log n compared to the original matrix entries), then integer pivoting is a polynomial-time algorithm. It is also superior to using fractions of integers or rational numbers because their cancelation requires greatest common divisor computations that take the bulk of computation time.
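
A minimal sketch of one such integer-pivoting step (the Bareiss-style update implied above), on a toy tableau for Bx + s = 1; an illustration of the integer bookkeeping, not a full Lemke-Howson implementation:

```python
def integer_pivot(T, row, col, prev_pivot=1):
    """Pivot the integer tableau T on entry T[row][col].

    Each other row i becomes (T[row][col] * T[i] - T[i][col] * T[row]) / prev_pivot;
    the division is exact by the Bareiss property, so all entries stay integers.
    Returns the pivot element, which serves as prev_pivot for the next step.
    """
    piv = T[row][col]
    for i in range(len(T)):
        if i != row:
            T[i] = [(piv * T[i][j] - T[i][col] * T[row][j]) // prev_pivot
                    for j in range(len(T[i]))]
    return piv

# Tableau for Bx + s = 1 with B = [[2, 1], [1, 3]]; columns: x1, x2, s1, s2 | 1
T = [[2, 1, 1, 0, 1],
     [1, 3, 0, 1, 1]]
p = integer_pivot(T, 0, 0)      # x1 enters the basis, s1 leaves
p = integer_pivot(T, 1, 1, p)   # x2 enters the basis, s2 leaves
print(T)  # [[5, 0, 3, -1, 2], [0, 5, -1, 2, 1]] -> x1 = 2/5, x2 = 1/5
```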