*[Figure: If a strategy-proof, onto rule does not pick x∗ when it is the median of the peaks and the y_{m}'s, then a contradiction is reached using preferences with peaks at p_{i}^{L} and p_{i}^{H}.]*

Let us take a look at single-peaked preferences over one-dimensional policy spaces. This domain can be used to model political policies, economic decisions, location problems, or any allocation problem where a single point must be chosen in an interval. The key assumption is that each agent's preferences have a single most-preferred point in the interval, and that preferences are “decreasing” as one moves away from that peak.

Formally, the allocation space (or policy space) is the unit interval A = [0, 1]. An outcome is a single point x ∈ A. Each agent i ∈ N has a preference ordering ≽_{i}, which is a weak order over the outcomes in [0, 1]. The preference relation ≽_{i} is single-peaked if ∃ a point p_{i} ∈ A (the peak of ≽_{i}) such that ∀ x ∈ A\{p_{i}} and all λ ∈ [0,1), (λx +(1−λ)p_{i}) ≻_{i} x. Let R denote the class of single-peaked preferences.
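As a minimal sketch (not part of the text), one can represent a single-peaked preference by a utility function and check the defining λ-condition on a finite grid; the symmetric (distance-based) utility used here is just one example of a single-peaked preference:

```python
# A minimal sketch: represent a single-peaked preference by a utility
# function and check, on a finite grid, that moving any point partway
# toward the peak strictly improves utility (the lambda-condition).

def symmetric_utility(peak):
    """One example of a single-peaked preference: utility strictly
    decreases with distance from the peak."""
    return lambda x: -abs(x - peak)

def is_single_peaked_on_grid(u, peak, grid):
    """Finite proxy for the definition: for each grid point x != peak,
    the point halfway toward the peak (lambda = 0.5) is strictly better."""
    for x in grid:
        if x == peak:
            continue
        closer = 0.5 * x + 0.5 * peak
        if not u(closer) > u(x):
            return False
    return True

grid = [i / 10 for i in range(11)]
u = symmetric_utility(0.3)
print(is_single_peaked_on_grid(u, 0.3, grid))  # True
```

Of course, single-peakedness does not require symmetry: any utility that strictly falls as one moves away from the peak on either side qualifies.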

We denote the peaks of preference relations ≽_{i}, ≽′_{i}, ≽_{j}, etc., respectively by p_{i}, p_{i}′, p_{j}, etc. Denote a profile (n-tuple) of preferences as ≽ ∈ R^{n}.

One can imagine this model as representing a political decision such as an income tax rate, another political issue with conservative/liberal extremes, the location of a public facility on a road, or even something as simple as a group of people deciding on the temperature setting for a shared office. Here, the agents have an ideal preferred policy in mind, and would prefer that a decision be made as close as possible to this “peak.”

A rule f : R^{n} → A assigns an outcome f(≽) to any preference profile ≽. A rule is strategy-proof if it is a dominant strategy for each agent to report his preferences truthfully when the rule is being used to choose a point.

Our purpose then is to see that this class of problems admits a rich family of strategy-proof rules whose ranges include more than two alternatives. In fact, the family of such rules remains rich even when one restricts attention to rules that satisfy the following condition.

We say that a rule f is onto if ∀ x ∈ A ∃ ≽ ∈ R^{n} such that f(≽) = x. An onto rule cannot preclude an outcome from being chosen *ex ante*. It is not without loss of generality to impose this condition. For instance, fix two points x, y ∈ [0, 1] and consider a rule that chooses whichever of the two points is preferred to the other by a majority of agents (and where x is chosen in case of a tie). Such a rule is strategy-proof, but not onto. Similar strategy-proof rules can even break ties between x and y by using preference information about other points x′, y′, . . ., in [0, 1], even though x′, etc., are not in the range of the rule.
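The two-point majority rule just described can be sketched as follows (a hypothetical illustration: agents are summarized by their peaks, and the comparison between x and y is made by distance, which is only one single-peaked possibility; the values of x and y are arbitrary choices):

```python
# Hypothetical sketch of the rule above: fix two points x and y and choose
# whichever a majority prefers, with x chosen on ties. Agents are
# summarized by their peaks, assuming symmetric (distance) preferences.

def two_point_majority(peaks, x=0.25, y=0.75):
    prefers_x = sum(1 for p in peaks if abs(p - x) < abs(p - y))
    prefers_y = sum(1 for p in peaks if abs(p - y) < abs(p - x))
    return x if prefers_x >= prefers_y else y   # tie broken in favor of x

print(two_point_majority([0.1, 0.2, 0.9]))  # 0.25: two of three peaks are nearer to x
```

The range of this rule is {x, y}, so it is strategy-proof without being onto, as the text notes.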

The onto condition is even weaker than what is called unanimity, which requires that whenever all agents’ preferences have the same peak (p_{i} = p_{j} ∀ i, j), the rule must choose that common peak as the outcome. In turn, unanimity is weaker than Pareto-optimality: ∀ ≽ ∈ R^{n}, ∃ no point x ∈ [0, 1] such that x ≻_{i} f(≽) ∀ i ∈ N.

As it turns out, these three requirements are all equivalent among strategy-proof rules. Suppose f is strategy-proof. Then f is onto iff it is unanimous iff it is Pareto-optimal.

It is clear that Pareto-optimality implies the other two conditions. Suppose f is strategy-proof and onto. Fix x ∈ [0, 1] and let ≽ ∈ R^{n} be such that f(≽) = x. Consider any “unanimous” profile ≽′ ∈ R^{n} such that p_{i}′ = x for each i ∈ N. By strategy-proofness, f (≽′_{1}, ≽_{2}, . . . , ≽_{n}) = x, otherwise agent 1 could manipulate f. Repeating this argument, f (≽′_{1}, ≽′_{2}, ≽_{3}, . . . , ≽_{n}) = x, . . . ,f(≽′) = x. That is, f is unanimous.

In order to derive a contradiction, suppose that f is not Pareto-optimal at some profile ≽ ∈ R^{n}. This implies that either (i) f(≽) < p_{i} ∀ i ∈ N or (ii) f(≽) > p_{i} ∀ i ∈ N . Without loss of generality, assume (i) holds. Furthermore, assume that the agents are labeled so that p_{1} ≤ p_{2} ≤ ··· ≤ p_{n}.

If p_{1} = p_{n} then unanimity is violated, completing the proof. Otherwise, let j ∈ N be such that p_{1} = p_{j} < p_{j+1}; that is, j < n agents have the minimum peak. ∀ i > j, let ≽′_{i} be a preference relation such that both p_{i}′ = p_{1} and f(≽)≽′_{i} p_{i}.

Let x_{n} = f(≽_{1},…, ≽_{n−1}, ≽′_{n}). By strategy-proofness, x_{n} ∈ [f(≽), p_{n}], otherwise agent n (with preference ≽′_{n}) could manipulate f by reporting preference ≽_{n}. Similarly, x_{n} ∉ (f(≽), p_{n}], otherwise agent n (with preference ≽_{n}) could manipulate f by reporting preference ≽′_{n}. Therefore x_{n} = f(≽).

Repeating this argument as each i > j replaces ≽_{i} with ≽′_{i}, we have f(≽_{1},…, ≽_{j}, ≽′_{j+1},…, ≽′_{n}) = f(≽) < p_{1}. But at this profile every agent’s peak is p_{1}, so unanimity requires the outcome p_{1}. Since a strategy-proof, onto rule must be unanimous, this is a contradiction.

The central strategy-proof rule on this domain is the simple median-voter rule. Suppose that the number of agents n is odd. Then the rule that picks the median of the agents’ peaks (p_{i}'s) is a strategy-proof rule.

It is easy to see why this rule is strategy-proof: if an agent’s peak p_{i} lies below the median peak, then he can change the median only by reporting a preference relation whose peak lies above the true median. The effect of this misreport is for the rule to choose a point even further away from p_{i}, making the agent worse off. A symmetric argument handles the case in which the peak is above the median. Finally, an agent cannot profitably misreport his preferences if his peak is the median one to begin with.

More generally, for any number of agents n and any positive integer k ≤ n, the rule that picks the k^{th} highest peak is strategy-proof for precisely the same reasons. An agent can only move the k^{th} peak further from his own. The median happens to be the case where k = (n + 1)/2.
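A sketch of the k^{th}-statistic rule, together with a brute-force check (over a coarse grid of possible misreports, assuming symmetric single-peaked preferences) that no agent gains by misreporting a peak:

```python
# Sketch of the k-th statistic rule, plus a brute-force manipulability
# check over a coarse grid, assuming symmetric single-peaked preferences
# (so an agent prefers outcomes closer to his true peak).

def kth_statistic(peaks, k):
    """Pick the k-th highest peak (k = 1 is the maximum)."""
    return sorted(peaks, reverse=True)[k - 1]

def manipulable(rule, peaks, grid):
    """Does some agent have a grid report moving the outcome
    strictly closer to his true peak?"""
    for i, true_peak in enumerate(peaks):
        honest = rule(peaks)
        for report in grid:
            misreport = peaks[:i] + [report] + peaks[i + 1:]
            if abs(rule(misreport) - true_peak) < abs(honest - true_peak):
                return True
    return False

grid = [i / 10 for i in range(11)]
peaks = [0.2, 0.5, 0.9]
median = lambda ps: kth_statistic(ps, 2)   # n = 3, so k = (n + 1)/2 = 2
print(median(peaks))                       # 0.5
print(manipulable(median, peaks, grid))    # False
```

The check mirrors the argument in the text: an agent below (above) the k^{th} peak can only push it up (down), i.e., further away from his own peak.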

The strategy-proofness of such rules stands in contrast to the incentives properties of rules that choose average-type statistics. Consider the rule that chooses the average of the n agents’ peaks. Any agent with peak p_{i} ∈ (0, 1) that is not equal to the average can manipulate the rule by reporting preferences with a more extreme peak (closer to 0 or 1) than his true peak.
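The manipulation described above is easy to exhibit numerically (a sketch with hypothetical peak values, again summarizing agents by their peaks under symmetric preferences):

```python
# Sketch of why the average-of-peaks rule is manipulable: an agent whose
# peak differs from the average gains by reporting a more extreme peak.

def mean_rule(peaks):
    return sum(peaks) / len(peaks)

true_peaks = [0.2, 0.5, 0.8]          # honest average is 0.5
honest = mean_rule(true_peaks)

# Agent 1 (true peak 0.2) exaggerates downward to 0.0:
manipulated = mean_rule([0.0, 0.5, 0.8])

# The outcome moves strictly closer to agent 1's true peak:
print(abs(manipulated - 0.2) < abs(honest - 0.2))  # True
```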

This would also hold for any weighted average of the agents’ peaks, with one exception. If a rule allocated all of the weight to one agent, then the resulting rule simply picks that agent’s peak always. Such a dictatorial rule is strategy-proof and onto.

In addition to favorable incentives properties, rules based on order statistics have the property that they require little information to be computed. Technically, a rule requires agents to report an entire preference ordering over [0, 1]. The rules above, by contrast, only require agents to report their most preferred point, i.e., a single number. In fact, under the onto assumption, this informational property is a consequence of the strategy-proofness requirement; that is, all strategy-proof and onto rules have the property that they can be computed solely from information about the agents’ peaks.

Let us generalize the class of “k^{th}-statistic rules”. Consider a fixed set of points y_{1}, y_{2}, . . . , y_{n−1} ∈ A. Consider the rule that, for any profile of preferences ≽, chooses the median of the 2n − 1 points consisting of the n agents’ peaks and the n − 1 fixed points y. This rule differs from the k^{th}-statistic rules in that, for some choices of y and some profiles of preferences, it may choose a point that is not the peak of any agent’s preferences. Yet such a rule is strategy-proof.
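The generalized rule can be sketched as follows (the particular peak and y values are arbitrary illustrations; the example is chosen so that the outcome is one of the fixed points rather than any agent’s peak):

```python
# Sketch of the generalized median rule: the median of the n agents'
# peaks together with n - 1 fixed "phantom" points y.

def generalized_median(peaks, y):
    assert len(y) == len(peaks) - 1
    points = sorted(list(peaks) + list(y))
    return points[len(points) // 2]    # median of 2n - 1 points

peaks = [0.1, 0.2, 0.8]                # hypothetical agent peaks
y = [0.4, 0.9]                         # hypothetical fixed points
print(generalized_median(peaks, y))    # 0.4: y_1, not any agent's peak
```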

Such rules make up the entire class of strategy-proof and onto rules that treat agents symmetrically. To formalize this latter requirement, we call a rule anonymous if for any ≽ ∈ R^{n} and any permutation ≽′ of ≽, f(≽′) = f(≽). This requirement captures the idea that the agents’ names play no role in the behavior of a rule. Dictatorial rules are examples of rules that are strategy-proof and onto, but not anonymous.

A rule f is strategy-proof, onto, and anonymous iff ∃ y_{1}, y_{2},…, y_{n−1} ∈ [0,1] such that ∀ ≽ ∈ R^{n},

f(≽) = med{p_{1}, p_{2},…, p_{n}, y_{1}, y_{2},…, y_{n−1}} —– (1)

Suppose f is strategy-proof, onto, and anonymous. We make extensive use of the two (extreme) preference relations that have peaks at 0 and 1 respectively. Since preference relations are ordinal, there is only one preference relation with a peak at 0 and only one with a peak at 1. Denote these two preference relations by ≽^{0}_{i} and ≽^{1}_{i} respectively.

For any 1 ≤ m ≤ n − 1, let y_{m} denote the outcome of f when m agents have preference relation ≽^{1}_{i} and the remainder have ≽^{0}_{i}:

y_{m} = f(≽^{0}_{1},…, ≽^{0}_{n−m}, ≽^{1}_{n−m+1},…, ≽^{1}_{n})

By anonymity, the order of the arguments of f is irrelevant; if precisely m agents have preference relation ≽^{1}_{i} and the rest have ≽^{0}_{i} then the outcome is y_{m}.
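As a consistency check (a sketch, not part of the proof), one can verify that if f is itself a generalized median rule with sorted phantom points y, then applying it to a profile with m peaks at 1 and n − m peaks at 0 does return y_{m}:

```python
# Sketch checking the identity above: a generalized median rule applied
# to a profile with m peaks at 1 and n - m peaks at 0 returns the
# phantom value y_m (assuming the y's are sorted).

def generalized_median(peaks, y):
    points = sorted(list(peaks) + list(y))
    return points[len(points) // 2]    # median of 2n - 1 points

n = 4
y = [0.2, 0.5, 0.9]                    # hypothetical phantoms, y_1 <= y_2 <= y_3
for m in range(1, n):
    peaks = [0.0] * (n - m) + [1.0] * m
    assert generalized_median(peaks, y) == y[m - 1]
print("y_m recovered for every m")
```

Intuitively, the m ones push the median up past exactly m − 1 of the phantoms, landing it on y_{m}.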

Fix a profile of preferences ≽ ∈ R^{n} with peaks p_{1}, . . ., p_{n}. Without loss of generality (by anonymity), assume that p_{i} ≤ p_{i+1} for each i ≤ n−1. We must show that

f(≽) = x∗ ≡ med{p_{1},…, p_{n}, y_{1},…, y_{n−1}}.

Suppose first that the median is one of the fixed points: x∗ = y_{m} for some m. Since the peaks and the y_{m}'s are each arranged in increasing order and x∗ is the median of the 2n−1 points, this implies p_{n−m} ≤ x∗ = y_{m} ≤ p_{n−m+1}. By the definition of y_{m},

x∗ = y_{m} = f(≽^{0}_{1},…, ≽^{0}_{n−m}, ≽^{1}_{n−m+1},…, ≽^{1}_{n}) —– (2)

Let x_{1} = f(≽_{1}, ≽^{0}_{2},…, ≽^{0}_{n−m}, ≽^{1}_{n−m+1},…, ≽^{1}_{n}). Strategy-proofness implies x_{1} ≥ x∗, otherwise agent 1 with preference ≽^{0}_{1} could manipulate f. Similarly, since p_{1} ≤ y_{m}, we cannot have x_{1} > x∗, otherwise agent 1 with preference ≽_{1} could manipulate f. Hence x_{1} = x∗. Repeating this argument for all i ≤ n − m, x∗ = f(≽_{1},…,≽_{n−m}, ≽^{1}_{n−m+1},…, ≽^{1}_{n}). The symmetric argument for all i > n−m implies

f(≽_{1},…, ≽_{n}) = x∗ —– (3)

The remaining case is that the median is an agent’s peak: y_{m} < x∗ < y_{m+1} for some m. (The cases where x∗ < y_{1} and x∗ > y_{n−1} are similar, with the convention y_{0} = 0 and y_{n} = 1.) In this case, since the agents’ peaks are in increasing order, we have x∗ = p_{n−m}. If

f(≽^{0}_{1},…, ≽^{0}_{n−m−1}, ≽_{n−m}, ≽^{1}_{n−m+1},…, ≽^{1}_{n}) = x∗ = p_{n−m} —– (4)