Read sections 10.1 and 10.2 of the text and the following example:
First, let us look at an example.
Alice lives in a house with four other housemates. They are thinking about paying someone to paint the exterior of their house and have decided to hold a vote where everyone will vote either Yes, if they want the house painted, or No, if they don’t. The votes will be public and the set of people who vote for painting will share equally in the cost of the painters, as long as two or more people vote Yes. The people who vote against painting will pay nothing. Also, since the paint covers the whole outside of the house, everyone will be able to enjoy the new, cleaner-looking house. Each person knows whether or not they want the house to be painted. Their desires are shown in the following table:
| i | θi | ui(Paint, θi) | ui(NoPaint, θi) |
|---|----|---------------|-----------------|
| Alice | WantPaint | 10 | 0 |
| Bob | DontWantPaint | 0 | 0 |
| Caroline | WantPaint | 10 | 0 |
| Donald | WantPaint | 10 | 0 |
| Emily | WantPaint | 10 | 0 |
Alice wants the house painted, but let us assume that she does not want to pay for it. She realizes that if only two people vote yes, the house will be painted. As such, she has an incentive to vote against painting—that is, lie about her true preferences—in the hope that some other two agents will vote for it, and the house will get painted anyway. This means that Alice’s strategy will be to try to determine what the others are likely to vote to see if there are enough Yes votes so that she can safely lie. Unfortunately, all that scheming is very inefficient from a system’s perspective. It would be much easier if everyone wanted to tell the truth.
According to Definition 10.2.1 (Bayesian game setting), this example's setting consists of:
N = {Alice, Bob, Caroline, Donald, Emily};
Θi = {WantPaint, DontWantPaint} (i = 1, 2, 3, 4, 5), since there are only two types of agents: those that want the house painted and those that think it does not need paint. Each agent i has a type θi ∈ Θi, which is private. That is, only the agent knows its type; no one else does. Θ = Θ1 × ⋯ × Θn.
O = {Paint, NoPaint};
The agents’ utilities over the outcomes: ui(Paint, θi) and ui(NoPaint, θi) (i = 1, 2, 3, 4, 5), as shown in the table above.
We assume that we want to maximize social welfare. That is, our social choice function (SCF) is
f(θ) = argmax_{o ∈ O} Σ_{i ∈ N} ui(o, θi).
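Maximizing social welfare can be computed directly from the utility table. The following is a minimal Python sketch, where the dictionary simply encodes the table above:

```python
# Utilities from the table: each agent's value for each outcome.
utilities = {
    "Alice":    {"Paint": 10, "NoPaint": 0},
    "Bob":      {"Paint": 0,  "NoPaint": 0},
    "Caroline": {"Paint": 10, "NoPaint": 0},
    "Donald":   {"Paint": 10, "NoPaint": 0},
    "Emily":    {"Paint": 10, "NoPaint": 0},
}

def social_choice(utils, outcomes=("Paint", "NoPaint")):
    """Return the outcome that maximizes the sum of agents' utilities."""
    return max(outcomes, key=lambda o: sum(u[o] for u in utils.values()))

print(social_choice(utilities))  # Paint (total utility 40 vs. 0)
```

Note that this maximization ignores the painting cost itself; how the cost is shared is the job of the payment scheme discussed next.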
We assume that the cost of painting the house is $20. The problem is how to design a protocol that will decide whether or not to paint the house and how to pay for it.
Solution 1: Have all the agents vote Yes or No. Then count the votes; if a majority voted for painting the house, the house will be painted and the cost ($20) will be divided evenly among the five agents. That is, each agent must pay $4 no matter how they voted. This scheme works fine for every agent except Bob, who did not want the house painted and must now pay $4. We are imposing a tax on Bob for something he does not want. This might not be a desirable solution.
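Solution 1 can be sketched in a few lines; the vote profile below assumes everyone votes truthfully:

```python
# Solution 1: majority vote; if Paint wins, split the $20 cost evenly
# among all five agents, regardless of how each one voted.
COST = 20
votes = {"Alice": "Yes", "Bob": "No", "Caroline": "Yes",
         "Donald": "Yes", "Emily": "Yes"}

yes_voters = [a for a, v in votes.items() if v == "Yes"]
paint = len(yes_voters) > len(votes) / 2           # majority rule
payments = {a: COST / len(votes) if paint else 0 for a in votes}

print(paint)            # True (4 of 5 vote Yes)
print(payments["Bob"])  # 4.0 -- Bob pays even though he voted No
```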
Solution 2: Let all agents vote Yes or No, and then split the cost of painting the house among those who voted Yes. The downside is that this gives every agent except Bob an incentive to lie, in the hope that enough others will vote Yes and spare them from having to pay.
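To see the incentive concretely, the hypothetical helper below compares Alice's net utility under Solution 2 when she votes Yes versus No, assuming the other three agents who want paint all vote Yes (her value of 10 and the $20 cost come from the example above):

```python
# Solution 2: the house is painted if two or more vote Yes, and the
# $20 cost is split among the Yes-voters only.
COST = 20
VALUE = 10  # Alice's value for a painted house

def alice_utility(alice_vote, other_yes_votes):
    """Alice's net utility given her vote and the number of other Yes votes."""
    yes = other_yes_votes + (1 if alice_vote == "Yes" else 0)
    if yes < 2:
        return 0  # house not painted, nobody pays
    share = COST / yes if alice_vote == "Yes" else 0
    return VALUE - share

print(alice_utility("Yes", 3))  # 5.0  (she pays 20/4 = 5)
print(alice_utility("No", 3))   # 10.0 (she free-rides on the others)
```

Since voting No yields 10 rather than 5 whenever enough others vote Yes, lying can strictly improve Alice's outcome, which is exactly the scheming the text describes.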
The revelation principle tells us that if there exists a complicated mechanism that implements a given social choice function, there is also a much simpler mechanism that just asks the agents to reveal their types.
Exercise 3 below shows how to check that a mechanism implementing a particular social choice function can be truthfully implemented in dominant strategies. However, we did not show how we came up with the mechanism itself, or how we decided to use Vickrey payments. The question is whether there is a general formula for calculating the agents' payments, no matter what social choice function is given to us. Unfortunately, such a formula does not appear to exist.
Do the following exercises:
Imagine that you want to sell an item. There are several prospective buyer agents. You want to sell it to the agent who values it the most, but you cannot trust them to tell you the truth. The problem can be formally modelled as follows:
Question: how can we design a mechanism g that implements this social choice function such that, under g, telling the truth is a dominant strategy for every agent? (Hint: consider the Vickrey auction.)
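As the hint suggests, in a sealed-bid second-price (Vickrey) auction the highest bidder wins but pays the second-highest bid, which is what makes truthful bidding a dominant strategy. A minimal sketch, with illustrative agent names and bids:

```python
def vickrey_auction(bids):
    """bids: dict mapping agent name to bid. Return (winner, price)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]]  # winner pays the second-highest bid
    return winner, price

bids = {"a1": 7, "a2": 12, "a3": 9}
print(vickrey_auction(bids))  # ('a2', 9)
```

Because the price a2 pays does not depend on a2's own bid, shading the bid can only risk losing an item worth more than the price; this is the property the exercise asks you to establish.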
Updated June 05 2018 by FST Course Production Staff