Dominantly Truthful Multi-task Peer Prediction with a Constant Number of Tasks
In settings where participants are asked multiple similar, possibly subjective, multiple-choice questions (e.g., Do you like Panda Express? Y/N; Do you like Chick-fil-A? Y/N), a series of peer prediction mechanisms have been designed to incentivize honest reports, and some of them achieve dominant truthfulness: truth-telling is a dominant strategy and strictly dominates every other "non-permutation strategy" under some mild conditions. However, a major issue hinders the practical use of those mechanisms: they require the participants to perform an infinite number of tasks. When the participants perform only a finite number of tasks, these mechanisms achieve only approximate dominant truthfulness. Whether there exists a dominantly truthful multi-task peer prediction mechanism that requires only a finite number of tasks has remained an open question, one that may well have a negative answer, even with full prior knowledge. This paper answers this open question by proposing a new mechanism, the Determinant based Mutual Information Mechanism (DMI-Mechanism), which is dominantly truthful when the number of tasks is at least 2C and the number of participants is at least 2, where C is the number of choices for each question (C=2 for binary-choice questions). In addition to incentivizing honest reports, DMI-Mechanism can also be converted into an information evaluation rule that identifies high-quality information without verification when there are at least 3 participants. To the best of our knowledge, DMI-Mechanism is the first dominantly truthful mechanism that works for a finite number of tasks, let alone a small constant number of tasks.
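As a hedged illustration only (the abstract does not spell out the payment rule, so the details below are assumptions rather than the paper's exact specification), the sketch shows one way a determinant-based payment between a pair of participants could be computed: their reports on at least 2C shared tasks are split into two disjoint batches, a C×C joint report-count matrix is formed on each batch, and the payment is the product of the two determinants. The function name `dmi_payment`, the even split, and the absence of normalization are illustrative choices.

```python
import numpy as np

def dmi_payment(reports_i, reports_j, C):
    """Illustrative determinant-based payment for a pair of participants.

    reports_i, reports_j: sequences of length >= 2*C with answers in
    {0, ..., C-1}, giving the two participants' reports on the same tasks.
    The tasks are split into two disjoint batches; on each batch we build
    the C x C joint count matrix of the pair's reports, and the payment is
    the product of the two determinants.
    """
    reports_i = np.asarray(reports_i)
    reports_j = np.asarray(reports_j)
    T = len(reports_i)
    assert T >= 2 * C and len(reports_j) == T

    half = T // 2

    def joint_counts(task_indices):
        M = np.zeros((C, C))
        for t in task_indices:
            M[reports_i[t], reports_j[t]] += 1
        return M

    M1 = joint_counts(range(half))
    M2 = joint_counts(range(half, T))
    # The determinant of a joint count matrix serves as an (unnormalized)
    # proxy for the determinant-based mutual information between the two
    # report sources.
    return np.linalg.det(M1) * np.linalg.det(M2)

# Example: two participants answering 4 binary-choice questions (C = 2).
print(dmi_payment([0, 1, 1, 0], [0, 1, 0, 0], C=2))
```

The intuition behind using two disjoint batches is to obtain two independent estimates whose product has the desired expectation; a single batch would introduce bias from reusing the same samples.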