After the noisy 2020 election season in the United States, journalists wrote extensively about the inaccuracy of preelection polls. They weren’t the only ones. According to a report by the American Association for Public Opinion Research entitled 2020 Pre-Election Polling: An Evaluation of the 2020 General Election Polls, 2020 polls were off by the largest magnitude in decades at both the federal and state levels. For example, a CNN poll predicted that Joe Biden would lead Donald Trump by 12 percentage points. Biden won by 4.5 points.
The writers of the report suggested some possible reasons why polls were so off that year. Among them:
- Trump decried many polls as fake, likely discouraging his supporters from responding. Pollsters might not have weighted pro-Trump responses heavily enough relative to pro-Biden responses, giving Biden a higher apparent lead.
- It might be that Democrats who responded to polls were more favorable to Biden than those who didn’t. Similarly, Republicans who responded might have been less favorable to Trump.
The report noted that polls not only overestimated Biden’s support; they also overestimated Democratic support in Senate races. This could point to a “shy Republican” phenomenon, which might have two causes. It could be that some Republicans didn’t trust the polling institutions, as Nathaniel Rakich of FiveThirtyEight told me. This lack of trust could be partly rectified by having polling organizations come from across the political spectrum. But it also could be that the people participating didn’t want to state their true preferences to a stranger, even about an overall Democratic/Republican choice.
With the 2022 election year underway, one way to counteract the “shyness” problem is for pollsters to give the people they survey plausible deniability. That is, a person should be able to respond to a question honestly, while preventing the pollster from knowing whether the answer is that person’s actual opinion. This could be especially useful in countries with autocratic leaders who might wreak vengeance on dissenters.
The question is how to achieve deniability without sacrificing accuracy. Surprisingly, flipping a coin once or twice can help. Here I’m borrowing ideas from Stanley Warner’s 1965 work on randomized response and from differential privacy. Such methods have been used for sensitive surveys.
Here’s an example.
A fictitious country has an upcoming election in which the Tin Man is running against the Scarecrow. Shy voters do not want to admit to supporting the Scarecrow even though many secretly like him.
So the pollsters say to each participant: “Please don’t respond right away. Instead, move to a place where you’re all alone and flip a coin. If it’s heads, then come back and tell me your true preference. If it’s tails, then tell me ‘Scarecrow’ regardless of your true preference.” This was basically the scheme Warner came up with for surveys that might ask embarrassing questions, such as whether a respondent had evaded taxes or slept with a prostitute.
Let’s say 200 people answer the poll: 140 say “Scarecrow” and 60 say “Tin Man.” Given the pollsters’ instructions, roughly 100 people flipped tails and so responded “Scarecrow” regardless of their preference. Of the remaining 100 who flipped heads, 40 prefer Scarecrow and 60 prefer Tin Man, so Tin Man is the favorite.
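The arithmetic above is easy to mechanize. Here is a minimal Python sketch (the function name and structure are mine, not part of any real polling tool) that recovers estimated true counts from the one-coin scheme, assuming a fair coin and honest participants:

```python
def decode_one_coin(total, scarecrow_answers):
    """Estimate true preferences under the one-coin scheme: a tails
    flip (probability 1/2) forces a "Scarecrow" answer; heads means
    the respondent answers honestly."""
    forced = total // 2                  # expected number of tails flippers
    honest = total - forced              # expected number of honest answers
    scarecrow_true = scarecrow_answers - forced
    tinman_true = honest - scarecrow_true
    return scarecrow_true, tinman_true

# The poll's numbers: 200 respondents, 140 "Scarecrow", 60 "Tin Man".
print(decode_one_coin(200, 140))   # → (40, 60): Tin Man is the favorite
```

Of course, with a real sample the tails flippers will only be *about* half the respondents, so the recovered counts are estimates, not exact tallies.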
What’s the privacy advantage? If a person states a preference for Scarecrow, that answer might have resulted merely from a flip of the coin. Note also that the coin is just one way to introduce chance. Another would be to ask a participant to think of his or her best friend and to answer “Scarecrow” if that best friend’s age is odd and honestly if it is even.
A few weeks later, a scandal so damages Tin Man’s reputation that, in some neighborhoods, it’s dangerous for people to say they prefer Tin Man. So now the question is whether it’s possible to conduct a new poll in such a way that any answer a participant gives enjoys plausible deniability.
Suppose the pollster gives the following instructions to people taking a new poll: “Please don’t respond right away. Instead, move to a place where you’re all alone and flip a coin twice. If it comes up heads both times, then tell me ‘Tin Man.’ If it comes up tails both times, then tell me ‘Scarecrow.’ If you get one of each, please tell me your true preference.”
There are 200 pollees. Their responses total 122 for Tin Man and 78 for Scarecrow. But of the 122 for Tin Man, about 50 said “Tin Man” because of coin flips (two heads, probability 1/4). Similarly, about 50 said “Scarecrow” because of coin flips (two tails, also probability 1/4). So the estimated true result is 72 for Tin Man and 28 for Scarecrow. Tin Man is even more clearly the favorite.
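The two-coin decoding follows the same pattern as before. A hypothetical helper (again, an illustration under the stated assumptions, not a real polling library) might look like this:

```python
def decode_two_coins(total, tinman_answers, scarecrow_answers):
    """Estimate true preferences under the two-flip scheme: HH
    (probability 1/4) forces "Tin Man", TT (probability 1/4) forces
    "Scarecrow", and one head plus one tail (probability 1/2) means
    the respondent answers honestly."""
    forced_each = total // 4   # expected forced answers for each candidate
    tinman_true = tinman_answers - forced_each
    scarecrow_true = scarecrow_answers - forced_each
    return tinman_true, scarecrow_true

# The new poll's numbers: 200 respondents, 122 "Tin Man", 78 "Scarecrow".
print(decode_two_coins(200, 122, 78))   # → (72, 28)
```

Notice the symmetry: because each candidate gets the same share of forced answers, the same subtraction debiases both sides, and both answers are now deniable.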
What has this puzzle taught us? For one thing, people in polls can enjoy plausible deniability on all sides of an issue (for the two candidates in our example). For another, the poll results will be accurate if the participants follow the rules, though more people may have to be polled to get the same statistical strength. What remains to be seen is whether people who might take part in polls will even respond to a pollster and, if they do, respond honestly when the coin flips suggest they should. Primary season has begun; it’s worth a try.
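The cost in statistical strength can be seen in a quick simulation. This Monte Carlo sketch (parameters are my own choices for illustration: 400 respondents per poll, a true Scarecrow share of 40%) compares a direct poll with the one-coin scheme; the randomized estimate stays unbiased but its spread across repeated polls is roughly twice as large:

```python
import random

def poll_once(n, true_p, randomized):
    """Simulate one poll of n voters whose true Scarecrow share is
    true_p, returning an estimate of that share. In the randomized
    (one-coin) version, a tails flip forces a "Scarecrow" answer."""
    says_scarecrow = 0
    for _ in range(n):
        prefers_scarecrow = random.random() < true_p
        if randomized and random.random() < 0.5:
            says_scarecrow += 1              # tails: forced answer
        elif prefers_scarecrow:
            says_scarecrow += 1              # honest "Scarecrow"
    if randomized:
        # Debias: subtract the expected forced half, then rescale.
        return (says_scarecrow - n / 2) / (n / 2)
    return says_scarecrow / n

random.seed(1)
for randomized in (False, True):
    estimates = [poll_once(400, 0.4, randomized) for _ in range(2000)]
    mean = sum(estimates) / len(estimates)
    sd = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
    print(f"randomized={randomized}: mean ~ {mean:.3f}, spread ~ {sd:.3f}")
```

Both means land near the true 0.40, but the randomized poll’s larger spread is exactly why pollsters would need more respondents to match a direct poll’s precision.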