Estimating Turnout with Self-Reported Survey Data

There’s long been a debate about the accuracy of voter participation estimates that use self-reported survey data. The seminal research paper on this topic, by Rosenstone and Wolfinger, was published in 1978 (available here for those of you with JSTOR access). They pointed out a methodological problem in the Current Population Survey data used in their early and important analysis: more people in the survey reported voting than the number of votes actually cast in the federal elections they studied.

In the years since the publication of Rosenstone and Wolfinger’s paper, academic researchers have debated this apparent misreporting of turnout in survey self-reports at length, far more than I can easily summarize here. But many survey researchers have turned to “voter validation” to try to alleviate these potential biases in their survey data: matching survey respondents who say they voted to administrative voter history records after the election. This approach has been used in many large-scale academic surveys of political behavior, including many of the American National Election Studies.
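To make the validation procedure concrete, here is a minimal sketch of matching self-reported voters to a voter file. All field names and the matching rule (exact name plus date of birth) are hypothetical illustrations, not the procedure used by any particular study; real validation efforts use far more sophisticated linkage.

```python
# Hypothetical sketch of survey-to-voter-file turnout validation.
# Field names and the exact-match rule are illustrative assumptions.

def normalize(record):
    """Build a crude match key from name and date of birth."""
    return (record["last_name"].strip().lower(),
            record["first_name"].strip().lower(),
            record["dob"])

def validate_turnout(respondents, voter_file):
    """Classify each respondent's self-report against the voter file."""
    voted_keys = {normalize(r) for r in voter_file if r["voted"]}
    results = []
    for resp in respondents:
        if not resp["self_reported_vote"]:
            results.append((resp["id"], "no self-report"))
        elif normalize(resp) in voted_keys:
            results.append((resp["id"], "validated"))
        else:
            # Could be a misreport -- or simply a failed match.
            results.append((resp["id"], "unvalidated"))
    return results
```

Note that the “unvalidated” bucket conflates two very different things, a respondent who misreported and a truthful respondent whose record simply failed to match, which is exactly the ambiguity at issue in the research discussed below.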

In an important new study, recently published in Public Opinion Quarterly, Berent, Krosnick, and Lupia set out to test the validation of self-reports of turnout against post-election voter history data. Their paper, “Measuring Voter Registration and Turnout in Surveys: Do Official Government Records Yield More Accurate Assessments?”, is one that anyone interested in studying voter turnout with survey data should read. Here are the key results from the paper’s abstract:

We explore the viability of turnout validation efforts. We find that several apparently viable methods of matching survey respondents to government records severely underestimate the proportion of Americans who were registered to vote. Matching errors that severely underestimate registration rates also drive down “validated” turnout estimates. As a result, when “validated” turnout estimates appear to be more accurate than self-reports because they produce lower turnout estimates, the apparent accuracy is likely an illusion. Also, among respondents whose self-reports can be validated against government records, the accuracy of self-reports is extremely high. This would not occur if lying was the primary explanation for differences between reported and official turnout rates.

This is an important paper that deserves close attention. Because it calls into question one of the common means of validating self-reported turnout, we need not only additional research to confirm its results, but also new research into how best to adjust self-reported survey participation so that we get the most accurate turnout estimates we can from survey data.