There’s a new VTP working paper now available, “Practical End-to-End Verifiable Voting via Split-Value Representations and Randomized Partial Checking”, by Ron Rivest and Michael Rabin.
Over the years, various VTP reports and publications have discussed the need for a more vibrant and innovative voting systems industry (for a recent example, see the 2012 VTP report, Voting: What Has Changed, What Hasn’t, and What Needs Improvement). In fact, we will be discussing this in my undergraduate elections class at Caltech this week.
It was interesting to see in the news this morning that Scytl, one of the few voting systems companies, is reporting that they are receiving a $40 million investment from Paul Allen’s Vulcan Capital. That’s a pretty significant investment, at a time when there will be increasing demand for voting systems and technologies throughout the world.
The deadline for submissions for the next issue of JETS is rapidly approaching! See this URL for more information: https://www.usenix.org/jets/issues/0203/call-for-articles
The Presidential Commission on Election Administration’s report is getting a lot of attention and praise following its release on Wednesday. One aspect of the report I want to highlight is the degree to which the Commission aimed to ground their findings in the best available research, academic and otherwise. It renews my faith that it may be possible to build a field of election administration that is more technocratic than it currently is.
I want to lift up an important subset of the report’s appendix: a set of white papers, written by scholars drawn from a variety of fields and perspectives, that summarized the large literatures relevant to the commission’s work. Those papers have been assembled in one place, on the VTP web site, so that others might have easy access to them. Here are the authors and subjects:
- Barry C. Burden and Jeffrey Milyo, “The Recruitment and Training of Poll Workers: What We Know from Scholarly Research”
- Barry C. Burden and Brian J. Gaines, “Administration of Absentee Ballot Programs”
- Charles Stewart III and Daron Shaw, “Lessons from the 2012 Election Administration and Voting Survey”
- Charles Stewart III and Stephen Ansolabehere, “Waiting in Line to Vote”
- Daron Shaw and Vincent Hutchings, “Report on Provisional Ballots and American Elections”
- Lisa Schur, “Reducing Obstacles to Voting for People with Disabilities”
- Robert M. Stein, “Election Administration during Natural Disasters and Emergencies: Hurricane Sandy and the 2012 Election”
- Stephen Ansolabehere and Charles Stewart III, “Report on Registration Systems in American Elections”
- Donald S. Inbody, “Voting by Overseas Citizens and Deployed Military Personnel”
Much of this research effort was assisted by the Democracy Fund, though of course, the research is all the work and opinions of the authors. Speaking personally, I greatly appreciate the support and encouragement of the Fund through these past few months.
In the minds of some, the Presidential Commission on Election Administration was President Obama’s “long lines commission.” While that is an overly narrow description of the commission’s mandate, it identifies the most salient of the motivations behind appointing the commission — reports of voters waiting to vote in the 2012 election. In the words of President Obama, “we have to fix that.”
The commission — rightfully, in my view — didn’t weigh in with a diagnosis of what causes all lines, nor did it prescribe a magic bullet to fix them. It did pronounce that 30 minutes should be the upper bound of acceptable waiting, which, again, is defensible and achievable.
One reason for long lines is that resources are sometimes misallocated to polling places (either on Election Day or in Early Voting). The commission encourages the development of computerized tools to help local jurisdictions figure out how many resources — people, voting machines, poll books, etc. — need to be allocated to each voting location. The encouragement is so strong that a link to a collection of such tools appears on the Commission’s web site, right next to the link one clicks on to download its final report. (In addition, a little farther down the page, the Commission’s web site has a link to the Caltech/MIT Voting Technology Project-maintained site that will host these tools in perpetuity, hopefully adding more as time goes by.)
I encourage people to give the tools a look. They include resource calculators developed by MIT Sloan School Professor Steve Graves, election geek Aaron Strauss, and software developer Mark Pelczarski, and various online voter registration tools developed by Rock the Vote. The tools range from efforts that have already proven themselves in past elections (the Pelczarski and RtV tools) to more notional examples that I trust will continue to develop in the coming months.
Here is the most important part of the online tool kit, in my view: most local election officials are flying blind when it comes to knowing how many voting machines (and similar devices) they should have in order to serve their communities well, and how to spread those devices among their precincts. They will tell you, as they have told me, that they have rules they follow, based on state law and past elections. But, as far as I can tell, the reigning rules of thumb about resource allocation are unrelated to machine performance.
Most election directors in large jurisdictions, where lines were the biggest problem, could not tell you (within a reasonable degree of certainty) how many voting machines and poll books they would need to meet the commission’s 30-minute standard, because they generally don’t have access to engineering-based tools to compute the right answer.
To some degree, these tools do exist, and the online tool kit web site is an effort to begin collecting them. Still, even the existing tools need to be refined, in light of the needs of local election officials. It is my hope that the online tool kit site, hosted by the VTP, will be the focus of tool development in the coming years. I encourage people to give them a look, to try and improve them, and to contribute to the collection.
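At bottom, the question these calculators answer is a queueing problem: given how fast voters arrive and how long each takes at a machine, how many machines keep waits below the commission’s 30-minute standard? As a minimal sketch of the kind of engineering-based calculation such tools perform — this is not one of the actual tools in the kit, and the arrival rate and service time below are made-up inputs — here is a standard M/M/c (Erlang C) estimate:

```python
import math

def erlang_c(c, a):
    """Probability an arriving voter must wait, for c machines and
    offered load a (arrival rate x service time, in Erlangs)."""
    if a >= c:
        return 1.0  # unstable: the queue grows without bound
    term = (a ** c) / math.factorial(c) * (c / (c - a))
    denom = sum(a ** k / math.factorial(k) for k in range(c)) + term
    return term / denom

def avg_wait_minutes(c, arrivals_per_hour, service_minutes):
    """Average wait in an M/M/c queue, in minutes."""
    mu = 60.0 / service_minutes          # service rate per machine, per hour
    a = arrivals_per_hour / mu           # offered load in Erlangs
    if a >= c:
        return float("inf")
    return erlang_c(c, a) / (c * mu - arrivals_per_hour) * 60.0

def machines_needed(arrivals_per_hour, service_minutes, max_wait_minutes=30.0):
    """Smallest number of machines keeping the average wait under the target."""
    c = 1
    while avg_wait_minutes(c, arrivals_per_hour, service_minutes) > max_wait_minutes:
        c += 1
    return c

# Illustrative inputs: 120 voters per hour, 5 minutes per voter at a machine.
print(machines_needed(120, 5))
```

Real planning tools must also handle peak-hour surges, check-in stations, and day-long simulations, but the core logic — turning arrival rates and service times into a machine count — is of this kind.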
With the release of the final report of the Presidential Commission on Election Administration, readers may be interested in a set of resources that were produced in response to the commission’s charge. All of these are mentioned somewhere in the commission’s report (and appendix) and on its web site, but here is a convenient listing. More information about each of them will be forthcoming over the next 24 hours.
Survey of Local Election Officials: Results of a nationwide survey of all local election officials (response rate around 50%) about their work and the challenges they face. (Please note that the data file still needs a little more cleaning, but it should be useful for anyone interested in exploring these topics from the perspective of local officials.)
Election Toolkit: The beginning of a collection of computer tools that can be used to help manage various aspects of elections. (We encourage further contributions.)
White papers on election administration: Papers written by a collection of top social scientists in the election administration field about aspects of the commission’s charge. (These are VTP working papers 111-119.)
I am in the middle of end-of-semester grades meetings at MIT today, which is preventing me from blogging more about these resources and issues raised by the commission report. So, stay tuned!
More new research on voter turnout: Hur and Achen in POQ on “Coding Voter Turnout Responses in the CPS”
Well, when it rains it pours!
Just as I finished writing the post earlier today on the new paper by Hanmer et al. on voter turnout, I discovered that a new paper by Aram Hur and Christopher H. Achen was just published in Public Opinion Quarterly. The Hur and Achen paper discusses an important issue that has long been noted by students of voter turnout in the United States: the assumptions the CPS makes about non-responses to its voter turnout question, how it codes those non-responses, and what implications those assumptions and coding decisions have for the levels of turnout the CPS reports.
Here’s the abstract of the Hur and Achen paper, “Coding Voter Turnout Responses in the Current Population Survey”:
The Voting and Registration Supplement to the Current Population Survey (CPS) employs a large sample size and has a very high response rate, and thus is often regarded as the gold standard among turnout surveys. In 2008, however, the CPS inaccurately estimated that presidential turnout had undergone a small decrease from 2004. We show that growing nonresponse plus a long-standing but idiosyncratic Census coding decision was responsible. We suggest that to cope with nonresponse and overreporting, users of the Voting Supplement sample should weight it to reflect actual state vote counts.
This paper should be on the reading list of anyone who uses CPS voter turnout data in their research.
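Hur and Achen’s closing suggestion, weighting the Voting Supplement to reflect actual state vote counts, is simple to sketch. The following is only an illustration of that ratio-adjustment idea, not the authors’ code: the respondent records, field names, and turnout figures are invented for the example, and the real CPS file has its own layout and base weights.

```python
# Invented respondent records; the real CPS Voting Supplement has its own
# layout and supplied survey weights.
respondents = [
    {"state": "MA", "weight": 1.0, "voted": True},
    {"state": "MA", "weight": 1.0, "voted": True},
    {"state": "MA", "weight": 1.0, "voted": False},
    {"state": "TX", "weight": 1.0, "voted": True},
    {"state": "TX", "weight": 1.0, "voted": False},
]
# Official turnout rates (votes / eligible population); illustrative numbers.
official_turnout = {"MA": 0.66, "TX": 0.49}

def reweight_to_state_counts(respondents, official_turnout):
    """Scale weights so the weighted voter share in each state matches the
    official turnout rate: the ratio-adjustment idea behind the suggestion
    to weight the sample to actual state vote counts.
    (Sketch only: assumes each state has both voters and nonvoters.)"""
    adjusted = []
    for state, target in official_turnout.items():
        group = [r for r in respondents if r["state"] == state]
        total = sum(r["weight"] for r in group)
        voters = sum(r["weight"] for r in group if r["voted"])
        survey_rate = voters / total  # turnout as the survey reports it
        for r in group:
            # Inflate or deflate voters and nonvoters separately so both
            # the turnout rate and the total weight are preserved.
            factor = (target / survey_rate if r["voted"]
                      else (1 - target) / (1 - survey_rate))
            adjusted.append({**r, "weight": r["weight"] * factor})
    return adjusted
```

After adjustment, the weighted turnout rate in each state equals the official rate by construction; analyses of who votes then inherit that benchmark rather than the survey’s over-reported level.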
New research on the over-reporting of turnout in surveys: Hanmer, Banks and White in Political Analysis
There’s an interesting analysis of over-reporting of turnout in the most recent issue of Political Analysis (the journal that I co-edit with Jonathan Katz). This paper, by Michael J. Hanmer, Antoine J. Banks, and Ismail K. White, looks at different means of asking survey questions about turnout. Here’s the abstract for “Experiments to Reduce the Over-Reporting of Voting: A Pipeline to the Truth.”
Voting is a fundamental part of any democratic society. But survey-based measures of voting are problematic because a substantial proportion of nonvoters report that they voted. This over-reporting has consequences for our understanding of voting as well as the behaviors and attitudes associated with voting. Relying on the “bogus pipeline” approach, we investigate whether altering the wording of the turnout question can cause respondents to provide more accurate responses. We attempt to reduce over-reporting simply by changing the wording of the vote question by highlighting to the respondent that: (1) we can in fact find out, via public records, whether or not they voted; and (2) we (survey administrators) know some people who say they voted did not. We examine these questions through a survey on US voting-age citizens after the 2010 midterm elections, in which we ask them about voting in those elections. Our evidence shows that the question noting we would check the records improved the accuracy of the reports by reducing the over-reporting of turnout.
Just a reminder (and note to self!), submissions for JETS Volume 2, Number 3 are due on April 8, 2014! Here’s a link for additional information.
Right before the holidays, a new book came out by Jan E. Leighley and Jonathan Nagler, Who Votes Now? Demographics, Issues, Inequality, and Turnout in the United States. The book is published by Princeton University Press, and I strongly encourage everyone who is interested in voter participation in the United States to read this new book. Jan and Jonathan have studied voter participation in the U.S. for a considerable period, and their book presents a variety of new analyses that examine the effects of various election reform efforts on voter participation.