“Ballot Secrecy Concerns and Voter Mobilization”, new paper by Gerber, Huber, Biggers and Hendry

There’s an interesting paper now in early access at American Politics Research, by Alan Gerber, Gregory Huber, Daniel Biggers and David Hendry, “Ballot Secrecy Concerns and Voter Mobilization: New Experimental Evidence about Message Source, Context, and the Duration of Mobilization Effects.” Here’s the paper’s abstract:

Recent research finds that doubts about the integrity of the secret ballot as an institution persist among the American public. We build on this finding by providing novel field experimental evidence about how information about ballot secrecy protections can increase turnout among registered voters who had not previously voted. First, we show that a private group’s mailing designed to address secrecy concerns modestly increased turnout in the highly contested 2012 Wisconsin gubernatorial recall election. Second, we exploit this and an earlier field experiment conducted in Connecticut during the 2010 congressional midterm election season to identify the persistent effects of such messages from both governmental and non-governmental sources. Together, these results provide new evidence about how message source and campaign context affect efforts to mobilize previous non-voters by addressing secrecy concerns, as well as show that attempting to address these beliefs increases long-term participation.

euandi.eu and Voting Advice Applications

My colleague Alexander Trechsel at the European University Institute and the European Union Democracy Observatory has just launched a new Voting Advice Application (VAA) for the 2014 European Parliament Elections, euandi.eu. If you are in a nation participating in these elections, check out euandi!

VAAs have proliferated in recent years, especially in European elections. They are widely used by voters, and increasingly used by researchers to study political communications, the use of new technologies in politics, voting behavior and electoral politics. For example, I recently published a paper with Ines Levin, Alexander Trechsel and Kristjan Vassil in the Journal of Information Technology and Politics, “Voting Advice Applications: How Useful and for Whom?”. We have another paper on VAAs, in Party Politics, “Party preferences in the digital age: The impact of voting advice applications” (work that we did with the late Peter Mair).

There’s a lot of excellent new work on VAAs that has been published, or is now forthcoming. For example, Diego Garzia and Stefan Marschall have an edited volume forthcoming from ECPR Press, “Matching Voters with Parties and Candidates.” There’s much, much more; many researchers are studying both the use of VAAs and the data they yield.

New VTP working paper by Rivest and Rabin

There’s a new VTP working paper now available, “Practical End-to-End Verifiable Voting via Split-Value Representations and Randomized Partial Checking”, by Ron Rivest and Michael Rabin.
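The paper lays out the full construction; as a rough intuition for the “split-value” idea, here is a toy Python sketch of my own (a simplification, not the authors’ protocol). A secret value x is represented as a random pair (u, v) with u + v = x mod M, each component is committed to separately, and a randomized partial check opens only one component, which on its own is uniformly distributed and so reveals nothing about x.

```python
# Toy illustration of a split-value representation (simplified; NOT the
# actual Rivest-Rabin protocol). A value x in Z_M is represented by a random
# pair (u, v) with u + v = x (mod M), each component committed separately.
# A randomized partial check opens only ONE component, so the checker learns
# nothing about x itself.
import hashlib
import secrets

M = 2 ** 32  # modulus for the toy example


def commit(value: int) -> tuple[str, bytes]:
    """Hash commitment to `value` with a random key; returns (commitment, key)."""
    key = secrets.token_bytes(16)
    digest = hashlib.sha256(key + value.to_bytes(8, "big")).hexdigest()
    return digest, key


def split(x: int) -> tuple[int, int]:
    """Represent x as a random pair (u, v) with u + v = x (mod M)."""
    u = secrets.randbelow(M)
    v = (x - u) % M
    return u, v


# Prover: split the secret value and commit to both components.
x = 7  # the secret (e.g., an encoded ballot choice)
u, v = split(x)
(cu, ku), (cv, kv) = commit(u), commit(v)

# Checker: randomized partial check -- ask for one component at random.
# Opening u alone (or v alone) reveals nothing about x, since each is uniform.
challenge = secrets.choice(["u", "v"])
opened, key, com = (u, ku, cu) if challenge == "u" else (v, kv, cv)

assert hashlib.sha256(key + opened.to_bytes(8, "big")).hexdigest() == com
print(f"component {challenge} opened and verified; x stays hidden")
```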

The State of the Voting Systems Industry: Scytl Receives Major Investment

Over the years, various VTP reports and publications have discussed the need for a more vibrant and innovative voting systems industry (for a recent example, see the 2012 VTP report, Voting: What Has Changed, What Hasn’t, and What Needs Improvement). In fact, we will be discussing this in my undergraduate elections class at Caltech this week.

It was interesting to see in the news this morning that Scytl, one of the few voting systems companies, is reporting that they are receiving a $40 million investment from Paul Allen’s Vulcan Capital. That’s a pretty significant investment, at a time when there will be increasing demand for voting systems and technologies throughout the world.

JETS submission deadline

The deadline for submissions for the next issue of JETS is rapidly approaching! See this URL for more information: https://www.usenix.org/jets/issues/0203/call-for-articles

PCEA research white papers

The Presidential Commission on Election Administration’s report is getting a lot of attention and praise following its release on Wednesday. One aspect of the report I want to highlight is the degree to which the Commission aimed to ground its findings in the best available research, academic and otherwise. It renews my faith that it may be possible to build a field of election administration that is more technocratic than it currently is.

The report’s appendix, available through the supportthevoter.gov web site, is a valuable resource on the available research about each aspect of the commission’s charge.

I want to lift up an important subset of that appendix: a set of white papers, written by scholars drawn from a variety of fields and perspectives, that summarize the large literatures relevant to the commission’s work. Those papers have been assembled in one place, on the VTP web site, so that others might have easy access to them. Here are the authors and subjects:

Much of this research effort was assisted by the Democracy Fund, though, of course, the research is all the work and opinions of the authors. Speaking personally, I greatly appreciate the support and encouragement of the Fund through these past few months.

Election toolkits and the PCEA report

In the minds of some, the Presidential Commission on Election Administration was President Obama’s “long lines commission.” While that is an overly narrow description of the commission’s mandate, it identifies the most salient of the motivations behind appointing the commission — reports of voters waiting to vote in the 2012 election. In the words of President Obama, “we have to fix that.”

The commission — rightfully, in my view — didn’t weigh in with a diagnosis of what causes all long lines, nor did it prescribe a magic bullet to fix them. It did pronounce that 30 minutes should be the upper bound of acceptable waiting, which, again, is defensible and achievable.

One reason for long lines is that resources are sometimes misallocated to polling places (either on Election Day or in Early Voting). The commission encourages the development of computerized tools to help local jurisdictions figure out how many resources — people, voting machines, poll books, etc. — need to be allocated to each voting location. The encouragement is so strong that a link to a collection of such tools appears on the Commission’s web site, right next to the link one clicks on to download its final report. (In addition, a little farther down the page, the Commission’s web site has a link to the Caltech/MIT Voting Technology Project-maintained site that will host these tools in perpetuity, hopefully adding more as time goes by.)

I encourage people to give the tools a look.  They include resource calculators developed by MIT Sloan School Professor Steve Graves, election geek Aaron Strauss, and software developer Mark Pelczarski, and various online voter registration tools developed by Rock the Vote.  The tools range from efforts that have already proven themselves in past elections (the Pelczarski and RtV tools) to more notional examples that I trust will continue to develop in the coming months.

Here is the most important part of the online tool kit, in my view: most local election officials are flying blind when it comes to knowing how many voting machines (and similar devices) they should have in order to serve their communities well, and how to spread those devices among their precincts. They will tell you, as they have told me, that they have rules they follow, based on state law and past elections. But, as far as I can tell, the reigning rules of thumb about resource allocation are unrelated to machine performance.

Most election directors in large jurisdictions, where lines were the biggest problem, could not tell you (within a reasonable degree of certainty) how many voting machines and poll books they would need to meet the commission’s 30-minute standard, because they generally don’t have access to engineering-based tools to compute the right answer.
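For the curious, here is a minimal sketch of the kind of engineering-based calculation such a tool can do, using a textbook M/M/c queueing model (Poisson voter arrivals, exponential service times). This is my own illustration, not code from any of the toolkit’s calculators, and the arrival and service numbers below are made up.

```python
# Minimal sketch: how many voting machines keep the average wait under the
# commission's 30-minute standard, under M/M/c queueing assumptions.
import math


def erlang_c(c: int, a: float) -> float:
    """Probability an arriving voter must wait, with c machines and offered load a."""
    rho = a / c
    if rho >= 1:
        return 1.0  # unstable: the line grows without bound
    top = a ** c / math.factorial(c)
    bottom = (1 - rho) * sum(a ** k / math.factorial(k) for k in range(c)) + top
    return top / bottom


def avg_wait_minutes(c: int, arrivals_per_hr: float, minutes_per_voter: float) -> float:
    """Average time in line (minutes) for an M/M/c queue."""
    mu = 60.0 / minutes_per_voter      # service rate per machine, per hour
    a = arrivals_per_hr / mu           # offered load in Erlangs
    if a >= c:
        return float("inf")
    p_wait = erlang_c(c, a)
    return p_wait / (c * mu - arrivals_per_hr) * 60.0


def machines_needed(arrivals_per_hr: float, minutes_per_voter: float,
                    max_wait_min: float = 30.0) -> int:
    """Smallest number of machines keeping the average wait under max_wait_min."""
    c = 1
    while avg_wait_minutes(c, arrivals_per_hr, minutes_per_voter) > max_wait_min:
        c += 1
    return c


# Example: 120 voters/hour at peak, 5 minutes per voter at the machine.
print(machines_needed(arrivals_per_hr=120, minutes_per_voter=5))
```

A real tool would need to handle time-varying arrivals, separate check-in and voting stages, and worst-case rather than average waits, but even a back-of-the-envelope model like this beats an unexamined rule of thumb.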

To some degree, these tools do exist, and the online tool kit web site is an effort to begin collecting them.  Still, even the existing tools need to be refined, in light of the needs of local election officials.  It is my hope that the online tool kit site, hosted by the VTP, will be the focus of tool development in the coming years.  I encourage people to give them a look, to try and improve them, and to contribute to the collection.

VTP resources relevant to the PCEA report

With the release of the final report of the Presidential Commission on Election Administration, readers may be interested in a set of resources that were produced in response to the commission’s charge.  All of these are mentioned somewhere in the commission’s report (and appendix) and on its web site, but here is a convenient listing.  More information about each of them will be forthcoming over the next 24 hours.

Survey of Local Election Officials:  Results of a nationwide survey of all local election officials (response rate around 50%) about their work and the challenges they face.  (Please note that the data file still needs a little more cleaning, but it should be useful for those interested in exploring these topics from the perspective of local officials.)

Election Toolkit:  The beginning of a collection of computer tools that can be used to help manage various aspects of elections. (We encourage further contributions.)

White papers on election administration:  Papers written by a collection of top social scientists in the election administration field about aspects of the commission’s charge.  (These are VTP working papers 111-119.)

I am in the middle of end-of-semester grades meetings at MIT today, which is preventing me from blogging more about these resources and issues raised by the commission report.  So, stay tuned!


More new research on voter turnout: Hur and Achen in POQ on “Coding Voter Turnout Responses in the CPS”

Well, when it rains it pours!

Just as I finished writing the post earlier today on the new paper by Hanmer et al. on voter turnout, I discovered that a new paper by Aram Hur and Christopher H. Achen was just published in Public Opinion Quarterly. The Hur and Achen paper discusses an important issue that has long been noted by students of voter turnout in the United States: the assumptions that the CPS makes about non-responses to its voter turnout question, how it codes those non-responses, and what implications those assumptions and coding decisions have for the levels of turnout that the CPS reports.

Here’s the abstract of the Hur and Achen paper, “Coding Voter Turnout Responses in the Current Population Survey”:

The Voting and Registration Supplement to the Current Population Survey (CPS) employs a large sample size and has a very high response rate, and thus is often regarded as the gold standard among turnout surveys. In 2008, however, the CPS inaccurately estimated that presidential turnout had undergone a small decrease from 2004. We show that growing nonresponse plus a long-standing but idiosyncratic Census coding decision was responsible. We suggest that to cope with nonresponse and overreporting, users of the Voting Supplement sample should weight it to reflect actual state vote counts.

This paper should be on the reading list of anyone who uses CPS voter turnout data in their research.
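Their suggested fix is easy to implement. Here is a rough sketch of that kind of state-level reweighting (my own toy illustration with made-up numbers and hypothetical column names, not the authors’ replication code): within each state, respondent weights are rescaled so that the weighted share of reported voters matches the state’s official turnout rate.

```python
# Sketch of post-stratifying a turnout survey to official state vote counts.
# Data, column names, and turnout rates below are all hypothetical.
import pandas as pd

# Toy respondent-level data: state, reported vote (1/0), base survey weight.
cps = pd.DataFrame({
    "state":  ["CT", "CT", "CT", "CT", "WI", "WI", "WI", "WI"],
    "voted":  [1, 1, 1, 0, 1, 0, 0, 1],
    "weight": [1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 1.0],
})

# Official turnout (votes cast / voting-eligible population), hypothetical.
official_turnout = {"CT": 0.46, "WI": 0.52}


def reweight_state(group: pd.DataFrame) -> pd.DataFrame:
    """Rescale weights so weighted reported turnout equals official turnout."""
    target = official_turnout[group.name]
    total = group["weight"].sum()
    voter_share = group.loc[group["voted"] == 1, "weight"].sum() / total
    out = group.copy()
    # Shrink the (over-reported) voter group, inflate nonvoters accordingly.
    out.loc[out["voted"] == 1, "weight"] *= target / voter_share
    out.loc[out["voted"] == 0, "weight"] *= (1 - target) / (1 - voter_share)
    return out


adjusted = cps.groupby("state", group_keys=False).apply(reweight_state)
check = adjusted.groupby("state").apply(
    lambda g: g.loc[g["voted"] == 1, "weight"].sum() / g["weight"].sum()
)
print(check)  # weighted turnout now matches official_turnout in each state
```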

New research on the over-reporting of turnout in surveys: Hanmer, Banks and White in Political Analysis

There’s an interesting analysis of over-reporting of turnout in the most recent issue of Political Analysis (the journal that I co-edit with Jonathan Katz). This paper, by Michael J. Hanmer, Antoine J. Banks, and Ismail K. White, looks at different means of asking survey questions about turnout. Here’s the abstract for “Experiments to Reduce the Over-Reporting of Voting: A Pipeline to the Truth.”

Voting is a fundamental part of any democratic society. But survey-based measures of voting are problematic because a substantial proportion of nonvoters report that they voted. This over-reporting has consequences for our understanding of voting as well as the behaviors and attitudes associated with voting. Relying on the “bogus pipeline” approach, we investigate whether altering the wording of the turnout question can cause respondents to provide more accurate responses. We attempt to reduce over-reporting simply by changing the wording of the vote question by highlighting to the respondent that: (1) we can in fact find out, via public records, whether or not they voted; and (2) we (survey administrators) know some people who say they voted did not. We examine these questions through a survey on US voting-age citizens after the 2010 midterm elections, in which we ask them about voting in those elections. Our evidence shows that the question noting we would check the records improved the accuracy of the reports by reducing the over-reporting of turnout.