Our blog post on the Voter ID and Discretion article is now out on the LSE USA blog. Poll workers are often influenced by their own biases when implementing voter identification laws, but this problem can be mitigated in part by having better-educated poll workers.
I did a Q&A recently with Lonna Atkeson, which is now available on the OUPblog, “Improving Survey Methodology: a Q&A with Lonna Atkeson.” This Q&A builds off of a recent Symposium on Advances in Survey Methodology that Lonna and I co-edited in Political Analysis.
A paper by Lonna Atkeson, Yann Kerevel, Thad Hall, and myself, “Who Asks for Voter Identification? Explaining Poll-Worker Discretion,” is now available in Journal of Politics Early View. Here is the abstract:
As street-level bureaucrats, poll workers bear the primary responsibility for implementing voter identification requirements. Voter identification requirements are not implemented equally across groups of voters, and poll workers exercise substantial discretion in how they apply election law. In states with minimal and varying identification requirements, poll workers appear to treat minority voters differently in particular, requesting more stringent voter identification from them. We explain why poll workers are different from other street-level bureaucrats and how traditional mechanisms of control have little impact on limiting poll-worker discretion. We test why many poll workers appear not to follow the law using a post-election survey of New Mexico poll workers. We find little evidence that race, training, or partisanship matters. Instead, poll workers’ attitudes toward photo-identification policies and their educational attainment influence implementation of voter-identification laws.
On Monday, May 19, this event will take place in Chicago, and a number of VTP folks will be there — including Charles Stewart, Steve Graves and myself. Looks like it will be an interesting event, and I’ll try to write more about it on Monday!
The headline says it all — Charles testified at a hearing of the US Senate Rules and Administration committee earlier this week. This link will take you to his written testimony and the webcast.
Here’s a Q&A on the OUPblog that I recently did with Daniel Oberski, who has developed a helpful software package (Survey Quality Prediction) that is getting an award at AAPOR this week.
There is a great story in the NYTimes today about new British rules related to auditing. Specifically, under the new rules:
Auditors are supposed to comment on the particular risks that companies face and to say what they did to deal with those risks.
They are supposed to discuss how much of the company they actually audited, to disclose what figure they deemed to be the lower limit for materiality [the importance/significance of an amount, transaction, or discrepancy], and to explain how they arrived at that number.
Imagine if we did this in elections! What if, in every election, we knew the particular risks that were evident in each jurisdiction — based on an audit of the jurisdiction’s election processes and procedures — and what the jurisdiction had done to mitigate those risks? It would provide excellent data on election management and let people know how well a jurisdiction is working to minimize problems, reduce the possibility of malfeasance, and ensure elections are of the highest quality.
There’s an interesting paper now in early access at American Politics Research, by Alan Gerber, Gregory Huber, Daniel Biggers and David Hendry, “Ballot Secrecy Concerns and Voter Mobilization: New Experimental Evidence about Message Source, Context, and the Duration of Mobilization Effects.” Here’s the paper’s abstract:
Recent research finds that doubts about the integrity of the secret ballot as an institution persist among the American public. We build on this finding by providing novel field experimental evidence about how information about ballot secrecy protections can increase turnout among registered voters who had not previously voted. First, we show that a private group’s mailing designed to address secrecy concerns modestly increased turnout in the highly contested 2012 Wisconsin gubernatorial recall election. Second, we exploit this and an earlier field experiment conducted in Connecticut during the 2010 congressional midterm election season to identify the persistent effects of such messages from both governmental and non-governmental sources. Together, these results provide new evidence about how message source and campaign context affect efforts to mobilize previous non-voters by addressing secrecy concerns, as well as show that attempting to address these beliefs increases long-term participation.
My colleague Alexander Trechsel at the European University Institute and the European Union Democracy Observatory has just launched a new Voting Advice Application (VAA) for the 2014 European Parliament Elections, euandi.eu. If you are in a nation participating in these elections, check out euandi!
VAAs have proliferated in recent years, especially in European elections. They are widely used by voters, and increasingly used by researchers to study political communications, the use of new technologies in politics, voting behavior, and electoral politics. For example, I recently published a paper with Ines Levin, Alexander Trechsel and Kristjan Vassil in the Journal of Information Technology and Politics, “Voting Advice Applications: How Useful and for Whom?”. We have another paper on VAAs in Party Politics, “Party preferences in the digital age: The impact of voting advice applications” (work that we did with the late Peter Mair).
There’s a lot of excellent new work on VAAs that has been published, or is now forthcoming. For example, Diego Garzia and Stefan Marschall have an edited volume forthcoming from ECPR Press, “Matching Voters with Parties and Candidates.” There’s much, much more; VAAs are proliferating, and many researchers are studying both their use and the data they yield.
There’s a new VTP working paper now available, “Practical End-to-End Verifiable Voting via Split-Value Representations and Randomized Partial Checking”, by Ron Rivest and Michael Rabin.