Thoughts on the GAO report on wait times

On Tuesday the Government Accountability Office issued its long-awaited report on wait times at polling places.  I recommend it to all who are interested in this topic.

It is no criticism of the report to say that much of what is contained within it has appeared elsewhere.  The report provides one-stop shopping for those interested in the established research on the topic. More importantly, the independent verification of existing research — in the way that only the meticulous, scrupulously nonpartisan GAO can do it — underscores that certain facts about long waiting times are actually facts.

Most importantly, long lines are not universal.  They are concentrated in particular places — certain states, cities, and areas with large minority populations.

While this is bad news for those particular places, it is good news for doing something about long wait times.  Assuming that the jurisdictions beset by long waits are dedicated to addressing the problem, the policy response can be focused on a half-dozen states and a relatively limited number of large jurisdictions.  (The flip side of this conclusion is that a scattershot approach to long wait times would be a mistake.)

The report contains one new finding that deserves attention.  This finding is contained on the very first page of the report:

Estimates from our nationwide survey of local election jurisdictions indicate that most jurisdictions did not collect data that would allow them to calculate voter wait times at individual polling places on the November 2012 General Election Day.

The GAO research team conducted a survey of local election officials, and asked them about the data they did collect that might help with the management of wait times.  This is what they found:

  • 36% of jurisdictions recorded “observations by election officials of voter wait times at polling places”
  • 31% recorded “the number of votes cast at a precinct during a specific time period”
  • 18% recorded the “length of time polling places remained open after designated closing times”
  • 17% recorded the “time individuals checked into a polling place, recorded by an electronic poll book”
  • 16% recorded “voter complaints about wait times at polling places”

My only criticism of the report is that it credits too readily the utility of these data-gathering efforts.  Data such as voter complaints and after-hours closings are better than nothing.  They are indicators that local election officials are taking the problem seriously.  But, they are still blunt instruments for managing problems of polling place congestion.  (Imagine, for instance, if the only statistic a Walmart manager had for judging whether to add another cashier line was how long it took to check out the last customer when the store closed at night.)

Both queuing theory and the application of line-management techniques in retail and manufacturing teach us that specific types of data are needed to manage lines effectively.  Most importantly, we need to know when people arrive (not when they get to the front of the line to check in) and how long it takes to complete all the tasks required of them.  The percentage of election jurisdictions gathering this data is effectively zero.
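To make this concrete, here is a minimal sketch in Python of why those two measurements suffice.  The classic Lindley recursion from queuing theory computes every voter's wait directly from arrival times and service times; the numbers below are hypothetical, and the sketch assumes a single check-in line.

    # Lindley recursion for a single-server queue:
    #   wait[n] = max(0, wait[n-1] + service[n-1] - interarrival[n])
    # All times are in minutes and purely hypothetical.
    arrivals = [0.0, 1.5, 2.0, 2.5, 6.0, 6.5]   # when each voter arrives
    services = [3.0, 2.0, 4.0, 1.0, 2.0, 3.0]   # check-in time each voter needs

    waits = [0.0]                                # the first voter finds no line
    for i in range(1, len(arrivals)):
        interarrival = arrivals[i] - arrivals[i - 1]
        waits.append(max(0.0, waits[-1] + services[i - 1] - interarrival))

    print(waits)                                 # wait experienced by each voter
    print(sum(waits) / len(waits))               # average wait, in minutes

Note that neither input can be reconstructed from complaint counts or closing-time statistics; arrival and service times are the irreducible raw material of line management.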

The report does describe two jurisdictions that have taken it upon themselves to gather the type of data that is needed for the proper management of the polls, using standard techniques that are common in the private sector:

In at least one election, 1 of these jurisdictions distributed time-stamped cards to every 15th voter upon arrival. Poll workers then recorded the time on each card at various stages of the voting process and collected the cards when voting was complete. In the other jurisdiction, officials stated that they began measuring wait times from arrival to check-in in the August 2014 election by distributing cards to voters upon arrival and then collecting those cards at the check-in station, where they recorded the time of check-in in an electronic poll book.

This is exactly what needs to be done.  It probably doesn’t need to be done in every election and at every precinct.  But, if managers of the nation’s largest jurisdictions began conducting these exercises in representative precincts on a regular basis, they would reap great dividends.
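For illustration, here is the simple arithmetic that such cards support, again as a hypothetical Python sketch.  The timestamps, and the assumption of sampling every 15th voter, are invented for the example rather than taken from either jurisdiction's actual data.

    # Hypothetical time-stamped cards: (arrival, check-in) pairs collected
    # from a sample of voters (say, every 15th arrival).
    from datetime import datetime
    from statistics import mean, quantiles

    FMT = "%H:%M"
    cards = [
        ("07:02", "07:05"),
        ("07:31", "07:49"),
        ("08:10", "08:43"),
        ("08:12", "08:46"),
        ("09:05", "09:07"),
    ]

    waits = [
        (datetime.strptime(done, FMT) - datetime.strptime(came, FMT)).seconds / 60
        for came, done in cards
    ]

    print(f"mean wait: {mean(waits):.1f} minutes")
    print(f"90th percentile: {quantiles(waits, n=10)[-1]:.1f} minutes")

Even a modest sample like this yields both a typical wait and a worst-case wait, two numbers that closing-time statistics and complaint tallies cannot provide.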

The final question — not covered in the GAO report — is what to do with this data.  I close with some shameless self-promotion.  Earlier this year, at the request of the Presidential Commission on Election Administration, the VTP posted three election management tools that take the input from data-gathering exercises like these and convert it into output to help guide decisions about the allocation of resources (poll books, privacy booths, etc.) in polling places.  With the support of the Democracy Fund, we are working hard to fine-tune these tools.  If you haven’t checked them out, please do.  I am looking forward to sharing the results of our R&D efforts in the coming months.
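The tools themselves are not reproduced here, but as a rough illustration of the kind of input-to-output conversion involved, the textbook Erlang C formula turns an arrival rate and a service rate into a recommended number of check-in stations.  All of the numbers below, including the five-minute target, are hypothetical, and this sketch is not the tools' actual method.

    # Rough sketch: use the textbook Erlang C (M/M/c) formula to find the
    # fewest check-in stations that keep the expected wait under a target.
    # All rates and targets are hypothetical.
    from math import factorial

    def erlang_c_wait(arrival_rate, service_rate, stations):
        """Expected queue wait in an M/M/c system (same time unit as the rates)."""
        load = arrival_rate / service_rate       # offered load
        rho = load / stations                    # utilization; must be < 1
        if rho >= 1:
            return float("inf")                  # the line grows without bound
        top = load**stations / factorial(stations)
        bottom = (1 - rho) * sum(load**k / factorial(k) for k in range(stations)) + top
        p_wait = top / bottom                    # chance a voter waits at all
        return p_wait / (stations * service_rate - arrival_rate)

    # Hypothetical precinct: 120 voters per hour arrive, and one station can
    # check in 40 voters per hour.  Find the fewest stations that keep the
    # expected wait under 5 minutes.
    arrival, service, target = 120.0, 40.0, 5.0
    for c in range(1, 20):
        wait_minutes = erlang_c_wait(arrival, service, c) * 60
        if wait_minutes < target:
            print(f"{c} stations -> expected wait {wait_minutes:.1f} minutes")
            break

Real polling places violate these textbook assumptions in all sorts of ways, which is exactly why field data of the kind described above is so valuable.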

Voter Identification and Discretion

We have a blog post about our Voter ID and Discretion article up on the LSEUSA blog site.  Poll workers are often influenced by their own biases when implementing voter identification laws, but this problem can be mitigated in part by having better-educated poll workers.

Improving survey quality — and implications for research on election administration

I did a Q&A recently with Lonna Atkeson, which is now available on the OUPblog: “Improving Survey Methodology: a Q&A with Lonna Atkeson.” This Q&A builds on a recent Symposium on Advances in Survey Methodology that Lonna and I co-edited in Political Analysis.

New research on Voter ID

A paper by Lonna Atkeson, Yann Kerevel, Thad Hall, and me, “Who Asks for Voter Identification? Explaining Poll-Worker Discretion,” is now available in the Journal of Politics Early View. Here is the abstract:

As street-level bureaucrats, poll workers bear the primary responsibility for implementing voter identification requirements. Voter identification requirements are not implemented equally across groups of voters, and poll workers exercise substantial discretion in how they apply election law. In states with minimal and varying identification requirements, poll workers appear to treat especially minority voters differently, requesting more stringent voter identification. We explain why poll workers are different from other street-level bureaucrats and how traditional mechanisms of control have little impact on limiting poll-worker discretion. We test why many poll workers appear not to follow the law using a post-election survey of New Mexico poll workers. We find little evidence that race, training, or partisanship matters. Instead, poll worker attitudes toward photo-identification policies and their educational attainment influences implementation of voter-identification laws.

Blueprint to Implementation: Election Administration Reform for 2014, 2016, and Beyond

This event will take place in Chicago on Monday, May 19, and a number of VTP folks will be there — including Charles Stewart, Steve Graves, and me. It looks like it will be an interesting event, and I’ll try to write more about it on Monday!

VTP’s Charles Stewart Testifies Before the US Senate Rules and Administration Committee

The headline says it all — Charles testified at a hearing of the US Senate Rules and Administration Committee earlier this week. This link will take you to his written testimony and the webcast.

Improving the quality of surveys

Here’s a Q&A that I recently did on the OUPblog with Daniel Oberski, who has developed a helpful software package (Survey Quality Prediction) that is receiving an award at AAPOR this week.

Auditing Risks

There is a great story in the NYTimes today about new British rules related to auditing.  Specifically, under the new rules:

Auditors are supposed to comment on the particular risks that companies face and to say what they did to deal with those risks.

They are supposed to discuss how much of the company they actually audited, to disclose what figure they deemed to be the lower limit for materiality [the importance/significance of an amount, transaction, or discrepancy], and to explain how they arrived at that number.

Imagine if we did this in elections!  What if, in every election, we knew the particular risks that were evident in each jurisdiction — based on an audit of the jurisdiction’s elections, processes, and procedures — and what the jurisdiction had done to mitigate those risks?  It would provide excellent data on management and allow people to know how well a jurisdiction is working to minimize problems, reduce the possibility of malfeasance, and ensure that elections are of the highest quality.

“Ballot Secrecy Concerns and Voter Mobilization”, new paper by Gerber, Huber, Biggers and Hendry

There’s an interesting paper now in early access at American Politics Research, by Alan Gerber, Gregory Huber, Daniel Biggers, and David Hendry, “Ballot Secrecy Concerns and Voter Mobilization: New Experimental Evidence about Message Source, Context, and the Duration of Mobilization Effects.” Here’s the paper’s abstract:

Recent research finds that doubts about the integrity of the secret ballot as an institution persist among the American public. We build on this finding by providing novel field experimental evidence about how information about ballot secrecy protections can increase turnout among registered voters who had not previously voted. First, we show that a private group’s mailing designed to address secrecy concerns modestly increased turnout in the highly contested 2012 Wisconsin gubernatorial recall election. Second, we exploit this and an earlier field experiment conducted in Connecticut during the 2010 congressional midterm election season to identify the persistent effects of such messages from both governmental and non-governmental sources. Together, these results provide new evidence about how message source and campaign context affect efforts to mobilize previous non-voters by addressing secrecy concerns, as well as show that attempting to address these beliefs increases long-term participation.

euandi.eu and Voting Advice Applications

My colleague Alexander Trechsel, of the European University Institute and the European Union Democracy Observatory, has just launched a new Voting Advice Application (VAA) for the 2014 European Parliament elections: euandi.eu. If you are in a nation participating in these elections, check out euandi!

VAAs have proliferated in recent years, especially in European elections. They are widely used by voters, and increasingly used by researchers to study political communication, the use of new technologies in politics, voting behavior, and electoral politics. For example, I recently published a paper with Ines Levin, Alexander Trechsel, and Kristjan Vassil in the Journal of Information Technology and Politics, “Voting Advice Applications: How Useful and for Whom?”. We have another paper on VAAs, in Party Politics, “Party preferences in the digital age: The impact of voting advice applications” (work that we did with the late Peter Mair).

There’s a lot of excellent new work on VAAs that has been published or is now forthcoming. For example, Diego Garzia and Stefan Marschall have an edited volume forthcoming from ECPR Press, “Matching Voters with Parties and Candidates.” There’s much, much more; VAAs are proliferating, and many researchers are studying both their use and the data they yield.