A working paper by Charles Stewart III, “Election Technology and the Voting Experience in 2008,” was recently made available.
Here is the paper’s abstract:
The 2000 election brought the issue of voting machine performance to national attention. According to the Caltech/MIT Voting Technology Project (2001), up to 2 million votes were lost in 2000 owing to problems associated with faulty voting machines and confusing ballots. Stewart (2006) estimated that one million votes were “recovered” in the 2004 presidential election because of the Help America Vote Act’s (HAVA) requirement that punch card ballots and lever machines be replaced by more modern optically scanned ballots and direct recording electronic (DRE) voting machines.
The role of technology in guarding the franchise in the United States has grown even more controversial since 2000. Most notably, a large number of computer scientists and election reform activists have identified what they perceive to be inherent security vulnerabilities associated with DREs (Mercuri 1992; Neumann 1985, 1990, 1993; Howland 2004; Dill 2003; Rubin 2003; Kohno et al. 2004). This alarm has spread more broadly to a large portion of the electorate, leading to efforts nationwide to ban electronic voting that lacks a “paper trail” (Alvarez and Hall 2008). More broadly, regular citizens, activists, and election professionals have become concerned with the performance of different voting technologies from a time-and-motion and/or human-factors perspective. Among these concerns are issues such as the lifetime cost of different technologies, the ease of use of technologies, and the throughput capacity of different types of voting machines.
Given the concerns that have been raised about the performance of voting technologies, it is remarkable how little empirical evidence has been adduced concerning the performance of voting machines nationwide (Stewart 2008; Alvarez and Hall 2008; Gerken 2009). This is not to say that there is no evidence about voting system performance, only that the evidence is surprisingly thin. There is now a line of “residual vote” scholarship, which uses over- and
under-votes as a proxy for the ease of use of different equipment (Ansolabehere and Stewart 2005; Herron and Sekhon 2005; Stewart 2006; Leib and Dittmer 2002; Ansolabehere 2002; Buchler, Jarvis, and McNulty 2004; Brady 2004; Kimball and Kropf 2005; Frisina, Herron, Honaker, and Lewis 2008). Some have studied human factors issues as they pertain to voting machines in experimental and quasi-experimental settings (Herrnson et al.; Everett, Byrne,
Greene, and Houston 2006; Byrne, Greene, and Everett 2007; Lausen 2007). And yet others have used survey techniques to explore the satisfaction of voters with different types of voting technologies (Alvarez, Hall, and Llewelyn 2004, 2008).
The purpose of this paper is to add to the growing literature about how well voting technologies perform in elections, using survey research to gather direct voter feedback. In particular, I use the 2008 Survey of the Performance of American Elections, combined with data about the voting machines used by voters, to assess whether different machines led voters to experience more problems voting or to have less confidence in how elections were run in 2008.
I explore two issues that pertain to the voter experience and voter technologies. The first is whether users of specific voting machines encountered more problems than the users of other types of machines. Practically speaking, this reduces to the question of whether voters who used optical scanning technologies to vote had more (or fewer) problems than those who used DREs in 2008. The second issue is whether voter confidence in the quality of the vote-count varied with the use of different voting machines.
I find that both DRE users and optical scan users had very few problems with voting equipment in 2008, and that the two groups’ experiences were similar as far as encountering problems is concerned. The one problem that affected users of the two technologies at different rates was how long they waited in line to vote: DRE voters waited an average of 21 minutes to vote on Election Day, compared to 12 minutes for optical scan voters. There is evidence that most of this difference was due not to the DREs themselves, but to the fact that DREs tend to be used more often in cities and communities that have large African American populations, areas that may already be suffering from problems with the delivery of government services. I also find that users of DREs were less confident that their votes were counted as cast, compared to users of other voting equipment. There was also an interaction between political ideology and voting machine type in influencing one’s confidence in the quality of the vote count: liberal voters who used DREs were particularly skeptical that their votes had been counted as cast.
The rest of the paper proceeds as follows. First, I briefly describe the 2008 Survey of the Performance of American Elections. The following section explores the relationship between voting machine usage and the qualitative experience of voting. Then I examine the influence that voting machine type had on voter confidence in the quality of the vote count. The final section concludes.