Charles Stewart on better measurement of election performance

Charles Stewart’s paper for the upcoming APSA conference, “Measuring the Improvement (or Lack of Improvement) in Voting since 2000 in the US,” is a work that everyone interested in election reform should read. Stewart has been in the election reform trenches since 2000, and few scholars know as much as he does about what data is and is not available to study election performance, and how good (or bad) that data is.

In his recent paper, Stewart first provides an analytic structure of the election process, which he uses to discuss what data is (and is not) available for studying the performance of the American electoral process. Second, he offers some useful principles to guide data collection. And last, he examines some of the constraints on our ability to collect large amounts of high-quality data to better understand the performance of our voting system.

His analytic structure is summarized in Table 1 of his paper, where he focuses on registration, voter authentication at the polls, how voters cast their ballots, and how the votes are counted. The only minor issue here is that two other aspects of the electoral process that I argue are also very important are not well covered. One is the growing incidence of voting outside the traditional polling place, whether through absentee voting, early voting, or voting by mail. As perhaps as many as 25% of ballots in 2004 were cast before election day, and as these procedures likely produce different voting experiences and different potential performance issues, it would be good in future work to see more discussion of performance measures for this relatively new and clearly growing alternative channel of voting. The other neglected area is the broader set of behind-the-scenes election administration procedures that likely have a strong impact on the quality of elections in a particular jurisdiction: personnel, training, quality control, data on the chain of custody of voting equipment, detailed expenditure information, and so on. Having performance measures for election administration is also important for future studies of the quality of the American electoral process.

Stewart then moves to a discussion of four basic principles to guide how we might think about data collection in the future: uniformity, transparency, expedition, and multiple sources. The only one of these principles whose meaning is not immediately apparent is expedition, by which Stewart simply means that election data should be made available as quickly as possible.

After discussing these principles, Stewart turns to three obstacles he sees standing in the way of better election data gathering: federalism, state and local election officials, and disagreements over the nature of elections. To these I would add a fourth that should be immediately apparent: resources. We simply lack the resources necessary for the collection, cleaning, analysis, and distribution of detailed election data.

In his conclusion, Stewart provides an overview of the major categories of election performance data: election returns, “systematic” surveys of voters, “systematic” observation of election processes, and “systematic” surveys of election officials. Stewart spends considerable time discussing how we can do better in collecting each type of election performance data. One very interesting twist here is the use of the term “systematic” in three of these categories. While Stewart does not have the space in his paper to go into great detail about how one might actually undertake a high-quality survey of voter satisfaction (for example), it would be interesting for others to fill in these important methodological questions in the future and thus build upon Stewart’s work. Those of us who have tried to survey voters about their opinions of election administration (for example, see “Public Attitudes About Election Governance” or “American Attitudes About Electronic Voting”), or for that matter to survey election officials, know that we need to learn much more about ways to improve these measurement tools.

Stewart’s paper is interesting, informative, and provocative. Scholars will find it of interest, and so will advocates, election officials, and policymakers. By writing about these issues, Stewart has, one hopes, helped us all move toward a better common understanding of what measurement strategies we should now put in place to better understand the 2006 and 2008 elections. In addition to listing what data sources are really necessary to understand the performance of election processes (for example, see the VTP report “Insuring the Integrity of the Electoral Process: Recommendations for Consistent and Complete Reporting of Election Data”), we need more analysis, like Stewart’s, of how we can develop better methods for collecting such detailed data.