Now the EAC should use its data to create explicit, comprehensive rankings of states and localities, shaming those local governments that are doing a poor job of running elections and rewarding those that are excelling. It’s a strategy taken directly from the playbook of some human rights and environmental organizations, which have long used rankings to prod nations into improving their practices. The strategy works for a simple reason: No one wants to be at the bottom of a list.
In short, rankings produce improvement and would force election officials to raise their game. Gerken suggests the report would function as a kind of high-stakes “No Child Left Behind” test: everyone would know your score.
In response to Gerken’s opinion piece, Michael McDonald, who helped collect and evaluate the data, wrote on the electionlaw listserv:
As a voluntary survey, not all local election jurisdictions provided data on all question items to the EAC. Some questions have better coverage than others and I have some confidence in the patterns that we observe among those jurisdictions that did report as they are often similar to other academic work. But these data are not a perfectly valid snapshot of the 2004 election.
Furthermore, states and even local jurisdictions vary their definitions on basic things such as what constitutes a poll worker or a polling place. Let me take just one example that came up in discussions with commissioners after the EAC meeting: Maine counted 100% of its provisional ballots cast. However, Maine has election day registration and uses provisional ballots only in instances where a voter is contested at the polls. This was a rare event in Maine, and thus there were few provisional ballots cast in the state. Thus, to be meaningful, ranking the states on percent provisional ballots counted would need to take into account the varying definitions of what constitutes a provisional ballot and under what circumstances they are counted.
Both positions have some merit. Obviously, now that the data are available in Excel spreadsheets, which have a “sort” function, states will be ranked; that is just a fact of life. The key question is what to do with these data.
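The sort-and-rank exercise really is that simple once the spreadsheet is in hand. A minimal Python sketch (with invented state names and figures, and a hypothetical percent-of-provisional-ballots-counted column) shows how a ranking might be built, and how jurisdictions that did not report would naturally land at the bottom:

```python
# Hypothetical survey records for illustration only; the field name and
# all figures are invented, not taken from the actual EAC data.
records = [
    {"state": "A", "pct_provisional_counted": 64.2},
    {"state": "B", "pct_provisional_counted": None},   # did not report
    {"state": "C", "pct_provisional_counted": 100.0},  # cf. the Maine caveat
    {"state": "D", "pct_provisional_counted": 81.5},
]

def rank(records, key):
    """Sort descending on `key`, with non-reporting jurisdictions last."""
    return sorted(
        records,
        # Tuple key: missing values (True) sort after reported ones (False);
        # negating the value gives a descending sort on the reported numbers.
        key=lambda r: (r[key] is None, -(r[key] or 0.0)),
    )

for place, r in enumerate(rank(records, "pct_provisional_counted"), start=1):
    pct = r["pct_provisional_counted"]
    print(f"{place}. State {r['state']}: {pct if pct is not None else 'no data'}")
```

Of course, as the Maine example shows, a mechanical sort like this is exactly what can mislead: a 100% figure may reflect a definitional quirk rather than superior administration.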
Michael’s point about missing data is relevant, but for those jurisdictions that did not complete the survey, being ranked last might make them take these surveys seriously. Of course, the other possibility is that such localities may never complete another survey! These data are critical for understanding and improving elections; after 2000 and 2004, why would a jurisdiction think that not completing an EAC survey was a good idea? (After all, I doubt these jurisdictions are turning down EAC dollars!)
Hopefully, the survey will also encourage jurisdictions to keep better data. One problem is that there are no standard definitions for basic election facts; the EAC needs to develop standard definitions for election data, something Mike and I continue to promote.
One effective use of these data would be for states to identify local jurisdictions that need help–additional training, additional resources, etc.–and to help those jurisdictions improve. And as Michael points out in his listserv posting, researchers need time to see what these data actually tell us about elections. We currently know far too little about a wide range of election activities. This survey gets us on the right track, but we need even more studies to improve things before 2008.
One concern about rankings is that they may focus election officials on the wrong factors. For example, the survey data contain nothing about training, the quality of poll workers, and similar human factors that are key to running elections. These less visible things may be the critical factors. (I have just completed a survey suggesting that poll workers are the key to public confidence in elections.) The worst outcome would be for the right idea–holding people accountable–to produce the wrong result.