States moving forward with their own testing regimes

In the fall, there was some discussion after the “Voting Systems Testing Summit” that states were likely to move forward with developing their own voting systems testing regimes; this was something that I wrote about on December 1, 2005, in “Voting system testing by states in the future?”. Well, it appears that, quietly, states are moving ahead with testing efforts.

One of these initiatives has been in Maryland, where the results of recently conducted technical and usability tests have been made available to the public. Three documents have been issued:

  1. “Executive Summary”.
  2. “A Study of Vote Verification Technologies: Part I: Technical Study.”
  3. “A Study of Vote Verification Technology Conducted For the Maryland State Board of Elections, Part II: Usability Study” (January 2006).

The technical analysis compared four voter-verifiable voting systems (VoteHere Sentinel, SCYTL Pnyx.DRE, Selker’s VVAATT, and Diebold’s VVPAT), using a method in which the systems were rated (often subjectively) on a number of dimensions on 0-5 scales. The summary evaluative ratings were then used to produce the main technical recommendation (also summarized in a table on page 63 of the report): “we cannot recommend that the State of Maryland adopt any of the vote verification products that we examined at this time” (page 5). Given that none of these “products” is actually a fully developed, market-ready product, these results are perhaps not too surprising.
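For readers trying to picture the structure of that kind of evaluation, here is a minimal sketch of a rate-and-aggregate exercise of this sort. The system names, dimensions, scores, and the unweighted mean used to summarize them are purely illustrative assumptions on my part; they are not the report’s actual dimensions, weights, or ratings.

```python
# Illustrative sketch only: hypothetical systems scored 0-5 on a few
# dimensions, then collapsed into a single summary rating per system.
# The data and the simple unweighted mean are assumptions, not the
# Maryland report's actual scores or aggregation method.

ratings = {
    "System A": {"security": 3, "accessibility": 2, "reliability": 3},
    "System B": {"security": 3, "accessibility": 3, "reliability": 2},
    "System C": {"security": 2, "accessibility": 2, "reliability": 2},
    "System D": {"security": 2, "accessibility": 3, "reliability": 3},
}

for system, scores in ratings.items():
    summary = sum(scores.values()) / len(scores)  # unweighted mean on the 0-5 scale
    print(f"{system}: summary rating {summary:.1f}")
```

One obvious question such an approach raises, of course, is how the subjective dimension ratings and any weighting scheme are justified, which is part of why the methodological details in the report itself are worth reading.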

The usability study looked at the same voter-verifiable systems and reached the same basic conclusions. The usability methodology involved expert usability analysis, field tests, and some consideration of the impact of the voting systems on election administration. One concern here is the field tests, which involved over 800 participants; as best I could determine from the report, these field tests did not involve random selection of respondents, random assignment to treatments, or any attempt to deal statistically with the non-random nature of the study design. That non-experimental design leaves the results open to some question, and in the future it would be good to see either a stronger experimental design or a statistical adjustment for the non-random nature of the field trials (true election geeks will want to refer here to Paul R. Rosenbaum, “Observational Studies”, Springer, second edition, 2002).
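To make that statistical point concrete, here is a minimal sketch of one standard post-hoc remedy in the observational-studies spirit that Rosenbaum describes: estimating each participant’s propensity to have used a given system from observed covariates, and then reweighting outcomes by inverse propensity. The data frame, covariates, and outcome variable below are hypothetical placeholders, not the Maryland field-test data, and scikit-learn’s logistic regression is just one convenient way to fit the propensity model.

```python
# Sketch of inverse-propensity weighting for a non-randomized field test.
# All data and variable names here are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# 'treated' marks participants who used the system under study (1) versus a
# comparison system (0); 'verified_correctly' is a hypothetical outcome.
df = pd.DataFrame({
    "treated":            [1, 1, 0, 0, 1, 0, 1, 0],
    "verified_correctly": [1, 0, 1, 1, 1, 0, 1, 1],
    "age":                [34, 71, 45, 29, 63, 52, 40, 58],
    "prior_experience":   [1, 0, 1, 1, 0, 0, 1, 1],
})

covariates = ["age", "prior_experience"]

# Estimate the propensity to be in the treated group from the covariates.
model = LogisticRegression().fit(df[covariates], df["treated"])
propensity = model.predict_proba(df[covariates])[:, 1]

# Inverse-probability-of-treatment weights.
weights = np.where(df["treated"] == 1, 1 / propensity, 1 / (1 - propensity))

treated = df["treated"] == 1
effect = (np.average(df.loc[treated, "verified_correctly"], weights=weights[treated])
          - np.average(df.loc[~treated, "verified_correctly"], weights=weights[~treated]))
print(f"Weighted difference in verification rates: {effect:.2f}")
```

A design like this only adjusts for covariates that were actually measured, which is exactly why a stronger experimental design up front is preferable when it is feasible.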

Perhaps not too surprisingly, California is also moving forward with its own testing regime, quite distinct from the Maryland approach. If you go to the following link and scroll down to the material under the “Pending Certification” heading, you’ll see a series of lengthy reports on testing of Sequoia, Hart, and ESS voting systems. The California approach differs from the Maryland approach in that California is putting these voting systems through a rigorous volume testing methodology, rather than the broader testing regime Maryland used for its voter verifiability testing.

Obviously, Maryland’s and California’s testing methods differ partly because the two states are trying to answer different research questions: California appears to want to know how these voting systems will operate in conditions approximating their actual use, while Maryland seems to want answers to a broader set of questions, with a very heavy focus on usability for voters and election administrators.

This does open the door for one important research agenda: developing testing methods for voting systems, and developing and implementing the appropriate statistical methods for analyzing the evaluation data those tests produce. I hope that we will hear more from the research community on both of these issues in the near future, as election officials in the states are clearly moving ahead with ambitious testing efforts!