National Academies workshop: fraud and audits

As Doug Chapin has noticed, I am blogging live from the National Academies workshop, “Developing a Sound Analytical Basis for Improving Public Participation and Confidence in 21st Century Elections.” It’s been quite an interesting morning, with a lot of informed and productive discussion.

My panel this morning was on “Fraud prevention and election audits in the new electoral environment.” Fellow panelists were Dan Wallach, Gary Cox, Walter Mebane, and George Gilbert.

My slides are available here, in PowerPoint format and in PDF format. Thad’s session is later this afternoon, on the subject of “Intermediaries” in the election process, and I suspect that once his session is done he’ll put his slides up here as well.

My discussion focused primarily on three issues: making sure that we talk about the entire voting system when we talk about fraud and auditing, focusing some attention on security and contingency planning, and getting us thinking about developing rigorous and thorough protocols for testing voting systems. These comments parallel and build upon what I talked about last week in Sacramento at the California Secretary of State’s “Voting System Testing Summit.”

During our panel, Walter Mebane presented some interesting results, developing a new method for trying to detect voting device anomalies using data from two Florida counties in the 2004 election (Miami-Dade and Pasco Counties). What Walter does in this analysis is use ballot image data from these two counties, which apparently includes information on the precinct and voting device that recorded each ballot image. Walter then tests to see whether the distribution of votes differs across the voting devices used in each precinct, under the assumption that the distribution of votes in a particular race (say the presidential vote distribution) should be the same for each voting device in a particular precinct. If a voting device is systematically malfunctioning, or has been manipulated, it should show up as a deviation from the other voting devices used in the precinct. I’ll talk to Walter and see whether his draft analysis is ready for distribution.
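To make the intuition concrete, here is a minimal sketch of the idea behind this kind of device-anomaly test, not Walter’s actual method: within one precinct, a chi-squared test of homogeneity on the device-by-candidate vote counts asks whether every device is drawing from the same vote distribution. All of the counts below are invented for illustration.

```python
def chi_squared_homogeneity(table):
    """Pearson chi-squared statistic for an r x c contingency table
    (rows = voting devices, columns = candidates)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# One hypothetical precinct: three devices, two presidential candidates.
# Device 3's vote split diverges sharply from devices 1 and 2.
precinct = [
    [210, 190],  # device 1
    [205, 195],  # device 2
    [310,  90],  # device 3: possible malfunction or manipulation
]

stat, df = chi_squared_homogeneity(precinct)
# With 2 degrees of freedom, the 0.001 critical value is about 13.8;
# a statistic far above that suggests the devices are not recording
# draws from the same underlying vote distribution.
print(f"chi-squared = {stat:.1f} on {df} df")
```

Run on the made-up counts above, the statistic comes out around 73 on 2 degrees of freedom, which would flag device 3 for closer inspection.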

Another interesting thought during the panel discussion came from Dan Wallach, who argued that one important change we could make in the existing testing and certification process would be to alter it from the simple binary “pass or fail” system we now have to a categorical “grade” format. Of course, voting devices could still get a failing grade under Dan’s proposal, but we would get more information about just how close a voting device came to meeting certain standards. It might be interesting to extend such a scheme so that we don’t just get back a single grade for the voting device, but a grade for how close the device comes to meeting each of a whole range of testing standards (imagine a “report card” for the outcome of a particular voting device’s certification process).
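A toy sketch of what such a “report card” might look like in code; the standards, cutoffs, and scores here are all invented for illustration, not part of Dan’s proposal:

```python
def letter_grade(score):
    """Map a 0-100 test score to a letter grade."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"

def certification_report(scores):
    """Turn per-standard test scores into a per-standard report card."""
    return {standard: letter_grade(score) for standard, score in scores.items()}

# Hypothetical scores for one voting device across several standards.
scores = {
    "accuracy": 97,
    "security": 62,
    "accessibility": 85,
    "audit-log integrity": 58,
}
print(certification_report(scores))
```

A binary certification would collapse all of this into a single “fail”; the report card instead shows that this hypothetical device does well on accuracy and accessibility but falls short on security and audit logging, which is exactly the extra information a graded scheme would surface.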

More soon …