Caltech’s Magazine has a feature about our election integrity project, which we are working on in collaboration with Orange County. Read on!
When we first started the Voting Technology Project, in the immediate aftermath of the 2000 presidential election, very little was known in the research literature about the administration of polling places. We quickly learned, as part of our initial research in 2000 and 2001, that polling place problems might have produced a large number of “lost votes” in the 2000 presidential election. But we had no precise methodology for producing a reliable estimate of how many votes were lost to polling place problems, nor a good methodology for understanding what was happening in polling places that might have generated those lost votes. The data and tools available to us back then led us to estimate that up to a million votes may have been lost in the 2000 presidential election due to problems in polling places.
In our search for new ways to understand what was going on in polling places that might be generating lost votes, we realized that we needed to do some qualitative, in-person analysis of polling place administration and operations. Early in 2001, I did my first in-person observation of polling places, which was an eye-opening experience. This led to a number of working papers and research articles, for example the paper that I published with Thad Hall, “Controlling Democracy: The Principal-Agent Problems in Election Administration.” We found that by working collaboratively with state and local election officials, we could gain access to polling places during elections, and thereby learn a great deal from them and their polling place workers about how elections are administered.
Over the years, these polling place observation efforts have become quite routine for me, and I’ve been involved in polling place observation efforts in many states and countries. Each time I go into a polling place I learn something new, and these qualitative studies have given me an invaluable education about election administration, polling place practices, and election security.
As part of my polling place observations, I early on began to involve graduate students from my research group, as well as Caltech undergraduates. I integrated visits to actual polling places into the curriculum of my courses: we would discuss election administration before Election Day, engage in polling place observation on Election Day, and then discuss what the students observed and what we learned from the activity. In general, this has been wildly successful. For students, to see the process as it really works, to meet polling place workers and election officials, and to learn the practical details of administering large and complex elections is an invaluable part of their education. A number of graduate students who were part of these efforts have gone on to observe elections in their own areas, and to build these sorts of activities into their curricula.
But beyond my anecdotal evidence about the effectiveness of teaching students about election administration through polling place observations, I’ve always wondered how we can better measure the educational effect of projects like these, and from there learn more about how to improve the education of each generation of students about election administration and democracy.
That’s why I was very excited to see the recent publication of “Pedagogic Value of Polling-Place Observation by Students”, by Christopher Mann and a number of colleagues. I urge colleagues who are interested in adding an activity like this to their curriculum to read this paper closely, as it has a number of lessons for all of us.
Here’s the paper’s abstract, for interested readers:
Good education requires student experiences that deliver lessons about practice as well as theory and that encourage students to work for the public good—especially in the operation of democratic institutions (Dewey 1923; Dewey 1938). We report on an evaluation of the pedagogical value of a research project involving 23 colleges and universities across the country. Faculty trained and supervised students who observed polling places in the 2016 General Election. Our findings indicate that this was a valuable learning experience in both the short and long terms. Students found their experiences to be valuable and reported learning generally and specifically related to course material. Postelection, they also felt more knowledgeable about election science topics, voting behavior, and research methods. Students reported interest in participating in similar research in the future, would recommend other students to do so, and expressed interest in more learning and research about the topics central to their experience. Our results suggest that participants appreciated the importance of elections and their study. Collectively, the participating students are engaged and efficacious—essential qualities of citizens in a democracy.
My experience has been that student polling place observation can be a very valuable addition to undergraduate and graduate education. I know that every time I enter a polling place to observe, I learn something new — and helping students along that journey can really have an important effect on their educational experience.
The LA Times reported this week that another 1,500 registration errors have been identified in the DMV “motor voter” process. This time, the errors are being blamed on “data entry” errors.
At this point, given that the general elections are only weeks away, it would be fantastic to see whether the type of registration database forensics methods that our research group has been building and testing in our collaboration with the Orange County Registrar of Voters might be applied statewide. While there are never any guarantees in life, it’s likely that the methods we have been developing could identify some of the errors that the DMV seems to be generating, in particular potential duplicate records and sudden changes to important fields in the registration database (like party registration). We’d need to test this out soon, to see how the methods that we’ve been working on with Orange County might work with the statewide database.
Third-party forensic analysis might help identify some of these problems in the voter database, and could help provide some transparency into the integrity of the database during the important 2018 midterm elections.
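To make the idea concrete, here is a minimal sketch of two checks of the kind such database forensics might include: flagging potential duplicate records and flagging sudden changes to a watched field between two snapshots of the database. This is purely illustrative; the field names (`name`, `dob`, `party`) and the snapshot format are hypothetical, not the actual schema or methods used in the Orange County project.

```python
from collections import defaultdict

def find_potential_duplicates(records, keys=("name", "dob")):
    """Group registration records that share identifying fields.

    `records` is a list of dicts; the field names here are hypothetical.
    Returns the groups of two or more records with identical key values,
    which would then be handed off for human review.
    """
    groups = defaultdict(list)
    for rec in records:
        groups[tuple(rec[k] for k in keys)].append(rec)
    return [group for group in groups.values() if len(group) > 1]

def flag_field_changes(old_snapshot, new_snapshot, field="party"):
    """Compare two snapshots of the database, keyed by voter ID, and
    flag records whose watched field changed between the snapshots."""
    flagged = []
    for voter_id, new_rec in new_snapshot.items():
        old_rec = old_snapshot.get(voter_id)
        if old_rec is not None and old_rec[field] != new_rec[field]:
            flagged.append((voter_id, old_rec[field], new_rec[field]))
    return flagged
```

In practice, real forensics would need fuzzy matching (nicknames, typos, address normalization) rather than exact key equality, but even exact-match scans like these can surface the kinds of duplicate and merged-record errors the DMV reports describe.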
In recent months, Americans have become somewhat more confident that election officials are taking the steps necessary to guard against “computer hacking” in the upcoming election. At the same time, likely voters have become no more (or less) confident that their votes will be counted as intended this coming November.
These findings are based on answers to questions posed to a representative national sample of 1,000 adults by YouGov last weekend. These questions, about computer hacking and overall voter confidence, were identical to ones asked last spring. The results suggest that despite a fairly steady stream of negative journalistic reports and opinion pieces implying that election officials are unprepared for the November election (like here, here, and here), the public’s overall evaluations have remained steady, and certainly haven’t gotten worse.
A deeper dive into the data shows many of the same traces of partisanship that are now common in attitudes about election administration. For instance, Republicans are more confident about the upcoming election, from both a cybersecurity and a general perspective.
Worries about election security
Concern about election security was measured by a question that read:
How confident are you that election officials in your county or town will take adequate measures to guard against voting being interfered with this November, due to computer hacking?
On net, the 9.5-point increase in the “very confident” response came in roughly equal portions from the two “not confident” categories. Of course, because we don’t have a panel of respondents, just two cross-sections, it’s impossible to know how much individual opinion shifted over the five months. Still, it is clear that the net opinion shift is in a positive direction.
The partisan divide over election security preparedness
Who shifted the most? Only one demographic category really stands out upon closer inspection when we examine the change: party. Although confidence in protecting against election hacking rose among all party groups, the rise in the “very confident” response was greater among Republicans than among Democrats. Independents also became more confident, but they were still more subdued than partisans.
The interesting case of political interest
One demographic had an interesting effect in the cross-section, but not in the time series: interest in the news.
In both June and in October, respondents who reported that they followed news and public affairs “most of the time” were more confident that election hacking would be fended off at the local level than those who followed the news less often.
For instance, in June, 70.9% of Republican respondents who reported they followed the news and politics “most of the time” were either “very” or “somewhat” confident that local officials were prepared to fend off hacking in the upcoming election. Republicans not so engaged in political news were less likely to report confidence, at 58.9%. The comparable percentages for Independents were 54.5% and 35.2%, and for Democrats they were 53.5% and 49.0%.
In October, high-interest respondents of all stripes were more confident than they had been in June. However, neither the high- nor the low-interest group grew more confident faster than the other. That’s what I mean when I write that the effect is “in the cross-section, but not in the time series.”
(One might read the previous table as suggesting that high- and low-information Democrats became more confident at different rates over the past four months. However, the number of observations is so small in these subgroups that I wouldn’t make such fine distinctions with these data.)
What do I, and the respondents, mean by “computer hacking?”
Before moving on to voter confidence more generally, I want to address one question that I know some people are asking themselves: What is meant by “computer hacking” in the upcoming election? In March, I wrote about what election hacking means to voters. You can read that post here.
I wrote back then that Republicans were more likely to define the general phrase “election hacking” in terms of domestic actors committing fraud of some sort, while Democrats were more likely to define it in terms of foreigners messing with our elections.
Assuming that this differential framing of the issue remains true today, we can imagine that the more sanguine view about computer security in the upcoming election means different things to the two sets of partisans. It is likely that Republicans are becoming more convinced that state and local election officials have traditional election administration under control for the upcoming election. Democrats, on the other hand, have most likely become slightly more convinced that election officials will be effective in fending off foreign intrusions.
Let’s see what they think when the election is over.
Coda: Voter confidence more generally
The slight improvement in confidence about preparations to defend elections against cyber-attacks is in contrast with the lack of change in attitudes about overall voter confidence.
In addition to asking the cyber-preparedness question, I also recently asked respondents my two standard voter confidence questions. The first, asked of all respondents, was:
How confident are you that votes nationwide will be counted as intended in the 2018 general election?
The second question, asked of respondents who said they planned to vote in November, was:
How confident are you that your vote in the general election will be counted as you intended?
These are commonly asked questions. Others have asked them recently, such as the NPR/Marist poll in September. Here, I take advantage of the fact that I regularly ask the question in the same way, using the same method, to see whether there have been any shifts as the election approaches.
There has been virtually no change in overall responses to either question since May, the last time I asked them. In May, 58.6% gave either a “very” or “somewhat” confident answer to the nationwide question, compared to 60.5% in October. The comparable percentages for confidence in one’s own vote were 81.7% and 84.4%. The changes across the five months are not large enough to conclude that anything has changed.
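A quick back-of-the-envelope calculation shows why a two-point shift doesn’t clear the bar. Assuming two independent cross-sections of roughly 1,000 respondents each (the sample size noted above; the exact item-level Ns may differ slightly), a standard two-proportion z statistic for the nationwide question comes out well under the conventional 1.96 threshold:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-sample z statistic for a difference in proportions,
    using the pooled-proportion standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Nationwide confidence ("very" or "somewhat"): 58.6% in May
# vs. 60.5% in October, assuming ~1,000 respondents per wave.
z = two_prop_z(0.586, 1000, 0.605, 1000)
```

With these inputs, z is roughly 0.87, far below statistical significance, which is the arithmetic behind the claim that nothing detectable has changed.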
Drilling down more deeply into partisanship, we also see few changes that distinguish the parties. Republicans gave more confident responses to both questions, but both parties’ partisans were virtually unchanged since May.
There is now a considerable literature on the tendency of survey respondents to express confidence in the overall quality of the vote count, either in prospect or in retrospect. The findings I report here, therefore, are not path-breaking. They do stand in contrast to attitudes about a newly prominent piece of election administration, computer security. That piece is new to most Americans, and they are still getting their bearings when it comes to assessing the difference between hyped alarm and serious worry in the field. It will be interesting to see how all this plays out in the next month, and in the weeks to follow.
Doug Chapin would, of course, say it more simply: stay tuned.
Late this past week, there were stories in California newspapers about yet another snafu by the DMV in their implementation of the state’s “motor voter” process. This time, DMV seems to have incorrectly put people into the voter registration system — though exactly how that happened is unclear.
For example, the LA Times reported about the new snafus in their coverage, “California’s DMV finds 3,000 more unintended voter registrations”:
Of the 3,000 additional wrongly enrolled voters, DMV officials said that as many as 2,500 had no prior history of registration and that there’s no clear answer as to what mistake was made that caused registration data for them to be sent to California’s secretary of state.
The Secretary of State’s Office is reportedly going to drop these unintended registrations from the state’s database.
As we near the November 2018 midterm elections, and given the energy and enthusiasm in California about these elections, there’s no doubt that the voter registration system will come under some stress as we get closer to Election Day.
Our advice is that if you are concerned about your voter registration status, check it. The Secretary of State provides a service that you can use to check whether you are registered to vote. Or if you’d rather not use that service, you can contact your county election official directly (many of them have applications on their websites to verify your registration status).
There’s a story circulating today that another round of voter registration snafus has surfaced in California. This story in today’s LA Times, “More than 23,000 Californians were registered to vote incorrectly by state DMV,” has some details about what appears to have happened:
“The errors, which were discovered more than a month ago, happened when DMV employees did not clear their computer screens between customer appointments. That caused some voter information from the previous appointment, such as language preference or a request to vote by mail, to be “inadvertently merged” into the file of the next customer, Shiomoto and Tong wrote. The incorrect registration form was then sent to state elections officials, who used it to update California’s voter registration database.”
This comes on the heels of reports before the June 2018 primary in California of potential duplicate voter registration records being produced by the DMV, as well as the snafu in Los Angeles County that left approximately 118,000 registered voters off the election-day voting rolls.
These are the sorts of issues in voter registration databases that my research group is looking into, using data from the Orange County Registrar of Voters. Since earlier this spring, we have been developing methodologies and applications to scan the County’s voter registration database to identify situations that might require additional examination by the County’s election staff. Soon we’ll be releasing more information about our methodology, and some of the results. For more information about this project, you can head to our Monitoring the Election website, or stay tuned to Election Updates.
Recently my colleague and co-blogger, Charles Stewart, wrote a very interesting post, “Voters Think about Voting Machines.” His piece reminds me of a point that Charles and I have been making for a long time: that election officials should focus attention on the opinions of voters in their jurisdictions. After all, those voters are among the primary customers for the administrative services that election officials provide.
Of course, there are lots of ways that election officials can get feedback about the quality of their administrative services, ranging from keeping data on interactions with voters to doing voter satisfaction and confidence surveys.
But as election officials throughout the nation think about upcoming technological and administrative changes to the services they provide voters, they might consider conducting proactive research, to determine in advance of administrative or technological change what voters think about their current service, to understand what changes voters might want, and to see what might be causing their voters to desire changes in administrative services or voting technologies.
This is the sort of question that led Ines Levin, Yimeng Li, and me to look at what might drive voter opinions about the deployment of new voting technologies in our recent paper, “Fraud, convenience, and e-voting: How voting experience shapes opinions about voting technology.” This paper was recently published in American Politics Research, and in it we use survey experiments to try to determine what factors drive voters to prefer certain types of voting technologies over others. (For readers who cannot access the published version at APR, here is a pre-publication version at the Caltech/MIT Voting Technology Project’s website.)
Here’s the abstract, summarizing the paper:
In this article, we study previous experiences with voting technologies, support for e-voting, and perceptions of voter fraud, using data from the 2015 Cooperative Congressional Election Study. We find that voters prefer systems they have used in the past, and that priming voters with voting fraud considerations causes them to support lower-tech alternatives to touch-screen voting machines — particularly among voters with previous experience using e-voting technologies to cast their votes. Our results suggest that as policy makers consider the adoption of new voting systems in their states and counties, they would be well-served to pay close attention to how the case for new voting technology is framed.
This type of research is quite valuable for election officials and policy makers, as we argue in the paper. How administrative or technological change is framed to voters — who are the primary consumers of these services and technologies — can really help to facilitate the transition to new policies, procedures, and technologies.
Andrew Menger, Bob Stein, and Greg Vonnahme have an interesting paper that is now forthcoming at American Politics Research, “Reducing the Undervote With Vote by Mail.” Here’s the APR version, and here’s a link to the pre-publication (ungated) version.
The key result in their analysis of data from Colorado is that they find a modest increase in ballot completion rates in VBM elections in that state, in particular in higher-profile presidential elections. Here’s their abstract:
We study how ballot completion levels in Colorado responded to the adoption of universal vote by mail elections (VBM). VBM systems are among the most widespread and significant election reforms that states have adopted in modern elections. VBM elections provide voters more time to become informed about ballot choices and opportunities to research their choices at the same time as they fill out their ballots. By creating a more information-rich voting environment, VBM should increase ballot completion, especially among peripheral voters. The empirical results show that VBM elections lead to greater ballot completion, but that this effect is only substantial in presidential elections.
This is certainly a topic that needs further research, in particular, determining how to further increase ballot completion rates in lower-profile and lower-information elections.
Two of my best friends and closest confidants in this business, Paul Gronke and David Becker, just exchanged tweets about using early and absentee voting as an early warning device. What this exchange brought to mind was the Florida congressional district 13 race in 2006, which I played a small part in as an expert witness for one of the candidates, Christine Jennings. (You can see my old expert report here.)
First, the setting: The 2006 Florida 13th congressional district race was, at the time, the most expensive congressional election in American history. It pitted Republican Vern Buchanan against the Democrat Christine Jennings. Buchanan was eventually declared winner by 369 votes, out of over 238,000 cast for the candidates.
What drew this election to national attention was the undervote rate for this race and, in particular, the undervote rate in Sarasota County, where Jennings had her strongest support. In Sarasota County, 12.9% of the ballots were blank for the 13th CD race. In the rest of the district, the undervote rate was 2.5%. In the end, it was estimated that the number of “lost votes” in Sarasota County was between 13,209 and 14,739. Because the excessive undervotes were in precincts that disproportionately favored Jennings, it was clear that the excess undervotes in Sarasota County caused Buchanan’s victory.
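The “lost votes” estimate above follows from simple arithmetic: excess undervotes are the undervotes observed beyond what the baseline rate elsewhere in the district would predict. Here is a sketch of that calculation; the 100,000-ballot figure below is purely illustrative, not Sarasota County’s actual ballot count:

```python
def excess_undervotes(total_ballots, observed_rate, baseline_rate):
    """Estimate 'lost votes' as the undervotes a jurisdiction recorded
    above the baseline undervote rate seen elsewhere in the contest.

    Rates are expressed as fractions (0.129 for 12.9%)."""
    return total_ballots * (observed_rate - baseline_rate)

# Illustrative only: a hypothetical county casting 100,000 ballots
# with Sarasota's 12.9% undervote rate against the district's 2.5%.
lost = excess_undervotes(100_000, 0.129, 0.025)
```

Applying this to Sarasota County’s actual ballot total (with appropriate choices of baseline) is what yields estimates in the 13,209 to 14,739 range reported above.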
(As an aside, this was my first court case. The biggest surprise to me, among many, was that the other side of the case — which consolidated the county, ES&S, and Buchanan — pretty much conceded that Buchanan’s victory was due to the undervote problem in Sarasota County. But, that’s a story for another day.)
Here’s another piece of background, which gets us closer to the Becker/Gronke exchange: Part of the evidence that there was something wrong with the voting machines, and not just Sarasota County voters choosing to abstain, was that the undervote rate in early voting and on Election Day was much greater than in absentee voting in that county. This is important because early voting and Election Day voting were conducted on paperless iVotronic machines, whereas absentee voting was conducted on scanned paper ballots.
The absentee undervote rate in Sarasota County was 2.5%, which was close to that of Charlotte County (3.1%), which was also in the district. The early voting undervote rate was 17.6%, compared to 2.3% in Charlotte; the Election Day undervote rate was 13.9%, compared to Charlotte’s 2.4%.
Here’s the factoid from this case that the Becker/Gronke exchange brought to mind. Note the difference in the undervote rate in Sarasota County between early voting (17.6%) and Election Day (13.9%). The Election Day rate wasn’t dramatically lower than the early voting rate, but it was lower, and probably not by chance.
During the early voting period, voters complained to poll workers and the county election office that (1) they hadn’t seen the Jennings/Buchanan race on the computer screen and (2) they had had a hard time getting their vote to register for the correct candidate when they did see the race on the screen. This led the county to instruct precinct poll workers on Election Day to remind voters of the Jennings/Buchanan race, and to be careful in making their selections on the touchscreen.
Of course, the fact that the undervote rate on Election Day didn’t get back down to the 2%-3% range points out the limitations of such verbal warnings. And, I know that Jennings supporters believed that the county’s response was inadequate. But, the big point here is that this is one good example of how early voting can serve as a type of rehearsal for Election Day, and how election officials can diagnose major unanticipated problems with the ballots or equipment. It’s happened before.
Thus, I agree with David Becker, that early voting can definitely help election officials gain early warning about problems with their equipment, systems, or procedures. I would amend the point — and I think he would agree — that this is true even if we’re not concerned about cybersecurity. Preparing for elections requires that millions of small details get done correctly, and early voting can provide confirmation that the preparations are in order.
I don’t know of evidence that absentee voting serves as quite the same type of early-warning system, but it makes intuitive sense, and I would love to hear examples.
Two final cautionary thoughts about the “early voting as early warning” idea, as attractive as it is. First, I’m not convinced that many, or even any, voters will vote early because they want to help shake out the system. Indeed, if a voter believes there are vulnerabilities that will only become visible during early voting, but that are likely to be fixed by Election Day, that belief could drive them to wait until Election Day: let other people shake out the system and risk discovering that something needs to be tweaked or fixed.
Second, we always need to be aware of the “Robinson Crusoe fallacy” in thinking about how to respond to risk. The Robinson Crusoe fallacy, a term coined by game theorist George Tsebelis in a classic political science article, refers to the mistakes one can make when we think we are playing a game against nature, rather than playing a game against a rational opponent. If the game is against nature, the strategies you choose don’t influence the strategies the opponents choose. (Think about the decision whether to bring an umbrella with you if there is a possibility of rain. Despite what my wife and I joke about all the time, bringing the umbrella doesn’t lower the chance of rain.) If the opponent is rational, your actions will affect the opponent’s actions. (Tsebelis’s example is the decision to speed when you’re in a hurry and the police might be patrolling.)
A bad guy trying to disrupt the election will probably not want to tip his hand until as late as possible, in order to have maximal effect. Thus, “early voting as early warning” is probably most effective as a strategy to guard against major problems on Election Day that arise from honest mistakes or unanticipated events.
I don’t know if “early voting as early warning” is the best justification for voting early, but it’s not a bad one, either. It’s probably best at sussing out mistakes, and probably will be of limited use in uncovering attacks intended to hurt Election Day.
But, that’s OK. I continue to be convinced that if any voter is going to run into a roadblock in 2018 in getting her vote counted as intended, it will probably be because of a problem related to good, old-fashioned election administration. The need to ensure that the blocking-and-tackling of election administration is properly attended to is reason enough for me to learn about the system from early voting.
“None of the above” votes, strategic abstention, and mis-marked ballots are sometimes indications of voter dissatisfaction with the choices available in an election. This phenomenon has been studied in the research literature; for example, Lucas Nunez, Rod Kiewiet, and I discuss it at length in a recent VTP working paper (“A Taxonomy of Protest Voting,” also available in final published form in the Annual Review of Political Science).
I’m always looking for examples of these sorts of issues in contemporary elections, and this story in the New York Times caught my attention (“In Cambodia, Dissenting Voters Find Ways to Say ‘None of the Above’”). According to the story, in the recent election in Cambodia about 600,000 of the ballots cast, or 8.6% of the total, were “inadmissible.”
While it is difficult, without further information, to really discern the underlying rationale for all of these “inadmissible” ballots (as Lucas, Rod, and I argue in our paper), this seems like a high rate of problematic ballots. Combined with the qualitative reports from Cambodian voters quoted in the New York Times article, it suggests that voter dissatisfaction likely lies behind many of these problematic ballots.
It would be quite interesting, though, to get voting-station-level or other micro-data to better understand possible voter intent with respect to the “inadmissible” ballots cast in this election.