Author Archives: Michael Alvarez

It’s happened again — more DMV-generated registration snafus in California

Late this past week, California newspapers carried stories about yet another snafu in the DMV’s implementation of the state’s “motor voter” process. This time, the DMV appears to have incorrectly entered people into the voter registration system, though exactly how that happened remains unclear.

For example, the LA Times reported on the new snafus in “California’s DMV finds 3,000 more unintended voter registrations”:

Of the 3,000 additional wrongly enrolled voters, DMV officials said that as many as 2,500 had no prior history of registration and that there’s no clear answer as to what mistake was made that caused registration data for them to be sent to California’s secretary of state.

The Secretary of State’s Office is reportedly going to drop these unintended registrations from the state’s database.

As we near the November 2018 midterm elections, and as there is a lot of energy and enthusiasm in California about these elections, there’s no doubt that the voter registration system will come under some stress as we approach election day.

Our advice is that if you are concerned about your voter registration status, check it. The Secretary of State provides a service that you can use to check whether you are registered to vote. Or if you’d rather not use that service, you can contact your county election official directly (many of them have applications on their websites to verify your registration status).

Voter registration snafus in California

There’s a story circulating today that another round of voter registration snafus has surfaced in California. This story in today’s LA Times, “More than 23,000 Californians were registered to vote incorrectly by state DMV,” has some details about what appears to have happened:

“The errors, which were discovered more than a month ago, happened when DMV employees did not clear their computer screens between customer appointments. That caused some voter information from the previous appointment, such as language preference or a request to vote by mail, to be “inadvertently merged” into the file of the next customer, Shiomoto and Tong wrote. The incorrect registration form was then sent to state elections officials, who used it to update California’s voter registration database.”

This comes on the heels of reports before the June 2018 primary in California of potential duplicate voter registration records being produced by the DMV, as well as the snafu in Los Angeles County that left approximately 118,000 registered voters off the election-day voting rolls.

These are the sorts of issues in voter registration databases that my research group is looking into, using data from the Orange County Registrar of Voters. Since early this spring, we have been developing methodologies and applications to scan the County’s voter registration database for situations that might require additional examination by the County’s election staff. Soon we’ll release more information about our methodology, along with some of the results. For more information about this project, you can head to our Monitoring the Election website, or stay tuned to Election Updates.
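To give a flavor of what one piece of such a scan can look like, here is a minimal sketch of flagging possible duplicate records; the field names and the matching rule are hypothetical illustrations, not our group’s actual methodology:

```python
from collections import defaultdict

def flag_possible_duplicates(records):
    """Group voter records by (normalized name, birth date) and return
    the groups with more than one record, which may warrant review
    by election staff before any action is taken."""
    groups = defaultdict(list)
    for rec in records:
        key = (rec["name"].strip().lower(), rec["birth_date"])
        groups[key].append(rec["voter_id"])
    return {key: ids for key, ids in groups.items() if len(ids) > 1}

# Invented example records
records = [
    {"voter_id": "A1", "name": "Jane Smith",  "birth_date": "1980-02-14"},
    {"voter_id": "B2", "name": "jane smith ", "birth_date": "1980-02-14"},
    {"voter_id": "C3", "name": "John Doe",    "birth_date": "1975-07-01"},
]
print(flag_possible_duplicates(records))
# flags A1 and B2 as a possible duplicate pair
```

Note that a scan like this only surfaces candidates for human review; deciding whether two records really belong to the same voter requires careful follow-up by election officials.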

Let’s not forget the voters

Recently my colleague and co-blogger, Charles Stewart, wrote a very interesting post, “Voters Think about Voting Machines.” His piece reminds me of a point that Charles and I have been making for a long time: that election officials should focus attention on the opinions of voters in their jurisdictions. After all, those voters are among the primary customers for the administrative services that election officials provide.

Of course, there are lots of ways that election officials can get feedback about the quality of their administrative services, ranging from keeping data on interactions with voters to doing voter satisfaction and confidence surveys.

But as election officials throughout the nation think about upcoming technological and administrative changes to the services they provide voters, they might consider conducting proactive research: determining, in advance of any administrative or technological change, what voters think about their current service, understanding what changes voters might want, and seeing what might be causing their voters to desire changes in administrative services or voting technologies.

This is the sort of question that drove Ines Levin, Yimeng Li, and me to look at what might drive voter opinions about the deployment of new voting technologies in our recent paper, “Fraud, convenience, and e-voting: How voting experience shapes opinions about voting technology.” The paper was recently published in American Politics Research; we use survey experiments to try to determine which factors drive voters to prefer certain types of voting technologies over others. (For readers who cannot access the published version at APR, here is a pre-publication version at the Caltech/MIT Voting Technology Project’s website.)

Here’s the abstract, summarizing the paper:

In this article, we study previous experiences with voting technologies, support for e-voting, and perceptions of voter fraud, using data from the 2015 Cooperative Congressional Election Study. We find that voters prefer systems they have used in the past, and that priming voters with voting fraud considerations causes them to support lower-tech alternatives to touch-screen voting machines — particularly among voters with previous experience using e-voting technologies to cast their votes. Our results suggest that as policy makers consider the adoption of new voting systems in their states and counties, they would be well-served to pay close attention to how the case for new voting technology is framed.

This type of research is quite valuable for election officials and policy makers, as we argue in the paper. How administrative or technological change is framed to voters, who are the primary consumers of these services and technologies, can help facilitate the transition to new policies, procedures, and technologies.

Voting by mail and ballot completion

Andrew Menger, Bob Stein, and Greg Vonnahme have an interesting paper that is now forthcoming at American Politics Research, “Reducing the Undervote With Vote by Mail.” Here’s the APR version, and here’s a link to the pre-publication (ungated) version.

The key result in their analysis of data from Colorado is a modest increase in ballot completion rates in VBM elections in that state, particularly in higher-profile presidential elections. Here’s their abstract:

We study how ballot completion levels in Colorado responded to the adoption of universal vote by mail elections (VBM). VBM systems are among the most widespread and significant election reforms that states have adopted in modern elections. VBM elections provide voters more time to become informed about ballot choices and opportunities to research their choices at the same time as they fill out their ballots. By creating a more information-rich voting environment, VBM should increase ballot completion, especially among peripheral voters. The empirical results show that VBM elections lead to greater ballot completion, but that this effect is only substantial in presidential elections.

This is certainly a topic that needs further research, in particular, determining how to further increase ballot completion rates in lower-profile and lower-information elections.

“None of the above” in Cambodia

“None of the above” votes, strategic abstention, and mis-marked ballots are sometimes indications of voter dissatisfaction with the choices available in an election. This phenomenon has been studied in the research literature; for example, Lucas Nunez, Rod Kiewiet, and I discuss it at length in a recent VTP working paper (“A Taxonomy of Protest Voting,” also available in final published form in the Annual Review of Political Science).

I’m always looking for examples of these sorts of issues in contemporary elections, and this story in the New York Times caught my attention. According to the story (“In Cambodia, Dissenting Voters Find Ways to Say ‘None of the Above’”), in the recent election in Cambodia, of the roughly 600,000 ballots cast, 8.6% were “inadmissible.”

While it is difficult, without further information, to discern the underlying rationale for all of these “inadmissible” ballots (as Lucas, Rod, and I argue in our paper), this seems like a high rate of problematic ballots. Combined with the qualitative reports from Cambodian voters quoted in the New York Times article, it suggests that voter dissatisfaction likely lies behind many of them.

It would be quite interesting, though, to get voting-station-level or other micro-data to better understand voter intent with respect to the “inadmissible” ballots cast in this election.

Research on instant-runoff and ranked-choice elections

Given the interest in Maine’s ranked-choice election tomorrow, I thought that this recent paper with Ines Levin and Thad Hall might be of interest. The paper was recently published in American Politics Research, “Low-information voting: Evidence from instant-runoff elections.” Here’s the paper’s abstract:

How do voters make decisions in low-information contests? Although some research has looked at low-information voter decision making, scant research has focused on data from actual ballots cast in low-information elections. We focus on three 2008 Pierce County (Washington) Instant-Runoff Voting (IRV) elections. Using individual-level ballot image data, we evaluate the structure of individual rankings for specific contests to determine whether partisan cues underlying partisan rankings are correlated with choices made in nonpartisan races. This is the first time that individual-level data from real elections have been used to evaluate the role of partisan cues in nonpartisan races. We find that, in partisan contests, voters make avid use of partisan cues in constructing their preference rankings, rank-ordering candidates based on the correspondence between voters’ own partisan preferences and candidates’ reported partisan affiliation. However, in nonpartisan contests where candidates have no explicit partisan affiliation, voters rely on cues other than partisanship to develop complete candidate rankings.

There’s a good review of the literature on voting behavior in ranked-choice or instant-runoff elections in the paper, for folks interested in learning more about what research has been done so far on this topic.
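For readers unfamiliar with how instant-runoff tabulation actually works (the mechanism that the ranked ballots in studies like this feed into), here is a minimal sketch of the elimination rounds; the ballots are invented for illustration, and this is a generic IRV count, not the procedure used in any particular jurisdiction:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: repeatedly count the highest-ranked continuing
    choice on each ballot, eliminating the last-place candidate until
    someone holds a majority of the continuing ballots."""
    remaining = {c for b in ballots for c in b}
    while True:
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:          # highest-ranked choice still in the race
                if choice in remaining:
                    tallies[choice] += 1
                    break                  # exhausted ballots contribute nothing
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total:              # strict majority of continuing ballots
            return leader
        remaining.remove(min(tallies, key=tallies.get))

# Invented example: A and B tie on first choices; C is eliminated,
# and C's ballot transfers to its second choice.
ballots = [
    ["A", "B", "C"], ["A", "C", "B"],
    ["B", "C", "A"], ["B", "A", "C"],
    ["C", "B", "A"],
]
print(irv_winner(ballots))
```

In this example A and B each start with two first-choice votes and C with one, so C is eliminated; C’s ballot then counts for B, who wins 3 to 2. It is exactly these transfers across rankings that make ballot-image data so informative about voter decision making.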

“Fraud, convenience, and e-voting”

Ines Levin, Yimeng Li, and I recently published our paper “Fraud, convenience, and e-voting: How voting experience shapes opinions about voting technology” in the Journal of Information Technology and Politics. Here’s the paper’s abstract:

In this article, we study previous experiences with voting technologies, support for e-voting, and perceptions of voter fraud, using data from the 2015 Cooperative Congressional Election Study. We find that voters prefer systems they have used in the past, and that priming voters with voting fraud considerations causes them to support lower-tech alternatives to touch-screen voting machines — particularly among voters with previous experience using e-voting technologies to cast their votes. Our results suggest that as policy makers consider the adoption of new voting systems in their states and counties, they would be well-served to pay close attention to how the case for new voting technology is framed.

The substantive results will be of interest to researchers and policymakers. The methodology we use — survey experiments — should also be of interest to those who are trying to determine how to best measure the electorate’s opinions about potential election reforms.

Our Orange County project

It’s been a busy few weeks here in California for election geeks, specifically for our research group at Caltech. We’ve launched a pilot test of an election integrity project, in collaboration with Orange County, where we have been using the recent primary here in California to test various methodologies for helping evaluate election administration.

At this point, our goal is to work closely with the Orange County Registrar of Voters to understand what evaluative tools they believe are most helpful to them, and to also determine what sorts of data we can readily obtain during the period immediately before and after a major statewide election.

We recently launched a website that describes the project, and where we are building a dashboard that summarizes the various research products as we produce them.

The website is Monitoring the Election, and if you navigate there you’ll see descriptions of the goals of this project, and some of the preliminary analytics we have produced regarding the June 5, 2018 primary in Orange County. At present, the dashboard has a visualization of the Twitter data we are collecting, an analysis of vote by mail ballot mailing and return, and our observations of early voting in Orange County. In the next day or two we will add some first-pass post-election forensics, a preliminary report on our election day observations, and an observation report regarding the risk-limiting audit that OCRV will conduct early this week.

Again, the project is in pilot phase. We will be evaluating these various analytic tools over the summer, and we will determine which we can produce quickly for the November 2018 general election in Orange County.

Stay tuned!

Deja vu? The National Academies of Science voter registration databases research

Over the past few months, I’ve had this strange sense of deja vu, with all of the news about potential attacks on state voter registration databases, and more recently the questions that have been asked about the security and integrity of state voter registries.

Why? Because many of the questions that are being asked these days about the integrity of US voter registration databases, in particular by the “Presidential Commission on Election Integrity” (or “Pence commission”), have already been examined in the National Academies of Science (NAS) 2010 study of voter registration databases.

The integrity of state voter registries was exhaustively studied back in 2010, when I served on the NAS panel examining how to improve them. That year, our panel issued its final report, “Improving State Voter Registration Databases”.

I’d call upon the members of the “Pence commission” to read this report prior to their first meeting next week.

I think that if the commission members read this report, they will find that many of the questions they seem to be asking about the security, reliability, accuracy, and integrity of statewide voter registration databases were studied by the NAS panel back in 2010.

The NAS committee had an all-star roster. It included world-renowned experts on computer security, databases, record linkage and matching, and election administration, as well as a wide range of election administrators. The committee met frequently with additional experts, reviewed a broad body of research, and produced its comprehensive 2010 report on the technical considerations for voter registries (see Chapter 3 of the report, “Technical Considerations for Voter Registration Databases”). The committee also produced a series of short-term and long-term recommendations for improving state registries (Chapters 5 and 6 of the report).

At this point in time, the long-term recommendations from the NAS report bear repeating.

  • Provide funding to support operations, maintenance, and upgrades.
  • Improve data collection and entry.
  • Improve matching procedures.
  • Improve privacy, security, and backup.
  • Improve database interoperability.

As we look towards the 2018 election cycle, my assessment is that scholars and election administrators need to turn their attention to studying matching procedures, improving interoperability, and making these data files both more secure and more private. States need to provide the necessary funding for this research and for these improvements. I’d love to see the “Pence commission” engage in a serious discussion of how to improve funding for research on, and technical improvements to, voter registration systems.
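To make the “improve matching procedures” recommendation a bit more concrete, here is a minimal record-linkage sketch using only Python’s standard library; the field names, the similarity threshold, and the records are all hypothetical, and production registry matching is considerably more sophisticated than this:

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Normalized edit-based similarity in [0, 1] between two names."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_records(incoming, registry_records, threshold=0.85):
    """Return registry records whose birth date matches the incoming
    record exactly and whose name is close enough to it, a simple
    blocking-plus-scoring pattern common in record linkage."""
    return [
        r for r in registry_records
        if r["birth_date"] == incoming["birth_date"]
        and name_similarity(r["name"], incoming["name"]) >= threshold
    ]

# Invented example: a slightly misspelled name with a matching birth date
incoming = {"name": "Jonathan Q. Public", "birth_date": "1980-02-14"}
registry = [
    {"name": "Jonathon Q Public", "birth_date": "1980-02-14"},
    {"name": "Jane Roe",          "birth_date": "1980-02-14"},
]
print(match_records(incoming, registry))
```

The choice of threshold drives the trade-off the NAS report worries about: set it too low and distinct voters get merged, too high and the same voter appears as two records, which is one reason matching procedures deserve sustained research and funding.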

So my reaction to the recent requests from the “Pence commission” is that there’s really no need to request detailed state registration and voter information from the states; the basic research on the strengths and weaknesses of state voter registries has been done. Just read the 2010 NAS report; you’ll learn all you need to know about the integrity of state voter registries and the steps still needed to improve their security, reliability, and accuracy.

Pre-registration of 16- and 17-year-olds in California

California has recently launched a program that allows eligible 16- and 17-year-olds in California to pre-register to vote. When they turn 18, their registration becomes active. Here’s more information from the CA Secretary of State’s website.

It’s going to be quite interesting in 2018 and 2020 to evaluate how this initiative works. Will those who pre-register be more likely to turn out to vote than those who do not (controlling, obviously, for all of the factors that might lead 18-year-olds to register and vote)? Who uses this program, and does it have any consequences for how organizations and campaigns conduct voter registration drives and get-out-the-vote activities? Lots of interesting questions here for future study!