Voter registration snafus in California

There’s a story circulating today that another round of voter registration snafus has surfaced in California. This story in today’s LA Times, “More than 23,000 Californians were registered to vote incorrectly by state DMV,” has some details about what appears to have happened:

“The errors, which were discovered more than a month ago, happened when DMV employees did not clear their computer screens between customer appointments. That caused some voter information from the previous appointment, such as language preference or a request to vote by mail, to be “inadvertently merged” into the file of the next customer, Shiomoto and Tong wrote. The incorrect registration form was then sent to state elections officials, who used it to update California’s voter registration database.”

This comes on the heels of reports before the June 2018 primary in California of potential duplicate voter registration records being produced by the DMV, as well as the snafu in Los Angeles County that left approximately 118,000 registered voters off the election-day voting rolls.

These are the sorts of issues in voter registration databases that my research group is looking into, using data from the Orange County Registrar of Voters. Since early this spring, we have been developing methodologies and applications that scan the County’s voter registration database to identify situations that might require additional examination by the County’s election staff. Soon we’ll be releasing more information about our methodology, and some of the results. For more information about this project, you can head to our Monitoring the Election website, or stay tuned to Election Updates.
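To give a concrete flavor of what such a scan can look like, here is a minimal sketch of one small piece of it: flagging possible duplicate records for human review. The file name and column names below are hypothetical stand-ins, not the County’s actual data format, and a flag only means a record deserves a closer look.

```python
# Minimal sketch of a duplicate-record scan over a voter file extract.
# The file path and column names are hypothetical; a real county
# extract will differ and needs careful normalization first.
import pandas as pd

def flag_possible_duplicates(voters: pd.DataFrame) -> pd.DataFrame:
    """Return records that agree on name and birth date. These are
    candidates for review by election staff, not automatic judgments
    that anything is wrong."""
    key = ["last_name", "first_name", "date_of_birth"]
    dupes = voters[voters.duplicated(subset=key, keep=False)]
    # Sort so that records sharing a key are adjacent, which makes
    # side-by-side review of the differing fields easy.
    return dupes.sort_values(key)

voters = pd.read_csv("voter_file_extract.csv")  # hypothetical extract
review_queue = flag_possible_duplicates(voters)
print(f"{len(review_queue)} records flagged for human review")
```

In practice, exact matching like this is only a first pass; catching near-duplicates (typos, nicknames, transposed dates) requires fuzzy matching, which is part of what makes this kind of work interesting.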

Let’s not forget the voters

Recently my colleague and co-blogger, Charles Stewart, wrote a very interesting post, “Voters Think about Voting Machines.” His piece reminds me of a point that Charles and I have been making for a long time — that election officials should focus attention on the opinions of voters in their jurisdictions. After all, those voters are among the primary customers for the administrative services that election officials provide.

Of course, there are lots of ways that election officials can get feedback about the quality of their administrative services, ranging from keeping data on interactions with voters to doing voter satisfaction and confidence surveys.

But as election officials throughout the nation think about upcoming technological and administrative changes to the services they provide voters, they might consider conducting proactive research: determining, in advance of any change, what voters think about current services, what changes voters might want, and what might be driving voters to desire those changes.

This is the sort of question that drove Ines Levin, Yimeng Li, and me to look at what might shape voter opinions about the deployment of new voting technologies in our recent paper, “Fraud, convenience, and e-voting: How voting experience shapes opinions about voting technology.” This paper was recently published in the Journal of Information Technology and Politics, and we use survey experiments to try to determine what factors lead voters to prefer certain types of voting technologies over others. (For readers who cannot access the published version at the journal, here is a pre-publication version at the Caltech/MIT Voting Technology Project’s website.)

Here’s the abstract, summarizing the paper:

In this article, we study previous experiences with voting technologies, support for e-voting, and perceptions of voter fraud, using data from the 2015 Cooperative Congressional Election Study. We find that voters prefer systems they have used in the past, and that priming voters with voting fraud considerations causes them to support lower-tech alternatives to touch-screen voting machines — particularly among voters with previous experience using e-voting technologies to cast their votes. Our results suggest that as policy makers consider the adoption of new voting systems in their states and counties, they would be well-served to pay close attention to how the case for new voting technology is framed.

This type of research is quite valuable for election officials and policy makers, as we argue in the paper. How administrative or technological change is framed to voters — who are the primary consumers of these services and technologies — can really help to facilitate the transition to new policies, procedures, and technologies.

Voting by mail and ballot completion

Andrew Menger, Bob Stein, and Greg Vonnahme have an interesting paper that is now forthcoming at American Politics Research, “Reducing the Undervote With Vote by Mail.” Here’s the APR version, and here’s a link to the pre-publication (ungated) version.

The key result in their analysis of data from Colorado is that they find a modest increase in ballot completion rates in VBM elections in that state, in particular in higher-profile presidential elections. Here’s their abstract:

We study how ballot completion levels in Colorado responded to the adoption of universal vote by mail elections (VBM). VBM systems are among the most widespread and significant election reforms that states have adopted in modern elections. VBM elections provide voters more time to become informed about ballot choices and opportunities to research their choices at the same time as they fill out their ballots. By creating a more information-rich voting environment, VBM should increase ballot completion, especially among peripheral voters. The empirical results show that VBM elections lead to greater ballot completion, but that this effect is only substantial in presidential elections.
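For readers unfamiliar with the outcome being studied: ballot completion in a given contest is simply the share of ballots cast that record a valid vote in that contest, the complement of the undervote rate. A toy calculation with made-up numbers:

```python
# Ballot completion for a contest: the share of all ballots cast that
# contain a valid vote in that contest (one minus the undervote rate).
def completion_rate(contest_votes: int, ballots_cast: int) -> float:
    return contest_votes / ballots_cast

# Made-up numbers for a down-ballot contest on 100,000 ballots:
print(f"{completion_rate(88_500, 100_000):.1%}")  # 88.5% completed
```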

This is certainly a topic that needs further research, in particular, determining how to further increase ballot completion rates in lower-profile and lower-information elections.

Blast from the Past: How Early Voting Can Serve as an Early Warning about Voting Problems

Two of my best friends and closest confidants in this business, Paul Gronke and David Becker, just exchanged tweets about using early and absentee voting as an early warning device.  What this exchange brought to mind was the Florida congressional district 13 race in 2006, which I played a small part in as an expert witness for one of the candidates, Christine Jennings.  (You can see my old expert report here.)

First, the setting:  The 2006 Florida 13th congressional district race was, at the time, the most expensive congressional election in American history.  It pitted Republican Vern Buchanan against Democrat Christine Jennings.  Buchanan was eventually declared the winner by 369 votes, out of over 238,000 cast for the candidates.

What drew this election to national attention was the undervote rate for this race and, in particular, the undervote rate in Sarasota County, where Jennings had her strongest support.  In Sarasota County, 12.9% of the ballots were blank for the 13th CD race.  In the rest of the district, the undervote rate was 2.5%.  In the end, it was estimated that the number of “lost votes” in Sarasota County was between 13,209 and 14,739.  Because these excess undervotes were concentrated in precincts that disproportionately favored Jennings, it was clear that they caused Buchanan’s victory.
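The arithmetic behind a “lost votes” estimate of this kind is worth making explicit: multiply the ballots cast in the affected county by the gap between its undervote rate and a baseline rate. In the sketch below the rates are the ones reported above, but the ballot total is an assumed round number for illustration (the actual estimates used several baselines, which is one reason they come as a range):

```python
# Back-of-the-envelope "lost votes" estimate: ballots cast times the
# excess of the observed undervote rate over a baseline rate.
def lost_votes(ballots_cast: int, observed_rate: float, baseline_rate: float) -> float:
    return ballots_cast * (observed_rate - baseline_rate)

# 12.9% undervote in Sarasota County vs. a 2.5% baseline elsewhere in
# the district; the 140,000 ballot total is assumed, not certified.
print(round(lost_votes(140_000, 0.129, 0.025)))  # ~14,560 with these inputs
```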

(As an aside, this was my first court case.  The biggest surprise to me, among many, was that the other side of the case — which consolidated the county, ES&S, and Buchanan — pretty much conceded that Buchanan’s victory was due to the undervote problem in Sarasota County.  But, that’s a story for another day.)

Here’s another piece of background, which gets us closer to the Becker/Gronke exchange:  Part of the evidence that there was something wrong with the voting machines, and not just Sarasota County voters choosing to abstain, was that the undervote rate in early voting and on Election Day was much greater than the undervote rate in absentee voting in that county.  This is important because early voting and Election Day voting were conducted on paperless iVotronic machines, whereas absentee voting was conducted on paper ballots that were scanned.

The absentee undervote rate in Sarasota County was 2.5%, which was close to that of Charlotte County (3.1%), which was also in the district.  The early voting undervote rate was 17.6%, compared to 2.3% in Charlotte; the Election Day undervote rate was 13.9%, compared to Charlotte’s 2.4%.

Here’s the factoid from this case that the Becker/Gronke exchange brought to mind.  Note the difference in the undervote rate in Sarasota County between early voting (17.6%) and Election Day (13.9%).  The Election Day rate wasn’t dramatically lower than the early voting rate, but it was lower, and probably not by chance.
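“Probably not by chance” is easy to check with a standard two-proportion test. In the sketch below the undervote rates are the ones reported above, but the ballot counts are assumed round numbers, since the actual denominators aren’t given here:

```python
# Two-proportion z-test comparing the early-voting and Election Day
# undervote rates. The ballot counts are assumed for illustration.
from math import sqrt

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> float:
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 17.6% undervote among an assumed 20,000 early ballots vs. 13.9%
# among an assumed 100,000 Election Day ballots:
print(f"z = {two_prop_z(0.176, 20_000, 0.139, 100_000):.1f}")
# z comes out in the double digits, far beyond conventional thresholds.
```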

During the early voting period, voters complained to poll workers and the county election office that (1) they hadn’t seen the Jennings/Buchanan race on the computer screen and (2) they had had a hard time getting their vote to register for the correct candidate when they did see the race on the screen.  This led the county to instruct precinct poll workers on Election Day to remind voters of the Jennings/Buchanan race, and to be careful in making their selections on the touchscreen.

Of course, the fact that the undervote rate on Election Day didn’t get back down to the 2%-3% range points out the limitations of such verbal warnings.  And, I know that Jennings supporters believed that the county’s response was inadequate.  But, the big point here is that this is one good example of how early voting can serve as a type of rehearsal for Election Day, and how election officials can diagnose major unanticipated problems with the ballots or equipment.  It’s happened before.

Thus, I agree with David Becker that early voting can definitely help election officials gain early warning about problems with their equipment, systems, or procedures.  I would amend the point — and I think he would agree — that this is true even if we’re not concerned about cybersecurity.  Preparing for elections requires that millions of small details get done correctly, and early voting can provide confirmation that the preparations are in order.

I don’t know of evidence that absentee voting serves as quite the same type of early-warning system, but it makes intuitive sense, and I would love to hear examples.

Two final cautionary thoughts about the “early voting as early warning” idea, as attractive as it is.  First, I’m not convinced that many, or even any, voters will vote early because they want to help shake out the system.  Indeed, there’s the possibility that a voter who believes there are vulnerabilities that will only become visible during early voting, but that are likely to be fixed by Election Day, will be driven to wait until Election Day.  Let other people shake out the system and risk discovering that something needs to be tweaked or fixed.

Second, we always need to be aware of the “Robinson Crusoe fallacy” in thinking about how to respond to risk.  The Robinson Crusoe fallacy, a term coined by game theorist George Tsebelis in a classic political science article, refers to the mistakes one can make when we think we are playing a game against nature rather than against a rational opponent.  If the game is against nature, your choice of strategy doesn’t influence what the opponent does.  (Think about the decision whether to bring an umbrella with you if there is a possibility of rain.  Despite what my wife and I joke about all the time, bringing the umbrella doesn’t lower the chance of rain.)  If the opponent is rational, your actions will affect the opponent’s actions.  (Tsebelis’s example is the decision to speed when you’re in a hurry and the police might be patrolling.)
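Tsebelis’s result can be made concrete. In the mixed-strategy equilibrium of a 2x2 game, each player’s probabilities are pinned down by the other player’s payoffs, which is why raising the fine for speeding lowers how often the police patrol in equilibrium but leaves the speeding rate untouched. Here is a minimal sketch with made-up payoff numbers:

```python
# Mixed-strategy equilibrium of Tsebelis's driver/police inspection
# game. All payoff numbers are illustrative.
def equilibrium(gain, fine, catch_value, patrol_cost, miss_cost):
    """Driver gets 'gain' from speeding unpatrolled and pays 'fine' if
    caught; police earn 'catch_value' for catching a speeder, pay
    'patrol_cost' to patrol, and suffer 'miss_cost' when speeding goes
    unpatrolled."""
    # Police patrol just often enough to make the driver indifferent,
    # so the patrol rate depends on the DRIVER's payoffs:
    p_patrol = gain / (gain + fine)
    # Drivers speed just often enough to make the police indifferent,
    # so the speeding rate depends on the POLICE's payoffs:
    p_speed = patrol_cost / (catch_value + miss_cost)
    return p_patrol, p_speed

print(equilibrium(gain=10, fine=50, catch_value=30, patrol_cost=5, miss_cost=20))
# Doubling the fine changes the patrol rate, not the speeding rate:
print(equilibrium(gain=10, fine=100, catch_value=30, patrol_cost=5, miss_cost=20))
```

In other words, treating the police like the weather leads to exactly the wrong prediction about what a stiffer fine will do.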

A bad guy trying to disrupt the election will probably not want to tip his hand until as late as possible, to have maximal effect. Thus, “early voting as early warning” is probably most effective as a strategy to guard against major problems on Election Day that occur due to honest mistakes or unanticipated events.

I don’t know if “early voting as early warning” is the best justification for voting early, but it’s not a bad one, either.  It’s probably best at sussing out mistakes, and probably of limited use in uncovering attacks aimed at Election Day.

But, that’s OK.  I continue to be convinced that if any voter is going to run into a roadblock in 2018 in getting her vote counted as intended, it will probably be because of a problem related to good, old-fashioned election administration.  The need to ensure that the blocking-and-tackling of election administration is properly attended to is reason enough for me to learn about the system from early voting.

“None of the above” in Cambodia

“None of the above” votes, strategic abstention, and mis-marked ballots are sometimes indications of voter dissatisfaction with the choices available in an election. This phenomenon has been studied in the research literature; for example, Lucas Nunez, Rod Kiewiet, and I wrote a recent VTP working paper that discusses it at length (“A Taxonomy of Protest Voting”, also available in final published form in the Annual Review of Political Science).

I’m always looking for examples of these sorts of issues in contemporary elections, and this story in the New York Times caught my attention. According to the story (“In Cambodia, Dissenting Voters Find Ways to Say ‘None of the Above’”), in the recent election in Cambodia about 600,000 ballots, 8.6% of those cast, were “inadmissible”.

While it is difficult, without further information, to really discern the underlying rationale for all of these “inadmissible” ballots (as Lucas, Rod, and I argue in our paper), this seems like a high rate of problematic ballots. Combined with the qualitative reports from actual Cambodian voters quoted in the New York Times article, it indicates that voter dissatisfaction is likely behind many of these problematic ballots.

It would be quite interesting, though, to get voting-station-level or other micro-data to better understand possible voter intent with respect to the “inadmissible” ballots that were cast in this election.

The Blue Shift Comes to the California Primary

Ned Foley alerted the world to the “blue shift” that has begun to characterize the trends in vote totals after the initial tranche of results is released on election night.  The blue shift is the tendency for presidential vote results to trend in a Democratic direction as the count proceeds from ballots counted on Election Day to ballots counted during the canvass period — both absentee and provisional ballots.

For instance, in 2016, the nationwide election-night returns had Clinton leading Trump 48.14% to 47.12%, or by 1.02 points among the 124.2 million votes accounted for on Election Day.  By the time all the votes were counted in all the states, Clinton ended up leading 48.02% to 45.93%, or 2.09 points, among the 137.1 million votes eventually counted.  The growth in Clinton’s lead was a blue shift of 1.07 points (i.e., 2.09-1.02).

(The election night totals are taken from the New York Times.  The final canvass totals are taken from Dave Leip’s Atlas of U.S. Presidential Elections.)

California is one of the biggest contributors to the nationwide blue shift — although there is a blue shift of some size in most states — because of the large number of provisional and mail ballots in the Golden State.  Although 14.2 million votes were eventually counted in California, only 8.8 million were accounted for on election night.  In the process, Clinton’s lead grew from 61.49%-33.22% to 61.48%-31.49%, for a blue shift of 1.72 points.
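For concreteness, the blue-shift arithmetic in the two examples above fits in a few lines of Python; the same function applied to the gubernatorial primary shares below reproduces the 2.46-point shift reported there.

```python
# Blue shift: growth in the Democratic margin (in percentage points)
# between the election-night count and the final canvass.
def blue_shift(dem_night, rep_night, dem_final, rep_final):
    """All four arguments are vote shares in percentage points."""
    return (dem_final - rep_final) - (dem_night - rep_night)

# 2016 nationwide, using the shares quoted above:
print(round(blue_shift(48.14, 47.12, 48.02, 45.93), 2))  # 1.07
# 2016 California:
print(round(blue_shift(61.49, 33.22, 61.48, 31.49), 2))  # 1.72
```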

It’s not surprising, therefore, that a significant blue shift showed up in California’s recent top-two primary.  Let’s take a look.

The good students working for the MIT Election Data and Science Lab downloaded the election night returns from California and stashed them in our GitHub repository (where anyone can access them).  This means we can compare the early returns with the final results published by the state, which are available here.

The accompanying graph helps to illustrate the magnitude of the blue shift for each of the statewide races with party labels on the ballot. (Click on the graph to biggify it.) In every race except insurance commissioner, Democratic-affiliated candidates as a whole saw their share of the votes grow by over a point, whereas the Republican-affiliated candidates saw their aggregate vote shares shrink by more than a point.

In the gubernatorial primary, for instance, all the Democratic candidates added together accounted for 61.31% of the primary votes cast, compared to 37.43% for the Republicans, a 23.88-point lead.  In the final count, Democratic candidates received 62.51% of all the votes counted, compared to 36.17% for the Republicans, causing the lead to grow to 26.34 points.  The blue shift in this case was 2.46 points.

It should be noted that the partisan shifts associated with the minor-party and no-party candidates did not go systematically one way or the other.  The only non-major-party candidate who was a factor in any of the primaries was Steve Poizner, the former insurance commissioner, who ran without a party label to get his old job back.  In this one race, the two Democrats in the contest gained very little percentage-wise as the count progressed, and Poizner lost very little.

The aggregate blue shifts seen among Democratic and Republican candidates are clear, but do they benefit all candidates equally?  Not really.  To see this, take a look at the change in the vote shares enjoyed by all the Democratic and Republican candidates in the gubernatorial primary. (Click on the graph to enlargify it.)  For both the Republican and Democratic candidates, I have shown the magnitude of the shift in the vote share from election night to the final canvass.  The candidates are displayed with the top election-night vote-getter from each party at the top, and then the other candidates down below in the order of their votes.

All Republicans lost vote share during the canvass, with John Cox, who came in second overall, losing the most — nearly a full point.  Of course, he was far ahead of Travis Allen (and also Antonio Villaraigosa, the second-place Democrat), so he had plenty to lose.  Indeed, since he was the only Republican candidate whose vote share was in the double digits, it’s not surprising that virtually all of the down-side of the blue shift was heaped on him.

Most of the top-ranked Democrats gained vote share as counting progressed.  The exception at the top was Villaraigosa, who lost 0.16 points from election night to the final canvass.  Why he was alone among the top vote-getters in losing vote share, I will leave to others to figure out.

California is not alone in having a lot of ballots left to count after election night, but it is by far the largest state with so much to do in the days after an election.  With California a deep blue state, its healthy blue shift has not really been much of a factor in national elections.  This might change in November.  I haven’t yet drilled down to analyze the magnitude of the blue shift in California’s congressional primaries, but I suspect the patterns are similar to what we see at the state level.  If so, then a blue shift of a point or two in those elections in November could not only have statewide significance, but could be a factor as we seek an answer to the question of whether the Democrats will gain control of the U.S. House.

Voters Think about Voting Machines

The annual State Certification Testing of Voting Systems National Conference was recently held in Raleigh, North Carolina.  This is one of my favorite annual meetings, because it brings together state and local officials who are responsible for the care and feeding of voting technologies.  I learn a lot every year.

Check out the program, including slides and other documents, here.

The price of attending is that every participant must give a presentation.  This gave me an opportunity this year to pull together work I have done over the past several years about public opinion related to voting technology and election security.  This is the first in a series of blog posts in which I share some of the material I presented in Raleigh.

Today’s post is about attitudes toward voting machines.  The current nationwide attention to election security has led to a renewed interest in voting technologies and to two topics in particular:  (1) the use of computers to cast and count ballots and (2) the age of the equipment and the need to replace machines that were bought in the aftermath of the 2000 election.

Beginning in 2012 I started asking respondents to public opinion surveys what they think about different voting technologies.  Not surprisingly, opinions among the public about voting machines have changed in recent years, particularly as the drumbeat against DREs has grown louder, and as the security of voting technologies has become more salient.

Public opinion in 2012

To see how opinion has changed in the recent past, it is useful to start in 2012, when I first asked a series of questions about voting machines in the Cooperative Congressional Election Study (CCES).

(The CCES is the largest academic study of political opinions conducted every two years before and after the federal election.  One nice thing about the CCES is that it allows researchers to ask their own questions of a representative sample of adults within the context of a larger survey.)

The responses to the questions I asked revealed that, in 2012, DREs were clearly the technology of choice.

The bottom line was measured by asking respondents which voting technology they would prefer to vote on.  The technologies were defined as follows:

  • Paper ballots scanned and counted by a computer. (Opscans)
  • Electronic voting machines with a touch screen. (DREs)
  • Paper ballots counted by hand. (Paper)

Of the 2,000 respondents, 56% preferred DREs, 25% opscans, 7% paper, and 11% had no opinion. (See the table below.)

Especially interesting are attitudes of respondents based on the voting equipment used in their home county.  The table above shows how this breaks down.  Respondents from counties that used DREs preferred them over opscans, 74%-13%.  Surprisingly, respondents from opscan counties also preferred DREs, by a comfortable 50%-30% edge.
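For readers who want to reproduce this sort of breakdown from the public CCES files, the tabulation itself is short. The sketch below uses placeholder file and column names rather than the actual CCES variable names, and ignores survey weights for simplicity:

```python
# Cross-tabulate stated machine preference by the equipment used in
# the respondent's home county. File and column names are placeholders
# for the actual CCES variables; apply survey weights before drawing
# real conclusions.
import pandas as pd

cces = pd.read_csv("cces_2012_extract.csv")  # hypothetical extract
table = pd.crosstab(cces["county_equipment"],   # DRE / opscan / paper
                    cces["preferred_machine"],
                    normalize="index") * 100    # row percentages
print(table.round(1))
```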

Lying behind the overall preference for DREs over opscans — and the strong preference for either of these technologies compared to hand-counted paper — was a belief in the functional superiority of DREs, especially to count ballots and for usability.

To probe these deeper attitudes about voting machines, I asked respondents what they thought about the three major types of voting technologies.  In particular, I asked whether the respondent thought it was easy (1) for dishonest people to steal votes, (2) for disabled voters to use, and (3) for election officials to count votes accurately.

As the table below shows, on the whole, DREs won out over opscans — they were virtually tied on the question of vote-stealing, whereas DREs won hands-down on usability and counting accuracy.  Both opscans and DREs won out over hand-counted paper.

Probably the most interesting results come in analyzing respondents based on the type of voting technology used in their communities.  Here, we find surprisingly little difference between users of opscans and DREs.  For instance, 26% of opscan users thought it was easy to steal votes using opscans, compared to 31% of DRE users.  Most importantly, in 2012, even users of opscans believed that DREs were easier to use by voters with disabilities, and were easier for election officials to count votes accurately.

Public opinion today

Opinions have changed since 2012.

In 2016, I had the opportunity to ask the same set of questions in the CCES.  In addition, suspecting that opinions were changing rapidly, I was able to put a couple of questions onto the YouGov Omnibus in the fall of 2017.  Here’s what I’ve found:

  1. Support for DREs has fallen since 2012 while support for opscans has risen. (See the accompanying figure. Click on it to enbiggen.)  A particularly sharp drop in support for DREs occurred in just one year, from 2016 to 2017.  As of last fall, DREs no longer had a commanding lead over opscans among respondents overall, and opscan users no longer preferred DREs over opscans.

  2. The perceived functional superiority of DREs is disappearing. This is illustrated in the figure below, which shows the percentage of people who believe it is easy to steal votes and to count votes, on opscans, DREs, and hand-counted paper.  (Click on the image to largify it.) There was a significant increase in the belief that it was easy to steal votes on all voting technologies between 2012 and 2016, but the increase was slightly greater for DREs than for opscans.  There was also a significant increase in the belief that it was easy to count votes on both opscans and DREs (but not hand-counted paper) between 2012 and 2016, with some pulling back from those positions in 2017.  Whether we take the 2016 or 2017 numbers, however, it is clear that DREs no longer are the clear winners on the vote-counting dimension.

Thus, as criticism of DREs has grown in public discourse, and computer security has become a more salient issue in election administration, the bloom has come off the DRE rose.  This is good news for those who have long advocated that DREs be abandoned for paper.  There is a caution here, however. Although support for DREs has declined significantly over the past five years, DRE users still believe it is the superior technology compared to opscans.  This suggests that as election administrators transition away from DREs over the next several years, they may find themselves needing to deal with local public opinion that may be skeptical of the move, and regard opscans as an inferior technology.

Research on instant-runoff and ranked-choice elections

Given the interest in Maine’s ranked-choice election tomorrow, I thought that this recent paper with Ines Levin and Thad Hall might be worth a look. The paper was recently published in American Politics Research, “Low-information voting: Evidence from instant-runoff elections.” Here’s the paper’s abstract:

How do voters make decisions in low-information contests? Although some research has looked at low-information voter decision making, scant research has focused on data from actual ballots cast in low-information elections. We focus on three 2008 Pierce County (Washington) Instant-Runoff Voting (IRV) elections. Using individual-level ballot image data, we evaluate the structure of individual rankings for specific contests to determine whether partisan cues underlying partisan rankings are correlated with choices made in nonpartisan races. This is the first time that individual-level data from real elections have been used to evaluate the role of partisan cues in nonpartisan races. We find that, in partisan contests, voters make avid use of partisan cues in constructing their preference rankings, rank-ordering candidates based on the correspondence between voters’ own partisan preferences and candidates’ reported partisan affiliation. However, in nonpartisan contests where candidates have no explicit partisan affiliation, voters rely on cues other than partisanship to develop complete candidate rankings.

There’s a good review of the literature on voting behavior in ranked-choice or instant-runoff elections in the paper, for folks interested in learning more about what research has been done so far on this topic.
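To give a flavor of what “evaluating the structure of individual rankings” involves, here is a toy sketch (not the paper’s actual code or data format) that checks whether a ranked ballot orders all of one party’s candidates above all of the other’s:

```python
# Check whether a ranked ballot is "party-consistent": all candidates
# of one party ranked above all candidates of the other. A toy,
# two-party version of the kind of structure measurable in ballot
# image data; candidate names and parties are invented.
def party_consistent(ranking, party_of):
    """ranking: candidates in rank order; party_of: candidate -> party."""
    parties = [party_of[c] for c in ranking]
    # With two parties, the ranking is consistent iff the party
    # sequence changes at most once (e.g., D, D, R, R).
    switches = sum(1 for a, b in zip(parties, parties[1:]) if a != b)
    return switches <= 1

party = {"Ada": "D", "Bo": "D", "Cyd": "R", "Dee": "R"}
print(party_consistent(["Ada", "Bo", "Cyd", "Dee"], party))  # True
print(party_consistent(["Ada", "Cyd", "Bo", "Dee"], party))  # False
```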

“Fraud, convenience, and e-voting”

Ines Levin, Yimeng Li, and I recently published our paper “Fraud, convenience, and e-voting: How voting experience shapes opinions about voting technology” in the Journal of Information Technology and Politics. Here’s the paper’s abstract:

In this article, we study previous experiences with voting technologies, support for e-voting, and perceptions of voter fraud, using data from the 2015 Cooperative Congressional Election Study. We find that voters prefer systems they have used in the past, and that priming voters with voting fraud considerations causes them to support lower-tech alternatives to touch-screen voting machines — particularly among voters with previous experience using e-voting technologies to cast their votes. Our results suggest that as policy makers consider the adoption of new voting systems in their states and counties, they would be well-served to pay close attention to how the case for new voting technology is framed.

The substantive results will be of interest to researchers and policymakers. The methodology we use — survey experiments — should also be of interest to those who are trying to determine how to best measure the electorate’s opinions about potential election reforms.
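For those curious about the mechanics, the logic of a priming experiment like this is easy to sketch: randomize who sees the fraud prime, then compare support across arms. The data below are simulated with made-up rates; nothing here is the paper’s actual instrument or results.

```python
# Sketch of the logic of a survey experiment on fraud priming:
# randomize a prime, then compare support for touch-screen machines
# across arms. All rates are invented for illustration.
import random

random.seed(1)

def simulated_support(primed: bool) -> int:
    """1 if the simulated respondent supports touch-screen DREs; the
    primed group is constructed to support them at a lower rate."""
    return 1 if random.random() < (0.45 if primed else 0.55) else 0

assignments = [random.random() < 0.5 for _ in range(10_000)]  # random assignment
outcomes = [(primed, simulated_support(primed)) for primed in assignments]
treated = [y for primed, y in outcomes if primed]
control = [y for primed, y in outcomes if not primed]
effect = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated priming effect: {effect:+.3f}")  # about -0.10 by construction
```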

Our Orange County project

It’s been a busy few weeks here in California for election geeks, especially for our research group at Caltech. We’ve launched a pilot test of an election integrity project, in collaboration with Orange County, where we have been using the recent primary to test various methodologies for evaluating election administration.

At this point, our goal is to work closely with the Orange County Registrar of Voters to understand what evaluative tools they believe are most helpful to them, and to also determine what sorts of data we can readily obtain during the period immediately before and after a major statewide election.

We recently launched a website that describes the project, and where we are building a dashboard that summarizes the various research products as we produce them.

The website is Monitoring the Election, and if you navigate there you’ll see descriptions of the goals of this project, and some of the preliminary analytics we have produced regarding the June 5, 2018 primary in Orange County. At present, the dashboard has a visualization of the Twitter data we are collecting, an analysis of vote-by-mail ballot mailing and return, and our observations of early voting in Orange County. In the next day or two we will add some first-pass post-election forensics, a preliminary report on our Election Day observations, and an observation report regarding the risk-limiting audit that OCRV will conduct early this week.

Again, the project is in pilot phase. We will be evaluating these various analytic tools over the summer, and we will determine which we can produce quickly for the November 2018 general election in Orange County.

Stay tuned!