Author Archives: cstewart

New MIT Election Data and Science Lab

I’m very happy to announce this morning that we have launched at MIT an entity we’re calling the MIT Election Data and Science Lab.  The purpose of the lab is to generate, advance, and disseminate scientific knowledge about the conduct of elections in order to improve their performance.

Here is a link to the press release that has more information.

The idea of the lab grows out of a desire to provide a hub to help direct scholars, election officials, citizen groups, journalists, and the general public to the best research into the conduct of elections.  By the end of the year, we will have a fully functioning Website that will serve as a one-stop portal, pointing to scholars, research, and data sources that should be of use to the entire elections community.

While the Lab will be responsible for conducting its own original research, what I most look forward to is championing the excellent work that colleagues around the country are doing to bring rigorous social science to questions of election reform and election administration.  I also hope the Lab will become a venue for practitioners and scholars to meet and grapple with the difficult empirical issues that face election administration and reform.

This new Lab would not be possible without the financial and moral support of the Madison Initiative of the Hewlett Foundation.  Hewlett is to be praised for stepping up, in this turbulent time, and investing in the long-term strength of American elections.

We have established a very basic Website here that will allow us to share our progress as we build our staff, expand our programming, publish our research, and eventually migrate over to a more sophisticated Website by the end of the year.

As Doug Chapin is fond of saying, stay tuned…

This just in: lines at the polls shorter in 2016 than in 2012

Last Thursday I helped kick off Pew’s Voting in America event by reviewing the preliminary findings from the 2016 Survey of the Performance of American Elections.  (My presentation begins at 1:06:55 in the YouTube video.) While there is a little more work to be done in wrapping up all the data gathering, we have enough responses in to begin painting a systematic view of the election process in 2016 from the perspective of the voters.

One important piece of news from that presentation is that average wait times to vote were down in 2016, compared to 2012, particularly in the states that had the longest waits in 2012.  The following graph shows the overall perspective.  Here, I have plotted average 2016 wait times against average 2012 wait times.  States below the diagonal line had shorter lines in 2016.  I’ve labeled the states that had the biggest improvements.  Note particularly the drop in Florida times, from an average of 45 minutes to less than 10 minutes.



This is significant news, and a tribute to all the work that went into making the 2016 election run smoothly, despite public comments intended to cast doubt on the quality of American election administration.  The news is a feather in the cap of the bipartisan Presidential Commission on Election Administration, whose creation was spurred on by images of long lines in the 2012 election, both on Election Day itself and in early voting.

Despite this good news, there is still work to be done to achieve the PCEA benchmark that no voter wait longer than 30 minutes in line.  First, even with the significant reduction in wait times among the states with the longest times in 2012, a significant fraction of voters still waited longer than 30 minutes in many states.  (More than 10% of voters waited longer than 30 minutes in roughly half the states.)  Second, while the racial disparities reported in 2012 were diminished in 2016, they weren’t wiped out entirely; the disparity in wait times between whites and blacks in early voting narrowed only slightly.

Nonetheless, this is at least preliminary evidence that when the elections community puts its mind to it, real improvements can be made in the experiences of voters.

Some thoughts about the reports of supposed evidence of election irregularities in MI, PA, and WI

The Internet lit up on Monday over the news, reported in New York Magazine, that a team of computer scientists and lawyers had reported to the Clinton campaign that “they’ve found persuasive evidence that results in Wisconsin, Michigan, and Pennsylvania may have been manipulated or hacked.”

A later posting on Medium by University of Michigan computer science professor J. Alex Halderman, who was quoted in the NY Mag piece, stated that the reporter had gotten the point of the analysis wrong, along with some of the numbers.  As he notes, the important point is that all elections should be audited, not only when statistics suggest that something might be fishy.

Unfortunately, the cat is out of the bag.  Because of the viral spread of the NY Mag article, the conspiratorially minded now have something to hang their hats on, if they want to believe that the 2016 election (like the 2004 election) was stolen by hacked electronic voting machines.

Many of my friends who are not conspiratorially minded have been asking me if I believe the statistical analysis suggested by the NY Mag piece is evidence that something is amiss.  They’re not satisfied with me echoing Alex Halderman’s point that this is beside the point.  So, here are some thoughts about the statistical analysis.

  1. Some very good commentary about the statistical analysis has already appeared elsewhere; please read it.  (And, do read Halderman’s Medium post, referenced above.)
  2. I should start my own commentary by saying that I have not seen the actual statistical analysis alluded to by the NY Mag piece.  I know no one else who has seen it, either.  (I’ve asked.)  Therefore, I must make assumptions about what was done.  I’ve been doing analyses such as this for over 16 years, so I have a good idea about what was probably done, but without the actual study and the data on which the analysis was conducted, I can’t claim to be replicating the study.  (By the way, I’m also assuming that a “study” was done, but it’s also not at all clear that this was the case.  It could be that Halderman and his colleagues provided some thoughts to the Clinton campaign, and this communication was misconstrued by the public when word got out.)
  3. The gist of the analysis described by NY Mag appears to be comparing Clinton’s vote share across the types of voting machines used by voters in Michigan, Pennsylvania, and Wisconsin.  To attempt a replication of this analysis, it would be necessary to obtain election returns and voting machine data at the appropriate unit of analysis from these three states.
  4. Voting machine use.  Both Michigan and Wisconsin only use paper ballots for Election Day voting.*  Therefore, one simply cannot compare the performance of electronic and paper systems within these states.  This sentence in the NY Mag article must be false:  “The academics presented findings showing that in Wisconsin, Clinton received 7 percent fewer votes in counties that relied on electronic-voting machines compared with counties that used optical scanners and paper ballots.”  On the other hand, some counties in Pennsylvania do use electronic voting machines, known in the election administration field as “DREs” for “direct recording electronic” devices.  Pennsylvania, therefore, could be used to compare results for voters who used electronic machines with those who used paper.
  5. Voting machine data.  For many decades Kim Brace, the owner of Election Data Services, has collected data about the use of voting technologies as a part of his business.  Every four years I buy Kim’s updated data, which I have done for 2016.  Verified Voting also has a publicly available data set that reports voting machine use at the local level.  I tend to prefer Brace’s data because of his long track record of gathering it.  As I show below, both data sources tell similar stories about the use of voting machines in Pennsylvania.  The comparisons are the same, regardless of the voting machine data set.
  6. Election return data.  Here, I use county-level election return data I purchased from Dave Leip at his wonderful U.S. Election Atlas website. (This is from data release 0.5.)
  7. The Pennsylvania comparison.  Using the Brace voting machine data to classify counties, Clinton received 39.3% of the vote in Pennsylvania counties that used opscans and 49.0% of the vote in counties that used DREs.  However, when the standard statistical controls are included to account for the other factors that would predict the Clinton vote share in a county — race, population density, and education — the difference in vote share between Clinton and Trump is reduced to 0.095%.   Using the Verified Voting data to classify counties, Clinton received 40.2% of the vote in opscan counties and 52.4% of the vote in DRE counties.  (The Brace and Verified Voting data sets differ in reporting the machines used in four counties.)   In this case, when the statistical controls for race, population density, and education are included, the vote share difference between Clinton and Trump goes down to 0.6%.
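The logic of the Pennsylvania comparison — a raw DRE/opscan gap that shrinks once demographic controls are added — can be sketched in a few lines.  This is a toy simulation with invented county data, not the actual Pennsylvania returns or the exact specification I used; it only illustrates how non-random assignment of machines to counties produces a raw gap that the controls absorb.

```python
import numpy as np

# Toy data (NOT real Pennsylvania returns): counties that use DREs are
# disproportionately nonwhite and dense, and those covariates -- not the
# machines -- drive the Democratic vote share.
rng = np.random.default_rng(0)
n = 67  # Pennsylvania has 67 counties
pct_nonwhite = rng.uniform(0.02, 0.45, n)
density = rng.uniform(0.1, 1.0, n)          # rescaled population density
uses_dre = (pct_nonwhite + density + rng.normal(0, 0.2, n) > 0.9).astype(float)
# Vote share depends only on demographics, not on machine type:
dem_share = 0.30 + 0.4 * pct_nonwhite + 0.15 * density + rng.normal(0, 0.02, n)

def ols_coef(X, y):
    """OLS coefficients for y = X @ beta (X already includes an intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Raw comparison: DRE counties look more Democratic.
raw_gap = dem_share[uses_dre == 1].mean() - dem_share[uses_dre == 0].mean()

# With controls for race and density, the machine-type coefficient
# shrinks toward zero, mirroring the pattern described above.
X = np.column_stack([np.ones(n), uses_dre, pct_nonwhite, density])
controlled_gap = ols_coef(X, dem_share)[1]

print(round(raw_gap, 3), round(controlled_gap, 3))
```

The point of the exercise: a large raw difference between machine types is not evidence of manipulation when machine type is correlated with the demographics that predict vote share.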

To summarize:

  1. Virtually all Michigan and Wisconsin Election Day voters (and absentee voters, for that matter) use paper ballots.  In Michigan, these ballots are counted on scanners; in Wisconsin, some are counted by hand, but most by scanners.  Election returns from these states cannot be used to compare voting patterns using electronic machines and paper-based systems.  The core empirical claim in the NY Mag article that has the Internet all atwitter cannot be true.
  2. The difference in voting patterns between Pennsylvania voters who used  electronic machines and those who used optically scanned ballots is accounted for by the fact that voting machines are not randomly distributed in Pennsylvania.  Clinton received proportionally more votes in counties with electronic machines, but that is because these counties were disproportionately nonwhite and metropolitan — factors that are correlated with using DREs in Pennsylvania.
  3. The importance of advocating for post-election audits to ensure that the ballots were counted correctly is not a matter of electronic vs. paper ballots, or a matter of whether doing so will save the election for one’s favored candidate.  The reason all systems, regardless of technology, should be audited after every election is to ensure that the election was fair and that the equipment and counting procedures functioned properly.  This critical message was unfortunately garbled by playing to conspiratorial fears about the outcome of the 2016 election.
  4. My biggest fear in this episode is that election officials, state legislators, and voters will now regard advocates for post-election audits as part of the movement to discredit the election of Donald Trump as president.  I know that this is not the intention.  My biggest hope is that decisionmakers will look beyond the sensational headlines and recognize that post-election audits are simply a good tool to make sure that the election system has functioned as intended.

*I have learned that between 5% and 10% of Wisconsin voters who are not physically disabled do use the so-called “accessibility machines,” rather than the regular opscan paper ballots.  However, I know of no election returns that have reported the results of ballots cast on these machines alone, nor do I believe that the reports discussed in the NY Mag article were referring to these ballots.

My experience with VoteCastr on Election Day

VoteCastr’s mixed record on election day in providing useful information about turnout and the emerging vote totals in real time is now getting scrutiny from the press, including from its partner, Slate.  I was not involved in the development of VoteCastr, so I don’t have much to say about its difficulties in getting the numbers right.  However, I do have one direct anecdote of the VoteCastr operation, based on my observation work on Election Day, and a few reflections based on that experience.

The anecdote:  I spent election day travelling around Franklin and Delaware Counties in Ohio.  (That’s Columbus and the northern suburbs.)  I visited about 10 voting locations overall, which accounted for something like 30 precincts.  At the first voting location I visited, a union hall on the west side of Columbus, I watched for an hour as the hard-pressed workers did their best to whittle down the line of 100 voters who had greeted them when the polls opened at 6:30. (By the end of the hour the line had grown, owing to the painfully slow work of the poll worker handling check-ins for last names ending in S-Z, but that’s another issue.)

At about 7:15, a young woman carrying an over-stuffed backpack on which a VoteCastr badge had been affixed came in looking for the polling place manager.  She and he talked for a couple of minutes right next to where I was standing, so I listened in.  This was the dialogue, played out in a space the size of a large living room, stuffed full of voting equipment, folding tables, and about 30 people at any one time:

  • VoteCastr person:  I’m from VoteCastr.  I’m here to gather information about the number of people turning out to vote each hour.  How can I get that information?
  • Manager:  (Looking at the table where they are checking in voters using paper poll books):  I would love to help you, but I don’t know how we would do that.  We’d have to stop all operations and count up the number of signatures on all the poll books to get that.
  • VoteCastr person:  But, don’t you have a list of people who have voted attached to the wall over there?  (Pointing to a list of voters tacked to the wall.)
  • Manager:  Those are people who had previously voted absentee or early.  We don’t post the names of voters in real time.  We do issue reports back to the county a couple of times during the day about the number of people who voted, based on machine use.
  • VoteCastr person:  Could you get the count from looking at the machines more frequently than that?
  • Manager:  Maybe I could, but it would take one of my busy people several minutes to do that and, as you can see, we can’t spare anyone right now.
  • VoteCastr person:  Is there any other way you can think of that I could get the information?
  • Manager:  You’re welcome to count people as they come in the door.  I’m afraid that’s the only way you’re going to get the information you need on an hourly basis.

I can’t vouch for the empirical claims made by the manager or the VoteCastr person, but the manager seemed like an accommodating fellow (and amazingly poised) and the VoteCastr person was very professional and polite.  My conclusion was that they were honestly trying to make this work, but there was no easy solution.

The observation:  If the turnout reporting was so important to the VoteCastr model, why was it sending one of its data-gatherers into a precinct an hour after polls had opened with no idea about how the data and check-in processes worked?  This was either an example of poor training, poor advance knowledge among leadership about how Franklin County elections are administered, poor cooperation with local officials, or a combination of all three.

It brings to mind the work I have done for the past four years to gather precinct-level data about polling practices, for my own research and to provide advice to election officials.  One thing I’ve learned is that when you go into a precinct wanting to get data in the rush of an election, you over-prepare and you plan for each county, and indeed each precinct, to operate differently.  From what I observed, it appeared that the VoteCastr folks assumed that Franklin County had electronic poll books, like neighboring Delaware County.  With EPBs, there was a decent chance that hourly data could have been obtained.  With paper poll books, not so much.

I’m intrigued by VoteCastr and wish them well as they work out their business model.  One thing going against them — and everyone else in this space — is that presidential elections only come around every four years.  That’s bad for two reasons.

First, it’s hard to raise funds and organize a business (or a research project) during the 3.5 years before the next presidential election, because no one is thinking about it.  The right thing to do would be to conduct endless trial runs on low-turnout elections, to work out the kinks and to gain the trust of election officials who, after all, are the gatekeepers to the precincts.

Second, presidential elections are qualitatively different from all other elections.  The surge in activity is so much greater than even midterm congressional elections that you don’t know if you have it right until the onslaught hits; if you make mistakes, it’s an eternity until you know if you’ve made the right corrections.  This is a lesson election officials have known for decades, and now it’s a lesson being learned by the new data companies being formed to make sense of elections.

Ballots to be counted probably won’t help Clinton much

Ned Foley and I recently published two pieces of commentary, here and here, about ballots counted following election day.  Most people don’t realize this, but the election results released election night are unofficial, and are subject to updating and correcting.  Important to the updating is the counting of provisional ballots and mail-in ballots that are considered in the days leading up to the official certification of results.

In these two commentaries, we described the so-called “blue shift” that has been evident in vote counts since 2000.  “Blue shift” is the term for the pattern we see, in which the nationwide vote share has tended to shift a little bit toward the Democratic presidential candidate after election night.

A natural question to ask is whether the late-counted ballots are sufficient in this election to switch any of the states that have currently been called for the candidates.  The answer at this point seems to be “no.”  But,  New Hampshire — a state that is currently “too close to call” — has a margin so tight that the race could conceivably go either way.  (As of this writing, Clinton is ahead of Trump in the counting by 1,371 votes, out of roughly 700,000 cast.)

I have done some quick analysis, in which I’ve taken the current vote totals (as of 9:30 Wednesday morning).  I have then gone back to the 2012 presidential returns and compared the final, official results with the unofficial returns reported Wednesday morning following the election.  Taking this as an estimate of the “blue shift” we might expect in the coming days, I then add this to the current unofficial results to see how much the current preliminary tally might change in the coming days.  The following graph summarizes the results.


I’ve shown the ten states with the closest vote margins.  The arrows start with the current two-party vote share for Clinton and then add to it the fraction of the vote received by Obama in 2012 during the post-election counting period.

Note that only New Hampshire is close enough to 50/50 that the final count could flip the results from one candidate to the other.  The bad news for Clinton here is that New Hampshire actually experienced a “red shift” in 2012, so that this scenario predicts that the vote counted post-election would be to Trump’s advantage. (Because NH is an election day registration state, its later-counted ballots are dominated by absentees and late-arriving counts, not by provisional ballots.)

This is not to suggest that 2016 will be a repeat of 2012.  But, it is to suggest that the presidency in 2016 is probably not going to be decided in the canvass period.
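The projection method described above is simple enough to sketch directly.  The numbers below are hypothetical, invented for illustration; only the method — adding a state’s 2012 post-election shift to its current unofficial share — mirrors the analysis.

```python
# Minimal sketch of the blue-shift projection described above.
# All figures here are made up for illustration.

def projected_share(unofficial_2016, unofficial_2012, official_2012):
    """Project a final two-party vote share by adding the state's 2012
    post-election shift (official minus unofficial) to its current
    unofficial 2016 share."""
    shift = official_2012 - unofficial_2012
    return unofficial_2016 + shift

# Hypothetical state: the Democrat sits at 50.1% unofficially, but the
# state showed a small "red shift" in 2012 (the official count came in
# 0.3 points below the election-night count).
proj = projected_share(0.501, 0.520, 0.517)
print(round(proj, 3))  # 0.498 -- close enough that the canvass could matter
```

A state only flips in this exercise when the projected share crosses 50%, which is why New Hampshire was the lone state worth watching.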

“Rigged election” rhetoric is having an effect on voters — just not in the way you think.

Donald Trump’s relentless messaging about a “rigged election” is having an effect on the confidence voters have that their votes will be counted accurately.  But, it’s not the effect you think.

I came to this conclusion as I was considering yesterday’s Morning Consult poll results about confidence in the vote count.  It so happens that I asked almost exactly the same question on a national poll during the pre-election period in 2012.  (I can’t take all the credit.  My colleague at Reed College, Paul Gronke, joined me in sponsoring a “double-wide” module on the 2012 Cooperative Congressional Election Study.)  I decided to compare what Morning Consult found today with what we found almost exactly four years ago.

The results were surprising.  The percentage of respondents who say that they are “very confident” that their own votes will be counted accurately is virtually unchanged from 2012.  Confidence that votes nationwide will be counted accurately has, if anything, increased since 2012.  Trump’s rhetoric appears not to have reduced Republican confidence in the accuracy of the vote count over the past four years.  Rather, it has increased the confidence of Democrats.  The degree of party polarization over the quality of the vote count has increased since 2012, but it is Democratic shifts in opinion, not Republican, that are leading to this greater polarization.

Let me sketch out the background here.  In 2012, Gronke and I coordinated our modules in the CCES to ask a series of questions about election administration to a representative sample of 2,000 adults.  Two of these questions were:

  • How confident are you that your vote in the General Election will be counted as you intended?
  • How confident are you that votes nationwide will be counted as voters intend?

The first question was asked of respondents who reported they intended to vote in 2012; the second question was asked of all respondents.

The response categories for both questions were (1) very confident, (2) somewhat confident, (3) not too confident, (4) not at all confident, and (5) I don’t know.

The corresponding Morning Consult questions were:

  • How confident are you that your vote will be accurately counted in the upcoming election?
  • How confident are you that votes across the country will be accurately counted in the upcoming election?

The response categories were identical to ours, with the exception of an additional “no opinion” option with Morning Consult.

So, while the questions are not 100% identical, they are close enough to allow some meaningful comparisons.  (For those interested in a more systematic example of how similar survey research questions can be combined in this type of analysis, see the article I co-authored with Mike Sances, which appeared last year in Electoral Studies.)  Both the 2012 and 2016 studies were conducted about three weeks ahead of the general election, so the timing couldn’t be better.

In the table below, I compare Morning Consult’s 2016 results with Gronke’s and my results in 2012.  The numbers in the table are the percentages of the indicated respondents who gave the “very confident” response.

                         Your own vote              Votes nationwide
                         2012 (G/S)   2016 (MC)     2012 (G/S)   2016 (MC)
All registered voters    41%          45%           16%          28%
Democrats                47%          59%           20%          43%
Republicans              42%          41%           13%          18%

(G/S = Gronke/Stewart; MC = Morning Consult)

The 2012 patterns were consistent with what my colleagues and I have regularly reported:  the “winning” party tends to be more confident than the “losing” party, and voters tend to be much more confident that their own votes will be counted accurately than that votes nationwide will be.


The 2016 patterns are similar, with a couple of major differences.  The most important similarity is that respondents in both 2012 and 2016 were more confident their own votes would be counted accurately than votes nationwide.  In 2012, the local-nationwide gap was 25 percentage points (41% vs. 16%); in 2016, the local-nationwide gap dropped to 17 percentage points (45% vs. 28%).


The most important changes come as we look down the table, at the Democratic-Republican differences.  Republican and Democratic opinions have changed in very different ways since 2012.  At the local level, Republicans remain about as confident as they were in 2012, but Democratic confidence has grown.  As a consequence, the Democratic-Republican gap in the confidence about local vote counting has grown from 5 percentage points to a much more substantial 18 percentage points.


In assessing the accuracy of the vote count nationwide, Republicans are actually a little more confident in 2016 than in 2012 (18% vs. 13%), but this small change since 2012 is likely due to subtle differences between the two studies.  On the other hand, Democrats have become a lot more confident.  They are now a whopping 23 percentage points more confident than in 2012 that votes will be counted accurately nationwide (43% “very confident” vs. 20%).
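As a cross-check, the party gaps discussed here can be recomputed directly from the table of “very confident” percentages above:

```python
# Percent "very confident," taken from the table above.
very_confident = {
    ("own",  2012): {"D": 47, "R": 42},
    ("own",  2016): {"D": 59, "R": 41},
    ("natl", 2012): {"D": 20, "R": 13},
    ("natl", 2016): {"D": 43, "R": 18},
}

def party_gap(scope, year):
    """Democratic minus Republican 'very confident' percentage."""
    cell = very_confident[(scope, year)]
    return cell["D"] - cell["R"]

print(party_gap("own", 2012), party_gap("own", 2016))  # 5 18
print(party_gap("natl", 2016) - party_gap("natl", 2012))  # growth in the nationwide gap
```

The local-vote gap grows from 5 to 18 points, and the growth is driven by Democratic movement, not Republican.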


Much more work needs to be done on this issue, but a couple of tentative conclusions seem in order.  The first is that Donald Trump’s complaints about a “rigged” electoral system most clearly reminded his strongest supporters of what they already believed.  It is much less clear that Republicans who were not already convinced of the corruption of the election system have now had a change of heart.


The second conclusion is that Trump’s charges appear to have counter-mobilized Democratic opinion in novel ways.  Democrats have come to the defense of vote counting, not only in their own back yards, but even in other people’s back yards.


Either way, summary judgements about the legitimacy of the electoral process have become more polarized in 2016 than they were in 2012.  One possibility is that as time progresses, support for the electoral process as a whole will become associated with the Democratic Party in the public’s mind, with opposition associated with the Republican Party.  I am hoping that this is not the case, because we have seen important bipartisan improvements in the world of election administration over the past four years, despite continued partisan differences over voter ID laws and amending the Voting Rights Act.


We certainly need to be concerned about undermining the legitimacy of elected officials, especially in circumstances where there is no hard evidence of election rigging going on.  But, we also need to recall that once the November election is done and gone, elections will continue to be administered at the state and local levels.  The danger for election administration with all this unsubstantiated talk about fraud is that it will undermine the comity that has often existed in handling the day-to-day details of running elections.  In other words, the failure to institute improvements to local election administration will become collateral damage of this heightened polarization.




Two new election science pieces in Political Analysis

Two new methodological pieces that will be of interest to students of election administration just came out in Political Analysis, (which is edited by my VTP-co-conspirator, Mike Alvarez).

(Warning to my non-academic followers:  serious math is involved in these papers.)

The first, by Kosuke Imai and Kabir Khanna, is entitled “Improving Ecological Inference by Predicting Individual Ethnicity from Voter Registration Records.”  In a nutshell, there are a lot of times when we need to know the race of registered voters, but we don’t have race as a data field in the voter file.  (This is true in all but a handful of states.)  Some people have dealt with this problem by relying on proprietary modeling techniques, such as that employed by Catalist, and others have simply used Census Bureau lists that classify last names by (likely) ethnicity.  Imai and Khanna have developed a technique, based on Bayes’s rule, to combine a variety of information, ranging from the surname list to geocoded information, to produce an improved method for modelling a voter’s ethnicity.  The technique is tested using the Florida voter file, which has race already coded, to make “ground truth” comparisons.
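The core Bayes-rule idea can be illustrated with a toy calculation.  This is not Imai and Khanna’s actual method or code — their approach is more sophisticated — and the probabilities below are invented; it only shows how surname-based and geography-based information combine, under a simplifying assumption that the two are independent conditional on race.

```python
# Toy sketch of combining surname and neighborhood information via
# Bayes' rule.  All probabilities are invented for illustration.

def posterior_race(pr_race_given_surname, pr_race_given_geo):
    """P(race | surname, geography), assuming surname and geography are
    independent conditional on race, with a flat prior over races."""
    unnorm = {r: pr_race_given_surname[r] * pr_race_given_geo[r]
              for r in pr_race_given_surname}
    total = sum(unnorm.values())
    return {r: p / total for r, p in unnorm.items()}

# A surname that is strongly Hispanic, for a voter living in a
# mostly white census tract (hypothetical numbers):
surname = {"white": 0.05, "black": 0.05, "hispanic": 0.90}
geo     = {"white": 0.70, "black": 0.20, "hispanic": 0.10}
post = posterior_race(surname, geo)
```

Even in a mostly white tract, the strong surname signal dominates here, which is the intuition behind why combining sources beats using the surname list or geography alone.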

The second, by Gabriel Cepaluni and F. Daniel Hidalgo, is entitled “Compulsory Voting Can Increase Political Inequality:  Evidence from Brazil.”  This article will definitely be relevant for those interested in proposals to institute mandatory voting in the US.  Brazil is the largest country in the world with mandatory voting, which makes this case of particular interest.  Cepaluni and Hidalgo show that the causal effect of making voting mandatory is to increase SES disparities in turnout.  The reason is that the non-monetary penalties for non-voting primarily affect voters with higher incomes.

Happy reading!


The mystery of the Brooklyn voter reg “purge”

Reports from Brooklyn about the “purge” of over 125,000 voters between last November and the recent presidential primary have turned the spotlight on the maintenance of voter lists. Today’s news brings word that the Kings County Board of Elections’ chief clerk apparently erred by removing voters from the rolls contrary to law.

Pam Fessler’s excellent NPR report on Wednesday about the rules governing removing voters from the rolls makes the point that the laws governing voter list maintenance are pretty clear.  Voters (and reporters) don’t always understand those rules, and when they do, they don’t necessarily agree with them.  For that reason, I’m going to sit back and wait for the reports of the New York City Comptroller and state Attorney General before passing judgement on what exactly happened and who was at fault.

That said, the whole story remains a bit of a mystery, first, because statistics about New York’s list maintenance activities are opaque and, second, no one really knows how many people “should be” on the voting rolls and, therefore, how many people “should be” removed when list maintenance activities are done.

New York’s murky voter registration statistics

On the issue of statistical opacity:  Every two years, the U.S. Election Assistance Commission is required by the National Voter Registration Act (NVRA) to issue a report about voter registration activities at the state level.  (Here is a link to the post-2014 report.)  To prepare the report, the EAC sends a survey to the states asking them to report, at the county level, statistics that describe the number of voters removed from the rolls, and why they were removed.  (The three major categories of removals are “failure to vote,” “moved from jurisdiction,” and “death.”)  In recent years, most states have complied with the request to provide this detailed information, but not New York.

As recently as 2008, New York only reported statistics for the whole state, not for individual counties.  In 2010 and 2012 New York finally started providing county-level statistics to the EAC, but the state backslid in 2014, providing no detailed breakdown for why voters were removed from any county in the state.  Not only that, but New York reported that between the 2012 and 2014 elections, only 47,634 voters had been removed from the rolls statewide, which is approximately the same number removed by Delaware.  (To provide further perspective, Florida removed over 484,000 voters and Pennsylvania removed over 853,000.)

Over the past few days, many people have asked me if the number of voters removed from the rolls in Brooklyn was unusual, to which I have to answer, “who knows?” because the relevant list maintenance statistics from New York (meaning the whole state, not just the city or one borough) are not being made public, as they are for most of the rest of the nation.

We don’t know how many people “should be” on the rolls

On the issue of how many people “should be” on the rolls and how many “should be” removed by list maintenance activities every year:  It turns out that this is a very hard question to answer. One attempt to answer this question was made in a recent conference paper that I wrote with a Harvard graduate student, Stephen Pettigrew.  (You can download the paper at this link.)  Because there is no national registry of all eligible adults (at least one that is available to the public) and no single national voter registration list, we don’t know the “true” number of registered voters.  (By “true number,” I mean people who are eligible to vote in the state in which they are registered, which excludes people on the rolls who have moved or died.)  Thus, official voter registration lists are, to some extent, “too big,” but by how much is currently unknown (and hotly contested among various groups).

Even so, it is possible to get an approximate sense of how many voters should be removed from the rolls on an annual basis, since two reasons dominate all others:  moving out of a jurisdiction and dying.  Let’s see where Brooklyn (Kings County) stands on those measures.

WARNING:  Detailed calculations involving math follow

Deaths are easy.  The Centers for Disease Control maintain a database that records the number of deaths in each county of the United States, broken down by age.  In 2014 (the most recent year for which statistics are available), Kings County recorded 15,347 deaths among those 20 years and older.  (Unfortunately, the CDC database breaks down population groupings in five-year intervals, so we can’t add the deaths of 18- and 19-year-olds.  But, given the nature of death statistics, this is not a large number of people.)

Moving is a little trickier, because there isn’t a national registry of movers, and Census Bureau data are cumbersome to use for estimating how many people have moved out of a county or state.  However, the IRS (who knew?) provides data about county-to-county migration, based on income tax filings, which can be used to estimate how many people move out of Kings County each year.

From what I can tell, between 2013 and 2014 (the most recent data available), about 110,000 people moved from Brooklyn — over 59,000 moving to other counties in New York and over 50,000 moving to other states.  Not all of these movers are registered voters, of course, nor are all of them even eligible.  The Census Bureau tells us that there were roughly 2.0 million residents in Brooklyn in 2014 who were 18 and older, out of the borough’s 2.6 million residents.  If all of these adults were registered, my back-of-the-envelope calculation suggests that you would have about 60,000 registered voters from Brooklyn moving somewhere else in New York each year and about 51,000 registered voters moving out-of-state.  The out-of-state movers should certainly be removed from the rolls (eventually); the in-state movers would presumably be removed from the Kings County rolls eventually, but would reappear on the rolls of another county.

However, the most recent official reports from the state indicate that there are only between 1.3 and 1.4 million registered voters in Kings County, depending on which set of statistics you trust (last November or this April).  Either way, my back-of-the-envelope calculations suggest that with this more reasonable estimate of how many registered voters there actually are in Brooklyn, you probably have only about 39,000 registered voters moving within New York in any given year and 33,000 moving out-of-state.  And, if people who die are registered at the same rate as those who survive another year, that gives us only about 10,000 deaths that need to be taken care of each year.

This is a long way of saying that the only way you could get 125,000 voters removed from the rolls in a year (assuming that list maintenance happens annually) is if everyone eligible to vote is registered and if everyone who moves and dies is then taken off the rolls.  More likely, if only about 60% of eligible voters are registered in Brooklyn, then the expected number of removals would be in the range of 40,000 to 80,000 voters each year.
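The back-of-the-envelope arithmetic above can be reproduced in a few lines of code.  The figures below are the rough estimates quoted in the text (Census, IRS, and CDC numbers, rounded as above), not official list maintenance statistics:

```python
# Rough estimate of annual voter-roll removals in Brooklyn (Kings County),
# using the approximate figures quoted above.

adults = 2_000_000              # Census: residents 18 and older (2014)
registered = 1_350_000          # midpoint of the 1.3-1.4 million official counts
reg_rate = registered / adults  # share of adults on the rolls (about 68%)

movers_in_state = 59_000        # IRS: moved to other New York counties
movers_out_of_state = 50_000    # IRS: moved to other states
deaths = 15_347                 # CDC: deaths among residents 20 and older (2014)

# If registrants move and die at the same rate as other adults:
print(f"Registration rate:                {reg_rate:.0%}")
print(f"In-state movers on the rolls:     {movers_in_state * reg_rate:,.0f}")
print(f"Out-of-state movers on the rolls: {movers_out_of_state * reg_rate:,.0f}")
print(f"Deceased registrants:             {deaths * reg_rate:,.0f}")
```

Summing just the out-of-state movers and the deaths gives roughly 44,000 removals a year that ought to show up in the EAC statistics, before even counting in-state movers whose records would migrate to other counties.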

As a side note, in 2014, Kings County reported to the EAC that it removed only 4,548 voters from the rolls for all reasons between the 2012 and 2014 elections.  Thus, it is reasonable to infer that Brooklyn (and the rest of New York state) isn’t even removing voters who die, which should be the easiest part of the removal process to manage.

If you’ve read this far, you deserve a medal, but you should also now have a sense about why the question of how many voters we should expect to be removed via regular list maintenance activities is so unclear.  It would help if New York’s counties started reporting the same detailed list maintenance statistics as the rest of the nation.  If they did, then at least we would have a better sense about the efforts being undertaken to keep the rolls reasonably free of deadwood.  Until then, no one outside the state board of elections and the county boards will be able to judge the efforts that are going into making sure the voter rolls in New York are accurate.


Competing Lessons from the Utah Republican Caucus

If you want a case that illustrates the clash of expectations in the presidential nomination process, you need look no further than Utah’s Republican caucuses that have just been held.

These problems were well illustrated in two postings that recently came across my computer screen (h/t to Steve Hoersting via Rick Hasen’s Election Law Blog).  I make no claims about the accuracy of either account (especially Post # 1), but the sentiments expressed are certainly genuine and representative.

Post # 1, a very interesting (to say the least) description of one person’s experience at the caucus, is a classic clash-of-expectations account.  In this posting, we learn that the lines to check in were long, ballots were given out in an unsecured fashion, those running the event didn’t always seem to know what was going on, one-person-one-vote may have been violated, the ballot wasn’t exactly secret, and those counting the ballots didn’t want too many people looking over their shoulders.  About which I think, “sounds like a caucus to me.”

Caucuses are a vestige of early 19th century America, intended to pick nominees, to be sure, but with other goals in mind as well, such as rewarding the party faithful with meaningful activity and maintaining control over the party base.  What we moderns value about primaries — that they are run by professionals, are designed to minimize coercion, and value access and security simultaneously — is precisely what caucuses are not.  Primaries were not gifted to Americans by a benevolent God, but were fought for by reformers over many years.  Primaries have their problems (among which is the fact that primary laws also had the [intended] effect of killing off minor parties), but it is a mistake to judge caucuses as if they were primaries.

Post # 2 is a story in Wired about the Utah Republican Party’s use of an online elections vendor to run an absentee voting process for the caucuses over the Internet.  The writer’s point of view is that online voting in an election is an outrage because of the well-known problems with security and auditability of voting over the Internet.  Fair enough.  But, this is not a secret ballot election; it is a caucus.  If there is outrage to be expressed along these lines, it should be directed at Republican leaders for lending the appearance of a secret ballot election to a different sort of proceeding.

The Wired story also uncovers frustration among many thousands (probably) of would-be Internet voters that they were unable to vote because their party registration could not be verified, which may be another way of saying they were not eligible to vote in the first place, and would have been turned away from a physical caucus if they had appeared there instead.  Thus, we have another mismatch of expectations, pitting party leaders, who have every right to guard the associational rights of the party organization, against party voters, whose affiliation with the parties is one of identity rather than organizational membership.

This presidential nomination season has been infinitely interesting, one that will go down in history.  As the process drags on, moving from the high-profile early states to the low-profile middle and later states, we are seeing more and more examples of inconsistent expectations between process organizers and voters.  I suspect this will lead to an interesting round of reform activity (The Republican Party Meets McGovern-Fraser anyone?) once the dust has settled in November.


VTP released new report on polling place resources

Just as the one-year countdown for the 2016 presidential election has begun, the Caltech/MIT Voting Technology Project has released a new report today about managing polling place resources.  Click here for the executive summary, and here for the full report.

This report serves as a companion to a set of Web-based tools that the VTP developed and posted at the request of the bipartisan Presidential Commission on Election Administration (PCEA), to facilitate the recommendation that local jurisdictions “develop models and tools to assist them in effectively allocating resources across polling places.”

The report takes several new steps in the effort to spread the word about applying queuing theory to improve polling place practices.  First, it provides a single source of facts about lines at polling places in 2012 — including which voters, based on race, voting mode, and residence, waited longer than others — with updates from 2014 that underscore how long lines tend to be more prevalent in on-year (presidential) elections than in midterm elections.  Second, it provides a brief, intuitive introduction to queuing theory as applied to polling places, along with a short list of suggested readings for those who would like to learn more.  Finally, the report uses data from two actual local election jurisdictions to walk through “what-if analyses” that rely on the resource allocation tools.
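To give a flavor of the kind of calculation queuing theory supports, here is a minimal sketch of an M/M/c (“Erlang C”) model of polling place check-in.  The arrival and service rates are hypothetical illustrations, not figures taken from the report or from the VTP calculators:

```python
import math

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Average wait (in the same time unit as the rates) in an M/M/c queue."""
    a = arrival_rate / service_rate  # offered load, in erlangs
    rho = a / servers                # utilization per check-in station
    if rho >= 1:
        return math.inf              # demand exceeds capacity; line grows forever
    # Erlang C: probability an arriving voter must wait at all
    summation = sum(a**k / math.factorial(k) for k in range(servers))
    top = a**servers / (math.factorial(servers) * (1 - rho))
    p_wait = top / (summation + top)
    # Average wait = P(wait) / (c*mu - lambda)
    return p_wait / (servers * service_rate - arrival_rate)

# Hypothetical precinct: 100 voters per hour arriving, and each check-in
# takes one minute (so each station serves 60 voters per hour).
for stations in (2, 3, 4):
    wait_min = erlang_c_wait(100, 60, stations) * 60  # hours -> minutes
    print(f"{stations} check-in stations -> average wait {wait_min:.1f} minutes")
```

The nonlinearity is the point: going from two stations to three cuts the average wait far more than proportionally, which is why small reallocations of poll workers or equipment can have outsized effects on lines.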

The report is part of the Polling Place of the Future Project (PPOTF) of the VTP, which has been generously supported by the Democracy Fund.  Since the release of the PCEA report, the VTP calculator website has been visited thousands of times by users across the country (and around the world).  We have received feedback from numerous jurisdictions about the utility of these calculators, as state and local officials try to effectively allocate their limited resources.

In recent months, two of the resource calculators have been updated, and those updates have been posted on the site.  The new versions include improvements to the user interfaces and the ability to upload data from multiple precincts, which allows the simultaneous analysis of hundreds of polling places for large jurisdictions.

With the one-year countdown to Election Day 2016 already underway, some might say it is too late for analytical tools like these to make a difference in the next presidential election.  In my experience, however, most election administrators are always looking for ways to improve the experience for voters, so a report that highlights how existing tools can help them prepare for November 2016 arrives at just the right time for officials fine-tuning their plans for next year.