Two new research articles on elections

This morning two new research articles on elections were published electronically by Political Analysis, one on election forensics and the other on measuring the competitiveness of elections. Both should be of interest to Election Updates readers.

The first is by Jacob Montgomery, Santiago Olivella, Joshua Potter and Brian Crisp, “An Informed Forensics Approach to Detecting Vote Irregularities.” Here’s the abstract of their paper:

Electoral forensics involves examining election results for anomalies to efficiently identify patterns indicative of electoral irregularities. However, there is disagreement about which, if any, forensics tool is most effective at identifying fraud, and there is no method for integrating multiple tools. Moreover, forensic efforts have failed to systematically take advantage of country-specific details that might aid in diagnosing fraud. We deploy a Bayesian additive regression trees (BART) model—a machine-learning technique—on a large cross-national data set to explore the dense network of potential relationships between various forensic indicators of anomalies and electoral fraud risk factors, on the one hand, and the likelihood of fraud, on the other. This approach allows us to arbitrate between the relative importance of different forensic and contextual features for identifying electoral fraud and results in a diagnostic tool that can be relatively easily implemented in cross-national research.
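The core idea in the abstract — feeding several forensic indicators and contextual risk factors into a flexible tree-ensemble model of fraud likelihood — can be sketched as follows. This is not the authors' model, data, or feature set: it uses scikit-learn's `RandomForestClassifier` as a stand-in for BART (which requires a dedicated library), and all features and labels are synthetic illustrations.

```python
# Sketch of the informed-forensics idea: combine forensic indicators and
# contextual risk factors as features for a tree-ensemble classifier of
# fraud likelihood. RandomForestClassifier is a stand-in for BART; the
# feature names and data below are hypothetical, not the authors' data set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500

# Synthetic features: three forensic indicators plus one contextual risk factor
X = np.column_stack([
    rng.normal(0, 1, n),    # deviation from expected last-digit distribution
    rng.normal(0, 1, n),    # turnout / vote-share correlation anomaly
    rng.normal(0, 1, n),    # share of polling stations with near-100% turnout
    rng.integers(0, 2, n),  # contextual factor: incumbent controls election body
])
# Synthetic label: fraud more likely when indicators and the risk factor align
logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 3] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
fraud_risk = model.predict_proba(X)[:, 1]  # per-election fraud probability

# Arbitrate between features: relative importance of each indicator
names = ["digit test", "turnout corr.", "full-turnout share", "incumbent control"]
for name, imp in zip(names, model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

The ensemble's feature importances play the role the abstract describes: ranking forensic versus contextual features by how much they help diagnose fraud, while the predicted probabilities serve as the diagnostic tool.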

This paper contributes to a series of papers published in PA that develop and test new methodologies for detecting election irregularities and potential fraud.

The second paper is by Kai Quek and Michael Sances, “Closeness Counts: Increasing Precision and Reducing Errors in Mass Election Predictions.” Measuring the closeness of election contests is important for those who study elections, so readers of this blog should find this paper of considerable interest. Here is the paper’s abstract:

Mass election predictions are increasingly used by election forecasters and public opinion scholars. While they are potentially powerful tools for answering a variety of social science questions, existing measures are limited in that they ask about victors rather than voteshares. We show that asking survey respondents to predict voteshares is a viable and superior alternative to asking them to predict winners. After showing respondents can make sensible quantitative predictions, we demonstrate how traditional qualitative forecasts lead to mistaken inferences. In particular, qualitative predictions vastly overstate the degree of partisan bias in election forecasts, and lead to wrong conclusions regarding how political knowledge exacerbates this bias. We also show how election predictions can aid in the use of elections as natural experiments, using the effect of the 2012 election on partisan economic perceptions as an example. Our results have implications for multiple constituencies, from methodologists and pollsters to political scientists and interdisciplinary scholars of collective intelligence.
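The contrast the abstract draws between quantitative vote-share predictions and qualitative "who will win" predictions can be illustrated with a toy simulation. The respondent data here are simulated, not from the paper: each hypothetical respondent forecasts the incumbent's vote share with some idiosyncratic error, and we compare the two ways of aggregating those forecasts.

```python
# Toy contrast between quantitative and qualitative election forecasts.
# All numbers are simulated for illustration; nothing here is from the paper.
import random
import statistics

random.seed(1)
true_share = 51.0  # hypothetical true incumbent vote share (%)

# Quantitative predictions: true share plus idiosyncratic respondent error
predictions = [true_share + random.gauss(0, 5) for _ in range(1000)]

# Quantitative aggregation: the mean prediction estimates the margin itself
mean_share = statistics.mean(predictions)

# Qualitative aggregation: each respondent only reports a predicted winner
winner_calls = ["incumbent" if p > 50 else "challenger" for p in predictions]
pct_incumbent = 100 * winner_calls.count("incumbent") / len(winner_calls)

print(f"mean predicted vote share: {mean_share:.1f}%")
print(f"share predicting an incumbent win: {pct_incumbent:.0f}%")
```

In a close race, the fraction of respondents naming the incumbent drifts well away from the true margin (here, noticeably above 51%), while the mean quantitative prediction recovers it — a simple version of why collapsing vote-share beliefs into winner picks can distort inferences.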