We are entering the final stretch of a US presidential campaign in which the incumbent president has all but declared in advance that he will not accept the results of the election if he appears to be losing, challenging the validity of voting by mail. Republican party officials in Pennsylvania, with its 20 electoral college votes, are contemplating having the state legislature appoint Trump electors regardless of the vote tally. In response, the challenger has raised the possibility of US military personnel forcibly escorting Mr. Trump from the White House on January 20. In the midst of the pandemic, a recession, a reckoning around racial justice, and a Supreme Court vacancy, the American political scientists in your life who study authoritarian regimes aren’t sleeping very well.
As the election unfolds, scholars of electoral politics and electoral manipulation will increasingly feel called upon to offer their assessments of the fairness of the process. And they should! Along those lines, though, the tweet below by Professor Aditya Dasgupta urges caution. Professor Dasgupta is right, of course, but I want to offer a defense of election forensics before sharing my own thoughts on how to evaluate any quick-reaction election-forensic claims about the 2020 election.
The OAS report on Bolivia
The above thread recaps some discussion around the truly awful story of the Organization of American States’ assessment of the 2019 general election in Bolivia. The author of the statistical analysis, Professor Irfan Nooruddin, alleges that there was a noticeable increase in the vote-share for the governing MAS party in late-day votes–a pattern he argues should be attributed to fraud. Following these allegations of fraud (and others, it should be noted, including by the EU and other outside experts), the opposition party took power backed by the military and police. The new government has yet to hold fresh elections, citing the covid-19 threat. The key figure from the OAS report is copied here.
I don’t know about you, but the alleged discontinuity looks an awful lot like random noise to me. And indeed that turns out to have been the case. To his credit, Prof. Nooruddin posted his replication code, which led to the discovery that the timestamps for results were sorted alphanumerically as text rather than parsed as dates, fully scrambling the chronological order of the results and making the above figure representative of nothing but randomness.
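To see how easily this kind of bug creeps in, here is a minimal sketch in Python, using hypothetical timestamps rather than the actual OAS data. Sorting time strings alphanumerically produces an ordering that has nothing to do with chronology:

```python
# Hypothetical reporting times stored as plain text (not the actual OAS data)
timestamps = ["9:05 PM", "10:15 PM", "11:30 AM", "1:45 PM"]

# An alphanumeric sort compares character by character, so "10:15 PM"
# lands before "9:05 PM" simply because "1" < "9"
print(sorted(timestamps))
# ['10:15 PM', '11:30 AM', '1:45 PM', '9:05 PM']

# Parsing the strings into actual times restores chronological order
from datetime import datetime

chronological = sorted(timestamps, key=lambda t: datetime.strptime(t, "%I:%M %p"))
print(chronological)
# ['11:30 AM', '1:45 PM', '9:05 PM', '10:15 PM']
```

Any analysis built on the scrambled ordering, like a plot of vote-share over "time", would reflect nothing but noise.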
The thought of making a coding error like this, let alone having any kind of impact on world affairs as a result, is literally a nightmare. Given the much-deserved negative attention the report is receiving, though, I think it’s necessary to stick up for election forensics as a field.
What is election forensics?
I’m not sure whether Prof. Dasgupta meant the quotes around ‘election forensics’ in the above tweet as scare quotes, but election forensics is a real thing! And it’s extremely useful! I’ll briefly lay out the advantages below; this section is based on a lecture I gave at UW-Madison’s Center for Russia, Eastern Europe, and Central Asia (available here) as well as the excellent USAID report by Allen Hicken and Walter Mebane.
Before the development of election-forensic tools, if we wanted to know about the quality of an election, we had to rely on direct observation. Election monitoring is great for developing a qualitative understanding of how elections are (mis)managed, but election observation missions suffer from important limitations. Their findings can be biased by several factors, including:
- Limited geographical and temporal coverage
- Limits on media freedom
- Strategic decisions to play up or play down evidence
Election monitors can’t be everywhere, and there is evidence that their presence in one precinct simply displaces manipulation to other precincts. For social scientists, then, election observation reports are important but not sufficient to fully assess the quality of an election.
Election forensics helps address these limitations. The term refers to a set of statistical tools that can be used to detect patterns in election results that would be unlikely to occur in a clean election, based on some set of assumptions. Because it works from the official election results, election-forensic analysis avoids the biases of selective coverage and strategic reporting. Helpfully, as statistical tools, these methods provide both point estimates of the severity of manipulation and measures of uncertainty. And, in my own area of work, they can be disaggregated to the subnational level to aid in hypothesis testing–pushing us forward in the study of why elections are manipulated differently across time and space.
This is an active area of research, and there are a large number of established forensic techniques for analyzing election integrity.
- Mixture models (Kalinin, 2019)
- Machine learning (Levin et al., 2016; Cantu and Saiegh, 2011)
- Relationship between absolute vote-share and turnout (Myagkov et al., 2009)
- Relationship between absolute vote-share and absentee / mobile turnout (Harvey, 2016)
- Uniform distribution of last digits (Beber and Scacco, 2012), using:
  - Chi-square test
  - Deviance (Skovoroda and Lankina, 2017)
  - Latent class model (Medzihorsky, 2015)
- Distribution of second digits by Benford’s Law
- ‘Spikes’ in share of precincts reporting a particular vote-share or turnout (Rozenas, 2017; Kobak et al., 2016)
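To give a flavor of what these techniques involve, here is a minimal sketch of the turnout/vote-share "fingerprint" idea behind the Myagkov et al. (2009) and Kobak et al. (2016) entries above, using simulated data. In documented cases of ballot-box stuffing, a cluster of precincts appears in the implausible high-turnout, high-vote-share corner of the plot; a real analysis would, of course, need the kinds of controls discussed later in this post.

```python
# Sketch of an "election fingerprint": a 2D histogram of precinct turnout
# against the leading party's vote share. Ballot-box stuffing tends to
# produce a cluster of precincts near (1.0, 1.0). Data are simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 5000

# Clean precincts: turnout and vote share vary around moderate means
turnout = rng.normal(0.55, 0.08, n).clip(0, 1)
vote_share = rng.normal(0.50, 0.10, n).clip(0, 1)

# A stuffed election would add something like:
# turnout = np.append(turnout, rng.normal(0.95, 0.03, 500).clip(0, 1))
# vote_share = np.append(vote_share, rng.normal(0.95, 0.03, 500).clip(0, 1))

plt.hist2d(turnout, vote_share, bins=50, range=[[0, 1], [0, 1]])
plt.xlabel("Precinct turnout")
plt.ylabel("Leader's vote share")
plt.title("Simulated clean election: no high-turnout/high-share cluster")
plt.show()
```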
Notice something missing from the list above: any mention of using the time at which ballots were cast, à la the OAS report. I have never seen a peer-reviewed paper using this method (they may exist, and please share them with me if they do). Let’s not tar this research agenda with work that falls outside its scope!
Examples of election forensic techniques
What does this look like in practice? I think the most intuitive example is provided by Beber and Scacco (2012). Imagine taking the results for a given political party across all the precincts in a US state, and dividing them up into stacks based on the 1s digit of the result. If the party earns 200 votes in Precinct A, that precinct goes in the 0s stack. If it gets 201 votes in Precinct B, into the 1s stack. Across all precincts, in a clean election, the 1s digits should be distributed approximately uniformly, with each stack about the same height. The resulting stacks should look like this.
But it turns out that human beings are poor random number generators, and systematically favor certain numbers. So if human beings are involved in inflating a party’s vote-share, it is likely that there will be too many 0s and 5s, too many low numbers, and not enough large numbers. The resulting stacks often look something like this:
But this requires us to feel confident in the assumption that a clean election will result in an even distribution; other non-malicious factors could result in too many zeroes, for example. Perhaps the election workers are poorly trained or understaffed, and simply round their tallies up or down?
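For the curious, here is a minimal sketch of the last-digit test in code, using simulated vote counts rather than real returns. A serious application would also need to worry about precincts with very small vote totals, where the uniformity assumption breaks down.

```python
# Sketch of the Beber and Scacco (2012) last-digit test: with reasonably
# large precinct counts, last digits in a clean election should be close
# to uniform. We compare observed digit frequencies to that expectation
# with a chi-square goodness-of-fit test. All counts here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated clean precinct-level vote counts for one party
clean_counts = rng.poisson(400, size=2000)

# Simulated fabricated counts: a human "inventing" totals overuses 0s and 5s
fabricated_counts = rng.poisson(400, size=2000)
rounded = rng.random(2000) < 0.3                      # 30% of totals rounded
fabricated_counts[rounded] -= fabricated_counts[rounded] % 5

def last_digit_test(counts):
    """Chi-square test of last digits against a uniform distribution."""
    observed = np.bincount(counts % 10, minlength=10)
    return stats.chisquare(observed)   # expected defaults to uniform

print(last_digit_test(clean_counts))       # large p-value: looks uniform
print(last_digit_test(fabricated_counts))  # tiny p-value: too many 0s and 5s
```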
Thinking ahead to the 2020 election
There is a worryingly real possibility that the 2020 election will be a democratic disaster for the United States. To the extent that you see election-forensic analyses of the results appearing shortly after the election, you should treat them with caution. Here are some things to consider.
- US elections are almost entirely free of fraud, falsification, ballot-stuffing, vote-buying, and the other techniques that election forensics is designed to detect. If an analysis purports to show evidence of such malfeasance, skepticism is in order.
Without appropriate statistical controls, natural patterns in the data may look suspicious to the model. For example, several of the techniques described above assume that precincts are homogeneous within the territory studied. But a US state may be highly heterogeneous–perhaps a low-turnout rural area overwhelmingly supports Trump, while a high-turnout urban area overwhelmingly supports Biden. This will look like pro-Biden tampering to the model in the absence of appropriate controls.
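A quick simulation makes the point (illustrative numbers only): two perfectly clean but demographically different groups of precincts generate a strong pooled correlation between turnout and Biden's vote-share, of just the sort a naive forensic test could misread as fraud.

```python
# Simulated illustration: two honest but different groups of precincts
# produce a turnout/vote-share correlation with no fraud whatsoever.
import numpy as np

rng = np.random.default_rng(0)

# Rural precincts: lower turnout, strongly pro-Trump
rural_turnout = rng.normal(0.50, 0.05, 1000)
rural_biden = rng.normal(0.30, 0.05, 1000)

# Urban precincts: higher turnout, strongly pro-Biden
urban_turnout = rng.normal(0.70, 0.05, 1000)
urban_biden = rng.normal(0.75, 0.05, 1000)

turnout = np.concatenate([rural_turnout, urban_turnout])
biden_share = np.concatenate([rural_biden, urban_biden])

# Pooled, the state looks like turnout "drives" Biden's vote-share...
print(np.corrcoef(turnout, biden_share)[0, 1])        # strongly positive

# ...but within each homogeneous group there is no relationship at all
print(np.corrcoef(rural_turnout, rural_biden)[0, 1])  # near zero
print(np.corrcoef(urban_turnout, urban_biden)[0, 1])  # near zero
```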
- To the extent that US elections are manipulated, this is accomplished through the law.
Bias in US elections largely stems from procedurally legitimate tools like gerrymandering, the geographical allocation of polling places, or the challenging of signatures on mail-in ballots. Election forensics is not well suited to detecting this kind of manipulation. Other quantitative tools (such as the efficiency gap for measuring gerrymandering) may be useful here instead.
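For the curious, the efficiency gap is straightforward to compute from district-level results. Here is a minimal sketch with invented numbers, using one common formulation: all votes for a losing candidate are "wasted", as are the winner's votes beyond a bare majority.

```python
# Minimal sketch of the efficiency gap, a measure of partisan gerrymandering.
# District results below are invented for illustration.

def efficiency_gap(districts):
    """districts: list of (votes_a, votes_b) tuples, one per district."""
    wasted_a = wasted_b = total = 0.0
    for votes_a, votes_b in districts:
        district_total = votes_a + votes_b
        threshold = district_total / 2        # votes needed for a majority
        if votes_a > votes_b:
            wasted_a += votes_a - threshold   # winner's surplus is wasted
            wasted_b += votes_b               # all losing votes are wasted
        else:
            wasted_a += votes_a
            wasted_b += votes_b - threshold
        total += district_total
    # Positive values mean party A wastes more votes, i.e. the map favors B
    return (wasted_a - wasted_b) / total

# A "packed and cracked" map: party A wins 56% of the votes statewide but
# is packed into one lopsided district and narrowly loses the other three
districts = [(90, 10), (45, 55), (45, 55), (45, 55)]
print(f"{efficiency_gap(districts):+.2f}")    # +0.38: the map favors party B
```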
- As always, interrogate the modelers’ assumptions about what a clean election should look like.
Election forensic models live or die based on how well their underlying assumptions match reality. For this, we need extensive case knowledge about how electoral procedures are manipulated in a particular country. We don’t have much of that knowledge for the United States, at least when it comes to illegal manipulation techniques or novel threats like efforts to slow down the mail. Here we need to turn to election experts and those with practical expertise in election management.
Concluding thoughts
Election forensics is an incredible tool for social scientists who study election integrity. But like all tools, it can be dangerously misapplied. To the extent these methods are useful in the immediate aftermath of November 2020, they are likely to show little if any illegal electoral manipulation like fraud or ballot-stuffing–especially if the models include appropriate statistical controls. Models that purport to show otherwise should be treated with skepticism until their formal assumptions are shown to match reality.
Nevertheless, citizens and researchers are right to be concerned about the integrity of the US election and its aftermath. Existing election-forensic models are simply less well suited to telling us when the legal structure in which the election takes place is unfair or biased. And that is largely where the problem lies.