This article was originally posted at The Huffington Post on July 12, 2013.
How do we know whether the reported winners of an election really won?
There’s no perfect way to count votes. To paraphrase Ulysses S. Grant and Richard M. Nixon, “Mistakes will be made.” Voters don’t always follow instructions. Voting systems can be mis-programmed, as they were last year in Palm Beach, Florida. Ballots can be misplaced, as they were last year in Palm Beach, Florida, and in Sacramento, California. And election fraud is not entirely unknown in the U.S.
Computers can increase the efficiency of elections and make voting easier for people who cannot read English or who have disabilities. But the more elections depend on technology, the more vulnerable they are to failures, bugs, and hacking. Foreign attacks on elections also may be a real threat.
Even if we count votes by hand, there will be mistakes. How can we have confidence in the results?
To check accuracy, there needs to be something to check against — an audit trail. Currently, the best audit trail is paper, either marked by the voter or printed by voting machines. If voters check that the paper records their choices correctly, auditors can use the paper to check election results. Unfortunately, roughly a quarter of U.S. voters use machines that don’t produce a paper trail. There’s no way to be sure their votes count.
A paper trail doesn’t help if it’s incomplete or corrupt; current curation procedures could be better. And paper doesn’t help if nobody looks at it.
That’s where chance comes in. Statistics means never having to say you’re certain, but it can justify confidence. Examining a small number of paper records — selected properly — can provide strong evidence.
You can tell if a pot of soup is too salty by tasting a tablespoon. You don’t have to drink the whole pot. Stirring is key. If you don’t stir the soup, all the salt could be at the bottom. Taking a tablespoon after stirring amounts to taking a random sample. Random is not the same as haphazard; stirring is not the same as sticking in the spoon without looking.
We can’t literally stir paper, but we can do something with the same effect. Imagine numbering the paper records. Select a number at random, for instance, by rolling fair 10-sided dice. Repeat. The random sample is every paper record whose number comes up.
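The dice-rolling selection described above can be sketched in a few lines of code. This is only an illustration, not an official audit procedure; the function name and the digit-by-digit dice scheme are assumptions made for the example.

```python
import random

def draw_sample(num_ballots, sample_size, seed=None):
    """Select distinct ballot numbers uniformly at random, as if rolling
    one fair 10-sided die per digit of the ballot number.  Rolls that
    fall outside 1..num_ballots are simply ignored and re-rolled."""
    rng = random.Random(seed)  # stands in for physical dice
    digits = len(str(num_ballots))
    sample = set()
    while len(sample) < sample_size:
        # Each die roll gives one digit, 0-9; the digits form a number.
        n = int("".join(str(rng.randrange(10)) for _ in range(digits)))
        if 1 <= n <= num_ballots:
            sample.add(n)
    return sorted(sample)
```

Because every ballot number is equally likely to come up on each roll, the result is a genuine random sample — the computational equivalent of stirring the pot before tasting.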
Typical audit laws require hand-counting votes in a specified percentage of precincts, selected at random. This is odd to statisticians. It’s like saying you have to taste a specific percentage of a pot of soup when, in fact, you just need a tablespoon, no matter how big the pot. The size of the sample matters — a drop wouldn’t be enough — but the size as a percentage of the population doesn’t matter much. Moreover, the narrower the margin, the more a small problem might matter, so it makes sense to audit more when margins are slim.
If a large enough random sample of paper records shows a large enough margin for the reported winner, that’s strong statistical evidence that the reported winner really won. For instance, the chance that 100 ballots selected at random show 57 or more votes for a losing candidate is less than 10 percent. So, if 100 randomly selected ballots have 57 or more votes for the reported winner, there is 90 percent “confidence” in the outcome.
But what if the ballots show 48 votes for the reported winner and 52 for the runner-up? That would hardly be evidence that the outcome is right. It doesn’t seem satisfactory to stop the audit.
I had an “aha” moment in 2007. If the outcome is right, an audit should require as little work as possible. But if the outcome is wrong — no matter why — an audit should guarantee a large known chance of correcting the outcome.
An audit with that guarantee is called “risk-limiting.” The risk limit is the largest chance that a wrong outcome will slip through. Efficient risk-limiting audits are like incremental recounts that stop when there’s strong evidence that continuing is pointless. If the audit doesn’t find strong evidence, it leads to a full hand-count, correcting the outcome if it was wrong.
Risk-limiting audits are endorsed by the American Statistical Association, the League of Women Voters, Common Cause, Verified Voting, and many other organizations concerned with election integrity. Colorado has a law requiring risk-limiting audits starting next year (implementation may be postponed). California passed a law to pilot risk-limiting audits in 2011 and 2012.
My collaborators and I have developed a variety of methods for risk-limiting audits. With the helpful cooperation of 10 jurisdictions, we’ve audited 16 elections in California. Others have done risk-limiting audits in Colorado and Ohio.
One of the simplest methods — ballot polling — could be used in any jurisdiction that has a paper trail and keeps good track of that paper. Ballot polling would let most states with paper trails confirm presidential elections at 10 percent risk by auditing a few hundred ballots, on average. For margins as small as 4 percent, at 10 percent risk, ballot-polling rarely requires looking at more than a few thousand ballots from the contest — if the outcome is right. For statewide contests, that’s a tiny additional expense.
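The incremental, stop-when-convinced character of ballot polling can be sketched with a sequential likelihood-ratio test, in the spirit of methods like BRAVO. This is a simplified illustration under assumed conditions (two candidates, every ballot showing a vote for one of them), not the certified procedure.

```python
import random

def ballot_polling_audit(ballots, reported_winner_share, risk_limit=0.1, seed=None):
    """Simplified ballot-polling audit sketch.  Sample ballots in random
    order, updating a sequential likelihood ratio against the hypothesis
    that the reported winner actually tied or lost.  Stop and confirm
    when the evidence meets the risk limit; otherwise fall back to a
    full hand count."""
    rng = random.Random(seed)
    order = rng.sample(range(len(ballots)), len(ballots))
    t, threshold = 1.0, 1.0 / risk_limit  # confirm when t >= 1/risk_limit
    for count, i in enumerate(order, start=1):
        if ballots[i] == "winner":
            t *= reported_winner_share / 0.5
        else:
            t *= (1 - reported_winner_share) / 0.5
        if t >= threshold:
            return ("confirmed", count)
    return ("full hand count", len(ballots))
```

With a comfortable margin, the test statistic typically crosses the threshold after only a few hundred ballots; with a narrow or wrong reported outcome, the audit escalates toward a full hand count — which is exactly the risk-limiting guarantee.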
For smaller margins, ballot polling can require auditing many more ballots; however, there are more efficient risk-limiting methods. The most efficient method checks how the voting system interpreted individual ballots. It requires checking no more than a few thousand randomly selected ballots for margins down to two tenths of a percent — if the voting system did not make many errors. Such a method could reduce the need for expensive, contentious recounts. Unfortunately, current federally certified voting systems don’t report how they interpret individual ballots. But Travis County, Texas, and Los Angeles County, California, are developing new voting systems designed for efficient risk-limiting audits. That will be a giant step forward for U.S. election integrity.
Elections are too important not to leave to chance. In (fair) dice we trust.
Stark is professor and chair of the department of statistics at the University of California, Berkeley, and a member of the Verified Voting Board of Advisors.