The Latest Spin From Voting Machine Makers: What Problems?
By Dan Wallach, Rice University   
July 08, 2008

This article was posted at AlterNet and is reposted here with permission of the author.

Last week, I testified before the Texas House Committee on Elections (you can read my testimony). I've done this many times before, but I figured this time would be different. This time, I was armed with the research from the California "Top to Bottom" reports and the Ohio EVEREST reports. I was part of the Hart InterCivic source code team for California's analysis. I knew the problems. I was prepared to discuss them at length. Wow, was I disappointed. Here's a quote from Peter Lichtenheld, speaking on behalf of Hart InterCivic:

Security reviews of the Hart system as tested in California, Colorado, and Ohio were conducted by people who were given unfettered access to code, equipment, tools and time and they had no threat model. While this may provide some information about system architecture in a way that casts light on questions of security, it should not be mistaken for a realistic approximation of what happens in an election environment. In a realistic election environment, the technology is enhanced by elections professionals and procedures, and those professionals safeguard equipment and passwords, and physical barriers are there to inhibit tampering. Additionally, jurisdiction ballot count, audit, and reconciliation processes safeguard against voter fraud.
You can find the whole hearing online (via RealAudio streaming), where you will hear the Diebold/Premier representative, as well as David Beirne, the director of their trade organization, saying essentially the same thing. Since this seems to be the voting system vendors' party line, let's spend some time analyzing it.

Did our work cast light on questions of security? Our work found a wide variety of flaws, most notably the possibility of "viral" attacks, where a single corrupted voting machine could spread that corruption, as part of regular processes and procedures, to every other voting system. In effect, one attacker, corrupting one machine, could arrange for every voting system in the county to be corrupt in the subsequent election. That's a big deal. At this point, the scientific evidence is in, it's overwhelming, and it's indisputable. The current generation of DRE voting systems has a wide variety of dangerous security flaws. There's simply no justification for the vendors to be making excuses or otherwise downplaying the clear scientific consensus on the quality of their products.
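To see why a single corrupted machine is so dangerous, consider a minimal simulation of the propagation pattern described above. The class names and the two-way "sync" step are illustrative assumptions, not Hart's actual architecture; the point is only that routine post-election data collection gives the infection a path from one terminal to the shared back-end, and from the back-end to every other terminal.

```python
# Hypothetical sketch (names are illustrative): one infected terminal
# contaminates the shared back-end during routine data collection, and
# the back-end then contaminates every terminal it later touches.

class Terminal:
    def __init__(self, ident):
        self.ident = ident
        self.infected = False

class BackEnd:
    def __init__(self):
        self.infected = False

    def sync(self, terminal):
        # Routine two-way data exchange: corruption crosses in both directions.
        if terminal.infected:
            self.infected = True
        if self.infected:
            terminal.infected = True

def run_election_cycle(backend, terminals):
    # Close-out: every terminal in the county syncs with the back-end.
    for t in terminals:
        backend.sync(t)

terminals = [Terminal(i) for i in range(100)]
terminals[0].infected = True  # a single tampered machine

backend = BackEnd()
run_election_cycle(backend, terminals)  # election 1: back-end corrupted
run_election_cycle(backend, terminals)  # election 2: every terminal corrupted

print(sum(t.infected for t in terminals))  # → 100
```

After at most two election cycles, every machine that touched the back-end is compromised, which is exactly the viral pattern the reports describe.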

Were we given unfettered access?
The big difference between what we had and what an attacker might have is that we had some (but not nearly all) source code to the system. An attacker who arranged for some equipment to "fall off the back of a truck" would be able to extract all of the software, in binary form, and then would need to go through a tedious process of reverse engineering before reaching parity with the access we had. The lack of source code has demonstrably failed to do much to slow down attackers who find holes in other commercial software products. Debugging and decompilation tools are really quite sophisticated these days. All this means is that an attacker would need additional time to do the same work that we did.

Did we have a threat model? Absolutely! See chapter three of our report, conveniently titled "Threat Model." The different teams working on the top to bottom report collaborated together to draft this chapter. It talks about attackers' goals, levels of access, and different variations on how sophisticated an attacker might be. It is hard to accept that the vendors can get away with claiming that the reports did not have a threat model, when a simple check of the table of contents of the reports disproves their claim.

Was our work a "realistic approximation" of what happens in a real election? When the vendors call our work "unrealistic", they usually mean one of two things:
  1. Real attackers couldn't discover these vulnerabilities.
  2. These vulnerabilities can't be exploited in the real world.

Both of these arguments are wrong. In real elections, individual voting machines are not terribly well safeguarded. In a studio where I take swing dance lessons, I found a rack of eSlates two weeks after the election in which they were used. They were in their normal cases. There were no security seals. (I didn't touch them, but I did have a very good look around.) That's more than sufficient access for an attacker wanting to tamper with a voting machine. Likewise, Ed Felten has a series of posts on his Freedom to Tinker blog about unguarded voting machines in Princeton. Can an attacker learn enough about these machines to construct the attacks we described in our report? This sort of thing would need to be done in private, where a team of smart attackers could carefully reverse engineer the machine and piece together the attack. I'll estimate that it would take a group of four talented people, working full time, two to three months of effort to do it. Once. After that, you've got your evil attack software, ready to go, with only minutes of effort to boot a single eSlate, install the malicious software patch, and then it's off to the races. The attack would only need to be installed on a single eSlate per county in order to spread to every other eSlate. The election professionals and procedures would be helpless to prevent it. (Hart has a "hash code testing" mechanism that's meant to determine if an eSlate is running authentic software, but it's trivial to defeat. See issues 9 through 12 in our report.)
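The weakness of self-reported integrity checks like the one mentioned above is easy to demonstrate in miniature. This is a generic sketch of the concept, not Hart's mechanism: if the device being checked is the same device computing the hash, compromised software can simply replay the known-good answer instead of hashing the code it is actually running.

```python
import hashlib

# Illustrative setup: the verifier knows the hash of the authentic image.
AUTHENTIC_IMAGE = b"authentic firmware v1.0"
EXPECTED_HASH = hashlib.sha256(AUTHENTIC_IMAGE).hexdigest()

class HonestDevice:
    def __init__(self, image):
        self.image = image

    def report_hash(self):
        # Honest firmware hashes whatever it is actually running.
        return hashlib.sha256(self.image).hexdigest()

class CompromisedDevice(HonestDevice):
    def report_hash(self):
        # Malicious firmware replays the known-good answer instead of
        # hashing the code it is actually running.
        return EXPECTED_HASH

def passes_check(device):
    return device.report_hash() == EXPECTED_HASH

print(passes_check(HonestDevice(AUTHENTIC_IMAGE)))        # True
print(passes_check(CompromisedDevice(b"evil firmware")))  # also True
```

Because the check trusts the device's own answer, both the honest and the compromised device pass, which is why such mechanisms provide little real assurance.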

What about auditing, reconciliation, "logic and accuracy" testing, and other related procedures? Again, all easily defeated by a sophisticated attacker. Generally speaking, there are several different kinds of tests that DRE systems support. "Self-tests" are trivial for malicious software to detect, allowing the malicious software either to suppress the test and fake its results or simply to behave correctly for its duration. Most "logic and accuracy" tests boil down to casting a handful of votes for each candidate and then doing a tally. Malicious software might simply behave correctly until more than a handful of votes have been received. Likewise, malicious software might just look at the clock and behave correctly unless it's the proper election day. Parallel testing is about pulling machines out of service and casting what appear to be completely normal votes on them while the real election is ongoing. This may or may not detect malicious software, but nobody in Texas does parallel testing. Auditing and reconciliation are all about comparing different records of the same event. If you've got a voter-verified paper audit trail (VVPAT) attachment to a DRE, then you could compare it with the electronic records.
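The two evasion tricks described above, waiting out the first handful of votes and checking the date, take only a few lines to express. This is a hypothetical sketch (the class, threshold, and candidate names are all invented for illustration), showing why a machine that passes every logic-and-accuracy test can still cheat on election day.

```python
import datetime

class TamperedTallier:
    """Illustrative sketch: mis-tally only under real-election conditions."""

    VOTE_THRESHOLD = 25  # L&A tests typically cast only a handful of votes

    def __init__(self, election_day, favored, victim):
        self.election_day = election_day
        self.favored = favored
        self.victim = victim
        self.tally = {}
        self.count = 0

    def record_vote(self, candidate, today):
        self.count += 1
        looks_like_a_test = (today != self.election_day
                             or self.count <= self.VOTE_THRESHOLD)
        if not looks_like_a_test and candidate == self.victim:
            candidate = self.favored  # silently flip the vote
        self.tally[candidate] = self.tally.get(candidate, 0) + 1

election_day = datetime.date(2008, 11, 4)
test_day = datetime.date(2008, 10, 1)

# Logic-and-accuracy test: a handful of votes on a non-election day.
machine = TamperedTallier(election_day, favored="A", victim="B")
for _ in range(5):
    machine.record_vote("B", test_day)
print(machine.tally)  # {'B': 5} -- the machine behaves perfectly under test

# Real election: many votes cast on election day.
machine = TamperedTallier(election_day, favored="A", victim="B")
for _ in range(100):
    machine.record_vote("B", election_day)
print(machine.tally)  # {'B': 25, 'A': 75} -- flipping begins past the threshold
```

Every pre-election test sees honest behavior; only a procedure that mimics real-election conditions (such as parallel testing) has a chance of triggering the malicious path.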

Texas has not yet certified any VVPAT printers, so those won't help here. (The VVPAT printers sold by current DRE vendors have other problems, but that's a topic for another day.) The "redundant" memories in the DREs are all that you've got left to audit or reconcile. Our work shows how this redundancy is unhelpful against security threats; malicious code will simply modify all of the copies in synchrony.
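The futility of auditing redundant copies held inside one compromised device can be sketched in a few lines. This is a generic illustration under assumed names, not Hart's actual record format: because the malicious code controls the whole device, it rewrites every copy at once, so an internal consistency check still reports agreement.

```python
class RedundantRecord:
    """Sketch: three 'redundant' copies of the same tally in one device."""

    def __init__(self, tally):
        self.copies = [dict(tally) for _ in range(3)]

    def reconcile(self):
        # Internal audit: do all the copies agree with each other?
        return all(c == self.copies[0] for c in self.copies)

    def malicious_overwrite(self, tampered_tally):
        # Malicious code controls the device, so it rewrites every copy
        # in synchrony; reconciliation still reports agreement.
        self.copies = [dict(tampered_tally) for _ in range(3)]

record = RedundantRecord({"A": 40, "B": 60})
record.malicious_overwrite({"A": 60, "B": 40})
print(record.reconcile())  # True -- the tampering is invisible to this check
```

Redundancy only detects tampering when at least one copy is outside the attacker's reach, which is exactly what an independent paper record provides and an all-electronic DRE does not.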

Later, the Hart representative remarked:

The Hart system is the only system approved as-is for the November 2007 general election after the top to bottom review in California.
This line of argument depends on the fact that most of Hart's customers will never bother to read our actual report. As it turns out, this was largely true in the initial rules from the CA Secretary of State, but you need to read the current rules, which were released several months later. The new rules, in light of the viral threat against Hart systems, require the back-end system ("SERVO") to be rebooted after each and every eSlate is connected to it. That's hardly "as-is". If you have thousands of eSlates, properly managing an election with them will be exceptionally painful. If you only have one eSlate per precinct, as California required for the other vendors, with most votes cast on optical-scanned paper ballots, you would have a much more manageable election.

What's it all mean? Unsurprisingly, the vendors and their trade organization are spinning the results of these studies, as best they can, in an attempt to downplay their significance. Hopefully, legislators and election administrators are smart enough to grasp the vendors' behavior for what it actually is and take appropriate steps to bolster our election integrity.

Until then, the bottom line is that many jurisdictions in Texas and elsewhere in the country will be using e-voting equipment this November with known security vulnerabilities, and the procedures and controls they are using will not be sufficient to either prevent or detect sophisticated attacks on their e-voting equipment. While there are procedures with the capability to detect many of these attacks (e.g., post-election auditing of voter-verified paper records), Texas has not certified such equipment for use in the state. Texas's DREs are simply vulnerable to and undefended against attacks.

CORRECTION: In the comments, Tom points out that Travis County (Austin) does perform parallel tests.  Other Texas counties don't. This means that some classes of malicious machine behavior could potentially be discovered in Travis County.
