New York Times Magazine on E-voting
By Dan Wallach, Rice University   
January 06, 2008

This article was posted at Ed Felten's Freedom to Tinker Blog and is reposted here with permission of the author.


This Sunday’s New York Times Magazine has an article by Clive Thompson on electronic voting machines. Freedom to Tinker’s Ed Felten is briefly quoted, as are a small handful of other experts. The article is a reasonable summary of where we are today, with paperless electronic voting systems on a downswing and optical scan paper ballots gaining in popularity. The article even conveys the importance of open source and the broader importance of transparency, i.e., convincing the loser that he or she legitimately lost the election.


A few points in the article are worth clarifying. For starters, Pennsylvania is cited as the “next Florida”: a swing state using paperless electronic voting systems whose electoral votes could well be decisive in the 2008 presidential election. In other words, Pennsylvania has the perfect recipe to cause electoral chaos this November. Pennsylvania presently bans paper-trail attachments to voting systems. While it’s not necessarily too late to reverse this decision, Pennsylvania’s examiner for electronic voting systems, Michael Shamos, has often (and rightly) criticized these continuous paper-tape systems because they can compromise voters’ anonymity. Furthermore, the article cites evidence from Ohio, where a claimed 20 percent of these printers jammed, presumably without voters noticing and complaining. This is also consistent with a recent PhD thesis by Sarah Everett, in which she used a homemade electronic voting system that would insert deliberate errors into the summary screen. About two thirds of her test subjects never noticed the errors and, amazingly enough, gave the system extremely high subjective marks. If voters don’t notice errors on a summary screen, then it’s reasonable to suppose they would be similarly unlikely to notice errors on a printout.


Rather than adding a bad paper-tape printer, the article explains that hand-marked, optically scanned paper ballots are presently seen as the best available voting technology. Among technologies presently on the market and certified for use, this is definitely the case. A variety of assistive devices exist to help voters with low vision, no vision, and other disabilities, although there’s plenty of room for improvement on that score.


Unfortunately, optical scanners themselves have their own security problems. For example, the Hart InterCivic eScan (a precinct-based optical scanner) has an Ethernet port on the back, and you can pretty much just jack in and send it arbitrary commands that can extract or rewrite the firmware and/or recorded votes. This year’s studies from California and Ohio found a variety of related issues. [I was part of the source code review team for the California study of Hart InterCivic.] The only short-term way to compensate for these design flaws is to manually audit the results. This is probably the biggest issue glossed over in the article: when you have an electronic tabulation system, you must also have a non-electronic auditing procedure to verify the correctness of the electronic tabulation. This is best done by randomly sampling the ballots, counting them by hand, and statistically comparing them to the official totals. In tight races, you sample more ballots to increase your confidence. Rep. Rush Holt’s bill, which has yet to come up for a vote, would require this nationwide, but it’s something that any state or county could and should institute on its own.
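To make the sampling idea concrete, here is a minimal sketch (in Python, with toy data; the function name and election numbers are my own illustration, not any official audit procedure) of hand-counting a random sample of paper ballots and comparing the sample’s vote shares against the machine-reported shares:

```python
import random
from collections import Counter

def audit_sample(paper_ballots, machine_totals, sample_size, seed=0):
    """Hand-count a random sample of paper ballots and report how far
    each candidate's sample share deviates from the official machine
    share. Large deviations would trigger a bigger sample or a full
    hand recount."""
    rng = random.Random(seed)  # fixed seed only for reproducibility here
    sample = rng.sample(paper_ballots, sample_size)
    hand_count = Counter(sample)

    total_machine = sum(machine_totals.values())
    discrepancies = {}
    for candidate in machine_totals:
        machine_share = machine_totals[candidate] / total_machine
        sample_share = hand_count[candidate] / sample_size
        discrepancies[candidate] = sample_share - machine_share
    return discrepancies

# Toy election: 10,000 ballots, machines report a 52/48 split.
ballots = ["A"] * 5200 + ["B"] * 4800
totals = {"A": 5200, "B": 4800}
print(audit_sample(ballots, totals, sample_size=500))
```

The tighter the race, the smaller the deviation you need to detect, and so the larger the sample must be, which is exactly why close contests demand more auditing effort.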


Lastly, the article has a fair amount of discussion of the Sarasota fiasco in November 2006, where roughly one in seven ballots cast electronically recorded an “undervote” in the Congressional race, while far fewer undervotes were recorded in other races on the same ballot. If you do any sort of statistical projection that replaces even a fraction of those undervotes according to the observed ratios of cast votes, then the Congressional race would have had a different winner. [I worked as an expert for the Jennings campaign in the Sarasota case. David Dill and I wrote a detailed report on the Sarasota undervote issue. It is our opinion that there is not presently any definitive explanation for the causes of Sarasota’s undervote rate, and a lot of analysis still needs to be performed.]
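As an illustration of the kind of projection described above, here is a sketch with made-up numbers (these are not Sarasota’s actual totals or ratios; the function and figures are purely hypothetical): allocate some fraction of the undervoted ballots to the candidates according to an observed vote ratio, and see whether the outcome changes.

```python
def project_undervotes(recorded, undervotes, observed_ratio, recovery_rate):
    """Allocate a fraction of the undervoted ballots to candidates
    according to an observed vote ratio (e.g., the split among ballots
    on the same machines that did record a vote in the race)."""
    recovered = undervotes * recovery_rate
    return {c: recorded[c] + recovered * share
            for c, share in observed_ratio.items()}

# Hypothetical close race: candidate A trails by 400 recorded votes,
# but if half of 18,000 undervotes are projected back at a 53/47
# split, A comes out ahead.
recorded = {"A": 64000, "B": 64400}
ratio = {"A": 0.53, "B": 0.47}
print(project_undervotes(recorded, 18000, ratio, recovery_rate=0.5))
```

The point of the exercise is sensitivity: when the undervote pool is large relative to the margin, even modest assumptions about how those voters would have voted are enough to change the winner.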


There are three theories raised in the article to explain Sarasota’s undervote anomaly: deliberate abstention (voters deliberately choosing to leave the race blank), human factors (voters being confused by the layout of the page), and malfunctioning machines. The article offers no support for the abstention theory beyond the assertions of Kathy Dent, the Sarasota County election supervisor, and ES&S, Sarasota’s equipment vendor (neither of whom has ever offered any support for these assertions). Dan Rather Reports covered many of the issues that could lead to machine malfunction, including poor quality control in manufacturing. To support the human factors theory, the article refers only to “early results from a separate test by an MIT professor”, but the professor in question, Ted Selker, has never published these results. The only detail I’ve ever been able to find about his experiments is this quote from a Sarasota Herald-Tribune article:

On Tuesday [November 14, 2006], Selker set up a computer with a dummy version of the Sarasota ballot at the Boston Museum of Science to test the extent of the ballot design problems.
Twenty people cast fake ballots and two people missed the District 13 race. But the experiment was hastily designed and had too few participants to draw any conclusion, Selker said.
Needless to say, that’s not enough experimental evidence to support a usefully quantitative conclusion. The article also quotes Michael Shamos with some very specific numbers:
It’s difficult to say how often votes have genuinely gone astray. Michael Shamos, a computer scientist at Carnegie Mellon University who has examined voting-machine systems for more than 25 years, estimates that about 10 percent of the touch-screen machines “fail” in each election. “In general, those failures result in the loss of zero or one vote,” he told me. “But they’re very disturbing to the public.”
I would love to know where he got those numbers, since many real elections, such as the Sarasota election, seem to have yielded far larger problem rates.


For the record, it’s worth pointing out that Jennings has never conceded the election. Instead, after Florida’s courts denied her motions for expert discovery (i.e., she asked the court to let her experts have a closer look at the voting machines and the court said “no”), Jennings moved her complaint to the Committee on House Administration. Technically, Congress is responsible for seating its own members and can overturn a local election result. The committee has asked the Government Accountability Office to investigate further. They’re still working on it. Meanwhile, Jennings is preparing to run again in 2008.


In summary, the NYT Magazine article did a reasonable job of conveying the high points of the electronic voting controversy. There will be no surprises for anybody who follows the issue closely, and there are only a few places where the article conveys “facts” that are “truthy” without necessarily being true. If you want to get somebody up to speed on the electronic voting issue, this article makes a fine starting place.


[Irony sidebar: in the same election where Jennings lost due to the undervote anomaly, a voter initiative appeared on the ballot that would require the county to replace its touchscreen voting systems with paper ballots. That initiative passed.]
