
4
Technology Issues

As described in Chapter 1, an election is not a single event but rather a process. It is thus helpful to consider the information technology (IT) of voting in two logically distinct categories: IT for voter registration and IT for voting.

4.1 INFORMATION TECHNOLOGY FOR VOTER REGISTRATION

Voter registration is also affected by information technology, though the subject has received comparatively little attention in the public debate and is only beginning to draw notice. Voter registration is the gatekeeping process that seeks to ensure that only those eligible to vote are indeed allowed to do so when they show up at the polls. Although much of the voter registration process unfolds before Election Day, the final step generally occurs on Election Day itself: citizens register to vote in advance and, presuming that they vote at the polls, have their voting credentials checked on Election Day.

Voter registration is a complex process, as one might expect of a decentralized endeavor that involves millions of voters. Historically, voter registration has been a local function and a primary responsibility of local election officials. However, under the Help America Vote Act of 2002 (HAVA), states are required to assume responsibilities that had previously been the province of individual local election jurisdictions. Specifically, HAVA calls for the states to create, for use in federal elections, a “single, uniform, official, centralized, interactive computerized statewide voter registration list defined, maintained, and administered at the State level,” containing registration information and a unique identifier for every registered voter in the state. This requirement applies to essentially all states; according to the Department of Justice, it would not be satisfied by local election jurisdictions continuing to maintain their own nonuniform voter registration systems in which records are only periodically exchanged with the state. Rather, HAVA requires a true statewide system that is both uniform in each local election jurisdiction and administered at the state level.1

Once a voter registry has been established, two primary technology-related tasks for voter registrars are to keep ineligible individuals off the registration lists and to make sure that eligible individuals who are on the lists stay there. A third task—registering new voters—occurs continually as people come of age or move into a community and want to vote, and it normally spikes just before or during an election. However, registering new voters occurs on a “retail,” case-by-case basis, in contrast to the purging function, which is necessarily done “wholesale.”

Purging tasks arise because individuals identified as eligible voters may lose their eligibility for a number of reasons. A list of such reasons from Florida is typical2—voters may lose eligibility due to felony convictions, civil court rulings of mental incapacity, death, and inactivity. In addition, a voter may cease to be properly registered, because his or her eligibility to vote in particular electoral contests can be affected by a change in residence or by redistricting that places his or her residence in a different voting district. Finally, an individual registered to vote in more than one local election jurisdiction, even if he or she is otherwise an eligible voter, may vote only in the location in which he or she is legally entitled to vote.

Because lists of registered voters contain millions of entries, the purging of a voter registration list must be at least partially automated. That is, a computer is required to compare a large volume of information received from other secondary sources (e.g., departments of vital statistics for death notices, law enforcement or corrections agencies for felony convictions, departments of tax collection or motor vehicles for recent addresses) against its own database of eligible voters to determine if a given individual continues to be eligible. Note also that states do not in general check across state boundaries to see if voters are registered in more than one state or if they have voted in two states on Election Day.

1. See http://www.usdoj.gov/crt/voting/misc/faq.htm.

2. Florida Department of State, Florida Voter Registration System: Proposed System Design and Requirements, January 29, 2004. Available at http://election.dos.state.fl.us/hava/pdf/FVRSSysDesignReq.pdf.

Though this task sounds like a relatively simple one—just compare the lists3—it is enormously complicated by two facts: (1) the same individual may be represented on the different lists in different ways (John Jones and John X. Jones may refer to the same person, and he may have given the former name in registering to vote and the latter name in obtaining a driver’s license) and (2) the same name (e.g., John Jones) may refer to many different people. (This problem would be greatly ameliorated by the use of an identifier unique to the individual, such as a Social Security number, but for a variety of historical and legal reasons, the nation has chosen to eschew such use.)

Thus, there must be some specific criteria for determining whether or not different names refer to the same person. For example, to deal with the first fact above, one criterion might be this: If similar names have the same home address associated with them, the names refer to the same individual. Such a criterion thus requires a rule for determining “similarity” or a match. One such matching rule might be “if the first and last names are identical, consider the full name a match.” Under this approach, John Jones and John X. Jones would be deemed to be the same individual only if they share the same home address, but John Jones and Mary Jones would be deemed different individuals even if they shared the same home address. Suffixes on names, such as Jr. and Sr., can also cause problems in a similar manner.

Similarly, the second fact involving identical names might require a criterion such as, “If the name is associated with several different home addresses, there are as many different individuals as there are home addresses.” In this case, the matching criterion applies to home addresses, which are somewhat less ambiguous than names.4
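To make these criteria concrete, the sketch below (illustrative only; the record fields, the normalization rule, and the function names are assumptions, not anything drawn from the report) encodes the two rules just described: similar names are treated as the same person only if they share a home address, and identical names at different addresses are treated as different individuals.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    """A simplified registration or secondary-source record (hypothetical fields)."""
    first: str
    last: str
    address: str

def normalize(text: str) -> str:
    """Crude normalization: ignore case, punctuation, and extra whitespace."""
    kept = "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace())
    return " ".join(kept.split())

def same_person(a: Record, b: Record) -> bool:
    """Matching rule from the text: first and last names must be identical,
    and the home addresses must also match. Middle initials and suffixes
    (Jr., Sr.) are ignored entirely, which is one source of error."""
    return (normalize(a.first) == normalize(b.first)
            and normalize(a.last) == normalize(b.last)
            and normalize(a.address) == normalize(b.address))

voter = Record("John", "Jones", "12 Main St.")
dmv   = Record("JOHN", "Jones", "12 Main St")
other = Record("Mary", "Jones", "12 Main St")
print(same_person(voter, dmv))    # True:  same name and same (normalized) address
print(same_person(voter, other))  # False: same address but different first name
```

A real registration system would, of course, also have to handle middle names, suffixes, and nonidentical address formats, which is precisely where the false positives and false negatives discussed below arise.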

The problem of determining whether names match is an algorithmic one. A simple and obvious algorithm calls for a perfect character-by-character match between names. But names in a database may be misspelled (e.g., due to typographical errors), and thus an algorithm that is relatively insensitive to such errors may be of more utility in determining a match. Names can be pronounced the same way but spelled differently and vice versa. One class of algorithms developed to handle such problems is Soundex algorithms.5 These algorithms are widely used today for applications involving name matching, and their applications include name matching in comparisons of voter registration databases with other databases.

3. Lists provided by other sources must also be correct and complete (e.g., all those reported as felons must indeed have been convicted of felonies but not misdemeanors), but that point is outside the scope of this discussion.

4. But not entirely. In the District of Columbia, for example, a specific residence may be listed as “3751 Joycelyn Street, NW” and “3751 1/2 Joycelyn Street, NW” in different official records of the D.C. government, depending on whether or not the computer software in use at any given department is able to process “1/2” as part of a street address.

It is useful to distinguish between a “strong match” and a “weak match.” A strong match is one in which there is a very high probability that two data segments represent the same person. A weak match indicates that two data segments are similar, but additional information or research is necessary to determine if the two data segments represent the same person. In addition, there can be many legal ways to identify a citizen who is eligible to vote, which suggests that information in multiple databases can be used to determine eligibility.
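Footnote 5 describes the Soundex family only in general terms. As a purely illustrative sketch, the code below implements one common formulation of the classic four-character Soundex code and uses it to label name pairs as “strong” (exact agreement) or “weak” (phonetic agreement only) matches; the function names and the classification thresholds are assumptions for illustration, not features of any actual voter registration system.

```python
def soundex(name: str) -> str:
    """One common formulation of the classic four-character Soundex code."""
    codes = {
        **dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
        **dict.fromkeys("DT", "3"), "L": "4",
        **dict.fromkeys("MN", "5"), "R": "6",
    }
    letters = [ch for ch in name.upper() if ch.isalpha()]
    if not letters:
        return ""
    first = letters[0]
    digits = []
    prev = codes.get(first, "")          # code of the first (retained) letter
    for ch in letters[1:]:
        if ch in "HW":                    # H and W do not break a run of codes
            continue
        code = codes.get(ch, "")          # vowels map to "" and do break a run
        if code and code != prev:
            digits.append(code)
        prev = code
    return (first + "".join(digits) + "000")[:4]

def classify_match(name_a: str, name_b: str) -> str:
    """Hypothetical labels: exact agreement is a strong match; agreement only
    on the Soundex code is a weak match that needs additional research."""
    if name_a.strip().lower() == name_b.strip().lower():
        return "strong"
    if soundex(name_a) == soundex(name_b):
        return "weak"
    return "no match"

print(soundex("Smith"), soundex("Smithe"))   # S530 S530: sound-alike spellings collide
print(classify_match("John", "Jahn"))        # weak: phonetically alike, spelled differently
```

Under a scheme like this, a weak match such as John Jones versus Jahn Jones would be a candidate for further research rather than an automatic determination that the two records refer to the same person.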

Whatever the approach, it is important to recognize a trade-off between false negatives and false positives. Any approach will identify some names as different when they do refer to the same individual (false negatives) and other names as similar when they do not refer to the same individual (false positives).

Consider the significance of this problem for the purging of a voter registration list. Any approach will incorrectly identify some registered voters as ineligible and thus improperly purge them (false positives), and it will also fail to identify some ineligible voters, who thus remain on the list (false negatives). For example, John Jones on the voter registration list and Jahn Jones on the convicted felon list may constitute a weak match, and without additional research, John Jones may be improperly removed from the voter registration list (a false positive). On the other hand, the names Sam Smith on the voter registration list and Sam X. Smith on the convicted felon list (with both names referring to the same person) may result in Sam Smith improperly remaining on the voter registration list (a false negative).

It is a fundamental reality that the rate of false positives and the rate of false negatives cannot be driven to zero simultaneously. The more demanding the criteria for a match, the fewer matches will be made. Conversely, the less demanding the criteria, the more matches will be made. For example, a requirement that names match (using all of the letters), addresses match, and dates of birth match is more demanding and will result in fewer matches than if the requirement is that only names and addresses match and only some of the letters and/or sounds in the name are used to determine a match. The choice of criteria for determining similarity is thus an important policy decision, even though it looks like a purely technical decision.

5. Soundex algorithms solve the generic problem of matching names that sound alike but have different representations in text form (e.g., Smith and Smithe). A Soundex algorithm generates from a name a string of characters that represents approximately its phonetic sound, so that words that sound alike, even if spelled differently, all result in the same character string when processed by the algorithm. The original Soundex algorithm was patented in 1918, and there have been refinements to it over the years, resulting in a class of such algorithms.

Furthermore, the considerations discussed above suggest that the presence or absence of human intervention in the purging process is important. That is, one should regard as very different a purging system that is fully automated and one that uses technology only to flag possible individuals for further attention by some responsible human decision maker. Because the human decision maker would use different criteria to render a decision (including the use of common sense and contextual factors), the rate of false positives would be reduced—and considerably so if the different criteria could be applied consistently.
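A minimal sketch of this division of labor appears below; the record layout, the queue names, and the classify function (assumed to behave like the earlier matching sketch) are all hypothetical.

```python
def triage_purge_candidates(voter_rolls, felon_list, classify):
    """Split candidate matches between a hypothetical felon list and the voter
    rolls into an auto-purge queue (strong matches on name and address) and a
    human-review queue (everything else that looks similar). classify(a, b)
    is assumed to return "strong", "weak", or "no match", as in the sketch above."""
    auto_purge, needs_review = [], []
    # A real system would index records (e.g., by Soundex code or address)
    # rather than compare every pair, but the routing logic is the point here.
    for voter in voter_rolls:
        for felon in felon_list:
            label = classify(voter["name"], felon["name"])
            if label == "strong" and voter["address"] == felon["address"]:
                auto_purge.append((voter, felon))
            elif label != "no match":
                needs_review.append((voter, felon))   # a person decides, not the software
    return auto_purge, needs_review

# In a fully automated purge, both queues would be purged; routing weak matches
# to a human reviewer is what reduces the false-positive rate discussed above.
```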

In addition, the use of lists of inactive voters can provide some protection against false positives. A purge removes a voter from the voter registration list entirely, so a purged voter would either be denied the ability to vote or, at best, be allowed to cast only a provisional ballot. But if a voter who might otherwise have been purged is moved instead to an inactive voter list, the voter remains on the rolls—and may vote in a subsequent election.

Finally, the purging of voter registration lists must itself be seen in a larger context, because purging can be used as a political tool to manipulate the outcome of elections. One such use is to conduct purges in local election jurisdictions chosen so that the purges have differential effects on various voting blocs. Statewide management of voter registration lists reduces the possibility that decisions to purge are made locally, but there may be nothing in state law that in principle or in practice prevents state officials from ordering such purges for political reasons.

The issue above is important because there must be some criterion by which to determine whether a purge has been undertaken overaggressively or underaggressively. An overaggressive purge removes individuals who should be retained on the rolls; an underaggressive purge fails to remove individuals who should not be retained. Either type of purge can be undertaken for political reasons, depending on the demographics of those inappropriately retained on or purged from the rolls.

One approach to understanding the nature of a purge is to compare the rate at which eligible voters are inappropriately purged (E) with the rate at which ineligible voters are not purged (I). That is, define R as the ratio of I to E. Thus, R reflects the number of ineligible voters who are not purged for every eligible voter who is purged. Those who put a very high premium on eligible voters not being purged want E to be as low as possible, and thus tend to favor large R. Those who put a very high premium on purging the voter rolls of all ineligible voters want I to be as small as possible, and thus tend to favor small R.

In any event, given a certain fraction of ineligible voters in the voter registration database, the choice of R determines a great deal about the performance requirements of the purging process. As Box 4.1 illustrates, the choice of R fixes the relative effectiveness of the purging process in identifying eligible voters for retention compared with not identifying ineligible voters for purging.
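As a purely numerical illustration of how the Box 4.1 quantities interact (the input values below are invented, not data from the report), the following sketch computes the Box 4.1 cell counts and the implied value of R.

```python
def purge_outcomes(p_fp: float, p_fn: float, f: float, n: int) -> dict:
    """Cell counts from Box 4.1.
    p_fp: probability that an eligible voter on the rolls is wrongly purged
    p_fn: probability that an ineligible voter on the rolls fails to be purged
    f:    fraction of the rolls that actually consists of ineligible voters
    n:    total number of records on the rolls
    """
    eligible, ineligible = (1 - f) * n, f * n
    cells = {
        "eligible_not_purged":   (1 - p_fp) * eligible,
        "eligible_purged":       p_fp * eligible,        # false positives
        "ineligible_not_purged": p_fn * ineligible,      # false negatives
        "ineligible_purged":     (1 - p_fn) * ineligible,
    }
    # R: ineligible voters left on the rolls per eligible voter wrongly purged
    cells["R"] = cells["ineligible_not_purged"] / cells["eligible_purged"]
    return cells

# Invented example: 5 million records, 2 percent actually ineligible,
# a 0.1 percent false-positive rate, and a 10 percent false-negative rate.
result = purge_outcomes(p_fp=0.001, p_fn=0.10, f=0.02, n=5_000_000)
print(round(result["eligible_purged"]), round(result["ineligible_not_purged"]),
      round(result["R"], 2))
# 4900 10000 2.04 -- roughly two ineligible voters remain on the rolls for
# every eligible voter who is wrongly purged under these assumed error rates.
```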

Note also that Election Day credential checking involves a similar set of considerations. A citizen presents his or her credentials at the polling place, and these credentials are checked against a listing of eligible voters. Again, the issue of similarity is relevant. If the eligibility credential is an excerpt from the voter registration database (e.g., a voter registration card), the possibilities for error are minimized. But if, instead, the requirement is to prove one’s identity with some other set of credentials, such as a driver’s license, a judgment of similarity must again be made. However, this time the criteria—which may or may not be the same as those used for purging voter registration lists—work in the opposite direction. A demanding similarity criterion will tend to exclude eligible voters, while a less demanding criterion will allow more ineligible individuals to vote (or at least result in more confusion between different individuals).

Against the backdrop of the discussion above, a number of important questions arise:

4-1. Are the relative priorities of election officials in the purging of voter registration databases acceptable? As noted above, purging databases can be conducted in an overaggressive manner or in an underaggressive manner. The politically correct response for public consumption is that it is equally important to purge the registration rolls of ineligible voters and to ensure that no eligible voters are purged, but of course in practice officials must choose the side on which they would prefer to err. An explicit statement of R—the number of ineligible voters who are not purged for every eligible voter who is purged—is thus a quantitative measure of the direction in which a given policy is leaning. (Of course, being able to make an estimate of R requires that data be collected that indicate the probability that an eligible voter on the voter registration rolls is wrongly purged, the probability that an ineligible voter on the voter registration rolls fails to be purged, and the fraction of the voter registration rolls that actually consists of ineligible voters.)

4-2. What standards of accuracy should govern voter registration databases? In voting machines, a Federal Voting Systems Standard specifies a maximum error rate of 1 in 500,000 voting positions (e.g., 1 in every 2,000 punch card ballots with 250 voting positions on each card). What might be a comparable standard for the accuracy of a voter registration database, taking into account that people move frequently and die eventually?


Box 4.1
False Positives and False Negatives

Let Pfp = the probability that an eligible voter on the voter registration (VR) rolls is wrongly purged.

Let Pfn = the probability that an ineligible voter on the VR rolls fails to be purged.

Let f = the fraction of the VR rolls that actually consists of ineligible voters.


Each cell entry in the table below indicates the probability of the action taken given the status of an individual on the VR roll. In the ideal case (a perfect algorithm), the likelihood of purging an eligible individual is zero, as is the likelihood of not purging an ineligible individual.

Action Taken      Status of Person on VR Roll
                  Eligible          Ineligible
Not purged        1                 0
Purged            0                 1
In the more realistic case, with nonzero Pfp and Pfn, the probabilities are as follows:

Action Taken      Status of Person on VR Roll
                  Eligible          Ineligible
Not purged        1 − Pfp           Pfn
Purged            Pfp               1 − Pfn

By definition, f is the fraction of the database of size N that consists of ineligible individuals. Based on the tables above, the cell entries below indicate the number of people who are eligible (ineligible) who are subsequently purged or not purged.

Action Taken      Number of Individuals on Roll Who Are
                  Eligible               Ineligible
Not purged        (1 − Pfp)(1 − f)N      Pfn fN
Purged            Pfp(1 − f)N            (1 − Pfn)fN

If we define R as the ratio of the number of ineligible voters who are not purged to the number of eligible voters who are wrongly purged, then

R = Pfn fN / [Pfp(1 − f)N] = Pfn f / [Pfp(1 − f)],

or, equivalently, Pfn / Pfp = R(1 − f) / f. Thus, for any given fraction f of ineligible voters on the rolls, fixing R fixes the relative size of the two error rates.



4-3. How well do voter registration databases perform? How many people who think they are registered really are registered? How many people who are registered should be registered? The first question requires a general population survey that is linked to registration records (the American National Election Studies did this for many years). The second question requires a sample from the registration list followed up with diligent efforts to contact the people and the collection of information about them.

4-4. What is the impact on voter registration database maintenance of inaccuracies in secondary databases? The quality of databases other than those for voter registration affects maintenance of voter registration databases. In general, databases such as those of departments of motor vehicles (DMVs), departments of correction, and departments of vital statistics are not under the control of the state election officials. (Vital statistics are usually under the control of a county or municipality.) For example, if a DMV database is highly inaccurate in its recording of addresses, and a decision on voter eligibility depends on a match between the address on the voter registration database and that of the DMV, the probability of purging an eligible voter increases, all else being equal. A related point is the fact that database interoperability is in general a nontrivial technical task. The secondary databases needed for verification of voter registration are developed for entirely different purposes, and both the syntax and semantics of those databases are likely to be different from those of the voter registration databases.

Finally, these secondary databases are subject to state legislative control as well, and there is a wide range of options for how legislatures can affect their disposition and use in the voter registration process. For example, states could explicitly disclose these sources, so that a voter could be especially careful to ensure that he or she is not being misrepresented in such databases. States could mandate that secondary databases be managed with a higher level of care when they are used for purposes related to voter registration. Or states could mandate that, in the interests of protecting voter privacy, only certain types of data in these secondary databases would be available to the voter registration process. More generally, refining the criteria for the various legal reasons for purges has been and will be on the agenda of many legislatures, and the discretion that local election jurisdictions have over how to conduct purges will probably be subject to increased scrutiny.

4-5. Will individuals purged from voter registration lists be notified in enough time so that they can correct any errors made, and will they be provided with an easy and convenient process for correcting mistakes or making appeals? From the discussion above, it is clear that some number of eligible voters will be inappropriately purged in any large-scale operation. Given that the right to vote is a precious one, voters who may have been purged incorrectly should have the opportunity to correct such mistakes before they cast their votes.6

4-6. How can the public have confidence that software applications for voter registration are functioning appropriately? As the discussion in Section 4.2.1 indicates, software for voting systems is subject to a variety of certification and testing requirements that are intended to attest to its quality. But there are no such standards or requirements for software associated with voter registration. Voters who lack confidence in the operation of voter registration systems will be uncertain about their ability to vote on Election Day. Large numbers of such voters will almost surely result in reduced turnouts.

4-7. How are privacy issues handled in a voter registration database? In many states, much of the information in a voter registration database is public information. HAVA directs states to coordinate those databases with drivers’ license databases of state DMVs and with the U.S. Social Security Administration. States may choose to coordinate with other databases as well, such as databases containing identification information for felons and death records. Much of the information in these other databases is not relevant to one’s eligibility. For example, one’s driving record is contained in a database of licensed drivers maintained by the state DMV. This database may be used to verify names and addresses for voter registration purposes (checking consistency, for example), but one’s driving record is not relevant for determination of voting eligibility. How do state laws, regulations, or guidelines limit the fields that constitute public information or the extent to which the interfacing agencies are permitted to retain personal data received from the other agencies during the matching process required for voter registration? How, if at all, is such nonrelevant information protected from inappropriate disclosure? How might such nonrelevant information be used to bias voter turnout for partisan purposes? (Indeed, much of the information contained in these databases is for sale by the states, and the purchasers of such information are often political parties.)

6. Provisional balloting is a method required by HAVA that enables provisional ballots to be cast, subject to subsequent validation of a voter’s credentials. Though in principle such an approach solves the problem of an improperly purged voter, there are two potential problems with it. First, for all practical purposes, a provisional ballot has the same privacy protections as an absentee ballot—which are necessarily of a lesser degree than the privacy protections available in the voting booth on Election Day. Second, provisional ballots are inherently suspect in a way that votes cast in a voting booth are not, and the voter casting a provisional ballot will leave the polling place without any assurance that the ballot will indeed be counted.

4-8. How can technology be used to mitigate negative aspects of a voter’s experience on Election Day? For example, in many large jurisdictions, check-in lines at polling places can be both long and uneven. One frequently heard reason for this phenomenon is that any given poll worker checking registration can check only certain last names (e.g., all those names starting with letters A through G). This is true because the roll books containing lists of registered voters are broken up that way, and the poll workers have no flexibility on this point. However, information technology might be used to provide the same information to poll workers without the need for such a procedure.7

4-9. How should voter registration systems connect to electronic voting systems, if at all? Today, there is an “air gap” between voting, even if done electronically, and checking for voter registration, which is done manually. However, in the interests of efficiency and rapid movement through polling places, it is easy to see a persuasive argument for why these functions should be integrated. A voter could simply present an electronic registration card to a voting station and be allowed to cast a ballot. This arrangement might facilitate easy, vote-anywhere voting in thousands of locations across a state rather than in just one precinct location and also early voting, in which a voter could vote at a central site. In both situations, a voter could have high assurance that he/she received the correct ballot form corresponding to his or her registration address. The most obvious argument against this arrangement is that it potentially compromises the secrecy of voting in a major way. Nevertheless, it is easy to imagine that both voter registration and voting might be integrated in packages of services offered by election service vendors.

4.2 INFORMATION TECHNOLOGY FOR VOTING

IT for balloting is what is usually meant by “electronic voting systems”—the systems described in Chapter 3. This section addresses security and usability issues. Usability can be characterized as functionality that facilitates a voting system’s accurate capture of a voter’s intent in casting a ballot and assures the voter that his or her ballot has been so captured. Furthermore, the voting system must record that ballot accurately until it is tabulated, even in the face of deliberate wrongdoing (security) or accidental error or mishap (reliability).

7. This is not to say that the use of information technology for this purpose has no downsides. For example, it may be more difficult to capture a signature if one is required.

4.2.1 Approaching the Acquisition Process

In considering the purchase of any given voting system, an election official’s first step is often to consider systems that have been qualified under a process established by the Election Assistance Commission (EAC). Specifically, a vendor’s voting system is qualified if an Independent Testing Authority (ITA) asserts that the system in question meets or exceeds the Federal Election Commission’s 2002 Voting Systems Standards (Box 4.2).8 ITAs are designated by the National Association of State Election Directors, and a vendor pays an ITA for its work in qualifying a system.

Knowledge that a given voting system has been qualified according to a particular standard provides some degree of assurance that the system in question meets a minimum set of requirements. Nevertheless, the fact that a given voting system has been qualified may not be the only criterion that affects a decision maker’s procurement decision.9 This is because voting systems fit into a larger context that cannot be separated from an assessment of fitness for purpose. The election official is responsible for the conduct of an election with integrity, and the equipment used in the election is only one part of that election. Yet the qualification process evaluates voting systems in isolation, making just such a separation. This is not the fault of the qualification process—it is simply a consequence of the fact that any testing process must necessarily set bounds on the scope of the evaluation.

Of particular significance is the fact that various jurisdictions have long-established policies, procedures, and practices that govern the conduct of elections. Introduction of new technology into established practices almost always results in some degree of conflict and difficulty, even when the authorities seek to adjust existing practices to accommodate the new technology. Technology may work properly only if certain procedures are followed by poll workers, for example, and any given set of standards may—or may not—presume that these procedures are followed.

8. The Federal Election Commission’s 2002 Voting Systems Standards call for three kinds of tests to be performed on voting systems to ensure that the end product works accurately, reliably, and appropriately: qualification testing (the focus of this section), certification tests performed by states in order to document conformance to state law and practice, and acceptance tests performed by the jurisdiction acquiring the system to document conformance of the delivered system to characteristics specified in the procurement documentation as well as those demonstrated in the qualification and certification tests.

9. In practice, qualification may only be a prerequisite for a vendor to be considered for purchase. That is, a county may be interested in “all qualified systems”; thus, the fact of qualification may have no relationship to a specific purchase decision.


Box 4.2
Federal Voting Systems Standards

To address some of the difficulties of technology assessment for state and local election officials, the Election Assistance Commission (EAC) has responsibility, with assistance from the National Institute of Standards and Technology (NIST), for developing voluntary standards that help to provide assurance that conforming voting systems are accurate, reliable, and dependable. Initially approved by the Federal Election Commission (FEC) in 1990, with a revised edition released on April 30, 2002, these standards are again being revised as this report goes to press.

The FEC 2002 Voting Systems Standards (VSS) cover functional capabilities required of a voting system—what a voting system is required to do—but not election procedures or report formats. The functional capabilities include (1) a set applicable to all parts of the election process, including security, accuracy, integrity, system auditability, election management system, vote tabulation, ballot counters, telecommunications, and data retention; (2) prevoting capabilities, used to prepare the voting system for voting, such as ballot preparation; (3) voting capabilities, such as the casting of ballots at the polling place by voters; (4) postvoting capabilities that are relevant after all votes have been cast, such as obtaining reports for individual voting machines, polling places, and precincts; and (5) maintenance, transportation, and storage capabilities relevant to voting system equipment.

In addition, the FEC 2002 VSS cover hardware standards for performance, physical characteristics, and design; software standards intended to ensure that the overall objectives of accuracy, logical correctness, privacy, system integrity, and reliability are achieved; telecommunications standards that govern the capability to transmit and receive data electronically (e.g., via modem); security standards intended to achieve acceptable levels of integrity, reliability, and inviolability in conforming systems; standards for quality assurance such as documentation of the software development process; and standards for configuration management of voting systems.

In April 2005, the EAC’s Technical Guidelines Development Committee released a first draft of technical guidelines that add to the FEC 2002 VSS in the areas of security and transparency of voting systems, usability of voting systems, and core requirements and testing. After a period of comment, it is expected that the EAC will promulgate the augmented Voluntary Voting System Guidelines (VVSG)—Version 1 as the first round of a new set of standards. A second round of review for all of the VVSG is expected to follow, resulting in an integrated and forward-looking version of the VVSG that should be available in FY 2006.


Moreover, the qualification process may not be adequate for a particular jurisdiction’s needs. For example, an election official from a jurisdiction with a long history of fraud and corruption may perceive security issues in a different light than an administrator from another jurisdiction without such a history. For the former, a given set of security standards may be inadequate, but for the latter, the same set may be more than adequate.

An important technical point is that the voting stations deployed in a particular jurisdiction may not be identical. A great deal of hard-earned experience in the IT world suggests that a station running software version A may work perfectly with other stations running software version A, and a station running software version B may work perfectly with other stations running software version B, but that a station running software version A is unreliable when it connects to a station running software version B. Or, a station may be secure when in stand-alone operation but much less secure when connected to a network.

Similar points apply to hardware and software qualification. The same body of experience suggests that especially when custom hardware is involved (as it is for nearly all voting systems), it is the total package—software of a specific version running on hardware of a specific model—that must be evaluated. And, a small change to a qualified piece of software can in principle render it noncompliant with the relevant standards.
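As a simplified illustration of why the total package matters, the sketch below compares a cryptographic hash of an installed software image with the digest recorded for the qualified build; even a one-byte change produces a different digest. The file name and the existence of such a reference digest are assumptions for illustration, not features of any actual qualification process.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_qualified_build(installed_image: Path, reference_digest: str) -> bool:
    """True only if the installed image is bit-for-bit identical to the build
    whose digest was recorded at qualification time; any change to the
    software, however small, yields a completely different hash."""
    return sha256_of(installed_image) == reference_digest.lower()

# Hypothetical usage; the reference digest would come from qualification records.
# matches_qualified_build(Path("voting_station_firmware.bin"), "ab12...")
```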

For such reasons, election officials may wish to go beyond the qualification process in their assessments of vendor offerings. The discussion below focuses on two areas of particular significance: security and usability/accessibility.

4.2.2 Security

4.2.2.1 Perspectives on Security

A very important requirement of any information technology deployed in a critical application is that it be secure and reliable. Security involves resistance to deliberate acts of fraud that cause the system to record votes differently from what was intended by the voters who cast them.10 Thus, a voting system must ensure that ballots are counted as cast and that the resulting vote counts are accurate, despite malicious hacker attacks or insiders hired or planted to alter election results. (The system must also be reliable—that is, resistant to unplanned events that render it unavailable for normal use by voters; such events include power failures, unanticipated input sequences that might cause the system to freeze, accidentally introduced software bugs, and potential administrative mishaps or errors. These are not security issues per se and are not addressed further in this report.)

10. In the computer science community, the term “security” (or “computer security” or “information security”) is often used to denote a broader set of concerns, including integrity (e.g., being able to prove that a message has not been altered) and confidentiality (e.g., keeping the contents of a message inaccessible to unauthorized parties). In the present context, the term “integrity” as used by computer scientists more accurately describes the inability to alter a vote once it has been cast. However, in the debate over electronic voting systems, the term “security” has been used instead, and that term is adopted for this report.

Moreover, in the electoral context, the public must have reason to believe in the security of the system, even in the face of those inclined to challenge it. That is, even if a system is in fact robust against such problems, perceptions of a system’s security depend on people’s experience with those systems, media exposure, and public debate. With new technologies being frequently deployed, election officials may face the task of assuring the public that the new systems are in fact secure and reliable, even if no problems arise immediately. At the same time, the consequences of inaccuracy and/or system failure place election officials on the front line of responsibility that could ultimately affect the outcome of any election. This point is particularly relevant given the discussion in Chapter 2 about a polarized electorate.

Security issues in voting are among the most difficult that arise in the development of secure systems for any application. Systems to manage financial transactions, for example, must also be highly secure, and much of the experience and knowledge needed to develop secure systems for financial applications is directly relevant to the development of secure systems for voting. But these applications differ from voting applications in at least two important ways.

First is the need to protect a voter’s right to cast a secret ballot. Developing an audit procedure (and the technology to support audits) is enormously more difficult when the transactions of an individual must not be traceable to that individual. (Consider, for example, the difficulties in reconciling accounts if it were by design impossible to associate an individual with the amount of a specific transaction.)

Second, under many circumstances, the value of security in financial systems can be quantified as just another cost-benefit trade-off. For those instances in which it is possible to estimate the likelihood of a particular kind of security breach, it is possible to compare the cost of securing that breach to the expected loss if the breach is not secured. Such a cost-benefit analysis is difficult for voting applications, because there is no commonly accepted metric by which one can quantify the “value” of a vote. Thus, an advocate of one position might argue that the relevant point of comparison for the security of voting systems should be the nuclear command-and-control system, while another might argue that commercial banking security is the appropriate comparison.

Also, election systems must declare a winner even when the margin of victory is minuscule. When the vote is close, a very small number of votes can sway the election one way or another. Thus, in closely contested races, an election fraudster must manipulate only a small number of votes in order to obtain the desired outcome—and small manipulations are almost invariably more difficult to detect than large ones.

From the perspective of the computer scientist, security is a particularly elusive goal. Except in very rare instances that are for practical purposes not relevant to complex systems (and electronic voting systems count as complex systems), it is impossible to achieve 100 percent security in a system. Even worse, it is impossible to specify in any precise way what it would mean for a system to be 99 percent or 90 percent secure.

To illustrate, system testing is a process that is used to identify defects in a system (e.g., security vulnerabilities, software bugs). A vulnerability or a bug is detected when there is evidence that indicates its presence. But because the conditions under which a complex system can operate are so varied, no reasonable amount of testing can prove that the system is free of vulnerabilities or bugs. Moreover, the fixing of a particular system vulnerability takes place in the context of a would-be attacker who is motivated to continuously explore a system for such vulnerabilities. This implies that system security must also be a continuous and ongoing process that searches for vulnerabilities proactively and fixes them immediately.

A key point about security is that a system is only as strong as its weakest link. System security is a holistic problem, in which technological, managerial, organizational, regulatory, economic, and social aspects interact,11 and the attacker’s search for vulnerabilities is not limited to technological vulnerabilities. The technological security provided to pre-World War II France by the Maginot Line was high—but German tanks circumvented the line. In an election context, it makes little sense to enhance security in particular areas (e.g., in the computer-related parts of the election system) if enormous vulnerabilities remain in the other parts of the system whose exploitation could be problematic. At the same time, security in particular areas has to be compared by asking how much damage an adversary can do with a given amount of effort and a given risk of discovery. That is, gaping security holes in one part of the system (e.g., the noncomputer part) may be of lesser concern than smaller security holes in another part of the system if the latter can be exploited on a large scale more easily and more anonymously.

Cybersecurity experience suggests that there is only one meaningful technique by which the operational security of a system can be assessed: an independent red team attack.12 The term refers to tests conducted by independent groups, often known as “red teams” or “tiger teams,” that probe the security of a system in order to uncover and exploit security flaws just as a committed attacker would in an actual attack.13 Flaws are then reported to the party or parties who hired the red team. Vendors sometimes use red teams as a way of improving their products, while customers sometimes use red teams as a way of assessing the security present in a product they may buy or have bought. Conducted properly, a red team attack does whatever is necessary to compromise the security of a system, exploiting technological or procedural flaws in the system’s security posture or flaws in the human infrastructure in which the technology is embedded. (A technological flaw might be the use of a weak encryption algorithm. A procedural flaw might be a poll worker who can be bribed to take an improper action.) Red team attacks are also unpredictable, in contrast to scripted tests in which the system’s developer tests what it believes to be likely attacks.

11. National Research Council, Cybersecurity Today and Tomorrow: Pay Now or Pay Later, Washington, D.C.: National Academy Press, 2002.

12. NRC, Cybersecurity Today and Tomorrow, 2002.

As a general rule, many computer scientists are also skeptical of “security by obscurity,” a practice that involves hiding vulnerabilities rather than fixing them. The reason is that information about vulnerabilities, especially those of high-value systems, is enormously difficult to keep secret. Moreover, such vulnerabilities are often discoverable through the application of enough technical expertise and experimentation. Open discussion of vulnerabilities, argue these individuals, provides strong incentives for system owners to fix them or to configure their systems in such a way that hostile exploitation of the vulnerabilities is less (or not) harmful.14

For such a strategy to be meaningful, the source code of the system in question must be available for inspection, because it is the code actually running on the system that defines its behavior under all possible circumstances. Without access to source code, it would be essentially impossible to discover, for example, that the system is programmed to behave in one way until a specific sequence of keys is pressed with the right timing between key presses, at which time the system’s behavior shifts into an entirely different mode that allows access to and manipulation of the data contained within the system. Indeed, such practices are common among software developers, who often install such “back doors,” known as maintenance traps, to facilitate system maintenance and debugging. While traps are a convenience for system developers, they are also blatant security holes and as such should not be included in production versions of the software. Alas, the pressures of software development under deadline are such that they are often included in production versions anyway.

13. To date, red team attacks against electronic voting systems have not been undertaken under conditions that resemble the actual use of voting systems in the field.

14. To be more precise about this argument, obscurity (concealing the internal workings of a system) can and does provide a layer of protection for a system. But there are many disadvantages to relying only or primarily on security by obscurity of the sort described above, and these disadvantages may well (and often do) outweigh the advantages provided by obscurity. At the same time, good security design and implementation can reduce those disadvantages—a point well recognized by the National Security Agency’s classification of many encryption algorithms.

When approaching any computer security problem, the computer scientist’s perspective can be summarized as a worst-case perspective—if a vulnerability cannot be ruled out, it is necessarily of concern. Furthermore, the computer scientist argues, a wealth of experience suggests that even obscure vulnerabilities in a system can be and often are exploited to the detriment of the system owner.

Computer scientists also note that the use of computers in voting makes possible the commission of automated fraud. Throughout most of the history of voting, the magnitude of fraud was strongly dependent on the number of people or on the effort required to commit fraudulent acts such as stuffing ballot boxes—larger numbers of fraudulent votes required a larger number of people. However, when computers are involved, a small number of individuals—albeit technically sophisticated individuals with high degrees of access to the internals of these computers—become capable of committing fraud on a very large scale indeed. Furthermore, because the software of computer systems is intangible, the difficulty of detecting such attempts is greatly increased.

It is thus not surprising that these perspectives shape the way that computer scientists look at security issues in electronic voting systems. In the words of one computer scientist:

As a general rule, the burden and cost should be on advocates of a particular voting product to provide evidence to the panel that the product is safe, rather than on critics to prove to the panel that it is unsafe. In case of doubt, a voting system should be considered unsafe until proven safe, and election officials should refrain from certifying, purchasing, or deploying voting equipment until independent security reviewers are confident that the technology will function as desired.15

The perspective of the election official is quite different. From a public policy perspective, it is desirable for election officials to have open attitudes about election concerns raised by members of the public, to welcome skepticism as a way of reassuring the public about how elections are conducted, to treat every election as precious, and to strive to eliminate every possibility of error. Indeed, election officials are responsible for the safety and security of an election, and as a rule, they accept that the burden of assurance properly rests on their shoulders (Box 4.3).

15. David Wagner, University of California, Berkeley.


Box 4.3
Burdens of Proof

As a matter of public policy, many states have adopted legal frameworks to promote a high degree of scrutiny for documents and processes related to the operation of government. According to this freedom-of-information philosophy, information related to the operation of government must be available to the public unless specifically exempted by law—the essential notion being that the making of public policy should itself be public.

Against this standard, every aspect of the election process, including records, procedures, and vote-counting mechanisms, ought to be subject to public inspection. However, in practice, the convergence of several issues has attenuated the degree to which such inspection is possible. Vendors have asserted intellectual property rights in order to keep the source code of electronic voting systems out of public view (and most freedom-of-information laws specifically exempt proprietary information from disclosure)—a point of controversy in the public debate. The short period available to election officials for declaring a winner means that the time available for public inspection and access is short. And, the political pressures from all sides in an election to know its outcome rapidly mean that election officials have strong incentives to avoid recounts that might delay the declaration of a winner.1

If election processes—and in particular, source code—were available for inspection, critics of electronic voting systems could reasonably be expected to assume the burden of demonstrating that security problems exist. But because such information is not available, these critics become “outsiders” to the election process and thus must use the tools available to outsiders—public discussion of potential vulnerabilities, close scrutiny of election events, and media attention—to draw attention to the issues they raise.

1. In addition, election officials who are attempting to maintain or to create partisan advantage have incentives to avoid recounts that might reduce or eliminate their advantage.


But in practice, resource constraints, time pressures, the lack of administrative control, and simple mistakes make the normative goals described in the previous paragraph difficult if not impossible to achieve. How election officials actually behave ranges from idealistic to pragmatic (and in some—hopefully rare—cases, politically expedient or partisan as well).

There is also the point that the victors in an election are—by definition—transient. The preservation of democracy has historically depended much more on the integrity of elections taken over time than on the outcome of any single election. In the more than 200-year history of the nation, there have been hundreds of thousands of electoral contests, and despite more than occasional fraud or irregularity in elections, the democracy endures—at least in part because election officials have taken measures to fix the weaknesses that allowed those problems to occur.

Election officials also have multiple goals. Sharon Priest, a former secretary of state of Arkansas and a former president of the National Association of Secretaries of State, notes that most election officials are necessarily as concerned with affordability, system usability, turnout, and compliance with the federal, state, and local laws that govern elections as they are with security—which suggests that security is not the sole or even the primary issue for them but rather one of several equally important issues.

Indeed, election officials have learned over the years that misfeasance is typically a greater risk than malfeasance. That is, election workers routinely make mistakes and technologies routinely fail without obvious partisan bias. Ballots are lost, procedures are not followed, and improvised solutions are put into place to respond to pressures of the moment on Election Day. Although the impacts of misfeasance are likely to be more or less random, they still account for the majority of obvious problems that election officials must address with limited resources. And, as a result, administrators have generally paid more attention to improving the procedures that have led to such problems than to improving technology.

From the moment of voter registration to the moment of winner certification, there are many opportunities for something to go wrong—whether deliberately or accidentally—that can potentially affect an election outcome. As with all public officials, election officials do not have the resources to deal with every problem, and they necessarily leave some unaddressed. Within the constraints of their limited resources, they must set priorities—and their perceptions of the likelihood of various problems play an important role in setting those priorities. If it can be shown that a set of events has actually affected the outcome or tallies of an election, it is inevitable that an administrator will believe the likelihood of that kind of problem is greater than the likelihood of other sets of events that have not yet affected outcomes or tallies.

While political loyalties can and do protect the tenure of some election officials, other election officials realize they can lose their jobs if an election is not carried off correctly. Elections still must be decided, even when races are close. Close races increase the likelihood of recounts, and recounts dramatically increase the likelihood of vulnerabilities being exposed. For understandable reasons, many election officials would prefer to avoid such careful scrutiny.

Consider how these different perspectives play out in the consideration of election fraud. Election fraud, or the appearance of fraud or impropriety, can undermine public confidence in elections. But, of course, the nondetection of fraud, whether in traditional or electronic voting systems, can mean either that there has been no fraud or that the fraud was successfully concealed—and there is no a priori way of determining which of these is true. That is, although some statistical techniques can suggest that fraud may have been committed,16 these techniques are based largely on historical data, and their indications do not come anywhere near a legal standard for asserting that fraud has occurred. In short, no one knows the baseline level of fraud in elections, regardless of what technologies have been used,17 and because there are many impediments to conducting recounts (especially in high-profile races),18 it is unlikely that fraud—if it exists—will be discovered.

Election officials and legislators tend to respond to fraud cases that have come to light during their tenure. By this standard, some election officials are skeptical of the claim that electronic voting systems without paper trails are less secure than nonelectronic systems, partly because most proven instances of election fraud to date have involved nonelectronic voting systems.19 And, in response to the possibility of fraud, many election officials have worked to improve the procedures and organizational practices that enhance the overall security posture of elections.

On the other hand, electronic voting systems have not been in use for very long, and so it may simply be that election irregularities and fraud associated with these systems have not yet come to light. By contrast, computer scientists see myriad possibilities for fraud, and because there is no way to rule out those possibilities or to bring them to light, they tend to behave as though such possibilities must be taken for granted.

16  

See, for example, Jonathan N. Wand et al., “The Butterfly Did It: The Aberrant Vote for Buchanan in Palm Beach County, Florida,” American Political Science Review 95(4): 793-810, 2001.

17  

See, for example, Fabrice Lehoucq, “Electoral Fraud: Causes, Types, and Consequences,” Annual Reviews of Political Science 6:233-256, 2003; Larry Sabato and Glenn Simpson, Dirty Little Secrets: The Persistence of Corruption in American Politics, New York, N.Y.: Random House/Times Books, 1996; John Fund, Stealing Elections: How Voter Fraud Threatens Our Democracy, San Francisco, Calif.: Encounter Books, 2004.

18  

Such impediments include the high cost of recounts and the fact that a winning candidate is virtually certain to oppose a recount using any legal mechanism available—and there are many such mechanisms.

19  

Dozens of problems with electronic voting systems have been documented, and allegations of fraud involving electronic voting have appeared in the form of signed affidavits. Testifying before the U.S. House of Representatives Committee on House Administration, July 7, 2004, Michael Shamos reported that since 1852, the New York Times has published over 4,000 articles detailing numerous methods of altering the results of elections through physical manipulation of ballots (available at http://euro.ecom.cmu.edu/people/faculty/mshamos/ShamosTestimony.htm).


Moreover, computer scientists are concerned that the use of electronic technology enables the commission of fraud in ways much more subtle than in the past and that these technology-enabled frauds may be much more difficult to detect.

Whereas computer scientists often compare what they have today with what could be in principle, administrators tend to compare what they have today with what they had yesterday. Computer scientists will presume a vulnerability is significant until shown otherwise, but election officials will presume that the integrity of an election has not been breached until compelling evidence is produced to the contrary. This difference in perspective largely accounts for the tendency of some election officials to blame electronic voting skeptics for scaring the public about security issues and for the tendency of some electronic voting skeptics to say that election officials have their heads in the sand.

As a baseline for comparison purposes, consider the security of a voting system based on hand-counted paper ballots. Such a system is manifestly subject to fraud if the chain of custody is not well defined or maintained, as the expression “stuffing the ballot box” indicates. Fraudulent votes can be introduced through the counterfeiting and subsequent marking of ballot documents, and while there are techniques that can be used to authenticate a document as legitimate, they all require that ballot documents be checked one by one. All else being equal, manual (re)counting of ballot documents is relatively straightforward when the number of voters involved is small, but it becomes more prone to error when hundreds of thousands of ballots are being recounted.

It is helpful to categorize security questions according to the timeline of a system’s use.20 First, a system (including all necessary hardware and software) should be assessed for its security. Second, if the system’s security is found adequate, the assessed system must be propagated to all the sites where it will be used. That is, the physical units that voters actually use should be identical to the system that was assessed. A third set of security issues arises while the systems are being operated by the voters. The fourth and final set of issues arises after the polls close and the results of each unit are passed to the parties responsible for vote tabulation.

4.2.2.2 Assessing the Security of a System Prior to Deployment

It is broadly accepted that independent testing and evaluation are an essential component of assessing the security of a system, and at this writing, the EAC is in the process of establishing Voluntary Voting System Guidelines (VVSG) in the area of security. Box 4.4 describes some of the issues that an independent laboratory might consider in such an assessment.

20  

Testing issues are discussed in Douglas Jones, Testing Voting Systems, available at http://www.cs.uiowa.edu/~jones/voting/testing.shtml.


Box 4.4
Security Issues That an Independent Assessment Might Examine

An assessment of the security of a voting system would involve independent technical experts with backgrounds in computer security and the ability to draw on people with deep knowledge of election practices and procedures. The assessment team should control the process, and it should have full access to all system documentation, software, source code, change logs, manuals, procedures, training documents, all material provided to any other testing or review process, and working physical examples of the voting system in question (hardware and software). In addition, the assessment team must have adequate resources and time to complete its assessment, and it must have the independence to make its findings known without intervention on the vendor’s part.

Assessments of this nature include, but are not limited to, finding specific software problems. They are intended to examine the system holistically to determine the extent to which it will be capable of resisting attempts to compromise its security (for example, how resistant is the system to the bribing of a single insider?). Collectively, the group responsible for assessing security might examine:

Hardware

  • Accessibility of data- or processing-related components internal to a voting station

  • Detectability of attempts to tamper with internal components

  • Configuration and programming of firmware and any boot-related devices or media

  • Ability to reprogram boot sequences

  • Ability to access ports remotely

Software

  • Source code inspection and verification

  • Logic and accuracy testing

  • Ability to ensure that code is digitally signed

  • Security features built into the software (e.g., authentication protection for access to system internals)

  • Reliability (e.g., ability to recover from a power failure)

  • Architecture and design for modular construction

  • System behavior under different configurations (e.g., different ballots, ballots for people of different abilities)

  • Maintenance “traps” that circumvent normal protections.

Procedures

  • Procedures for upgrading or patching software

  • Procedures for qualifying and certifying patches (or, in fact, the system configuration after a patch has been installed)

  • Procedures for decertifying or dequalifying software or hardware

  • Procedures for setting up and breaking down the system in operational use

  • Procedures for handling vote totals at the close of the polling place

SOURCE: Drawn in part from Leadership Conference on Civil Rights and the Brennan Center for Justice, New York University, Recommendations for Improving Reliability of Direct Recording Electronic Voting Systems, June 2004. Available at http://www.civilrights.org/issues/voting/lccr_brennan_report.pdf.

Security vulnerabilities introduced into an electronic voting system prior to its deployment are the most serious in terms of their potential impact on the outcomes of elections.21 The reason is that vulnerabilities built into the design of a system are propagated to every individual unit. Thus, the design and implementation phase of system development is a point of high leverage for individuals seeking to compromise election security.

21  

Note that an explicit evaluation of the security of a specific electronic voting system is not the only possible approach to making electronic voting credibly secure. Whereas an explicit evaluation seeks to uncover security flaws that might exist in any given implementation, a redundant implementation—that is, a competing implementation sponsored and created by any political party with a stake in elections—would require that at least two independent systems be compromised in order to commit fraud successfully. However, the redundant approach has not been adopted for electronic voting, though it has been used in a variety of situations where high reliability and security are required.

Qualification of a system according to the Federal Election Commission’s 2002 Voting Systems Standards provides some degree of assurance to a purchaser that a few security measures have been taken. Purchasers wishing to go beyond that degree of assurance might ask additional questions.22

22  

For example, Mulligan and Hall argue that current voting system standards (that is, the standards promulgated in 2002) are inadequate, and that systems fully certified as compliant with those standards exhibited critical problems due to gaps in the standards and the certification process, such as the lack of federal guidelines that speak to human factors issues in electronic voting. They further assert that the federal qualification system for DRE voting machines is inadequate and incomplete, and that significant problems evidently slipped through the cracks, resulting in polling place or tabulation failures in 2004. See Deirdre Mulligan and Joseph Lorenzo Hall, “Preliminary Analysis of E-Voting Problems Highlights Need for Heightened Standards and Testing,” undated white paper contributed to the committee, available at http://www7.nationalacademies.org/cstb/project_evoting_mulligan.pdf.

The particular problem cited—the lack of guidelines relevant to human factors—was addressed explicitly in the proposed EAC revisions to the Federal Election Commission’s 2002 Voting Systems Standards. The Technical Guidelines Development Committee of NIST was specifically chartered to address such shortcomings. But the pace at which the standards-setting process works remains an important issue. It is reasonable to anticipate that over the long run, the relevant guidelines will become more comprehensive. Nevertheless, at any given moment in time, there may well be important outstanding issues that have not been addressed in the standards.


4-10. To what extent and in what ways has a realistic risk analysis been part of the acquisition process? A risk analysis includes a threat model describing the various ways adversaries might exploit vulnerabilities in a system; a description of possible adversaries, their level of motivation and sophistication, and what resources they might bring to bear; an assessment of the likelihood of exploitation of various vulnerabilities and an estimate of the harm that might be done should exploitation occur; and a consideration of the possibility that an attack could be mounted without detection. For example, a postulated attack that involves the ability to improperly modify the code that will run on deployed voting stations presents security challenges that are very different from one that does not. Indeed, an attack involving insider access is much more serious, because of the possibility that the actions of a small number of individuals could have security ramifications in every deployment location (without such access a much larger degree of effort would be needed to achieve large-scale compromise).
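
As a purely illustrative aid, the following sketch (in Python, with hypothetical threats and an arbitrary scoring scheme invented for this example) shows one simple way that the elements of a threat model (adversary, likelihood, impact, and detectability) might be recorded and used to rank concerns; it is not a substitute for the kind of risk analysis described above.

    from dataclasses import dataclass

    @dataclass
    class Threat:
        description: str
        adversary: str       # e.g., lone outsider, corrupt insider, vendor employee
        likelihood: int      # 1 (remote) to 5 (expected), as judged by the analysts
        impact: int          # 1 (localized nuisance) to 5 (could change a close outcome)
        detectability: int   # 1 (easily noticed) to 5 (likely to go undetected)

        @property
        def priority(self) -> int:
            # Deliberately coarse: a real analysis would weigh these factors more carefully.
            return self.likelihood * self.impact * self.detectability

    # Hypothetical entries, for illustration only.
    threats = [
        Threat("Ballot transport container left unattended", "lone outsider", 3, 2, 2),
        Threat("Unauthorized patch applied to voting station software", "corrupt insider", 2, 5, 5),
        Threat("Precinct subtotals transcribed incorrectly", "none (misfeasance)", 4, 2, 1),
    ]

    for threat in sorted(threats, key=lambda t: t.priority, reverse=True):
        print(threat.priority, threat.description)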

In practice, a risk analysis must be undertaken by both vendors and election officials. A vendor must undertake a risk analysis in order to know what security properties a system must have. Development and design of the full system are not possible until the risk analysis has been performed. Though election officials—in their role as purchasers or lessors—are not responsible for system development or design, they too must undertake a risk analysis to determine if their own concerns about security are reflected in the vendor’s analysis. For example, if the threats of concern to election officials are not reflected in the threat model used to analyze risk, the risk analysis is not likely to provide useful guidance to those officials. Also, election officials, with input from independent security specialists and the general public, may wish to formulate the threat models of most concern to them independently of the vendors’ postulated threat models so as to avoid being captured by vendor biases.

   




4-11. How adversarial has the security assessment process been? Experience in the cybersecurity world has shown that adversarial techniques are generally the best for assessing security. That is, security should be assessed from the standpoint of an outsider trying to find exploitable flaws in it rather than an insider checking off a list of “good security measures.” Indeed, a system may conform to the best of checklists and still have gaping security holes.

The best example of an adversarial assessment is the use of independent red teams, or “tiger teams,” as described earlier.23 Short of a red team attack, an independent adversarial examination of the “internals” of a system (physical construction in the case of hardware, actual code in the case of software) will provide some insight into its ability to resist attack, since it is likely to uncover flaws that an adversary might use. Moreover, in the absence of such an examination, it is not possible for any amount of testing to eliminate the possibility that the system will demonstrate some improper behavior under some set of circumstances. That is, testing may be a sufficient basis for concluding that a system does meet certain requirements (e.g., produces certain outputs when given certain inputs), but it cannot show that the system will not do something else in addition that would be undesirable.24 Only by inspecting the internals does one have a chance of detecting the potential for inappropriate behavior when the system is put into use.
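
The limits of testing can be illustrated with a deliberately contrived toy example (in Python, unrelated to any real voting software): a small test suite confirms the behaviors it covers, yet the function still deviates from its specification on an input combination that the tests never exercise.

    # A toy example: tests confirm the behaviors they cover but cannot rule out
    # additional behavior on inputs they never exercise.

    def scale(value: int, factor: int) -> int:
        """Intended behavior: return value * factor."""
        if value == 7919 and factor == 13:      # an input combination no test covers
            return value * factor + 1           # hidden deviation from the specification
        return value * factor

    def run_tests() -> None:
        cases = [(0, 5, 0), (3, 4, 12), (10, 10, 100), (-2, 6, -12)]
        for value, factor, expected in cases:
            assert scale(value, factor) == expected
        print("All tests passed, yet scale() still misbehaves on (7919, 13).")

    run_tests()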

4-12. How has the system’s ability to protect ballot secrecy been assessed? The same kinds of adversarial techniques used to assess security are also useful for assessing the ability of a system to maintain ballot secrecy. Box 4.5 illustrates some of the issues that might come up in such an assessment.

23  

An example of red team analysis is the “Trusted Agent” report on Diebold’s AccuVote-TS Voting System, prepared by RABA Technologies LLC in January 2004 and available at www.raba.com/press/TA_Report_AccuVote.pdf. The red team analysis found that the Diebold system, which Maryland had procured for use in primaries and the general election, contained “considerable security risks that [could] cause moderate to severe disruption in an election.”

24  

A simple example will illustrate the problem in principle. Using the logic described in Section 4.2.2 for maintenance traps, a system could be designed to change every 10th vote for Candidate A to Candidate B when a specific set of keys on the display is pressed in a specific sequence with a minimum time in between key presses. This particular example is contrived, as it would require quite a bit of skullduggery and the commission of a number of felony offenses on the part of a vendor, but the fact remains that no plausible testing process will ever uncover such a problem.


Box 4.5
Ballot Secrecy Considerations That an Independent Assessment Might Examine

Maintaining the secrecy of a voter’s ballot is an important public policy consideration that is specified in state law. Known as “confidentiality” among computer scientists, the problem amounts to one of keeping the voter’s ballot private under all circumstances. In particular, these circumstances include voter collusion (as might be the case for a voter trying to sell his or her vote); observations of voters and voter behavior in the polling place being correlated with voting station records; and corrupt insiders who might have access to voting station records. Put differently and more generally, computer scientists believe that a system properly designed to provide ballot secrecy must be able to defeat attempts to compromise the secrecy of an individual’s ballot under all possible adverse circumstances.

In the absence of a specific system design, it is impossible to anticipate all possible threats to secrecy in anything but the most general terms. The following examples are intended to suggest a range of possible threats against which a system must be designed:

  • The first person to vote on Election Day in her precinct may well be known to poll workers or others present at the precinct. A voting system that does not randomize the order in which ballots are stored and reported will list this person’s ballot first, and ballot counters will be able to recognize which ballot was cast first and thereby easily deduce how she voted. (A minimal sketch of such randomization follows this box.)

  • A voter with a Vietnamese name requests a ballot in Vietnamese and is the only person with a Vietnamese name voting on Election Day in that precinct. If the system is designed to report votes as ballot images, it is easy to determine that one ballot was cast in Vietnamese and thus to associate that ballot, with high probability, with the Vietnamese-surnamed voter.

  • If an electronic voting system is designed to produce a unique random 10-digit serial number on a cast vote record (e.g., so that a voter-verified paper audit trail of the ballot can be associated with the image),1 a voter trying to prove how she voted (e.g., to sell her vote or because she has been forced to by a coercer) could identify her ballot by memorizing that serial number and then telling it to someone who has access to the cast vote records.

  • If a DRE system is designed to record, next to each cast vote record, the sequence of selections and button presses performed by the voter to reach this cast vote record (e.g., to obtain information that might be useful in the design of future ballots for greater usability), a voter who wants to mark his or her ballot in an identifying way can use some distinctive sequence of button presses (forward, back, forward, forward, back, back, forward, back). This voter’s ballot will be the only one that is recorded adjacent to that unusual sequence, and so this voter will be able to prove to anyone with access to this log how he or she has voted.

1  

A cast vote record is a stored record of the set of all of a voter’s choices.
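
To illustrate the first example in Box 4.5, the following sketch (in Python, with hypothetical record formats) shows one way a system might randomize the order of stored cast vote records using operating-system randomness. Reordering addresses only the ordering leak; it does nothing about content-based identification such as the ballot-language example.

    import random

    _rng = random.SystemRandom()   # operating-system randomness, not a predictable seeded generator

    def randomize_storage_order(cast_vote_records: list) -> list:
        """Return the records in a random order so that storage order no longer
        reveals the order in which voters used the station."""
        shuffled = list(cast_vote_records)
        _rng.shuffle(shuffled)
        return shuffled

    # Hypothetical records accumulated by one station over the day.
    records = [
        {"ballot_style": "standard", "choices": {"Governor": "Candidate 1"}},
        {"ballot_style": "vietnamese", "choices": {"Governor": "Candidate 2"}},
        {"ballot_style": "standard", "choices": {"Governor": "Candidate 2"}},
    ]
    print(randomize_storage_order(records))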

4.2.2.3 Deploying the Assessed System to Polling Stations

Qualification and certification testing of a voting system are only the first steps in the process of assuring end-to-end security. Even a voting system that has been qualified as secure, reliable, and easy to use is useless if it is not the system that voters use on Election Day. That is, the qualified and certified system must be deployed to polling stations for actual use on Election Day.

Acceptance testing is one element in providing such assurance. According to the Federal Election Commission’s 2002 Voting Systems Standards, one purpose of acceptance tests is to ensure that the units delivered to local election officials conform to the system characteristics specified in the procurement documentation as well as those demonstrated in the qualification and certification tests. To help ensure that qualified voting systems are used consistently throughout a state, ITA labs can file digital signatures of qualified software with the software library of the National Institute of Standards and Technology (NIST).25

Acceptance testing is undertaken in the absence of a specific ballot configuration. Logic and accuracy (L&A) testing is the testing of voting systems configured with the ballot that will be used in the actual election. In principle, L&A testing serves two main functions—to account for any changes to a unit’s configuration between the point of acceptance and Election Day, and to ensure that the unit performs properly with the actual ballot to be used. Thus, L&A testing can be usefully applied to every unit that voters will use in the election, although the expense of testing generally allows only a fraction of those units to be tested. When units are known to be identically configured, only one of them needs to be thoroughly tested and the rest tested simply to ensure that no failure has occurred.

These two types of testing motivate several additional questions:

4-13. How is the security of voting stations maintained to ensure that no difficult-to-detect tampering can occur between receipt from the vendor and use in the election? In theory, this is a straightforward matter—put the voting stations in a locked building with no remote access to them and ensure that no one has access until they are removed for use on Election Day. But there are several factors that complicate this simple picture. For example:

25  

A digital signature is a unique, algorithmically generated fingerprint of any digital object (such as a software module). By comparing signatures, one can easily determine if two objects are identical. NIST maintains a library of certified code to which ITAs can submit qualified election software versions, with a digital signature that enables states and local election officials to check whether individual machines utilize exactly the same software. But even the smallest change in software will change the signature (for example, the code for “3 + 2” will have a very different signature from the code for “2 + 3”). Practical difficulties of performing such a check are addressed in Footnote 28.



  • Vendors may need access to load ballots onto individual voting stations, a task that must be performed before Election Day.26 However, the steps needed to load ballots may or may not resemble those needed to change software. How will those supervising the loading of ballots be certain that no other changes are being made to the voting stations?

  • Third parties may masquerade as election officials or vendors and demand access to the voting stations in storage. Or moles (individuals with ostensibly authorized access but who in fact have been compromised to work in a partisan manner) may be present in the offices of election officials. What procedures are in place to guard against changes introduced by these insiders (for example, a rule requiring that access to systems in storage never involve only one or two persons)?27 How rigorous are the procedures for ensuring that only properly authorized parties have access to the storage facilities?

  • Early voting, an increasingly common practice that entails taking voting stations out of storage before Election Day, further complicates the achievement of security and chain-of-custody goals.

4-14. What steps have been taken (either technically or procedurally) to limit the damage an attacker might be able to inflict? As a practical matter, the compromise of one voting unit in one precinct is obviously less harmful than the compromise of all of the units in the entire jurisdiction. One approach to limit possible damage is to ensure that modifications or updates cannot be made en masse, that is, through one action updating all units. Thus, a large-scale compromise would entail significantly more effort for the attacker than a small-scale one. Of course, this approach makes it much more inconvenient and costly to deploy updates when they are necessary.

26  

In principle, election staff could do so as well. But given the prominent role that vendors have often been given in providing supporting services (Section 6.7), it is entirely possible that vendors may have this responsibility.

27  

The insufficiency of a two-person rule has been noted in the finance industry, in which audit procedures typically call for involving three or more individuals. The reason is that if one party in a two-person conspiracy breaks the secrecy pact, his or her identity is known with certainty to the other party. However, if the conspiracy involves three or more individuals, the identity of the party breaking the secrecy pact cannot be inferred with certainty by any of the others. In an election context, such a procedure might involve representatives from two parties jointly picking a third.


4-15. How can election officials be sure that the voting systems in use on Election Day are in fact running the software that was qualified/certified? For example, a vendor may uncover a potentially problematic issue in software that has been previously certified and address the issue in a program patch. Strictly speaking, any change to a program requires recertification, and some state laws require recertification after every software change, no matter how small. But because full recertification generally takes a long time (in principle, as long as the initial certification), there are strong incentives for the vendor to argue that the change can be administratively approved.

The question then arises whether the change involved is small enough to be addressed administratively. In the absence of specific criteria, vendors are in the best position to know about the scope and significance of any change. On the other hand, from the point of view of an outsider without such privileged knowledge, the nature of programming is such that it is essentially impossible to assure that changes made in one part of the program will have no effects on other parts of the program. Without inspecting the code involved (and the other parts of the program with which it interacts), there is no way to determine if a change is significant or not. Some evidence may be forthcoming if the original program is designed in a modular fashion with well-documented interfaces, the behavior of existing modules is understood, and the changes are confined to one or a few modules. But the mere assertion of a claim does not suffice for most outsiders.

If an administrative certification is not possible, election officials have in practice the operational choice between running certified code that may have problems and running uncertified code that has been fixed. Thus, some election officials may still try to think of ways to avoid this certification step, particularly if they know that a smooth election process depends on a last-minute fix.

A related issue is that despite precautions that have been taken, software may have been compromised through the introduction of an unauthorized patch. Beyond vendor assurances, what technical means are available to demonstrate that such compromise has not taken place? For example, a digital signature of the software running on any given station can be taken for comparison with a known version, though this is difficult in practice today.28

28  

The difficulty arises because the software for most electronic voting systems resides in a programmable read-only memory (ROM) module soldered to the system’s motherboard, and obtaining access to the module’s contents in practice is today a cumbersome and labor-intensive process that entails physical removal of the module. Moreover, short of a readout of the ROM’s contents and the computation of the digital signature, there is no way to independently ascertain which version of software is in fact running on a given station.
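
As an illustration of the kind of comparison described above, the following sketch (in Python, with hypothetical file names) computes a SHA-256 fingerprint of a software image read out of a voting station and compares it with a reference digest for the qualified build; obtaining the image in the first place remains the hard part, as footnote 28 notes.

    import hashlib

    def fingerprint(path: str) -> str:
        """SHA-256 digest of a software or firmware image file."""
        digest = hashlib.sha256()
        with open(path, "rb") as image:
            for chunk in iter(lambda: image.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def matches_reference(path: str, reference_digest: str) -> bool:
        """True only if the image is byte-for-byte identical to the reference build."""
        return fingerprint(path) == reference_digest

    # Hypothetical usage, assuming the station's ROM contents have already been
    # read out to a file and a reference digest for the qualified build is on hand:
    #     matches_reference("station_042_rom_dump.bin", reference_digest_for_qualified_build)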


A second related issue is that the source code for software running on an electronic voting system may not be fully available.29 Some vendors of electronic voting systems may build their systems using the foundation of a (proprietary) commercially available operating system. Device drivers (programs that manage the devices attached to a computer) may also be available in object code but not in source code. (As a rule, systems that are based on the use of such commercial off-the-shelf components are generally less expensive and faster to develop than systems that are custom-designed and implemented from the ground up.) In this case, there is a strong sense in which the certification or qualification of voting system software is necessarily conditional (perhaps implicitly), because it presumes that the operating system or device drivers, or interactions between the voting application and the operating system or device drivers, do nothing strange or unexpected or malicious. Furthermore, vendors or jurisdictions managing relatively small contracts will not generally have enough leverage with the provider of operating systems or device drivers to obtain source codes for inspection.

4.2.2.4 Using the Deployed Units on Election Day

In general, the issues on Election Day are more likely to be associated with reliability than with security. That is, if rogue voters are able to compromise the security of the voting systems they use, it will almost certainly be through the Election Day exploitation of a pre-existing security vulnerability. Such situations are covered under Sections 4.2.2.2 and 4.2.2.3.

The one exception is what might be called a denial-of-service attack against voting systems in use. For example, Party A might try to deny service in an area with large numbers of people from Party B, thus reducing the turnout and vote count for Party B. Lack of availability of even a few voting stations for even a short amount of time during peak hours can result in very long lines for voting, leading to voter discouragement and an effectively lower turnout.30

29  

Source code refers to the software in the form in which it was originally written—usually in a high-level programming language that is understandable to humans. Object code refers to the corresponding ones and zeroes that actually run on a computer. Programs known as compilers are required to translate source code into object code.

30  

An example of such a threat might involve a set of voting stations connected via a wireless LAN to a central monitoring station in the precinct. A system might be vulnerable to electronic jamming in the precinct that would prevent the voting stations from communicating with the central monitoring station and might thus be prevented from accepting input at all. (Perhaps for this reason, no present electronic voting system is based on this architecture.)
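
A simple back-of-the-envelope model (in Python, with entirely hypothetical arrival and service rates) illustrates why the loss of even a few stations during peak hours matters: once demand exceeds the capacity of the remaining stations, the queue grows for as long as the overload lasts.

    def voters_still_waiting(arrivals_per_hour: float, minutes_per_voter: float,
                             working_stations: int, peak_hours: float) -> float:
        """Deterministic 'fluid' estimate of the queue at the end of a peak period."""
        capacity_per_hour = working_stations * 60.0 / minutes_per_voter
        excess_per_hour = max(0.0, arrivals_per_hour - capacity_per_hour)
        return excess_per_hour * peak_hours

    # Hypothetical numbers: 180 voters per hour arriving, 3 minutes per voter.
    # Ten working stations keep up (capacity 200 per hour); with two stations down,
    # capacity falls to 160 per hour and roughly 60 voters are still queued after a
    # three-hour evening peak.
    print(voters_still_waiting(180, 3, 10, 3))   # 0.0
    print(voters_still_waiting(180, 3, 8, 3))    # 60.0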


From the standpoint of assuring election integrity, Election Day is also an opportunity to collect data that can be used later to audit the election and to document anomalies that might point to systemic problems that need remediation in the future.

4-16. What information must be collected on Election Day (and in what formats) to ensure that subsequent audits, recounts, or forensic analysis can take place if they are necessary? As noted in Chapter 1, elections may be subject to post-Election Day challenge. To resolve such challenges after the fact (of the election), information about what happened on Election Day must be available. Challenges to the vote as recorded and communicated by the voting station and the tabulation equipment might arise from a sufficient number of individual voters wanting evidence of how their voting intent was interpreted, or from systemic difficulties due to bad system design or fraud. Should an audit become necessary (because irregularities are charged or because a state’s best practices mandate random audits), auditors need data and records to examine. It is therefore essential that a locality collect such data before and during the election so that appropriate records are available. An example of data that might support an audit is exit poll data, which might be collected by the state rather than a media organization, for later comparison to actual totals.

This point is the primary motivator of various demands for paper trails in electronic voting systems—the concern expressed by many advocates of paper trails is that a DRE system without such a capability is unaccountable, and that such systems give election officials who are challenged the stark choice between accepting the numbers proffered by the system and redoing the election.

Box 4.6 provides some examples of data that are arguably relevant for forensic analysis.

4-17. How are anomalous incidents with voting systems reported and documented? Given that in-use operations are the ultimate test of voting systems, it is important to capture as much information as possible about how voting systems perform in actual use. What incident-reporting structure will guarantee that problems are reported promptly to vendors, to states, to other local election jurisdictions within the state using the same systems, and to standards-setting organizations? How can knowledge of these anomalies be used to improve voting system performance?

For example, Florida certified an electronic voting system despite the fact that the voting machines took a long time to boot up and machines had to be opened in sequence. It took between 90 minutes and 4 hours to open a precinct. Therefore, the machines could not be turned on the same day that an election took place.


Box 4.6
Election Administration Information Required for a Complete Audit

1. Data to collect before the election:

a. Local voter registration numbers and lists. [P,S]

b. Inventories of equipment and ballots upon acceptance (e.g., date of purchase, source, maintenance records, vendors, serial numbers, code versions retained in offsite escrow). [S]

c. Seal numbers for ballots and machines and storage locations for voting equipment. [S]

d. A record of personnel with access to equipment, including detail such as when and where. [S]

e. Changes made to the equipment (e.g., oiling, charging, battery changes, memory upgrades, putting in a module, checking odometers, code drop). [S,P]

f. A list of the times and modes by which voting equipment is transported (including license plate number and driver for chain-of-custody purposes). [S]

g. Inventory of equipment and materials before and after transportation. [S]

h. Inventory of equipment and materials before voting begins. [S]

i. Pre-election equipment testing data, including the number of systems tested and problems observed during testing. [S,P]

j. Number of training sessions held for poll workers, and a roster of poll workers attending each session. [P]

k. Copies of sample ballots and voter information materials. [P]

When electronic voting systems are involved:

  • Date of most recent software update.

  • Type of certification for software update.

  • Comparisons of digital signatures of software running on individual voting stations with digital signatures in NIST’s National Software Reference Library.

  • Results of logic and accuracy testing.

  • Contingencies for which the poll workers were trained.

  • Physical security maintained on voting station.

These data help assure that ballots, equipment, and polling places are usable and also make it possible to deal with problems and questions that may arise later.


2. Data to collect during the election:

l. Number of poll workers at each poll, including the times at which poll workers arrive and leave. [S]

m. Signatures (not check marks) of those present. [S]

n. Signatures for inventory received election night, both in precincts and when inventory is returned to the central office. [S]

o. Tally at precinct and time it was conducted. [S,P]


p. The number of poll and early voting sites and any rents required to use these locations. The number of workers in each poll or early voting site, their rate of pay, and their required number of hours of work. [P]

q. If “parallel testing” is conducted on Election Day, the number of voting machines tested, the way in which they were selected for testing, and the results of those tests. [S,P]

r. Exact time when each poll site opened. [P] (Maximum waiting times at each poll site.)

s. The number of poll sites that experienced significant problems, an explanation of the problems experienced, and a description of how these issues were resolved. [P,S]

The number of individuals turned away from the polls and the reasons they were turned away.

When electronic voting systems are involved:

  • Frequency of restarts and reboots required for voting stations.

  • Descriptions of anomalous behavior during use.

These data will ensure that processes during the election are monitored. They also provide the best possible means of later establishing what voters’ intentions were and that voters were able to vote as they intended.


3. Data to collect after the election:

t. Inventory of equipment and materials after polls close. [S]

u. The total number of ballots cast (report absentee and poll site totals separately, if possible). [P,S]

v. The number of votes cast for all candidates for each federal and local office (reporting absentee and poll site totals separately, if possible). [P]

w. The number of registered voters. [P,S]

x. The number of people who voted as indicated on check-in/check-out lists. [P,S]

y. The numbers of absentee ballots applied for, tabulated, and challenged. [P,S] The reasons for any successful challenges to such ballots.

z. The number of absentee ballots received, recorded by date received. [P]

aa. The number of absentee ballots returned from citizens residing outside the country, and the number of these that are challenged. [P,S]

bb. The number of tabulated provisional ballots provided to voters that were challenged. [P,S]

cc. The number of early voters. [P]

dd. Transportation records of equipment (consistent with above criteria). [S]

ee. Storage records of materials. [S]

These data make it possible to verify that votes were handled and reported correctly. Furthermore, they indicate how processes might be improved for future elections.


4. Demographic and administrative data:


ff. The annual expenditures for election administration, including personnel and capital expenditures. [P]

gg. The number of physical voting sites and the number of precincts (if not the same because of consolidation) used in the election. [P]

hh. The number of days in which early voting is allowed, and the number of early voting sites operated. [P]

ii. Census demographics of voting precincts, if available. [P]

jj. Salary, by job category, of poll workers for the election, details of their job qualifications and hiring process, and years of experience. [P,S]

kk. Type of election administration system (e.g., elected or appointed board, elected or appointed registrar). [P,S]

NOTE: “P” indicates data critical for undertaking performance audits; “S” indicates information critical for security audits. Where the information can be used to audit both performance and security, both letters are used, in order of priority.

SOURCE: Nonitalicized material is taken from the Caltech/MIT Voting Technology Project, Insuring the Integrity of the Electoral Process: Recommendations for Consistent and Complete Reporting of Election Data, October 2004, available at www.vote.caltech.edu/media/documents/auditing_elections_final.pdf. Italicized material originates with the committee. The Caltech/MIT Voting Technology Project proposed that the above set of (nonitalicized) data at the precinct level be collected, retained, and distributed for every federal election in the United States in order to support postelection audits should they become necessary.

The certification standards had not addressed the time required to open the machines or the time required to open average precincts or large precincts—elements that proved important to using the machines. After the problem was identified, the state certified a new version of the software that permitted somewhat faster opening of the polls. A few minutes after one machine began booting up, the clerk could begin opening the next machine. However, the standards were not changed to make speed in opening the polls an element of certification.

4-18. What is the role of parallel testing? Parallel testing, which is intended to uncover malicious attack on a system, involves testing a number of randomly selected voting stations under conditions that simulate actual Election Day usage as closely as possible, except that the actual ballots seen by “test voters” and the voting behavior of the “test voters” are known to the testers and can be compared to the results that these voting stations tabulate and report; this exception is not available (because of voter secrecy considerations) if the parallel testing is done on Election Day. Note also that Election Day conditions must be simulated using real names on the ballots (not George Washington and Abe Lincoln), patterns of voter usage at the voting station that approximate Election Day usage (e.g., more voters after work hours, fewer voters in mid-afternoon, or whatever the pattern is for the precinct in question), and setting of all system clocks to the date of Election Day. Parallel testing is a check against the possibility that a system could recognize when it is being used on Election Day and report undoctored results when it is being tested at any other time. An important issue in parallel testing is how many stations must undergo parallel testing in order to provide reasonable assurance that inappropriate behavior has not occurred.
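
One way to reason about the sample-size question is with a standard sampling calculation, sketched below in Python with hypothetical numbers: if k of N stations are compromised and n are chosen uniformly at random for parallel testing, the probability of testing at least one compromised station is 1 - C(N-k, n)/C(N, n).

    import math

    def detection_probability(total_stations: int, compromised: int, tested: int) -> float:
        """Probability that a simple random sample of `tested` stations includes
        at least one of the `compromised` stations."""
        if tested > total_stations - compromised:
            return 1.0
        miss_all = math.comb(total_stations - compromised, tested) / math.comb(total_stations, tested)
        return 1.0 - miss_all

    # Hypothetical numbers: 1,000 stations, 20 of them (2 percent) compromised,
    # 50 stations selected at random for parallel testing.
    print(round(detection_probability(1000, 20, 50), 2))   # roughly 0.65

A calculation of this kind also makes plain that detecting rare, small-scale compromise requires testing a surprisingly large share of the deployed stations.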

4-19. What physical security provisions will be put into place at polling places after the voting stations have been delivered but before the polls open? Physical security is the primary barrier to unauthorized changes in the configuration of individual units. In the period after delivery of voting stations to polling places but prior to the opening of the polls, physical security must again be maintained—the procedures required are generally the same as when the voting stations are in storage but must now be carried out in different locations. (Note that an important characteristic of polling places staffed by poll workers is that the workers provide some degree of control over physical access to voting stations, as compared, for example, with a home computer used by a voter to cast or mark a ballot, either by mail or—in the future—using a personal computer. Internet access for such a voting station would introduce additional possibilities for making unauthorized changes.)

4-20. What physical security provisions will be put into place immediately before the polls open and immediately after the polls close? Poll workers are generally responsible for initializing voting stations so that the internal counts in each station are set to zero and for delivering station totals to the central tabulation authority. Unless special precautions are taken against the possibility of a compromised or partisan poll worker, these are the points on Election Day at which tampering is most likely to occur. For example, special security precautions might include requiring individuals from more than one party to be present for station initialization, each of whom is familiar with what is and is not necessary to initialize the station. Where this practice is not feasible, an alternative is to select at random the person who performs initialization.

4-21. What physical security provisions will be put into place at polling places while the polls are open? While the polls are open, a different set of physical security issues arises. Voters will be using these units for several minutes at a time, and voter secrecy considerations preclude any kind of monitoring that might be intrusive from the voter’s perspective. Poll workers may also be busy with checking voter registration, so they may not have time to perform such monitoring in any case.

4.2.2.5 Aggregating/Tabulating Voting Results

Election outcomes are determined by aggregating the votes cast at all the polling places. Individual votes can be directly counted by a central authority, or aggregated at the level of the individual voting station. Either case entails communication between the voting machines at each location and some central authority responsible for tabulation, and individual unit counts are usually transmitted at the end of the day.

4-22. How are the results from polling stations communicated to the central tabulation authority? Because the results from every voting station must be included in the final tally of votes, there must be some mechanism for communicating this information to the tabulation authorities. (Results may be conveyed as station subtotals for various contests or as individual untallied records of the individual votes cast—the “cast vote records.”) There are only three ways for this task to be accomplished: manually at each station (e.g., by someone reading vote totals at each station and transferring the numbers to a notebook or ledger, or talking into a telephone); by the physical removal of some computer-readable media from the station that contains vote totals; and by direct transmission over some wired or wireless medium such as a modem and telephone lines and computer network (as was the case with the Department of Defense SERVE prototype; see Box 3.1). These methods may also be used in combination. (For instance, if security were an issue, a wired or wireless medium might be used to provide preliminary data, while the official data might be transported via secure couriers carrying flash memory cards.)

Each of these methods entails different risks. Reading vote totals at each station and transferring the numbers manually raises issues of human error in recording vote totals as well. For example, a person reading numbers over the phone might be misunderstood by the receiver of those numbers, or the handwriting in a written record could be misread, or the numbers could be wrongly transcribed. Manual handling of the numbers and the use of computer-readable media for recording the vote totals both raise issues of physical custody of the ledger or media in transport to the tabulation authority. For example, if precautions are not taken, an adversary could substitute a CD-ROM prewritten with the appropriate vote totals for the CD-ROM taken from a specific voting station. Direct transmission of vote totals over a wired or wireless network renders the transmission vulnerable to spoofing attacks, in which the receiving computer is tricked into accepting numbers from an unauthorized source; or the transmission could be intercepted, modified, and played back; or a denial-of-service attack could take place in which the input channels on the receiving computers are blocked.
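
As one illustration of a countermeasure against spoofed or modified transmissions, the sketch below (in Python, with hypothetical precinct data and a made-up key) shows how a precinct's results payload might be authenticated with a keyed hash (HMAC) that the tabulation center can verify. By itself this does not prevent replay of an old, valid message, which would additionally require sequence numbers or timestamps.

    import hashlib
    import hmac
    import json

    def sign_results(payload: dict, key: bytes) -> str:
        """Keyed hash over a canonical encoding of the results payload."""
        message = json.dumps(payload, sort_keys=True).encode("utf-8")
        return hmac.new(key, message, hashlib.sha256).hexdigest()

    def verify_results(payload: dict, tag: str, key: bytes) -> bool:
        return hmac.compare_digest(sign_results(payload, key), tag)

    # Hypothetical precinct subtotals transmitted at the close of the polls.
    precinct_key = b"per-precinct secret distributed out of band"   # made-up key
    results = {"precinct": "12-A", "contest": "Governor",
               "totals": {"Candidate 1": 412, "Candidate 2": 388}}

    tag = sign_results(results, precinct_key)
    assert verify_results(results, tag, precinct_key)

    # A modified payload, or one sent without knowledge of the key, fails verification.
    tampered = dict(results, totals={"Candidate 1": 512, "Candidate 2": 288})
    assert not verify_results(tampered, tag, precinct_key)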


Procedures and/or technologies are available to deal with all of these problems, but they require that election officials anticipate the problems and then implement the available solutions. (For example, one element of guarding against the substitution of an unauthorized CD-ROM for an authorized one containing voting information might call for multiple poll workers of different parties to accompany the CD-ROM to the counting facility.)

4-23. How does the central tabulation authority aggregate vote totals? In general, computers will be responsible for tabulating the results from individual voting stations. But all of the concerns about software security expressed earlier in the context of individual voting stations apply as well to software at the central authority, with the possible exception that physical security is likely to be easier to maintain in a single place than in many precincts.
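
The aggregation step itself is conceptually simple, as the following sketch (in Python, with hypothetical precinct reports) suggests; the difficulty lies not in the arithmetic but in assuring the integrity of the software and of the inputs it is given.

    from collections import Counter

    def tabulate(precinct_reports: list) -> dict:
        """Sum per-precinct subtotals into jurisdiction-wide totals, contest by contest."""
        totals = {}
        for report in precinct_reports:
            for contest, subtotals in report["contests"].items():
                totals.setdefault(contest, Counter()).update(subtotals)
        return totals

    # Hypothetical reports from two precincts.
    reports = [
        {"precinct": "12-A", "contests": {"Governor": {"Candidate 1": 412, "Candidate 2": 388}}},
        {"precinct": "12-B", "contests": {"Governor": {"Candidate 1": 301, "Candidate 2": 355}}},
    ]
    print(tabulate(reports)["Governor"])   # Counter({'Candidate 2': 743, 'Candidate 1': 713})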

4-24. What physical security provisions will be put into place at the central tabulation authority? For example, because of the sensitivity of the tabulation operation (aggregating records from all polling stations), one might argue that physical access to the facility should be carefully controlled (e.g., all persons entering or leaving the tabulating center might be required to provide legal identification and sign in and out on a public log as an elections employee, a temporary employee, a contractor, or a visitor). Operations at the facility might also be recorded on videotape.

4-25. What roles can postelection auditing and investigation routinely play to increase the likelihood that fraud or other problems will be detected? Some legal regimes governing elections require that a postelection audit be performed automatically if the margin of victory for any candidate or proposition is less than a certain percentage. In other regimes, losing candidates can (and often do) request a recount if the margin of victory is less than a certain percentage. (In California, 1 percent of all precincts are audited routinely after each election, with the intent of using the results to uncover problems and make improvements in future elections rather than trying to find fraud.)

The assumption implicit in legal regimes based on the magnitude of a margin of victory is that the effect of anomalies is small—that only a few of the votes cast were not properly counted. According to this logic, a large margin of victory renders the presence of anomalies more or less irrelevant in the practical sense of affecting the outcome of the election; only when the margin of victory is small could anomalies matter.

In a precomputer era, this assumption was easily defended. Large-scale anomalies would require a large-scale effort and a large number of human beings, thus increasing the likelihood that the perpetration of anomalies would be detected by the authorities. But, as noted earlier in Section 4.2.2 on the possibility of automated fraud, the concern of computer experts is that a small number of corrupt or compromised individuals might be able to conduct their dirty work so that they would have a very large effect.31 To guard against such a situation, some security specialists advocate routine auditing, security review, or other investigation in the wake of an election; such auditing would have a chance of finding attempts at fraud that various testing and/or code inspection procedures had not discovered.

4.2.3 Usability and Human Factors Engineering32

4.2.3.1 Perspectives on Voting System Usability

All voting systems face the usability problems of accurately capturing the voter’s intent in casting a ballot and of being easy for voters to use, both of which are exacerbated by the vagaries of human behavior. Indeed, the importance of usability is highlighted by the role of the infamous butterfly ballot in the 2000 presidential election in Florida, which allegedly confused many voters into casting a ballot that was contrary to their intent. Electronic voting promises many advantages from a usability standpoint, but there is no single best way to capture voter intent. Consequently, different vendors and different election officials can legitimately and ethically make different decisions about how best to present information to the voter and how best to capture the voter’s vote.


31  

A particularly worrisome scenario is that corrupt partisans might modify vote totals so that the margin of their candidate exceeds that required by law for recounts, precluding a recount or any other subsequent closer examination. Alternatively, corrupt partisans might modify vote totals so that the margin requires the loser to pay the full amount of the recount, effectively making a recount unaffordable by the challenger. In other words, by adjusting the vote totals carefully, corrupt partisans could create an apparent margin of victory large enough to make unlikely or impossible a recount or an audit that might reveal the fraud.

32  

The discussion in this section mostly concerns electronic systems that are used to capture voter ballots directly. Today, these systems are for the most part direct recording electronic systems. Optical scan systems are another important type of electronic voting system, but in optical scan systems the voter marks up a paper ballot that is then scanned electronically. Thus, the mechanism for capturing voter intent is paper-based rather than electronic, and the considerations of this section are mostly not relevant to optical scan systems. This subsection draws in several places from Harry Hochheiser, Ben Bederson, Jeff Johnson, Clare-Marie Karat, and Jonathan Lazar, The Need for Usability of Electronic Voting Systems: Questions for Voters and Policy Makers, Association for Computing Machinery (ACM) Special Interest Group on Computer-Human Interaction (SIGCHI), U.S. Public Policy Committee, white paper submitted to the committee. Available at http://www7.nationalacademies.org/cstb/project_evoting_acm-sigchi.pdf.

Suggested Citation:"4 Technology Issues." National Research Council. 2006. Asking the Right Questions About Electronic Voting. Washington, DC: The National Academies Press. doi: 10.17226/11449.
×

that system in capturing votes—an error would be the recording of a vote that was contrary to the voter’s intent in casting the vote, and the error rate would be the fraction of all votes recorded that were in error. If the error rate is x percent, then an election that is decided by a margin of less than x percent cannot necessarily be said to reflect the intent of the voters. While careful attention to usability issues can force x to be lower than it would otherwise be, x cannot be driven to zero. A design issue is then what the appropriate value of x is for any given system.33
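To make the arithmetic of that point concrete, the following is a minimal numerical sketch. The vote totals are purely hypothetical, the function name is invented for the example, and the 1 percent error rate is simply the order of magnitude discussed in footnote 33.

```python
# Minimal numerical sketch of the point above: if a contest's reported margin
# is smaller than the system's error rate in capturing votes, the outcome lies
# within the measurement noise.  The vote totals are purely hypothetical.

def margin_within_error(votes_a, votes_b, error_rate):
    """Return (uncertain, margin): is the margin smaller than the error rate?"""
    margin = abs(votes_a - votes_b) / (votes_a + votes_b)
    return margin < error_rate, margin


if __name__ == "__main__":
    # Hypothetical two-candidate contest, assuming a 1 percent error rate
    # (the order of magnitude suggested by the studies cited in footnote 33).
    uncertain, margin = margin_within_error(50_300, 49_700, error_rate=0.01)
    print(f"Reported margin: {margin:.2%}; within the error band: {uncertain}")
    # -> Reported margin: 0.60%; within the error band: True
```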

Lowering the error rate of a system in use is the domain of what has come to be called human factors engineering. This is an interdisciplinary field that includes cognitive psychology, the ergonomics of sensing and making manual responses, and systems engineering. The field is largely experimental, much as is the field of medicine, making heavy use of statistics to draw inferences from human subjects in spite of their variability. The end goals of human factors engineering are to design a technology so that it is safe and effective for human use and to develop procedures for machine operation and training for the maintenance and management of the technology.

In recent years human interaction with computers has been a major component of human factors engineering. This includes not only standalone computers but also computers embedded in a variety of systems: aircraft piloting and air traffic control, military and space systems, manufacturing plants, hospitals, business and banking systems, and, more recently, automobiles, homes, and special-purpose computing appliances such as personal organizers and digital music players.

33  For a sense of the order of magnitude of x in practice, Ansolabehere and Stewart estimate that the residual vote due to technology factors is on the order of 1 percent; see Stephen Ansolabehere and Charles Stewart III, “Residual Votes Attributable to Technology,” Journal of Politics 67(2), 2005. Recount data also provide indicators of error rates, and these are in the 0.5 to 1 percent range; see, for example, Stephen Ansolabehere and Andrew Reeves, “Recounts and the Accuracy of Vote Tabulations: Evidence from New Hampshire Elections 1946-2002,” CalTech/MIT Voting Technology Project Working Paper, January 2004. Available at http://www.vote.caltech.edu/media/documents/wps/vtp_wp11.pdf.

For much of the past, usability issues in voting systems were limited to a consideration of physical accessibility on the part of the voter and translation of ballots into languages other than English for non-English-speaking voters. But as the 2000 election demonstrated so clearly, there is much more to usability than access. Indeed, in a voting context, usability includes many things: human behavioral constraints (perceptual, cognitive, and motor capabilities); background (language, education, culture, past experiences); complexity and extent of the task (arrival, departure, waiting in line, asking for help, etc.); situation and environmental contexts, such as the physical situation (adequacy of lighting, electricity, heating, etc.) and the social situation (crowds and time limits); sociological issues (privacy, confidence in technology, and equity issues); psychological factors (workload, attention, situation awareness, and distractions that constrain people’s actions); political factors (e.g., proper randomization of candidates, allowing for straight ticket voting); and different perceptions on the part of designers and users of what a system should do.

Design for human usability, like any kind of design, is an art informed by experimental findings that have been reported in a growing scientific literature. This includes handbooks, guidelines, and checklists developed for particular applications. Guidelines applicable to voting systems might include the following:

1. Task analysis. A first order of business is to understand what the basic voting task is, not what specific objects or events the voter must see or hear or what particular responses must be made but rather what information must be communicated to the voter (from the machine, the physical environment, and the poll workers), what information must be communicated from the voter (to the machine, the physical environment, and the poll workers), and what decisions must be made by the voter, the poll workers, and the machine at particular stages of the task. Appreciating the task at this abstract level is essential to considering the design alternatives and pitfalls. There are many formal methods of task analysis involving space, time, probability, causal contingency, and so on.

2. Sensing constraints. What people perceive and discriminate depends on physical variables, expectations, and attention. In vision, these variables include size, brightness, contrast, color, and time duration. Hearing and touch are similarly dependent on a corresponding array of physical variables, though these factors generally play a lesser role for most voters using a voting machine. The minimum perceptible and differential (discrimination) thresholds and trade-offs among these variables are well established in the human factors literature.

3. Cognitive constraints. What people understand and remember from what they perceive depends on more subtle aspects of natural language and symbol familiarity, cultural norms, education level, one’s mental model of how something works, situation awareness, memory, mental workload, basic mental capacity, and so on. What a voter decides depends on clearly understanding the decision alternatives.

4. Response constraints. Appropriate voter response depends not only on what candidate choice the voter intends but also on knowledge of how to respond so as to communicate that choice to the machine. This may be easy or difficult, depending on physical variables such as the location of response devices (buttons, levers, sensitive areas of a touch screen), force levels, and accuracy thresholds of response motion. It also depends on evident correspondence (in location, direction of response motion, sequential order, label wording, etc.) of the appropriate response to the stimulus (e.g., name of the candidate). This is what human factors professionals call stimulus-response (or display-control) compatibility. It is the criterion that the infamous butterfly ballot flouted.

5. Error types, causation, and remediation. Human errors can be classified in different ways, and such classification is a step toward understanding their causes and preventions. Errors can be omissions (correct action not taken) or commissions (actions taken that ought not to have been taken). Errors can be slips (intended action not taken) or mistakes (intended action taken but turning out to be inappropriate). Errors can occur at any of the stages of sensing, remembering, deciding, or responding.

Human errors often result when people do not receive sufficient feedback in a timely and understandable way. In daily living, people constantly get such feedback from their physical and social surroundings. Other common error causes are inappropriate mental models of how something works, forgetting, distraction, incorrect expectations (e.g., performing a task in a habituated way when present circumstances call for a deviation from the norm), lack of sufficient stimulus energy, or mental or bodily incapacity.

The best way to prevent error is to design the machine or process to be easy (simple, obvious) to use, and this includes good feedback, even in redundant ways. Education and training are next most important, but best designs also minimize necessary training. Computer-based decision aids and in situ guidance, alarms, and prevention of exposure to the opportunity to err (the computer will not recognize certain commands under some circumstances) are other techniques used. Posted warnings have proven to be the least effective means of preventing errors. A well-designed system with adequate feedback will allow the user to commit an error, observe the error, decide what to do about it, and gracefully recover from it.

6. Training. What is obvious to the designer of any machine or process is often not so obvious to the user. Any experience that differs from what one is accustomed to is likely to trigger some confusion. Therefore, at least a modicum of training will be essential for electronic voting. Some training can be accomplished by a well-designed brochure made available either prior to or at the site of voting. It can be augmented by poll workers explaining features of the machine or process that may be confusing. A more sophisticated approach used in some computer-based systems is to embed the training—that is, have the voter go through a few steps of observation and response to displayed dummy candidates to ensure that the voter understands the system. Training is also important for poll workers, who are often senior citizens less familiar with and more anxious about using computers than the majority of the voter population.

7. Interaction with automation. Human interaction with computer-based machines that may be said to embody at least rudimentary intelligence poses special problems. These may occur for poll workers or technicians employed to set up the machines, make sure they are working properly, understand indications of machine failure (and curtail their use if necessary), and transfer voting data from them to other repositories. It is common that the user attributes more intelligence to a computer than it has. It is also common that a mode error is committed—namely, the user assumes that the machine is set in one mode and takes actions appropriate to that mode, when in fact it has been set to another mode and the action produces an undesirable result.

8. Experimentation and simulation. Experimentation and simulation are essential to system design, setup, voter and poll worker training, and evaluation of voter confidence and system effectiveness. Dealing with human subjects is a special art. Because of the special challenges of dealing with the great diversity of voters and poll workers with respect to education, technological sophistication, and physical and mental limitations, great importance must be attached to well-designed simulation trials, with voter subjects drawn from a representative sample of the voting population. Experimental designs must include a sufficient sample size and proper allocation of subjects to experimental runs to minimize bias in resulting data (a sketch of one such sample-size calculation follows this list). Only then can designers of machines and training regimens feel confident, and only then can conclusions about system effectiveness and voter confidence be drawn.
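To give a feel for what a sufficient sample size can mean in practice, the following is a minimal sketch of a standard two-proportion sample-size calculation for a trial comparing the error rates of two ballot designs. The target error rates, significance level, power, and function name are illustrative assumptions only.

```python
# Minimal sketch: approximate subjects needed per group to distinguish two
# error rates, using the standard normal-approximation formula for comparing
# proportions.  The error rates, significance level, and power below are
# illustrative assumptions, not recommendations.

from math import ceil
from statistics import NormalDist

def subjects_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-proportion comparison."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)


if __name__ == "__main__":
    # Distinguishing a 2 percent error rate from a 1 percent error rate at
    # 80 percent power calls for on the order of two thousand subjects per
    # group, one reason well-designed simulation trials are not a casual effort.
    print(subjects_per_group(p1=0.02, p2=0.01))   # -> 2316 with these assumptions
```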

Voting systems pose a particularly difficult usability challenge. They must be highly usable by the broad public.34 As Hochheiser et al. point out, a citizen in the voting booth facing an electronic voting system may not feel comfortable with information technology, may not be literate (in terms of everyday reading and writing and/or with respect to using a computer), may not be an English speaker, and may have physical, perceptual, or cognitive disabilities that interfere with understanding the ballot, interacting with the system, and casting a vote. This citizen is probably alone in the booth and may not be able to, or may be socially inhibited from, asking for help. Finally, most citizens vote no more than once or twice a year and thus have little opportunity to develop experience or familiarity with the system. Box 4.7 addresses some of the issues that might be examined in a usability assessment.

34  Voter registration database systems are another example of an election-related information technology, and as such, user interface issues are important to their users as well. But the population of intended users for these systems—those involved with election administration—is very different from the general adult population at large (that is, those who are part of the population of potential voters). As one example, election officials are likely to interact with a voter registration database system frequently, whereas voters are likely to interact with a voting system only rarely.

Box 4.7
Usability Issues That an Independent Assessment Might Examine

  • Are voting station controls clearly labeled?

  • Are fonts readable?

  • Is consistent language used throughout the interface?

  • Can users easily change votes once selected?

  • Are write-in votes easy to cast, with clearly labeled choices?

  • Are controls laid out so as to minimize the likelihood of accidental completion of a ballot?

  • Have user interfaces been designed for use by and tested by a wide range of users of varying levels of expertise, education, and literacy?

  • Have user interfaces been designed for use by and tested by voters with various disabilities, including (but not limited to) poor vision/blindness, motor impairments, and cognitive difficulties?

  • Has the testing been conducted in environments that approximate the stresses and distractions of real polling places?

  • Does the system provide adequate feedback that the vote intended was indeed captured?

SOURCE: Harry Hochheiser, Ben Bederson, Jeff Johnson, Clare-Marie Karat, and Jonathan Lazar, The Need for Usability of Electronic Voting Systems: Questions for Voters and Policy Makers, Association for Computing Machinery (ACM) Special Interest Group on Computer-Human Interaction (SIGCHI), U.S. Public Policy Committee, white paper submitted to the committee. Available at http://www7.nationalacademies.org/cstb/project_evoting_acm-sigchi.pdf.

4.2.3.2 Design for Effective Use

The first stage in the life cycle of a voting system is requirements development and design. The top-level requirement is relatively simple: the system must capture the voter’s vote as he or she intended it. However, designing a system to do this under a wide variety of circumstances is a nontrivial task. Questions related to design include the following:


4-26. How does a voter receive feedback after he or she has taken an action to cast a vote? After the voter has pressed a button or touched a screen, a natural question for the voter to ask is, “Did the machine accept my input?” or “How do I know my vote was entered?” While punch card, optical scan, and lever voting systems involve physical artifacts that provide immediate feedback to the voter about the choice or choices that have been made, the workings of electronic voting systems are more opaque from the voter’s standpoint. Indeed, in some electronic voting systems, feedback mechanisms must be explicitly designed in. (In this context, this question is a user interface question rather than a security question. That is, it is assumed that the software is not trying to trick the voter into believing something that is not true.)

Note also that the presence of some feedback does not solve all user interface problems. Useful feedback both informs the user that an action was recorded and indicates which action was accomplished. For example, a click sound and the appearance of an X in a selection box indicate that a selection was made but not necessarily which selection was made. If the box is not clearly located next to the appropriate option, or the option is not highlighted when selected, a user may not know which specific option was selected.

In the case of the Florida butterfly ballot of 2000 (a punch card ballot), voters received feedback about having punched a hole in the card. But the ballot nevertheless confused voters about which selections they had actually made. One possibility is that voters did not punch the card fully; a second possibility is that poorly maintained machines made it impossible to punch the card fully. In both cases, the result would have been some ballots with “hanging” and “dimpled” chads—and doubt about the validity of those votes. At the same time, the voter would not know that the ballot cast might not be interpreted as a valid vote. A third possibility is related to ballot design—some number of votes appear to have been inadvertently cast for the wrong candidate because of misalignment of the punch hole locations and the candidate names—and the voter may have cast a vote for someone other than his or her actual choice without knowledge of that error.

4-27. How is an electronic voting system engineered to avoid error or confusion? Both the display and control interfaces of the system and the logic enforced by the system are at issue. For example, a large ballot may need to be presented to the voter on multiple display screens. What feedback does the system provide to the voter about where he or she is in the ballot? What provisions are made to enable the voter to back up, go forward, and jump around the ballot? To retrace his or her steps? To review the entire ballot before submitting it? As for logic, systems can be designed to block actions that would invalidate a vote or to warn the voter of possible errors in the ballot before the ballot is cast, thus providing an opportunity to correct his or her ballot. For example, a direct recording electronic (DRE) system can prevent a voter from overvoting by forcing the selection of an “excess” choice to result in the deselection of a previously selected choice, or by not allowing new selections beyond a certain number and generating a message that informs the voter of a mistake. In the case of undervoting, a DRE system can warn a voter if a particular contest has been left blank but without forcing him or her to cast a vote in that contest.35 (Both punch card and optical-scan voting systems can warn voters of overvotes if ballots are counted in real time by a precinct-based system.)
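To illustrate the second of those overvote-prevention approaches, here is a minimal sketch of the kind of ballot logic described above. The class, method, and contest names are invented for the example and are not drawn from any actual voting product; overvotes are refused with a message, and undervotes are flagged, but not blocked, at review time.

```python
# Minimal sketch of DRE-style ballot logic (illustrative names only): new
# selections beyond the allowed number are refused with a message, and an
# undervote produces a warning, not a block, at review time.

class Contest:
    def __init__(self, title, choices, max_selections=1):
        self.title = title
        self.choices = choices            # candidate names on the ballot
        self.max_selections = max_selections
        self.selected = []                # the voter's current selections

    def toggle(self, choice):
        """Select or deselect a choice, refusing to exceed the limit."""
        if choice not in self.choices:
            raise ValueError(f"{choice!r} is not on the ballot for {self.title}")
        if choice in self.selected:
            self.selected.remove(choice)  # touching a selected choice deselects it
            return "deselected"
        if len(self.selected) >= self.max_selections:
            # Overvote prevention: refuse and explain, rather than accept silently.
            return (f"Only {self.max_selections} selection(s) allowed in "
                    f"{self.title}; deselect a choice first.")
        self.selected.append(choice)
        return "selected"

    def review_warning(self):
        """Return an undervote warning, if any, before the ballot is cast."""
        if len(self.selected) < self.max_selections:
            return (f"You have made {len(self.selected)} of {self.max_selections} "
                    f"possible selection(s) in {self.title}. Cast anyway or go back?")
        return None


if __name__ == "__main__":
    contest = Contest("County Commissioner", ["Adams", "Baker", "Chen"])
    print(contest.toggle("Adams"))     # selected
    print(contest.toggle("Baker"))     # refused: would be an overvote
    print(contest.toggle("Adams"))     # deselected
    print(contest.review_warning())    # undervote warning at review time
```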

4-28. What accommodations have been made to address the special concerns and needs of people with disabilities? Citizens with disabilities have a right to a voting experience that is fair and acceptably straightforward—a requirement that is codified in the Help America Vote Act of 2002. Note that these issues are not simply problems of technology. In some instances, assistance from poll workers may be necessary.

4-29. What accommodations have been made to address the needs of non-English speakers, voters with low literacy skills, and citizens from various cultural, ethnic, and racial groups? All citizens have a right to vote regardless of their background, language group, or cultural situation. Electronic voting systems offer the possibility that a ballot can be easily switched to different languages or rendered audible for nonreaders.

4-30. How and to what extent have concerns about the needs of these parties been integrated into the design of the system from the start? A substantial body of experience indicates that attention to such concerns is much more effective at the start of the design process than at the end, at which point other decisions have been made that eliminate options that might otherwise have been desirable. (For example, a “screen reader” that tries to render a written ballot into words is often not as successful as a ballot that is designed from the beginning to include auditory interaction.)

35  Error checking can also create voter dissatisfaction. For example, some voters have become accustomed to nonelectronic systems that do not perform error checking. If they violate the ballot logic (e.g., an overvote), their votes do not count, but they have no way of knowing this fact if the votes are tabulated remotely. When faced with an electronic voting system that does perform error checking, the voter may react negatively because it is preventing him or her from voting in the accustomed manner.

4-31. What are the ballot definition capabilities offered to jurisdictions? Ballot definition is the process through which the ballot presented to the voter is laid out. It involves aspects such as font size, graphics, placement and formatting of items, translation into other languages, and so on. Ballot definition issues were responsible for the problems with Florida’s butterfly ballots in the 2000 presidential election. In practice, a voter’s experience is determined by some mixture of the system’s devices for entering input and the appearance of the ballot to the voter. Voting systems must be usable with a wide variety of ballots. That is, a vendor may wish to sell systems to multiple jurisdictions, each of which has different ballot requirements. Even within the same jurisdiction, a number of different ballots may be involved. Ballot design directly affects the ability of voters to understand the issues, recall their decisions, and actually carry out their intentions, and a given technology affects which ballot designs can be implemented. For example, voting systems based on touch-screen technology may be subject to frequent interface modifications that can create difficulties for election officials and voters but that also make possible rapid prototyping of ballots and responsive redesign for error correction.

Vendors have the responsibility of enabling jurisdictions to define ballots. The specific ballot definition capabilities provided to the jurisdiction are of considerable importance, because they can increase or decrease the likelihood of confusing, misleading, or even illegal ballots. (For example, a vendor might provide user-tested and validated templates for jurisdictions to use as a point of departure. Or vendors could provide local election jurisdictions with ballot definition toolkits that enforce usability principles as well as local laws and regulations, to the extent feasible.)
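One way such a toolkit could enforce usability principles is to validate a proposed layout against a checklist before it is accepted. The sketch below is a hypothetical illustration; the specific rules, thresholds, and field names are assumptions made for the example, not requirements drawn from any standard, statute, or vendor product.

```python
# Hypothetical sketch of a ballot-definition toolkit check: a proposed layout
# is validated against a short usability checklist before it is accepted.
# The rules, thresholds, and field names are assumptions for this example.

MIN_FONT_PT = 12              # illustrative minimum font size
MAX_CONTESTS_PER_SCREEN = 1   # illustrative limit to reduce screen clutter

def validate_layout(layout):
    """Return a list of usability problems found in a ballot layout dict."""
    problems = []
    if layout.get("font_pt", 0) < MIN_FONT_PT:
        problems.append(f"Font size {layout.get('font_pt')} pt is below "
                        f"the {MIN_FONT_PT} pt minimum.")
    for screen in layout.get("screens", []):
        contests = screen.get("contests", [])
        if len(contests) > MAX_CONTESTS_PER_SCREEN:
            problems.append(f"Screen {screen.get('id')} shows more than "
                            f"{MAX_CONTESTS_PER_SCREEN} contest(s).")
        for contest in contests:
            if not contest.get("translations"):
                problems.append(f"Contest {contest.get('title')!r} has no "
                                "non-English translations.")
    return problems


if __name__ == "__main__":
    draft = {
        "font_pt": 10,
        "screens": [{"id": 1,
                     "contests": [{"title": "President", "translations": []},
                                  {"title": "Senator", "translations": ["es"]}]}],
    }
    for problem in validate_layout(draft):
        print("WARNING:", problem)
```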

4-32. How is provisional balloting managed? Of course, election officials have the option of insisting that a provisional ballot be processed entirely offline. But a vendor may offer such capabilities online. Online provisional balloting raises a number of issues (one illustrative way of handling the first two is sketched after this list):

  • Segregation of provisional ballots from ordinary ballots. Since a provisional ballot counts only if it is determined later to be cast by a person eligible to cast it, it must be separated from ordinary ballots.

  • Maintenance of voter secrecy. Given that the provisional ballot must be connected in some way to voter-identifying information (so that the voter’s status can be later ascertained), the potential for secrecy violation is manifestly obvious. What mechanisms are available to ensure that voter secrecy rights are respected?

  • Ballot selection. More advanced electronic voting systems may seek to support vote-anywhere voting, in which a voter can present himself or herself at any precinct in the state, identify his or her home jurisdiction, and expect the correct ballot to appear on the screen at his or her voting station. How will this capability be managed?
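The following is a minimal sketch of one possible way to keep provisional ballots segregated from ordinary ballots while keeping voter-identifying information apart from ballot contents. It is purely illustrative: the data structures, function names, and procedure are assumptions for the example and are not drawn from any actual voting system or state procedure.

```python
# Purely illustrative sketch of segregating provisional ballots from ordinary
# ballots while keeping voter-identifying information apart from ballot
# contents.  Nothing here is drawn from any actual voting system or state
# procedure; data structures and names are assumptions for the example.

import secrets

ordinary_ballots = []      # counted in the normal way
provisional_ballots = {}   # ballot_id -> ballot contents, held back from the count
provisional_voters = {}    # ballot_id -> voter-identifying info, stored separately

def cast_ordinary(ballot):
    ordinary_ballots.append(ballot)

def cast_provisional(ballot, voter_info):
    """Hold the ballot aside; link it to the voter only by a random identifier."""
    ballot_id = secrets.token_hex(8)   # no voter data stored with the ballot itself
    provisional_ballots[ballot_id] = ballot
    provisional_voters[ballot_id] = voter_info
    return ballot_id

def resolve_provisionals(is_eligible):
    """After eligibility review, add eligible provisional ballots to the count.

    is_eligible: callable taking voter_info and returning True or False.
    Ballot contents are never reported together with voter identity; ineligible
    ballots simply remain uncounted in the provisional store.
    """
    for ballot_id, voter_info in list(provisional_voters.items()):
        if is_eligible(voter_info):
            ordinary_ballots.append(provisional_ballots.pop(ballot_id))
        del provisional_voters[ballot_id]


if __name__ == "__main__":
    cast_ordinary({"President": "Adams"})
    cast_provisional({"President": "Baker"},
                     {"name": "J. Smith", "home_jurisdiction": "Elsewhere County"})
    resolve_provisionals(lambda voter: voter["home_jurisdiction"] == "Elsewhere County")
    print(len(ordinary_ballots))   # 2: the eligible provisional ballot was counted
```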

4.2.3.3 Usability Testing

Usability testing is done through simulations and experiments as described above. In addition to the response time and error data derived from experiments, it is useful to get subjective data, from questionnaires, focus groups, or both. But a primary lesson from human factors engineering is that the number of different ways machines can confuse people is far larger than one can imagine from even the most careful on-paper analysis. While experienced designers and careful on-paper analyses are important elements of human factors engineering, repeated cycles of realistic and intensive testing with a broad range of users, followed by reengineering to reduce the likelihood of errors, are absolutely essential to the process. A broad range would include people with a diversity of education, socioeconomic backgrounds, technical experience, literacy, and physical, perceptual, language, and cognitive abilities. Realistic testing includes environmental conditions that approximate those found in the polling place, including attendant chaos, noise, and time pressure.

To illustrate the kinds of unusual and not-easy-to-anticipate problems that occur in operational use, consider that a voter may need to switch the language of presentation in mid-stream. Quoting from the field notes of a member of the committee who was observing:

[In observations of early voting for the 2004 General Election in Los Angeles County,] a young, female Asian voter was observed in a Monterey Park early voting location (Monterey Park City Hall, Community Room), on October 29, 2004, at approximately 12:30 pm (the final day of early voting in Los Angeles County for that election). This young woman asked one of the polling place workers for assistance using the voting machine, and she clearly began to have some difficulties with her ballot. Eventually, she requested assistance again, which involved two polling place workers, as she wished to change the language that the ballot was presented in from Chinese to English, in the middle of casting her ballot. Eventually, the polling place workers managed to switch her ballot from Chinese to English on the electronic voting device. This voter was timed as taking almost 24 minutes to vote, from start to finish; other voters at this same location were observed typically taking from about 5 to 7 minutes to vote using the same electronic voting machines.

It is thus reasonable to ask about the nature of usability testing and the range of users involved in such testing.

4-33. What is the range of the subjects used in testing usability? As a general rule, the broader the spread of demographic and socioeconomic characteristics of the test population, the greater the likelihood that potential operational problems will be identified in advance.

4-34. What is the error rate in capturing votes of any given system? How is that error rate determined? A commonly used and well-accepted aggregate metric for this error rate is the residual vote, defined as the sum of overvotes and top-of-ticket undervotes (in which the voter indicates no choice for the most important contest on the ballot, and thus the ballot does not count as a vote). Overvotes are clearly errors, whereas undervotes are entirely legal and may reflect a voter’s preference to refrain from voting in a particular contest. Nevertheless, because the top-of-ticket contest (e.g., the contest for president of the United States) is the most important contest, it is assumed that an undervote for that contest reflects an error on the part of the voter.36 Note that because the voter’s experience is determined by a combination of the voting system, the particular ballot layout, and the particular environment (e.g., ambient noise, lighting, time pressure), a realistic estimate of error rate is obtainable only by undertaking the measurement under circumstances that are very close to those that would prevail on Election Day.
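As a concrete illustration of the residual-vote metric, the sketch below counts overvotes and top-of-ticket undervotes over a set of ballots and reports the total as a fraction of ballots cast. The data and field names are invented for the example; real tabulation is, of course, far more involved.

```python
# Minimal sketch of the residual-vote metric: overvotes plus top-of-ticket
# undervotes, as a fraction of all ballots cast.  Data and field names are
# invented for illustration.

def residual_vote_rate(ballots, top_contest, allowed=1):
    """ballots: list of dicts mapping contest name -> number of marks made."""
    residual = 0
    for ballot in ballots:
        marks = ballot.get(top_contest, 0)
        if marks > allowed or marks == 0:   # overvote or top-of-ticket undervote
            residual += 1
    return residual / len(ballots)


if __name__ == "__main__":
    sample = [
        {"President": 1, "Senator": 1},   # valid top-of-ticket vote
        {"President": 2, "Senator": 1},   # overvote
        {"President": 0, "Senator": 1},   # top-of-ticket undervote
        {"President": 1},                 # valid
    ]
    print(f"Residual vote rate: {residual_vote_rate(sample, 'President'):.1%}")
    # -> 50.0% for this tiny illustrative sample
```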

4-35. What are the submetrics of usability that are applied to evaluate and compare systems? Usability is in general a multidimensional issue, and different voting jurisdictions may place different weights on the various dimensions of usability. For example, a rural jurisdiction serving a voter population that almost exclusively speaks English may well place lesser weight on usability metrics that relate to ballot presentation in languages other than English than would an urban jurisdiction serving a large number of language minorities. Residual vote is a useful aggregate measure of usability, but making specific usability improvements in a voting system requires a more detailed understanding of why voters overvote and undervote. Moreover, residual vote is a conservative measure of error, in that it does not capture voters who vote for a candidate other than the one they intended.

36  To illustrate the use of residual vote as a metric for comparing the performance of different voting technologies, Henry Brady used residual vote to compare the performance of punch cards in 1996 to that of optical scanning in 2002 in Fresno County in California. He found that the residual vote dropped by a factor of about 4 as the result of changing voting technologies. See Henry Brady, Detailed Analysis of Punch card Performance in the Twenty Largest California Counties in 1996, 2000, and 2003, available at http://ucdata.berkeley.edu:7101/new_web/recall/20031996.pdf.

4-36. To what extent, if any, do problems with usability systematically affect one political party or another or one type of candidate or another? Usability problems that have a greater effect on a certain demographic group, for example, may work to the disadvantage of a particular party.

4-37. How is feedback from actual usage incorporated into upgrades to currently deployed systems? The ultimate operational test is Election Day itself, when voting systems get their heaviest workout. Because it is virtually certain that some users will be confused and make errors with any deployed system, it is desirable to have some method for systematically capturing anomalous voter experiences and using information about such anomalies as a point of departure for future upgrades. Vendors and election officials should therefore go out of their way to seek information about voter problems with a given system rather than ignore or, worse still, suppress such reports.

4-38. How does usability testing incorporate the possibility that different jurisdictions may create ballots that are very different from one another? Because the voter’s experience at a voting station depends both on the underlying technology and the way the ballot is presented, it is important that usability testing be conducted across a range of different ballots.

4-39. Who should conduct usability testing on specific ballots? Because the ITAs are not in a position to evaluate specific ballots that jurisdictions may use, ITA qualification does not provide assurances about the usability of a given ballot. Indeed, the soonest that a specific Election Day ballot can be made available is after the relevant primaries for that election. Thus, election officials must either conduct usability testing themselves or engage some other party or parties to do it. An obvious—though hardly disinterested—choice is the vendor. But there may be other parties available to perform such services on relatively short notice.

4.2.3.4 Education and Training

Voter education is challenging. Because many people vote only once or twice a year, they may well forget how to use the systems they used in previous years. Given the rate at which people change residences, some nontrivial number of voters in any given jurisdiction are likely to be first-time voters there, and because different jurisdictions make their own decisions about which voting systems they will acquire, some people will always be voting on unfamiliar equipment. Some devices for entering input, such as touch screens, can behave idiosyncratically in a way that depends on how a particular unit is calibrated. Finally, product upgrades from vendors may change the user interface, resulting in a different “look” and “feel” from election to election. All of this suggests that education or training will be necessary for at least a significant number of voters.


Voter education materials must be comprehensible to a wide range of people, and so should be written so as not to require high levels of education, be available in multiple languages, have visuals that correspond closely to the systems and ballots in use, provide step-by-step instructions, and be available to nonsighted individuals.

4-40. How long does it take a first-time user to become familiar enough with the system to use it reliably and with confidence? As a rule, this question can only be answered by simulation and direct user testing.

4-41. What kinds of educational materials should be prepared and distributed in advance? Many organizations, both partisan and nonpartisan, provide voter education materials that illustrate how to fill out ballots. While these materials are generally oriented toward the specific choices that voters will make, information about the operation of the voting systems that will be used is likely to be helpful to most voters. Such information can be made available in many ways, notably in print and online. Nonpartisan educational materials in multiple formats (e.g., video cassettes, DVD, and online or Web-based) teaching how to operate the units can be available to voters at the polls prior to actual voting.

4-42. To what extent are practice systems available for use before and on Election Day? While good “paper” instructions would be helpful, actual hands-on experience and familiarity would make a world of difference for the voter in operating a voting station. The availability of a demonstration station, configured identically to the ones that voters will actually use, would allow voters who are uncertain about the mechanics of voting to practice ballot casting in a realistic fashion. Even if demonstrator stations are not available in every polling place, making a few available in convenient locations prior to Election Day would help.

4-43. What voter assistance can the voting station itself provide to users? Nothing in principle prevents the voting system from providing information about the mechanics of casting a ballot. For example, voting systems can prevent overvoting (voting for more than one candidate when only one selection is allowed) by providing an indicator that such a condition has occurred and preventing the user from making the ballot final until the problem is corrected. They can also warn the user if an undervote has occurred—that is, that the voter has not made choices for certain offices or propositions—by asking if the undervote was deliberate.

It is also possible to have an online help facility that a confused or uncertain user can invoke. Context-sensitive help (i.e., help that varies depending on where the user is in the voting process) is generally much more helpful than generic advice that the user must read and comprehend before finding what he or she needs. Note also that in the unfamiliar confines of the voting booth, with lines of other voters waiting, voters may feel pressure to complete their votes as quickly as possible. Such pressure increases the likelihood of errors and may reduce the willingness of some voters to use online help facilities.

4.2.4 Reconciling Security and Usability

For a variety of reasons, election officials often believe that security and usability are necessarily traded off against one another. For example, the tension between overaggressive purging and underaggressive purging of a voter registration list reflects this trade-off: Greater security (and reduction of fraudulent voting) is associated with overaggressive purging, while greater accessibility to the polls is associated with underaggressive purging. Maintaining privacy in the voting booth is a matter of security, while allowing another individual inside the voting booth to assist the voter is a matter of usability. And security by obscurity is fundamentally dependent on a denial of access, which itself works against ease of use.

These contrasts illustrate a more general point—in the design of any computer system, there are inevitably trade-offs among various system characteristics: better or less costly administration, trustworthiness or security, ease of use, and so on. Nevertheless, in the design of electronic voting systems, the trade-off between security and usability is not necessarily as stark as many election officials believe. That is, there is no a priori reason a system designed to be highly secure against fraud cannot also be highly usable and friendly to a voter.

The reason is that the security and usability requirements are directed at different targets. The biggest threat to security per se is likely to come from individuals with strong technical skills who are working behind the scenes to subvert an election. By contrast, usability is an issue primarily for the voter at the voting station on Election Day. Because these populations are qualitatively different, efforts to mitigate security problems and efforts to mitigate usability problems can proceed for a long time on independent tracks, even if they may collide at some point after attempts at better design or better engineering have been exhausted.

This point also has implications for the testing and certification process. Specifically, because security and usability are in large measure independent attributes rather than ones that must be traded off against each other, different skill sets are necessary for a competent evaluation of each. Thus, it cannot be assumed that experts in one area are necessarily competent to evaluate issues in the other.


