Electronic Voting System Usability Issues

Benjamin B. Bederson,

Bongshin Lee,
Robert M. Sherman

Human-Computer Interaction Lab

Computer Science Department,
Institute for Advanced Computer Studies

Univ. of Maryland,

College Park, MD 20742

{bederson, bongshin}@cs.umd.edu


+1 301 405 2764



Paul S. Herrnson

Center for American Politics and Citizenship

Dept. of Government and Politics

Univ. of Maryland,

College Park, MD 20742


+1 301 405 4123



Richard G. Niemi

Department of Political Science

Univ. of Rochester,

Rochester, NY 14627-0146


+1 585 275 5364


Abstract
With the recent troubles in U.S. elections, there has been a nationwide push to update voting systems. Municipalities are investing heavily in electronic voting systems, many of which use a touch screen. These systems offer the promise of faster and more accurate voting, but the current reality is that they are fraught with usability and systemic problems. This paper surveys issues relating to the usability of electronic voting systems and reports on a series of studies, including one with 415 voters, using the new systems that the State of Maryland purchased. Our analysis shows that these systems work well but have several problems, and that a significant minority of voters have concerns about them.


Keywords: Electronic voting systems, Direct Recording Electronic (DRE), voting usability.


Introduction
A major lesson derived from the 2000 presidential election in the U.S. is that the manner in which voters cast their ballots is important. Voting technology and ballot design can influence election outcomes, affect how voters feel about their ability to exercise their right to vote, and influence voters' willingness to accept the results of an election as legitimate. It was also discovered that most polling places nationwide employ outdated technology, including unreliable punch-card ballots and mechanical lever machines, with only a third of the electorate using modern computerized technology, such as optical scanning systems or direct recording electronic (DRE) systems with ATM-style touch-screen voting [13] [14]. And, because the poor and ethnic and racial minorities were more likely to cast their ballots on outdated systems, their votes were among the least likely to be counted.

States have responded to the problems associated with the 2000 elections by commissioning studies, revamping election administration, redesigning ballots, and, in some cases, by investing heavily in new voting equipment. A major problem, however, is that there is little solid information about the interface between voters and various voting systems and ballots on which to base or evaluate the success of the massive reform and significant expenditures that are coming.

Voting systems present a unique challenge to interface designers because of the nature of the social contract our (U.S.) society has for voting. Unlike just about every other system in our society, voting systems must be usable by every citizen at least 18 years old. This includes the elderly, the disabled, the uneducated, and the poor. It also includes individuals who, for whatever reason, have opted out of using electronic machinery: those who go into a bank and see a teller, don't scan their own groceries, and pay for gasoline with cash.

The challenges are even harder because there is little or no training available for voters. The first time most voters ever touch the voting system is the moment they vote. And once they start voting, there is tremendous social pressure to do it independently, without help. With people watching, inadequately trained poll workers, busy people waiting in line, the social importance of voting, and the value placed on secrecy, voters may become anxious and afraid to ask for help.

Finally, the systemic issues of how voting machines get purchased and evaluated are problematic as well. State or county purchasers are usually more concerned about cost than usability. And once the systems are purchased, the public has no access to the machines for evaluation. Election workers who design ballots tend not to have experience in usability and screen design. Poll workers who deploy the voting systems have minimal training to cope with the inevitable problems.

Electronic voting systems offer promise as well, from the opportunity to change font size and language on demand, to offering disabled users customized access, to accurate and fast recording and tabulation of votes. But there are many issues that add up to a risk that voters may become disenfranchised. And this is especially true for the elderly and citizens at the margins of society.

In this paper, we lay out the issues of electronic voting systems, report on a study we performed on new systems that the State of Maryland purchased, and make suggestions for improvements.


The unusual requirements of voting systems bring special concerns related to supporting all citizens in their access to, and trust of, voting machines. In addition, there are further concerns relating to the possible bias of ballot design, the anonymity of voters, and the validity of the recorded vote.

Accessibility
One of the largest issues related to DRE voting systems is accessibility. For designers of computer programs, accessibility is the easiest design factor to ignore. Many classes of voters can easily be disenfranchised by a voting system that accommodates only "normal" users.

The most obvious of these is disabled voters. The federal Voting Accessibility for the Elderly and Handicapped Act (VAEHA), passed in 1984, mandates that polling places be available and usable by the elderly and handicapped [1]. Regardless, a study of polling place accessibility performed by the National Voter Independence Project in 1998 and 2000 revealed that 47% of polling places had some type of accessibility problem [21].

According to the National Organization on Disability, DRE balloting systems are the most accessible technology, compared to lever, punch-card, optical scan, and hand-count systems [3]. Nonetheless, there is considerable diversity among DRE products, and our own evaluation of Maryland's audio-only system showed many problems.

Age and Technical Experience

In addition to general disabilities, unfamiliarity with computers can cause problems in DRE elections. Research suggests that older adults consistently perform computer-based tasks more poorly than younger adults, both in the amount of time required to perform a task and in the number of errors made [18]. This is likely due in part to a perception among older adults that they might inadvertently damage the computer, or to general uneasiness with the technology. Younger users also are more comfortable with the degree to which their interaction changes computer output, and with the changing nature of the computer's control objects.

It may also be that a decrease in manual dexterity and in eye-hand coordination accounts for greater difficulty in operating such systems. In one recent study, age was positively correlated with difficulty in performing tasks with a computer mouse [24]. Although popular DRE systems do not use a computer mouse, similar issues are present. Older adults have greater difficulty viewing a computer screen, and correctly conceptualizing the relationship between screen or button manipulation and program activity may be a problem.

VAEHA requires three types of accommodations: access, assistance, and instruction. While these accommodations do not address the issue of ballot design, and the VAEHA does not apply to state elections, courts have used the criteria in the VAEHA to evaluate whether a state election complies with the Americans with Disabilities Act (ADA) [2].

Bias
Aside from accessibility, the issue of bias presents both a logistical and a legal problem for elections. Ballot design is fairly contentious, in part because candidates believe that their location on the ballot changes the likelihood that a voter will select them. For example, candidates listed first on a ballot are generally favored [11]. For this reason, many jurisdictions pre-select a designated balloting order; often, candidates are listed by party in a specified configuration, by lottery, or alphabetically.

Electronic ballots cannot avoid these pitfalls for the same reason that paper ballots cannot; names on a ballot must be presented in some fashion. Computers, of course, can randomize the presentation of names, but this creates difficulty for users who have pre-planned their voting.
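To make this concrete, here is a minimal sketch, in Python, of per-ballot randomization using an unbiased Fisher-Yates shuffle. It is our own illustration of the idea, not code from any voting system vendor.

    import secrets

    def randomized_order(candidates):
        """Return a fresh, uniformly random presentation order for one ballot."""
        order = list(candidates)
        # Fisher-Yates shuffle driven by a cryptographically strong RNG,
        # so no ballot position is systematically favored.
        for i in range(len(order) - 1, 0, -1):
            j = secrets.randbelow(i + 1)
            order[i], order[j] = order[j], order[i]
        return order

    # Each ballot gets its own ordering, e.g., ['Clark', 'Adams', 'Baker']
    print(randomized_order(["Adams", "Baker", "Clark"]))

A fixed rotation of the starting position across ballots would neutralize first-position bias while keeping the order predictable for voters who have pre-planned their choices.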

Although hardware maintenance is an issue with any technology, it becomes especially important with touch screen voting systems. With repeated use, touch screens can wear out. In particular, problems can develop with localized sensitivity. This means that, if the equipment is not properly maintained and replaced, it could physically become more difficult to vote for a popular candidate than to vote for an unpopular one. Part of the standard for any electronic voting technology must be regular equipment maintenance schedules that avoid such overuse problems.

Accountability and Verifiability

Traditionally, votes were cast on paper and counted by hand. Voters were confident that the marks they made on ballots reflected their intended vote. Voting machines that used levers and punch-card systems also provided voters with a high degree of confidence that they cast their votes as intended. Until the 2000 elections, voters also routinely assumed their votes were properly counted. Because they are paperless, DRE systems raise the question: how can one know that when a voter chooses a particular candidate on the screen, a vote for that candidate is recorded?

The most pressing verifiability problem with computerized voting is that the systems are provided by private companies, and the government usually has no oversight of the production of the systems beyond choosing whether or not to use them. It is easy to imagine a scenario whereby a malicious, or simply careless, programmer sets up a situation in which votes for Candidate A appear to go to Candidate A on the voter's display but are actually tabulated for Candidate B. More critically, suppose that the same situation occurred, but only with a small percentage of the votes cast. This might go unnoticed, yet it might affect the election. The use of a DRE system in this case would be catastrophic, because there would be no way to review voting records to conduct a recount. With paper ballots, voters can visually inspect the official record, but with computer-based voting this is next to impossible.

A simple solution to this problem is to provide the user with a printed record of the votes electronically recorded. Before leaving the polling place, the voter would be required to certify the contents of the paper record and place it into a ballot box. The printed records could then be manually counted in the event of a challenge, and this procedure would foil any attempt at falsifying votes within the voting system itself. This approach, however, has not been implemented in any commercial system that we are aware of.
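To illustrate why a certified paper record makes a meaningful recount possible, the following sketch tallies the electronic record and the hand-counted paper record and reports any per-candidate discrepancy. The data layout is our own illustrative assumption, not part of any deployed system.

    from collections import Counter

    def reconcile(electronic_record, paper_record):
        """Compare the machine tally against a hand count of the certified
        paper records; return candidates whose counts disagree."""
        e, p = Counter(electronic_record), Counter(paper_record)
        return {name: (e[name], p[name])
                for name in e.keys() | p.keys()
                if e[name] != p[name]}

    # A single vote silently shifted from A to B inside the machine would
    # surface here as a discrepancy on both candidates.
    print(reconcile(["A", "A", "B"], ["A", "A", "A"]))
    # -> {'A': (2, 3), 'B': (1, 0)} (key order may vary)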

Existing DRE Systems
Many DRE systems are available today, and it is not always easy to identify the strengths and weaknesses of each product. Some, such as the Hart InterCivic eSlate, are quite attractive, yet suffer from major usability problems. More than ever, system evaluators need to become accustomed to thinking in terms of usability, and especially usability for elderly and disabled voters.

We briefly describe here some of the major systems available in the marketplace today to give a sense of what is currently being used.

Diebold AccuVote-TS

The Diebold AccuVote machine is the system that we tested, and it is in use in the State of Maryland. It uses a touch screen (Figure 1) and a card reader; the voter receives a card after being authenticated by polling officials. Detailed screen shots are shown in the section about the study (Figures 6-8).


Figure 1: Diebold AccuVote-TS system

Hart InterCivic eSlate

The Hart InterCivic eSlate (Figure 2) is a hardware-based voting device with no touch screen. It displays the ballot in a page-at-once format (displaying multiple races on one page). Voters navigate using triangle-shaped "prev" and "next" keys. Voting itself is accomplished by rotating a dial labeled "select" until the desired candidate is highlighted. To vote, the "enter" key is pressed. After all votes have been entered, the voter presses the red "cast ballot" key.


Figure 2: Hart InterCivic eSlate system

MicroVote Infinity Voting Panel

The MicroVote Infinity Voting Panel (Figure 3), like the eSlate, does not use a touch screen. Rather, it has identical rows of hardware buttons on the left and right of the display. Voters use this device just as they would a traditional voting system: they press the button next to the candidate they wish to select.

The page control buttons, at the bottom of each row, are physically identical to the vote selection buttons. Like several other systems, the Infinity Voting Panel sports a large red "Cast Vote" button. This device also offers feedback in the form of cast-vote lights, which indicate whether or not a vote has been successfully cast.

Figure 3: MicroVote Infinity Voting Panel

SureVote
The SureVote company provides a system that offers higher protection against malfunction or fraud (Figure 4). At voting time, users authenticate themselves as eligible voters using a numeric personal identification code and a numeric ballot code. They then enter a four-digit vote code for each race. For example, a voter might enter 2304 to indicate a vote for George W. Bush for President. An error message is presented if the entered code is invalid for that race. If the code is valid, the vote is sent to multiple vote storage servers scattered across the country. Each server sends back a numeric response, which is combined by the client into another four-digit code, the "sure code." If the voter's sure code matches what is shown on the ballot, the voter can be certain that the vote was counted properly.
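The following sketch shows the shape of this verification step. SureVote has not published its combining function; summing the server responses modulo 10000 is purely our illustrative assumption, as are the stand-in servers.

    def sure_code(responses):
        """Combine per-server numeric responses into a four-digit code.
        The mod-10000 sum is an assumed stand-in for the real function."""
        return "{:04d}".format(sum(responses) % 10000)

    def vote_verified(vote_code, servers, printed_sure_code):
        """Send the vote code to every storage server and compare the
        combined replies with the sure code printed on the ballot."""
        responses = [send(vote_code) for send in servers]
        return sure_code(responses) == printed_sure_code

    # Hypothetical servers that each acknowledge the vote with a number.
    servers = [lambda code: 1111, lambda code: 2222, lambda code: 3333]
    print(vote_verified("2304", servers, "6666"))  # True: 1111+2222+3333 = 6666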


While this system is technically interesting and may provide one mechanism whereby voters can gain more confidence in the system, it raises usability issues. Requiring voters to enter and compare numbers is likely to be problematic for many users. Even a change as small as giving the vote code and the sure code different numbers of digits could reduce confusion.


Figure 4: SureVote DRE system

VoteHere Platinum

VoteHere Platinum uses a completely software-based touch screen interface. It can be run on any personal computer with a touch screen monitor. However, this also means that the system does not offer hardware buttons or any of the benefits they provide. In addition, it introduces the new risk that the computer on which the software runs may have been tampered with.

The VoteHere system presents one race on the screen at a time; the voter presses the "next" and "back" buttons at the top of the screen to navigate between races (Figure 5). However, the number of pages is not indicated.

Figure 5: VoteHere Platinum system

Related Work
There is a long history of research in political science on the impact of procedural reform on elections, but very little within the HCI community. Mercuri has been investigating a wide range of electronic voting issues for some time [19].

More generally, the political science community has focused on changes in the form of ballots and shifts in the administrative procedures for casting them. Ballot reform was a frequent topic of papers and articles in the earliest days of the discipline, including prescriptions about excessive length and lack of uniformity, but also evidence of the effects of ballot design on roll-off and split-ticket voting [4] [6].[1] Fifty years later, there was a flurry of new studies on the relationship of ballot formats to roll-off and split-ticket voting. Since then, there has been a small but continuing series of studies on these effects, along with the potential effects of candidate name order.

These studies generally conclude that office-bloc ballots result in greater roll-off than party-column or party-row ballots [15, p. 212] [30],[2] but that the provision of straight-party circles or levers tends to reduce roll-off, at least in partisan contests [22, pp. 109-110] [25] [30]. More recently, studies have shown that electronic voting machines result in less roll-off, presumably because they alert voters to whether they have completed the ballot [22] [23]. Studies have also shown the party-column ballot encourages straight-ticket voting in comparison to the office-bloc ballot [5] [10, ch. 11] [27].

Studies of ballot order effects often report that candidates listed first on the ballot are favored, at least in nonpartisan and non-salient elections [5] [11] [17] [20]. Bowler also found that ballot propositions were less favored the further down they appeared on the ballot [7].

In the aftermath of the 2000 U.S. election, a number of studies evaluated various aspects of the Florida vote, somewhat altering the focus of investigation in the process. Analyses showed, for example, that there were many more overvotes in Palm Beach County, where the butterfly ballot design was used [31].[3] The U.S. Commission on Civil Rights [29] concluded that poorer and minority communities more often utilized less modern equipment that is prone to overvotes and other kinds of errors. The Commission also noted a failure of the state to educate voters on the mechanics of voting.

The largest post-2000 study was conducted by an interdisciplinary group of voting and computer specialists from the California Institute of Technology and the Massachusetts Institute of Technology. The report from this group found, surprisingly, that residual votes[4] were typically greater in jurisdictions using electronic voting than in those using other kinds of machines (with the exception of punch cards), even when confounding factors were controlled [9]. Kimball also found that electronic voting machines produced a slightly higher residual vote than optical scan ballots [16].

Only rarely have researchers considered whether or how ballot features might confuse voters. In one noteworthy though very small-scale comparison, it was found that the labeling of rows on lever machines resulted in considerably different ballot order effects [5]. In another study, it was found that placing a salient race at the bottom of the ballot caused some voters not to cast a ballot, though a confusing straight-party device may also have contributed to this decline [12], especially as another study found no such effect [8]. The fact that particular demographic groups, including the elderly, poor, and uneducated, are more likely to cast incomplete ballots also suggests the possibility of confusion, though indifference and lack of knowledge about the candidates might also explain these results [23] [30] [31]. The only experimental studies of the voting process, while very small scale, also revealed some confusion on the part of voters [26] [28].

Finally, the events surrounding the 2000 U.S. presidential election highlighted an aspect of voting that has not been dealt with since the introduction of the Australian (secret) ballot at the end of the 19th century: namely, that voting technology and ballot design affect how voters feel about their ability to exercise their right to vote and influence voters' willingness to accept the legitimacy of the election results. Thus, despite high quality research on turnout, roll-off, split-ticket voting, and order effects, there is little information about how to reform voting technology and ballot design in ways that will develop, encourage, and support perceptions of the voting process as an accurate and fair reflection of voters' intentions.

Evaluation of the Diebold AccuVote-TS
At the request of election officials from Allegany, Dorchester, Montgomery, and Prince George's counties in Maryland, we evaluated the Diebold AccuVote-TS voting system using three commonly used techniques: expert review, close-up observation, and field testing.[5] (The full study is available at http://www.capc.umd.edu/rpts/MD_EVoteMach.pdf.) The results of each technique point out specific difficulties with this system and also indicate general issues that other voting system manufacturers should keep in mind. We did not have the ability to design the ballot or change the interface at all, and instead evaluated the system as it was given to us by the election officials. Figures 6-8 show screen shots of the system we evaluated.


Figure 6: Diebold help page


Figure 7: Diebold ballot casting page


Figure 8: Diebold review page

Expert Review

We first employed an expert review to analyze the DRE system. Expert review consists of having several individuals with significant experience in user interface design walk through the system in detail, perform representative tasks, and record weaknesses.

We performed our expert review with five faculty and staff at the Human-Computer Interaction Lab at the University of Maryland. Each person spent approximately one hour using the DRE system and independently reported their concerns and suggested solutions. The standard and audio-only systems were evaluated separately.

We summarize each problem area we found, along with the number of experts who reported that problem.

Inconsistent Terminology/Labeling (5 reviewers). Several words were confusing, inconsistent, or didn't match the instructions.

Color usage (4 reviewers). Several dark background colors resulted in poor contrast with the black text.

Inserting/Removing card (4 reviewers). It was difficult to insert the card, and to know where to insert it in the first place. Confusion was compounded because there is a short delay before the machine reacts.

Help / Instructions (4 reviewers). The instructions are long and unclear, and no help button is visible during voting.

Layout (4 reviewers). It wasn't clear what the ballot would look like when the list of candidates is more than a column long or when names are exceptionally long. The review screen may cause confusion because it is organized differently from the voting screens.

System information shown (4 reviewers). The startup screen showed system-level information irrelevant to voters.

Glare on screen (3 reviewers). Screen glare may cause problems in some polling places.

Changes / Feedback (2 reviewers). Voters must deselect an existing vote before selecting another candidate. No warning is given for overvoting.

Poor graphics/design quality (2 reviewers). The images are low resolution, the colors are strong, and there are too many font styles.

Privacy (1 reviewer). Others might be able to see one's vote as it is being cast.

Audio-only System

1. Inappropriate Keypad Mapping (5 reviewers). The keypad mapping is inconsistent and unusual, making it hard to remember which number is assigned to which function.

2. Audio Quality (5 reviewers). Static, clicks, and delays make the audio segments difficult to understand.

3. Review Ballot (3 reviewers). There is no way to review the ballot before casting it.

4. Feedback (2 reviewers). The buttons don't give any audio feedback when pressed. No warning is given for overvoting.

5. Cast Ballot (2 reviewers). Voters are forced to go through the entire ballot, which they may not wish to do.

6. Volume Control (1 reviewer). The volume control doesn't indicate which direction makes the audio louder or softer.

Close Observation

We then observed and videotaped non-experts responding to all aspects of the voting process, including inserting the ballot card, selecting candidates, and casting the ballot. We employed the think-aloud method. Finally, we had the voters fill out a questionnaire describing their reactions to the new voting system. For each participant, we measured how long it took to vote, from the time they walked up to the machine to the time they left it. We also counted how many errors they made.

For this part of the study, we observed 47 members of the University of Maryland community, primarily students but also some faculty and staff. The election that was tested included five races and one question that was split between two screens.

The average time to complete the ballot was 2 minutes and 10 seconds. All participants but one completed their vote successfully; the one problem occurred when a participant was unable to figure out how to write in a candidate.

The participants generally liked the DREs, rating their overall comfort 7.7 out of 9 (on a 1-9 scale where 9 represented the highest level of comfort). They found the screen layouts and color more problematic (6.9 out of 9).

Some representative comments from participants include:

• Easy to use, straightforward

• Excellent idea

• Inserting card was very confusing.

• Concerns about reliability

• Colors are not well chosen.

• Font size could be bigger.

• Layout of the ballot was confusing.

The primary issues we observed from the videotapes follow.

1. System Failure. Reliability is a main concern with an electronic voting system. We didn't expect to measure the robustness of this system because of the simplicity of the simulated election and the small number of subjects and machines. However, at the very start of the experiment, one of the two machines malfunctioned and was rendered unusable (it would not return the voter card).

2. Card Insertion. Many participants had difficulty inserting the card, which begins the voting process. They expected the machine to draw the card in, as ATMs do, so they put the card in the slot gently and waited. But this system requires the card to be pushed in firmly until it clicks. The card then becomes inaccessible until the ballot is cast and the card is ejected.

3. Layout. Only a small number of subjects were concerned with the layout of the mock ballot, which in part reflects its lack of realism: the entire ballot was only two pages. (We had to test the ballot and system as they were given to us by the counties.)

4. Language Selection. There were two language options, English and Spanish, with English selected by default. The shape and layout of the buttons were not clear, so most subjects touched the English button and then waited for the next screen. It often took several seconds for voters to recognize that they also had to press the Start button.

5. Undervoting. The system provides a summary page once the voter has sequenced through the entire ballot. This page indicates, via a distinct color, the races in which a candidate has not been selected. However, if a multi-candidate race was undervoted (i.e., fewer than the full number of candidates was selected), the race is not highlighted. It appears on the summary page as if a full set of candidates had been selected.
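A correct review screen needs to compare each race's selections against the number of seats to fill, not merely against zero. A minimal sketch follows; the data layout is our own illustration, not Diebold's implementation.

    def races_to_highlight(ballot):
        """Return the races a review screen should flag as undervoted:
        any race with fewer selections than seats to fill."""
        return [race["title"] for race in ballot
                if len(race["selected"]) < race["seats"]]

    ballot = [
        {"title": "Governor", "seats": 1, "selected": ["Candidate A"]},
        # Two of three seats filled: undervoted, yet the system we tested
        # would not highlight this race on its summary page.
        {"title": "School Board", "seats": 3, "selected": ["B", "C"]},
    ]
    print(races_to_highlight(ballot))  # ['School Board']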


Field Study

Finally, we designed a field study to be administered to a more representative group of individuals in a more natural setting. The study was designed to have three components: 1) the observation and recording of information about individuals' interactions with the new voting systems; 2) the administration of a questionnaire to record the voters' assessments of the systems; and 3) the administration of parts 1 and 2 to a large heterogeneous group of voters, including some Spanish-speaking individuals who were to receive a Spanish language ballot and questionnaire. The study was implemented by election officials.


Unfortunately, the field study had two major shortcomings. The election officials did not record information about individuals' interactions with the voting machines, and they did not involve as large or heterogeneous a population as would have been ideal. The latter limitation was mainly due to the fact that the majority of participants (365) came from the wealthier Montgomery County, with only 50 coming from Prince George's County and none from the other counties. Thus we have no record of voters' interactions with the voting machines, and we only have responses from a very narrow slice of the population of Maryland voters.

The voters who participated in the study consisted of individuals from a relatively affluent retirement community, four libraries, a shopping mall, and the lobby of the Prince George's County Administration Building. Because they are mostly an economically and socially elite group, whose levels of educational attainment, computer usage, and Internet usage are higher than those of Maryland voters as a whole, the experiences these citizens had with the new voting system are not representative of those of Maryland voters in general.

Virtually absent from the field test are the experiences of individuals in rural or farming communities, individuals 34 years of age or younger (more than 60 percent of the participants were over 65), individuals who have not earned a high school diploma (over half had a degree from a four-year college and 32 percent had done some post-graduate work), members of most minority populations (Latinos, Asian Americans, Native Americans, and multiracial citizens each comprised less than 3 percent of the participants, and African Americans accounted for only 8 percent), and individuals born outside the United States or whose native language is not English. Thus the results paint an incomplete and probably overly favorable picture of how Marylanders can be expected to respond to the new voting systems absent a major educational campaign.

The same caveats about the simplicity of the ballot discussed above apply here, but it bears restating that the challenge participants faced was lower than what voters are likely to encounter on election day.

The questions in the field study used a scale of 1 to 9, where 1 represented a negative characteristic and 9 represented a positive one.

1) When asked to report their overall impressions about using the system (rated between difficult and easy to use), 80 percent of the respondents reported the system was easy to use (rated 8 or 9), 10 percent reported it was moderately easy to use (rated 7), and the remaining 10 percent indicated it was anywhere from difficult to somewhat challenging to use (rated from 1 to 6). Although 10 percent seems a small portion, it is important to recall that this is an elite group, and 10 percent of Maryland's voting age population equals roughly 383,000 voters.

Despite the overall homogeneity of the sample, there was some variation of opinion among the respondents. Individuals who own a personal computer, use computers frequently, or live in a city or suburban area had more favorable overall impressions of the new voting system than did others. Women had more favorable impressions than did men. (Note: all of the comparisons reported here and below are statistically significant.)

2) When asked to report whether they felt comfortable using the system (rated between low comfort and high comfort), 86 percent of the respondents reported they were comfortable using the system (rated 8 or 9), 7 percent reported they were moderately comfortable (rated 7), and the remaining 7 percent indicated they were anywhere from uncomfortable to somewhat comfortable using the system (rated from 1 to 6). Once again, women and individuals who own personal computers, use computers frequently, or live in a city or suburban area gave more favorable responses.

3) When asked how easy it was to read the characters on the screen (rated between difficult and easy to read), 86 percent of the respondents reported it was easy to read the screen (rated 8 or 9), 8 percent reported they found the screen moderately easy to read (rated 7), and the remaining 6 percent indicated they found the screen anywhere from hard to somewhat easy to read (rated from 1 to 6). Older individuals and those with higher levels of education had more difficulty reading the characters on the screen, reflecting what is generally known about the eyesight of these groups.

4) When asked to assess the terminology on the voting system's screen (rated between ambiguous and precise), 83 percent of the respondents reported that the terminology was precise (rated 8 or 9), 10 percent reported they found it moderately precise (rated 7), and the remaining 7 percent indicated they found the terminology anywhere from ambiguous to somewhat precise (rated from 1 to 6). Individuals who use personal computers less frequently were the most likely to find the terminology ambiguous.

5) When asked to report whether correcting mistakes was easy (rated between difficult and easy), 81 percent of the respondents reported it was easy (rated 8 or 9), 11 percent reported it was moderately easy (rated 7), and the remaining 8 percent indicated it was anywhere from difficult to somewhat challenging (rated from 1 to 6). Individuals who use computers frequently found it easier to correct mistakes than did others.

6) When asked to report whether they trusted that the system recorded the votes they intended to cast (rated between did-not-trust and trust), 85 percent of the respondents reported they trusted the system (rated 8 or 9), 7 percent reported they trusted the system moderately (rated 7), and the remaining 8 percent indicated they did not trust the system or only trusted it somewhat (rated from 1 to 6). Individuals who use computers frequently reported having less trust in the new voting systems than did others. This result probably stems from their greater awareness of the limitations of computer technology, exposure to computer crashes, familiarity with viruses, and other challenges facing the computer industry.


Conclusion
Our efforts to understand electronic voting systems in general, and the specific ones being used in Maryland in particular, leave us optimistic, but concerned. These systems have promise, but the bottom line is that about 10% of the voters we talked to had significant concerns. While 90% satisfaction may be acceptable for some usability studies, we feel strongly that digital government initiatives in general, and voting systems in particular, must be held to higher standards. With important national elections being decided by less than 1% of the voters, leaving 10% unconfident about their vote is a major problem.

These systems offer the promise of being more accurate and faster to tabulate while increasing accessibility for a broad set of users. But the reality is that we aren't there yet. The economics and politics of the situation push usability and voter trust to the back burner. And the fact that these systems, once purchased, are likely to be used for many years makes these issues all the more important.

The good news is that the software can still be updated, further usability studies can still be performed, and these systems can be made excellent. We encourage election officials to interpret their mission broadly and reach out to the community of design and usability professionals who can help make these systems great for all users, which will increase voter confidence and can, in the end, have wide-reaching results.

We encourage the election officials involved in purchasing these machines to make usability a priority: put pressure on vendors to meet existing usability guidelines, establish their own metrics, and perform tests with representative users to measure success.

To make this happen, we suggest that it be made easier for election officials to understand the usability of systems they are considering purchasing. To that end, we are creating a simple protocol that non-experts can use to perform the kind of studies we described in this paper. We hope that such a protocol can help the people involved in system selection and design understand some of the broader usability and accessibility issues raised in this paper.

Finally, as citizens and HCI professionals, we should all communicate with our local election officials to explain the importance of these issues, and offer our help.

Acknowledgments
We appreciate the opportunity to have worked with election officials from Allegany, Dorchester, Montgomery, and Prince George's counties in Maryland.

This work was supported in part by NSF planning grant EAI-0201634.

References

[1] Voting Accessibility for the Elderly and Handicapped Act. (1994). 42 U.S.C. 1973 et seq.

[2] NAACP Philadelphia Branch, et al. v. Ridge, et al. (2000). LEXIS 11520. U.S. District Court.

[3] Voting System Accessibility Comparison. (2001). National Organization on Disability, Washington, DC.

[4] Allen, P. L. (1906). Ballot Laws and Their Workings. Political Science Quarterly, 1, pp. 38-58.

[5] Bain, H. M., & Hecock, D. S. (1957). Ballot Position and Voter's Choice. Detroit: Wayne State University Press.

[6] Beard, C. A. (1909). The Ballot's Burden. Political Science Quarterly, 24, pp. 589-614.

[7] Bowler, S., Donovan, T., & Happ, T. (1992). Ballot Propositions and Information Costs: Direct Democracy and the Fatigued Voter. Western Political Quarterly, 45, pp. 559-568.

[8] Bullock, C. S. I., & Dunn, R. E. (1996). Election Roll-Off: A Test of Three Explanations. Urban Affairs Review, 32, pp. 71-86.

[9] Caltech/MIT Voting Technology Project. Residual Votes Attributable to Technology: An Assessment of the Reliability of Existing Voting Equipment (2001). http://www.vote.caltech.edu/Reports/.

[10] Campbell, A., Converse, P. E., Miller, W. E., & Stokes, D. E. (1960). The American Voter. New York: Wiley.

[11] Darcy, R., & McAllister, I. (1990). Ballot Position Effects. Electoral Studies, 9(1), pp. 5-17.

[12] Darcy, R., & Schneider, A. (1989). Confusing Ballots, Roll-Off, and the Black Vote. Western Political Quarterly, 42, pp. 347-364.

[13] Election Data Services Inc. 1998 Voting Equipment Study Report (1998). http://electiondataservices.com/content/vote_equip.htm.

[14] Federal Election Commission. The administrative structure of state election offices: voting systems (2000). http://www.fec.gov/elections.html.

[15] Key, V. O. Jr. (1956). American State Politics. New York: Knopf.

[16] Kimball, D. C., Owens, C., & McAndrew, K. (2002). Unrecorded Votes in the 2000 Presidential Election. Unpublished manuscript, University of Missouri-St. Louis.

[17] Krosnick, J. A., Miller, J. M., & Tichy, M. P. (in press). An Unrecognized Need for Ballot Reform: The Effects of Candidate Name Order on Election Outcomes. New York: Oxford University Press.

[18] Kubeck, J. E., Delp, N. D., Haslett, T. K., & McDaniel, M. A. (1996). Does Job-Related Training Performance Decline With Age? Psychology and Aging, 11, pp. 92-107.

[19] Mercuri, R. (2000). Electronic Vote Tabulation Checks & Balances. Doctoral dissertation, University of Pennsylvania, Philadelphia, PA.

[20] Miller, J. M., & Krosnick, J. A. (1998). The Impact of Candidate Name Order on Election Outcomes. Public Opinion Quarterly, 62, pp. 291-330.

[21] National Voter Independence Project. http://www.halftheplanet.com/departments/vote/intro.html (2002).

[22] Nichols, S. M. (1998). State Referendum Voting, Ballot Roll-Off and the New Electoral Technology. State and Local Government Review, 30, pp. 106-117.

[23] Nichols, S. M., & Strizek, G. A. (1995). Electronic Voting Machines and Ballot Roll-Off. American Politics Quarterly, 23, pp. 300-318.

[24] Riviere, C. N., & Thakor, N. V. (1996). Effects of Age and Disability on Tracking Tasks With a Computer Mouse: Accuracy and Linearity. Journal of Rehabilitation Research and Development, 33, pp. 6-16.

[25] Robinson, J. A., & Standing, W. H. (1960). Some Correlates of Participation: The Case of Indiana. Journal of Politics, 22, pp. 96-111.

[26] Roth, S. K. (1998). Disenfranchised by Design: Voting Systems and the Election Process. Information Design Journal, 9(1), pp. 1-8.

[27] Rusk, J. (1970). The Effect of the Australian Ballot Reform on Split Ticket Voting. American Political Science Review, 64, pp. 1220-1238.

[28] Sinclair, R. C., Mark, M. M., Moore, S. E., Lavis, C. A., & Soldat, A. S. (2000). An Electoral Butterfly Effect. Nature, 408, pp. 665-666.

[29] U.S. Commission on Civil Rights. Voting Irregularities in Florida During the 2000 Presidential Election (2001). http://www.usccr.gov.

[30] Walker, J. L. (1966). Ballot Forms and Voter Fatigue: An Analysis of the Office Block and Party Column Ballots. Midwest Journal of Political Science, 10, pp. 448-463.

[31] Wand, J. N., Shotts, K. W., Sekhon, J. S., Mebane, W. R., Herron, M. C., & Brady, H. E. (2001). The Butterfly Did It: The Aberrant Vote for Buchanan in Palm Beach County, Florida. American Political Science Review, 95, pp. 793-810.


[1] Roll-off is the failure to cast votes for some offices on the ballot, usually offices below those at the top of the ballot, or for ballot propositions. Split-ticket voting is the decision to cast votes for more than one political party.

[2] An office-bloc ballot lists candidates for each contest (e.g., governor) in a bloc. It is in contrast to the party-column (or party-row) ballot, which lists all candidates for a given party under a single party heading. Often, but not always, party-column ballots have a circle or other device that allows a voter, with one mark, to vote a straight ticket (i.e., for all members of the party).

[3] Overvotes occur when individuals cast votes for more candidates than are to be elected for a given office (typically one).

[4] Residual votes were defined as ballots on which no presidential vote was counted, whether because no vote was cast or because of human or machine error.

[5] The full report describing this study is available at www.cs.umd.edu/~bederson/voting.
