Improving Accessibility and Usability of

Geo-referenced Statistical Data


Haixia Zhao1, Catherine Plaisant, Ben Shneiderman1

Department of Computer Science1 & Human Computer Interaction Laboratory

University of Maryland, College Park 20742

{haixia, plaisant, ben}


Several technology breakthroughs are needed to achieve the goals of universal accessibility and usability.  These goals are especially challenging in the case of geo-referenced statistical data that many U.S. government agencies supply. We present technical and user-interface design challenges in accommodating users with low-end technology (slow network connection and low-end machine) and users who are blind or vision-impaired. Our solutions are presented and future work is discussed.

1. Introduction

Government agencies accumulate enormous amounts of geo-referenced statistical data. This data is useful to senior citizens looking for a place to settle after retirement, businesses considering relocation, lawmakers deciding on new policies, elementary school students learning more about the country, and many others. Statistical data could be more widely used if it were made available through tools that help users gain insight by discovering geographic patterns, understanding temporal trends, or making financial decisions.


Dynamic query choropleth maps have been demonstrated to be a powerful visual thinking tool for these purposes. However, it is vital that government services reach and empower every citizen regardless of their technical disadvantages or personal disabilities. Universal usability focuses on three challenges: technology variety, user diversity, and gaps in user knowledge [Shneiderman 2000]. In our research, supported by the Census Bureau and NSF, we focus on improving access to dynamic choropleth maps of government statistics for users with low-end technology (slow network connections and low-end machines) and for users who are blind or vision-impaired. This paper first describes YMap (initially named Dynamap), our desktop version of a dynamic query choropleth map tool; it then describes the universal usability challenges of bringing YMap to the Web and making it usable by the vision-impaired, and our solutions.

2. Dynamic choropleth map for visual exploration of geo-referenced statistical data

YMap is a generalized map-based information visualization tool designed and developed at the Human-Computer Interaction Laboratory at the University of Maryland [Dang et al. 2001]. Figure 1 shows screen snapshots of the Visual Basic desktop version of YMap. After a data file is loaded, users can quickly visualize the distribution of a data attribute on the choropleth map by choosing the attribute from the drop-down list to color the map. By adjusting the double-thumb sliders, users can formulate conjunctive queries and view the query results immediately on the map. Map elements that have been filtered out by the query are colored dark gray. Elements that are not filtered remain colored according to the chosen choropleth attribute. As users drag the sliders, the map animates to give immediate feedback in real time (Figure 1(a)). The scatterplot plots a 2-dimensional graph of the map elements according to two selected data attributes. The sliders, map, scatterplot, and detail window are tightly coupled: the dynamic query sliders filter both the map and the scatterplot. Selecting items in either the map or the scatterplot causes the corresponding items to be highlighted in the other (“brushing”), and also displays the items’ attribute values in the detail window (Figure 1(b)).
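The conjunctive filtering behind the sliders can be sketched as follows. This is an illustrative sketch, not YMap's actual implementation; the data structures and function names are assumptions.

```python
# Sketch of YMap-style conjunctive dynamic queries: each double-thumb slider
# contributes one [low, high] range, and a region stays colored only if every
# queried attribute falls inside its range.

def passes_query(record, ranges):
    """record: {attribute: value}; ranges: {attribute: (low, high)}."""
    return all(lo <= record[attr] <= hi for attr, (lo, hi) in ranges.items())

def filter_regions(records, ranges):
    """Split regions into those kept (colored by the choropleth attribute)
    and those filtered out (shown dark gray on the map)."""
    kept, filtered = [], []
    for name, record in records.items():
        (kept if passes_query(record, ranges) else filtered).append(name)
    return kept, filtered
```

For example, a query for high "Percent Over Age 65" and low unemployment would pass two ranges, one per slider, and repaint the map from the returned lists.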


YMap also supports zooming and panning to observe data patterns in smaller or denser regions, as is often the case in a US county-level map and with map elements other than polygonal geographic regions, such as lines. User studies and previous literature show that dynamic query choropleth maps help users quickly locate highs, lows, and trends; find specific geographic regions that match a query and retrieve details; and detect correlations between attributes [Norman et al. 2003]. The scatterplot and the brushing capability further enable users to explore the data, discovering patterns, outliers, and relationships from both a statistical and a geographical perspective. Thus YMap can help answer both 1) specific questions, such as "what is the population of my county?" and 2) open-ended decision-making questions, such as "where is a nice place to live?", generally through a two-stage process: rapid elimination of unacceptable geographic alternatives, followed by a detailed comparison of the remaining possibilities. An improved Visual Basic version of YMap is being distributed on the US Census Bureau's data CD-ROM products as a viewer for the census data.



Figure 1: (a) YMap filtered for high “Percent Over Age 65” and low Unemployment. (b) Brushing between scatterplot and map reveals that high-income, highly educated states are in the northeast.

3. Dynamic choropleth map on the Web – accommodating hardware and software diversity

The Internet has become an important medium for disseminating geo-referenced data to the public, because of its well-documented benefits in terms of distributed access, centralized control of updates, and modest development cost. In exploring the possibility of providing dynamic choropleth maps on the Web for data publishing, we focus on the technical issues of accommodating users’ hardware and software diversity. Specifically, we aim to accommodate users with slow network connections and low-end machines, and to minimize the requirements for specialized client-side software. Recent data show that most users still access the Internet via modem connections (56K or less, according to Nielsen//NetRatings), and analysts do not expect the majority of homes to have broadband (fast) access for at least five years [Romero 2002].


Currently many government agencies use a raster-image-based approach to publish their map-based data on the Web; examples include American Fact Finder, the Bureau of Transportation Statistics website, and FedStats (FedStats uses raster maps only for choosing regions). In the typical approach, servers generate choropleth maps shaded by statistical data as pictures in one of the standard raster graphic formats supported by graphical Web browsers. Interaction is accomplished by submitting a request to the server for a new map image. Even simple user actions, such as turning display attributes on or off, often require such a “round-trip” and a complete screen refresh. This architecture places severe restrictions on map interactivity and interface design flexibility, and poses additional limitations such as slow map updates, increased network and server load, and often poor scalability in the number of simultaneous users [Andrienko & Andrienko 1999].


To make exploratory interaction methods such as dynamic query and linked brushing employable on Web maps, researchers switched to a vector-based client-side approach that typically ships vector geographic data (in formats such as ESRI Shapefile, Vector Markup Language (VML), Scalable Vector Graphics (SVG), or Geography Markup Language (GML)) to the client computer. There the data is interpreted and rendered, typically by software implemented in Java, such as Descartes [Andrienko & Andrienko 1999] (which later became part of CommonGIS), or with the help of specialized browser plug-ins (or special browsers), such as CDV [Dykes 1997], Intergraph GeoMedia Web Map, and Autodesk MapGuide. However, these approaches typically bring three problems: 1) large downloads for the software and for the geographic data files transmitted over the network, which make the initial download time very long; 2) unsatisfactory interaction-performance scalability in the number of geographic features; and 3) requirements for specialized client-side software that may be incompatible with the client computer [Zhao & Shneiderman 2002].


The above limitations severely hinder the goal of sharing data with all citizens. The long initial download and rendering time has a strong negative effect on users, often causing occasional users to give up the attempt. The problem becomes more severe over modem connections and as the number of geographic features (regions, lines, etc.) increases; e.g., a map of 3140 USA counties is about 1.52Mb (as an ESRI Shapefile) and takes more than 3 minutes to download over a 56K modem connection to a Pentium-III 1.0GHz CPU, 256Mb RAM notebook (including the time to initially render the map). Interaction methods such as dynamic query require sub-second response times to ensure that users perceive a smooth change [Shneiderman 1998]. While current vector-based systems work well with small numbers of geographic features (e.g., a USA map of 50 states and the District of Columbia), they do not scale up well enough to handle a map of 3140 USA counties. Unsatisfactory interaction-performance scalability can be a critical problem for users with low-end machines. Finally, occasional users usually do not have the required plug-ins or special software installed, and may not want to invest the time, or may not have the knowledge, to install them.
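The 3-minute figure follows from the numbers given. A quick back-of-the-envelope check (transfer time only, ignoring protocol overhead and rendering, and assuming the ideal 56 kbit/s line rate, which real modems rarely sustain):

```python
# Back-of-the-envelope check of the download-time figure cited above.
size_megabytes = 1.52                       # county-level Shapefile, as stated
link_kbps = 56                              # nominal modem line rate
size_kilobits = size_megabytes * 1024 * 8   # ~12,452 kbit
seconds = size_kilobits / link_kbps         # ~222 s at the ideal rate
minutes = seconds / 60                      # ~3.7 minutes, before rendering
```

Since effective modem throughput is below the nominal rate, the real wait is longer still, consistent with the "more than 3 minutes" observation.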


In our effort to put dynamic choropleth maps on the Web, we attacked the above problems by proposing a new technique that uses special color-coding rules to encode geographic knowledge into raster images, which are delivered to the client to be decoded and manipulated by a Java applet [Zhao & Shneiderman 2002]. The raster images, called base maps, are very compact. The technique enables a variety of sub-second, fine-grained interface controls such as dynamic query, dynamic classification, geographic object data identification, and user-setting adjustment, as well as turning layers on/off, panning, and zooming. It features a short initial download time, near-constant performance scalability to larger numbers of geographic objects, and a download-map-segments-only-when-necessary policy, which potentially reduces the overall data transfer over the network. As a result, it accommodates general public users with slow modem connections and low-end machines, as well as users with fast T-1 connections and fast machines. To minimize the need for specialized client-side software, we chose Java applets for client-side manipulation, because Java applets can be executed by most Web browsers and thus are the most suitable choice in terms of client compatibility and platform independence [Brinkhoff 2000]. We avoided advanced interface packages such as Java Swing, so the applet can run in the basic Java Runtime Environment.
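The specific color-coding rules are given in [Zhao & Shneiderman 2002]; as a hypothetical sketch of the general idea (not the paper's actual encoding), a region identifier can be packed into each pixel's color channels so that the client can repaint regions locally, with no server round-trip:

```python
# Hypothetical sketch: pack a region ID into a base-map pixel's RGB channels,
# then repaint client-side according to a dynamic query.

def encode_region_id(region_id):
    """Pack a region ID (0 .. 2**24 - 1) into an (R, G, B) pixel value."""
    return ((region_id >> 16) & 0xFF, (region_id >> 8) & 0xFF, region_id & 0xFF)

def decode_region_id(pixel):
    r, g, b = pixel
    return (r << 16) | (g << 8) | b

GRAY = (64, 64, 64)  # filtered-out regions

def recolor(base_pixels, value_of, query_lo, query_hi, classify):
    """Client-side repaint: look up each pixel's region, test it against the
    query range, and shade it (or gray it out) without contacting the server."""
    out = []
    for px in base_pixels:
        v = value_of[decode_region_id(px)]
        out.append(classify(v) if query_lo <= v <= query_hi else GRAY)
    return out
```

Because the base map is ordinary raster data, it compresses well and its size is largely independent of geometric complexity, which is one way to obtain the near-constant scalability described above.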


Metadata, which describes the content of data, is important in maintaining and disseminating large, evolving data collections. Our Web YMap prototype integrates metadata by binding it to the sliders and the map-view selection. Users can choose which data attributes and map views to display from a list of all the data attributes available in the server's statistical databases, together with the available granularities (e.g., geographical granularity: state-level vs. county-level). The list is dynamically generated by a server JavaBean according to the current metadata, which ensures that users always get the most up-to-date information.


Several new features and refinements were added to the Web YMap design, based on user studies. Histograms were added to the sliders to show the data distribution of statistical attributes. The histograms are coupled with map brushing to present region details as a graph and to provide bi-directional linked identification. Different slider and shader scales were provided to allow a more uniform filtering effect [Norman et al. 2003].

4. Sonification of dynamic choropleth maps – supporting blind and vision-impaired users

Approximately four million people in the US are blind or visually impaired. One of the universal usability research challenges is to support this user group, and doing so is a requirement for government-provided services [Vanderheiden 2000]. Traditional accommodations for blind and vision-impaired users include using speech synthesizers as screen readers, and/or Braille to convey the text information on the display. For graphical user interface navigation, keyboard function keys or other special input devices are widely used. However, speech-based approaches are weak at representing the two-dimensional spatial layout of a graphical user interface, and speech feedback can become overwhelming when users’ navigational moves change context rapidly.


In recent years, researchers have used non-speech sound (earcons, which are structured musical motifs, and auditory icons, which are everyday sounds) as part of the auditory feedback. Researchers claim that non-speech sounds improve blind users' access to the non-textual information embedded in graphical displays, and facilitate interface navigation. Examples include audio-assisted menu navigation [Brewster 1998] and more general Windows GUI navigation [Mynatt & Weber 1994], improving the perception of 2D numerical tables [Ramloll et al. 2001], developing auditory HTML browsers to help convey the structure of Web documents [Goose & Moller 1999], and many more. Interface sonification and audiolization have been further extended to convey purely graphical information, such as 2D diagrams [Kennel 1996], images [Meijer 1992], and graphs and patterns in graphs [Flowers et al. 1997] [Hermann et al. 2000]. Sometimes other sensory modes, such as haptic feedback, are also used to provide a multi-modal perceptual environment. Some research suggests that the human auditory system has the same perceptual power as the visual system (and better in certain circumstances), but it is far less explored in computer environments. Researchers and practitioners need to find effective mappings between information attributes and sound attributes (such as timbre, pitch, and location) for different task and application scenarios.


For maps, Jeong [Jeong 2001] compared the effectiveness of, and user preferences for, auditory feedback (volume of sound), haptic feedback (extent of vibration), or both in tasks of identifying the highest- or middle-valued state on a static choropleth map of US states. The experiment showed that overall user performance was best with haptic feedback alone, but users preferred having both haptic and audio feedback. We conjecture that this result can be attributed to the fact that the haptic option provided spatial cues while the sound option did not. On the other hand, haptic feedback requires special devices that may not be available to users, while synthesized spatial sound can be listened to through standard headphones.


Since humans are able to localize sound with amazing precision by using binaural perception, spatial location can be an important aspect of information perception. Our hypothesis is that the addition of positional cues will greatly improve the sonification of maps (at least for the majority of vision impaired users who acquired their impairment later in life, i.e. after learning to use maps). Recently, researchers have been able to create virtual auditory space by synthesizing three-dimensional sound using head-related transfer function (HRTF) in real-time on commercial off-the-shelf PCs, e.g. [Zotkin et al. 2002].  As our first step to develop spatial sound displays for interactive maps, we generated three-dimensional sounds and tied these sounds to regions of the map.

As shown in Figure 2, the 3D sounds create the effect of a virtual US state map hung on the surface of a large virtual ball with the user at the center. Each state’s sound is produced from its spatial location on the virtual map, and has a pitch proportional to its value of the choropleth attribute. The state name and numerical attribute value can optionally be played with speech synthesizers, and can be spatially located as well. Three sound bindings have been implemented in our prototype so far: to the dynamic query sliders, to the cursor, and to sweeping lines.

·         Tie sound to the cursor: when users glide the cursor over the choropleth map, the state under the cursor produces its sound. Two different timbres are used to distinguish filtered-out states from the remaining ones.

·         Tie sound to the dynamic query sliders: when users adjust the sliders and states are dynamically filtered out or back in, the sounds of those states are played. Two different timbres are used: one for states being filtered out, and the other for states being filtered back in. The states' sounds can be played either in parallel or sequentially.

·         Tie sound to a sweeping line: a sweeping grid is defined and all the states are placed at the grid vertices according to their relative geographic positions. To hear the pattern on the choropleth map, users can start/pause/resume/stop a vertical or horizontal sweeping line that scans all the states and plays their 3D sounds in parallel or sequentially. Again, two different timbres can be used to distinguish filtered-out states from the remaining ones.
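The parameter mappings behind these bindings can be sketched as follows. The scales and function names are illustrative assumptions; the actual HRTF-based rendering [Zotkin et al. 2002] is not shown, only the computation of pitch and azimuth from a state's value and map position:

```python
# Illustrative sketch of sonification parameter mappings: attribute value
# -> pitch, horizontal map position -> azimuth, plus sweeping-line ordering.
# The MIDI range and linear scales below are assumptions, not the paper's.

def value_to_pitch(value, vmin, vmax, midi_lo=48, midi_hi=84):
    """Linearly map an attribute value to a MIDI note number."""
    t = (value - vmin) / (vmax - vmin) if vmax > vmin else 0.0
    return round(midi_lo + t * (midi_hi - midi_lo))

def position_to_azimuth(x, map_width):
    """Map a horizontal map coordinate to azimuth in degrees
    (-90 = far left of the virtual map, +90 = far right)."""
    return (x / map_width - 0.5) * 180.0

def sweep_order(states, axis="vertical"):
    """Order states for a sweeping line: a vertical line sweeping across the
    map visits states left-to-right; a horizontal line, top-to-bottom."""
    key = (lambda s: s["x"]) if axis == "vertical" else (lambda s: s["y"])
    return sorted(states, key=key)
```

A playback loop would then synthesize each state's tone at the computed pitch, spatialized at the computed azimuth (and elevation), using the timbre that marks it as filtered in or out.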

5. Future Work

As the next step, we will design an interface navigation mechanism for our auditory YMap using either a keyboard or a tablet, and explore coupling tactile perception with sound. We will conduct user studies to examine the effectiveness of our tool in helping vision-impaired users find answers in government statistical data sets, compared to table-based access. The human auditory system has special characteristics; for example, humans can judge sound position more precisely in azimuth than in elevation [Shinn-Cunningham et al. 1996]. Designers must make many choices to avoid the weaknesses of human audio perception, exploit its strengths, and provide satisfactory sounds. Our research goal is to identify effective sonification mechanisms from a variety of design alternatives, such as the choice of timbres and pitch scales, especially as applied to dynamic choropleth maps. We aim to provide an alternative to powerful visual exploration tools that bridges the gap between sighted and vision-impaired users in their ability to explore data and make decisions. Another goal is to provide alternative perceptual modes for sighted users to use over the telephone, or as a complement to visual modes.


Acknowledgments

This material is based upon work supported in part by the National Science Foundation under Grant No. EIA 0129978 and by the US Census Bureau.


References

Andrienko, G.L. and Andrienko, N.V., 1999, Interactive maps for visual data exploration, International Journal of Geographical Information Science, 13(4), June 1999, 355-374

Brewster, S., 1998, Using nonspeech sounds to provide navigation cues, ACM Transactions on Computer-Human Interaction, 5(3), 1998, 224-259

Brinkhoff, T., 2000, The impacts of map-oriented internet applications on Internet clients, map servers and spatial database systems. Proc. 9th International Symposium on Spatial Data Handling, August 10-12, 2000, Beijing, China.

Dang, G., North, C., and Shneiderman, B., 2001, Dynamic Queries and Brushing on Choropleth Maps. Proc. International Conf. on Information Visualization 2001, 757-764. IEEE Press, 2001.

Dykes, J.A. 1997, Exploring spatial data representations with dynamic graphics. Computers & Geosciences, 23(4), 1997, 345-370

Flowers, J.H., Buhman, D.C., and Turnage, K.D., 1997, Cross-modal equivalence of visual and auditory scatterplots for exploring bivariate data samples. Human Factors, 39(3), 341-351, 1997

Goose, S. and Moller, C., 1999, A 3D Audio Only Interactive Web Browser: Using Spatialization to Convey Hypermedia Document Structure. ACM Multimedia 1999 10/99, Orlando, FL.

Hermann, T., Meinicke, P., and Ritter, H., 2000, Principal curve sonification. Proc. International Conference on Auditory Display, April 2-5, 2000, Atlanta, Georgia, USA

Hochheiser, H. and Shneiderman, B. 2001, Universal usability statements: Marking the trail for all users, ACM interactions 8(2) (March-April 2001), 16-18.

Jeong, W., 2001, Adding Haptic and Auditory Display to Visual Geographic Information, Florida State University PhD Thesis, 2001

Kennel, A., 1996, AudioGraf: a diagram reader for the blind, linking touch with audio feedback. Proc. 2nd Annual ACM Conference on Assistive Technologies, 1996

Meijer, P.B.L., 1992, An Experimental System for Auditory Image Representations, IEEE Transactions on Biomedical Engineering, 39(2), Feb 1992, 112-121

Mynatt, E.D. and Weber, G., 1994, Nonvisual presentation of graphical user interfaces: contrasting two approaches. Proc. ACM Conference on Human Factors in Computing Systems, 1994

Norman, K., Zhao, H., Shneiderman, B., and Golub, E., 2003, Dynamic query choropleth maps for information seeking and decision making. Proc. 10th International Conference on Human-Computer Interaction, June 22-27, 2003, Crete, Greece

Ramloll, R., Brewster, S., Yu, Y., and Riedel, B., 2001, Using non-speech sounds to improve access to 2D tabular numerical information for visually impaired users. Proc. 15th British HCI Group Annual Conference on Human-Computer Interaction (IHM-HCI), September 10-14, 2001, Lille, France

Romero, S., 2002, Price is limiting demand for broadband, The New York Times, December 5, 2002.

Shinn-Cunningham, B.G., Lehert, H., Kramer, G., Wenzel, E.M., and Durlach, N.I., 1996, Auditory displays. In Spatial and Binaural Hearing in Real and Virtual Environments, Eds. R. Gilkey & T. Anderson. New York: Erlbaum, 611-663

Shneiderman B., 1998, Designing the User Interface, 3rd Edition, Addison-Wesley Longman, Inc., 1998.

Shneiderman, B., 2000, Universal usability: pushing human-computer interaction research to empower every citizen, Communications of the ACM, 43(5), 84-91

Vanderheiden, G., 2000, Fundamental principles and priority setting for universal usability. Proc. ACM Conference on Universal Usability, 2000. ACM, New York, 32-38

Zhao, H. and Shneiderman, B., 2002, Image-based highly interactive Web mapping for geo-referenced data publishing, Technical Reports HCIL-2002-26, CS-TR-4431, UMIACS-TR-2003-02, University of Maryland

Zotkin, D. N., Duraiswami, R., and Davis, L.S., 2002, Creation of virtual auditory spaces, Proc. International Conference on Acoustic Speech and Signal Processing, May 2002, Orlando, FL, 2113-2116

