Introduction

The Information Visualization Repository contains resources to improve the evaluation of information visualization techniques and systems.

Benchmark datasets and tasks are made available here, along with results submitted by teams demonstrating their visualization tools on the benchmark data. The first benchmark datasets and tasks were created for the InfoVis 2003 Contest. A contest continues to take place every year at the InfoVis Symposium, and now at the VAST symposium as well, and we will continue populating the site with new benchmarks and their results.

You can help improve this site by suggesting interesting datasets to visualize, or by trying your system on the datasets and tasks contained in the repository and returning your results to infovis-repository@cs.umd.edu.

News
- Sept 2009: The Visual Analytics Benchmark Repository is now available and replaces this repository. It lets you find benchmarks, add descriptions of how existing benchmarks have been used (e.g. analyses done using them), and upload or point to papers that report on the use of the benchmark datasets. If you are interested in helping us test it, please contact the PIs.
- IEEE VAST 2006 contest dataset, tasks, and results are now archived here. (11/06)
- IEEE InfoVis 2006 contest dataset, tasks, and results are now archived here. (11/06)
- BELIV'06: An AVI workshop on evaluation. Papers are now online in the ACM Digital Library.


The Repository is maintained by Catherine Plaisant and was created with Jean-Daniel Fekete during the InfoVis 2003 Contest.