The goal of the Visual Analytics Challenges is to advance visual analytics evaluation through competitions. HCIL members have been involved for many years in the organization of the Challenge, mostly with UMass Lowell and PNNL. Others are likely to take on future Challenges.
List of VAST Challenges over the years:
The Visual Analytics Benchmarks Repository contains resources to improve the evaluation of visual analytics technology. Benchmarks contain datasets and tasks, as well as materials describing the uses of those benchmarks (the results of analyses, contest entries, controlled-experiment materials, etc.). Most benchmarks include ground truth described in a solution provided with the benchmark, allowing accuracy metrics to be computed. When the use of a benchmark is described in a paper, the paper can be linked to the benchmark(s) used.
The SEMVAST Project webpage:
Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces. As new visual analytics methods and tools are developed, an evaluation infrastructure is needed. There is currently no consensus on how to evaluate visual analytics systems as a whole. It is especially difficult to assess their effectiveness because they combine multiple low-level components (analytical reasoning, visual representations, human-computer interaction, data representations and algorithms, and tools for communicating the results of such analyses) integrated into complex interactive systems that require empirical user testing. Furthermore, it is difficult to assess effectiveness without realistic data and tasks.
Our project has focused on two activities: 1) making benchmark datasets available and 2) seeding an infrastructure for evaluation.
- Catherine Plaisant, HCIL
Acknowledgements
The VAST Challenge was first launched by Georges Grinstein, Catherine Plaisant, Jean Scholtz, and Mark Whiting, following in the footsteps of the InfoVis Contest. It became an important part of the VAST Conference thanks to Jim Thomas, who early on recognized the importance of such an event and acted as a champion for the Challenge. Running the Challenge is a large effort. The original support came from NVAC via PNNL (especially for the development of the datasets), and from NSF for the organization of the Challenge through the SEMVAST project.