Check the VAST 2009 Challenge page!


The materials submitted by ALL teams have been posted in an online repository at NIST (the National Institute of Standards and Technology).


We received a record 73 submissions from 28 organizations in 13 countries (12 wiki, 13 boat, 22 phone, and 20 trace mini challenge entries, plus 6 grand challenge entries).

Read our 2-page summary overview of the Challenge, which appears in the VAST 2008 proceedings, pages 195-196.


See the list of awards and the slides presented at the symposium by the committee, Oculus, NEVAC, and Palantir.


Where is the solution?

The solution is not posted here. We will give the solution to everyone who has tried to solve the problems and provides us with their answers. Please contact us to submit your answers or to discuss why you need an exception.

Teachers: Interested in using the data sets in your Visual Analytics classes? Please contact us.



1. You are automatically invited to the “VAST Challenge participant discussion” on Sunday, October 19, 6:30pm to 10pm, at the start of VisWeek. See the draft agenda.

2. Three Grand Challenge teams have been invited to the “Interactive session” and will work with analysts on a new problem on Monday, October 20, starting at 5pm.

3. Thanks to NSF, we were able to provide some student travel support and distribute 3 free registrations offered by VisWeek.


Questions? Send email to:

Overview of the event

The VAST Challenge is a participation category of the IEEE VAST 2008 Symposium (part of VisWeek 2008). The VAST Challenge follows in the footsteps of the VAST 2006 and 2007 contests, with the purpose of pushing the forefront of visual analytics tools using benchmark data sets and establishing a forum to advance visual analytics evaluation methods. We also hope it will speed the transfer of VA technology from research labs to commercial products and increase the availability of evaluation techniques.

[NEW in 2008]  To provide more opportunities for participation, we now offer an overall Grand Challenge as well as several smaller Mini Challenges. Teams may enter one or more Mini Challenges independently of entering the Grand Challenge. ALL teams submitting an entry to a VAST Challenge will be invited to discuss their work during a challenge workshop.

Entries will be judged on both the correctness of the analysis (based on the availability of ground truth) and the utility of the tools in conducting the analysis. Participants have several months to prepare their submissions.

Grand Challenge and Mini Challenges


The Grand Challenge consists of 4 heterogeneous data sets (register to download): a dataset of phone records; a dataset of geo-temporal records; a dataset of Wikipedia edit data and history; and a dataset of location tracking data. Grand Challenge participants are expected to integrate the results of the analysis of all Mini Challenge data sets to understand the overall situation (but are not required to submit entries to the individual Mini Challenges). Grand Challenge participants are asked to find evidence of suspicious activities, answer specific who, what, when, and where questions, and provide the relevant evidence for each. They are asked to provide a debrief describing the situation and a description of their process. Both quantitative and qualitative measures will be used to evaluate the entries.


Mini Challenges focus on the areas of social network analysis (phone transactions), modeling data analysis (e.g., evacuation modeling), unstructured text analysis (wiki edit records), and geo-temporal analysis. For each of these Mini Challenges, participants have to answer specific questions whose answers can be deduced strictly from the data provided. Participants are asked to provide a process description highlighting the visualizations and interactions used to arrive at their conclusions. Both quantitative and qualitative measures will be used to evaluate the entries.


Although the Grand and Mini Challenges are currently focused on homeland security topics, we want to emphasize that both the problems and their solutions have very broad applicability. The challenge data sets are similar to, and representative of, data sets that deal with medical, health, financial, educational, transportation, or social data. In fact, it is our strong belief that good tools that succeed in solving the Mini or Grand Challenges will easily apply to a broader collection of data sets, and we encourage individuals and groups from all disciplines to participate not only in the challenges but also in the workshop and in future planning of VAST Challenge activities.


Register to download the data sets  (Data sets remain publicly available after the symposium)

     Interested in using this data set or similar ones in your Visual Analytics classes? Please contact us.

Detailed Task Descriptions for All Challenges

Questions and answers were regularly added to the FAQ and History pages up to the deadline.


Criteria for judging  

Answer Forms  

How to submit your entry


Challenge Flier to Advertise


At the IEEE VAST symposium


A VAST Challenge session will be held during the conference week, open only to Challenge participants and Challenge sponsors. This format will allow all Challenge participants to learn from each other and help advance the science of visual analysis evaluation. During this session, participants will discuss their processes and results, provide feedback on the evaluation methodology, and offer suggestions for the VAST 2009 Challenge. Deserving entries will receive “certificates of excellence” in various categories.


Grand Challenge teams will once again be eligible to participate in an interactive session during which they will work with professional analysts on a new, smaller problem. This interactive session is held during an evening of the symposium week and has been found extremely useful by past participants.

Representative teams from the Grand Challenge and from each Mini Challenge will be invited to participate in a panel held during the VAST symposium.


The materials submitted by ALL teams will be posted in an online repository at NIST (the National Institute of Standards and Technology) after the Symposium.

The two-page summaries of the most deserving entries, which are awarded “certificates of excellence”, were published in the VAST 2008 Symposium Proceedings. Finally, we encouraged all participants to consider submitting a longer paper to the CG&A Special Issue on Visual Analytics Evaluation.


2008 Timeline


February 15          Sample data available

March 20             Data sets available - Mini challenge topics finalized

July 11              Submissions deadline (strictly enforced)

Early August         Results returned to participants

August 18            Camera ready copies of two page summaries due for publication

Oct. 19–24           VisWeek: VAST Symposium and Workshop for challenge participants



The challenge is open to EVERYBODY.  If in doubt, ask the chairs.

Student teams must have a faculty sponsor and provide the faculty's contact information with their registration. 

Teams may use any existing commercial product or research prototype and, of course, may combine tools. We strongly encourage teams to find partners with complementary expertise (e.g., groups with text analysis or reasoning tools might want to seek partners with user interface expertise). If you are looking for partners, we can assist you; please ask!

At least one member of each team receiving recognition will have to attend the Symposium, so please discuss travel support issues early with your advisor or supervisor. If you are a student, register as a student volunteer as early as possible. In the past we have been able to offer a few free registrations to the best entries and hope to be able to continue this tradition, but cannot guarantee it at this time.

Remember, everybody who submits is invited to the workshop!  (but only those who submit...)

VAST Challenge Chairs

Georges Grinstein, University of Massachusetts Lowell
Catherine Plaisant, HCIL, University of Maryland
Jean Scholtz, Pacific Northwest National Laboratory  

Challenge Committee Members

Theresa O'Connell, National Institute of Standards and Technology
Sharon Laskowski, National Institute of Standards and Technology
Mark Whiting, Pacific Northwest National Laboratory 

In addition, we appreciate the help of the students who assisted us: Loura Costello, Heather Byrne, and Adem Albayrac (U. of Massachusetts Lowell).

Related URLs

    SEMVAST project: Scientific Evaluation Methods for Visual Analytics Science and Technology, and the associated SEMVAST wiki

    Journal paper about the VAST 2007 contest

    CG&A Special Issue on Visual Analytics Evaluation. (deadline Sept 12, 2008)