News:
Information about the VAST Challenge 2008 (formerly called the contest) is up!
Check out the RESULTS OF 2007 with photos and summaries, and the solution…
Also of interest: note the new research project SEMVAST: Scientific Evaluation Methods for Visual Analytics Science and Technology.
See History of changes and FAQs
Overview of the event
The VAST Contest is a participation category of the IEEE VAST 2007 Symposium (part of VIS 2007). It follows in the footsteps of the VAST 2006 contest: its purpose is to promote the development of benchmark data sets and metrics for visual analytics, and to establish a forum for advancing visual analytics evaluation methods.
The dataset consists of news stories and blog entries, along with background information and some multimedia materials (small maps and data tables). Participants are to investigate a major law enforcement/counter-terrorism scenario, form their hypotheses, and collect supporting evidence. There are several tasks that teams will need to tackle along the way. These include 1) processing the text and multimedia information to identify entities of interest (e.g., people, places, and activities); 2) depicting this information visually, using interactive visualizations and other tools to aid in the analysis of the information; 3) answering the contest questions based on the analysis; and 4) producing a video demonstration of their system showing how they arrived at those answers.
The initial data set processing may be accomplished by using software written by the team, by using commercial software, by partnering with another organization, or by using the preprocessed version provided by the contest organizing committee.
Contest entries will be judged on both the correctness of the analysis of
the situation and the utility of the tools in conducting the analysis.
Participants will have several months to prepare their submissions. Selected entries will present their work at the conference. A small number of teams will be invited to participate in a special closed-door session. In that session they will work on a new but similar problem with professional analysts, who will provide feedback and guidance to help refine the tools.
How does it compare to last year?
The general format and schedule of the event remain the same. Most of the raw data will use the same format as last year, so you can use the 2006 materials, for which the answers are available, to see how you are doing.
New this year: top entries will coauthor a journal paper. We also provide a preprocessed version of the dataset (e.g., with entities already extracted) so that more teams can participate. Your team will need to decide whether to enter the contest using the preprocessed data or the raw data.
Data set, tasks, and judging
- INFORMATION about the data set, tasks, and judging criteria (Flier to advertise the contest)
- DOWNLOAD: NEW >> Use the VAST BENCHMARK REPOSITORY to DOWNLOAD challenge datasets
- Contest RESULTS and winners
Schedule and deadlines
Top entries will receive awards and will present their work during the contest results session. We will award prizes for the best entries in several categories and present certificates of appreciation to other entries. The judges will decide on the exact categories based on the submissions. Possibilities include "Best Overall", "Best Student Entry", and "Most Original".
The top entries (3 or 4) will also be invited to participate in a special live session at the conference: professional analysts will work with the contestants and developers on a new problem and data set. Our 2006 experience suggests that this session provides invaluable "personal" feedback to the teams, a unique opportunity.
Publication opportunities
NEW: The top entry(ies) will coauthor a joint journal paper with the contest chairs, which will appear in the Applications Department of the March 2008 issue of Computer Graphics and Applications (CG&A).
All accepted entries will be posted on the Information Visualization
Benchmark Repository.
Rules
The contest is open to all, except for the contest organizers and judges. All categories of competitors (academic and commercial) may participate. If in doubt, ask the contest chairs.
Student teams must have a faculty sponsor and provide the sponsor's contact information in their registration.
You may use any existing commercial product or research prototype and, of course, you may combine tools. We strongly encourage teams to find partners with complementary expertise (e.g., groups with text analysis or reasoning tools might want to seek partners with user interface expertise). If you need help finding partners, let us know.
At least one participant from each accepted submission must attend the Symposium.
"Partial answers" are acceptable, but we encourage you to attempt all tasks. In other words, if your tool or approach addresses only part of the task, you can still participate in the competition.
Submission Information
What to submit?
A completed answer form and a recorded video demonstration.
We provide an answer form template for you to 1) answer questions about who/what/when/where; 2) provide a debriefing about the situation; and 3) describe the process you used. You are required to use the template so that all answers follow a consistent format, which facilitates comparisons.
The recorded video demonstration should focus on showing how you analyzed the data, with special emphasis on demonstrating the interactive features of the tool. We require a demonstration with audio commentary because interactivity is an important component and is impossible to evaluate on paper. Audio commentary is very helpful for explaining such interactions and for pointing reviewers to items they might otherwise miss. Demonstrations should not exceed 10 minutes in length.
We recommend that you save your video as Flash, as most people will be able to view it and it has a good compression ratio. If you are not familiar with recorded video demonstrations, take a look at Camtasia or BB Flashback. Those tools will work very well for most systems (unless you have fast animation, in which case you may need to record with an external video recorder). Always include audio explanations. Test your video by asking others, e.g., your friends, whether they can read and play it (a common mistake is to not provide the video codec). Avoid .AVI or other formats that generate huge files. If you anticipate having difficulties generating a digital video, please contact the contest chairs to arrange an alternative submission method.
Note: this year we do not ask for a 2-page PDF summary at the time of submission. We will request the summary only if you are selected as one of the top entries. The summaries of accepted submissions will be published in the VAST adjunct proceedings and in the IEEE Digital Library.
How to submit?
- First save the answer form as a web page somewhere on your own website, with links to the video and any other materials so that it can be checked for correctness.
- By Friday July 13th, 5pm EST, send an email to the Contest Chairs giving us the URL. Judging will begin within a few days, so no extension can be given.
>>>>> IF YOU NEED AN EXTENSION TILL MONDAY MORNING, LET US KNOW <<<<<
- We will check your materials (e.g., check that the standard form was used, the questions were answered, and that the video is readable and understandable).
- On Monday 16th we will email you instructions for uploading the final materials to an FTP site, and you will have until Wednesday to do so. If needed, we might ask for small corrections to be made immediately. (Please make sure that you use relative links in all your materials so that the links will still work when the pages are moved somewhere else.)
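The point about relative links can be illustrated with a minimal sketch (the file and host names here are hypothetical, purely for illustration):

```html
<!-- answer-form.html: links relative to this page's own folder -->
<!-- Relative links still resolve after the whole folder is copied elsewhere -->
<a href="video/demo.html">Video demonstration</a>
<img src="images/tool-overview.png" alt="Tool overview screenshot">

<!-- An absolute link like this breaks once the pages leave your server -->
<!-- <a href="http://www.example.edu/~team/contest/video/demo.html">Video</a> -->
```

As long as the answer form, the video page, and the images are uploaded together with the same folder layout, the relative links keep working after the move.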
- If you have problems or questions about this procedure, let us know.
Contest Chairs
Georges Grinstein,
Catherine Plaisant, HCIL,
Jean Scholtz,
Contest Committee Members
Theresa O'Connell, National Institute of Standards and Technology
Jereme Haack, Pacific Northwest National Laboratory
Mark Whiting, Pacific Northwest National Laboratory
Related websites
VAST 2006 contest
InfoVis Contest 2003, 2004, 2005
2007 TVCG paper summarizing InfoVis contest results and lessons learned.
IEEE VAST Symposium 2006 and 2007
Beliv'06: workshop on evaluation of information visualization at AVI'06
Information Visualization Benchmark Repository
Questions? Email the Contest Chairs