Workshop: Evaluating Visual Analytics
Friday May 29, 2009
in conjunction with the
26th HCIL Symposium
Update after the workshop:
- We will write a summary and post it here (in July 09 most likely)
- Slides of the introduction to the VAST Challenge.
- A reporter for the Washington Post attended the beginning of the workshop and mentioned our discussions in “The Next Frontier: Decoding the Internet's Raw Data”, an article which appeared on Monday June 1st.
- For participants: materials were emailed to you by Catherine on 5/26/2009, and photos on 6/1.
Organizers:
Catherine Plaisant – HCIL,
Jean Scholtz – Pacific Northwest National Laboratory
Background:
Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces. Evaluation is difficult in many domains, and for visual analytics the problem is compounded by two main factors: 1) highly interactive visual analysis systems are not suited for traditional evaluation methods, and 2) researchers rarely have access to users (analysts), their data, and the type of problems they face.
In the past three years, an important step was taken toward increasing awareness of visual analytics problems and evaluation methods with the organization of a successful competition (see the VAST Challenge 2009 and 2008). Synthetic datasets with embedded ground truth and analytic problems with known solutions are made available, and participants have several months to analyze the data. Their answers and a description of the analytic process they used are evaluated using both quantitative accuracy ratings and subjective ratings from experts and professional analysts. Awards are given to the teams whose tools received the best reviews. These challenges have been very successful, judging by the feedback received from tool developers, the information analysis community, participants, and the visual analytics community at large. The success of the competitions and the growing need for evaluating complex interactive systems warrant the development of infrastructure services that facilitate the following activities:
· Gathering a collection of datasets with ground truth and analytic problem descriptions
· Supporting the online management and judging of analytic challenges
· Supporting self-assessment by researchers developing visual analytics tools
· Facilitating training and education
· Coordinating community evaluation activities
Call for participation
(the deadline has passed, but if you feel that your participation would be valuable to the workshop, please go ahead and submit a position abstract):
A position abstract of no more than 300 words should be submitted to Dr. Catherine Plaisant at plaisant@cs.umd.edu. The recommended deadline to submit is May 1st. Please explain your experience with evaluation of visual analytics, how you can contribute to the discussion, and your goal in attending the workshop. Please include the participant's name, affiliation, email, and phone number.
The number of participants will be limited to 20
to facilitate discussions.
We aim to gather participants who will represent the following groups:
· Designers and developers from academia, industry, or government (who need to evaluate and improve their visual analytics tools)
· Educators and students (who teach or take visual analytics classes)
· Analysts (professionals who are end users of visual analytics tools)
· Stakeholders (representatives of government agencies, educational organizations, or corporations that would benefit from useful and usable visual analytics tools, e.g. intelligence agencies for terrorist threat analysis, banks and financial institutions for surveillance of criminal activities, public health organizations for monitoring potential epidemics, etc.)
Tentative agenda:
The workshop will begin at 9:30am and will end at 5:30pm. Coffee, snacks, lunch, and copies of materials will be provided.
9:30 Welcome and introduction of participants
10:30 Overview of the VAST Challenge and quick survey of the evaluation strategies used by researchers (Catherine) + discussions
11:30 Selection of breakout topics. The focus will be on developing the needs and requirements for a shared infrastructure for evaluation, from different user groups’ perspectives.
Examples of possible topics:
· What are the needs of instructors and students?
· What is needed to help stakeholders develop good datasets and ground truth?
· How to design subjective evaluation services to collect feedback from peer reviewers and analysts?
· What accuracy metrics can or cannot be computed automatically?
· What services might help stakeholders run their own competitions or evaluation programs?
12:00 Lunch
1:00 Breakout sessions
2:00 Presentation from groups
3:00 Break with cookies and fruit
3:30 Discussion (sustainability, community building activities, funding
mechanisms)
5:00 Next steps and wrap-up
5:30 (at the latest): END
Background materials:
The SEMVAST project website
HCIL Symposium
Participants are encouraged to join us a day early for the main HCIL Symposium day on Thursday May 28 for talks and demonstrations of HCIL research.
HCIL SYMPOSIUM 2009
Questions?
Please contact Catherine Plaisant (plaisant@cs.umd.edu)