IEEE VAST 2006 Contest
Judging

1st Draft - March 1st, 2006

Judging:

 

Submissions will be reviewed by external judges with content and visual analytics expertise.

 

We are using a realistic but synthetic dataset for which “ground truth” is known, allowing the use of quantitative metrics for evaluation. 

Partial answers will be considered.  For example, even if your tool can address only some of the questions, we encourage you to submit.  Your submission may well do the best job on that particular task and be recognized as such by the contest judges.  Of course, submissions that answer all tasks have a better chance at the overall 1st prize, but judges may create special prizes for outstanding partial entries.

Scoring will be based on:

·         The correctness of answers to the questions and the evidence provided.  Participants will be given points for correct answers and penalized for incorrect answers.  Both quantitative and qualitative assessments will be used. 

·         Subjective assessment of the quality of the displays, interactions and support for the analytical process

·         The clarity of the explanations you provide.

 

Participants are required to answer the questions listed below and the associated sub-questions. For each question you are able to answer, you should identify the relevant documents and other materials from the dataset used to obtain your answer.  You should specify a level of confidence in that answer and explain how you arrived at that confidence.  Finally, you should describe the process used to answer the questions and identify the relevant materials.  An Answer Form will be provided.

 

Subjective Assessment Criteria:

 

Based on 1) the written descriptions, screen captures, and video you provide and 2) the insights you report being able to gather from those displays, judges will assess the quality of the displays and interaction.

 

We will assess the quality of the displays, the interaction, and the analytical process support.

 

The following criteria will be used:

For individual displays:

·   Informative and lucid visualizations

·   Good use of layout, color, visual objects (graphs, plots, …)

·   Good labeling and overall readability

·   Handling of missing data and uncertainty

·   Appropriate terminology

 

For the interaction:

·   Adequate user control of the displays to facilitate exploration

·   Good navigation in the overall user interface

·   Good and appropriate actions (switching between different views, probing, queries, …)

·   Consistency and ease of learning

·   Appropriate feedback

 

For the analytical process support:

·   Intuitive and clear process flow

·   Support for the analytical reasoning

·   Collaboration capabilities

 

 

Clarity of the materials provided:

This will reflect how well your explanations helped the judges understand how your system works.

The following criteria will be used:

·   Concise and clear descriptions of displays, interaction and process

·   Adequate illustrations (pictures and video)

·   Identification of the strengths, as well as adequate understanding of limitations

 

 

About the live contest:

Some teams will be invited to participate in a special workshop before the conference, during which professional analysts will interact with the systems and provide feedback.  For the live contest, a similar set of materials will be used, and the analysts using the visualization software will be asked similar questions.  The correctness of their answers and the number of answers they are able to locate will be a factor in the overall scoring.  Analysts and contest judges will also use a rating scale for the subjective assessment, and both will assess the process of arriving at the answers during the live contest.  

 

Questions?   Email the Contest Chairs