See the history of updates for all contest changes.
Questions: Jean-Daniel.Fekete@inria.fr; firstname.lastname@example.org; email@example.com
The dataset contains complete metadata for all the papers of 8 years (1995-2002) of the InfoVis Conference and their references. The year 2003 will be added as soon as ACM acquires it. The metadata includes publication title, authors, keywords, abstract, references, and links to the original papers when available in the ACM Digital Library.
The completeness of the metadata of articles that are merely cited in the InfoVis papers varies greatly. Cited articles that were found in the ACM DL have fairly complete metadata (with the possibility of erroneous title matching and some missing fields), while those that could not be found in the ACM DL may have no metadata at all. It took a significant effort to obtain and clean the metadata and then process it into a unified XML format, but we hope that you will help us continue the data gathering and cleaning process.
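The unified XML schema is not reproduced here. As an illustration only, a record carrying the fields listed above (title, authors, keywords, abstract, references) might be read as sketched below; the element names are hypothetical, not the contest's actual schema:

```python
# Sketch: reading one bibliographic record from a unified XML format.
# Element names (<article>, <title>, <author>, ...) are hypothetical --
# consult the actual contest schema before relying on them.
import xml.etree.ElementTree as ET

sample = """
<article id="infovis95-01">
  <title>Example Paper Title</title>
  <authors>
    <author>A. Author</author>
    <author>B. Author</author>
  </authors>
  <keywords>visualization; graphs</keywords>
  <abstract>Short abstract text.</abstract>
  <references>
    <ref>infovis94-07</ref>
  </references>
</article>
"""

root = ET.fromstring(sample)
title = root.findtext("title")
authors = [a.text for a in root.findall("authors/author")]
refs = [r.text for r in root.findall("references/ref")]
print(title, authors, refs)
```

A citation graph for the tasks below could then be built by linking each article id to the ids in its references.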
1- Create a static (non-interactive) overview of the 10 years of the InfoVis conference. The best submissions will be displayed at the "anniversary exhibit" and may be used for the conference proceedings cover. (Use the whole dataset.)
(Note: this exhibit was eventually replaced at the conference by InfoVis Fun; a slide show displayed the best screenshots before each session.)
2- Characterize the research areas and their evolution. Areas/topics are to be defined by you, the designers, and “evolution” is whatever you interpret it to be; time being one choice.
3- Where does a particular author/researcher fit within the research areas defined in task 2? (we suggest you use G. Robertson as one of your examples)
4- What, if any, are the relationships between two or more (or all) researchers? (For example, consider Robertson and Card as one pair.)
Additional related items to build into the visualizations include uncertainty, reliability, range, flexibility, and broader applicability.
IMPORTANT: For most questions we request not just a detailed result list but an explanation (or illustration or demonstration) of how the tool helped you find the answer (or not). For example, if we ask "Which were the most influential publications?", we do not want merely a list of papers; we want enough information to judge how the tool helped you see and understand what was influential.
Partial answers can be submitted. Even if your tool can only deal with one of the tasks, we encourage you to submit: your submission may well be the one that does the best job at that particular task and be recognized as such by the contest judges. For example, you can submit an entry only for task 1, or only for task 4. Of course, submissions that answer all tasks have a better chance at the overall 1st prize, but the judges may create special prizes for outstanding partial entries.
FOR THE RECORD: We also had other suggested tasks that would have required someone to provide the corresponding data, but no one did, so unfortunately you cannot address those tasks with this dataset.
· What are the relationships between papers from academia and those from industry? (You can help us by tagging the affiliations)
· Show the connections between publications and “output” such as patents, products, news stories, NIH/NSF initiatives, other research areas.
· (For example, the influence of non-visualization papers/research on visualization, of visualization on other fields, of NIH/NSF funding on visualization papers, of the quality of publications, of graduate students, …)
· Describe the place of user studies in InfoVis. (You can help us by tagging the user studies)
· What are the relationships between panel topics and research topics? For example, do they precede active research topics or follow them? (You can help us add the panel metadata)